# Rank-One NMF-Based Initialization for NMF and Relative Error Bounds under a Geometric Assumption

Zhaoqiang Liu and Vincent Y. F. Tan. This paper was presented in part at ICASSP 2017. The authors are with the Department of Mathematics, National University of Singapore (NUS). The second author is also with the Department of Electrical and Computer Engineering, NUS. The work of Z. Liu ([email protected]) is supported by an NUS Research Scholarship. The work of V. Y. F. Tan ([email protected]) is supported in part by an NUS grant (R-263-000-B37-133).

## I Introduction

The nonnegative matrix factorization (NMF) problem can be formulated as follows: Given a nonnegative data matrix \\(\\mathbf{V}\\in\\mathbb{R}_{+}^{F\\times N}\\) and a positive integer \\(K\\), we seek nonnegative factor matrices \\(\\mathbf{W}\\in\\mathbb{R}_{+}^{F\\times K}\\) and \\(\\mathbf{H}\\in\\mathbb{R}_{+}^{K\\times N}\\), such that the distance (measured in some norm) between \\(\\mathbf{V}\\) and \\(\\mathbf{W}\\mathbf{H}\\) is minimized. Due to its non-subtractive, parts-based property which enhances interpretability, NMF has been widely used in machine learning [1] and signal processing [2], among others. In addition, there are many fundamental algorithms to approximately solve the NMF problem, including the multiplicative update algorithms [3], the alternating (nonnegative) least-squares-type algorithms [4, 5, 6, 7], and the hierarchical alternating least squares algorithm [8] (also called the rank-one residual iteration [9]). However, it is proved in [10] that the NMF problem is NP-hard, and all the basic algorithms merely ensure either that the sequence of objective function values is non-increasing or that the algorithm converges to the set of stationary points [11, 9, 12]. To the best of our knowledge, none of these algorithms is suitable for analyzing a bound on the approximation error of NMF.

In an effort to find computationally tractable algorithms for NMF and to provide theoretical guarantees on the errors of these algorithms, researchers have revisited the so-called _separability assumption_ proposed by Donoho and Stodden [13]. An exact nonnegative factorization \\(\\mathbf{V}=\\mathbf{W}\\mathbf{H}\\) is _separable_ if for any \\(k\\in\\{1,2,\\ldots,K\\}\\), there is an \\(n(k)\\in\\{1,2,\\ldots,F\\}\\) such that \\(\\mathbf{W}(n(k),j)=0\\) for all \\(j\\neq k\\) and \\(\\mathbf{W}(n(k),k)>0\\). That is, an exact nonnegative factorization is separable if all the features can be represented as nonnegative linear combinations of \\(K\\) of the features. It is proved in [14] that under the separability condition, there is an algorithm that runs in time polynomial in \\(F\\), \\(N\\) and \\(K\\) and outputs a separable nonnegative factorization \\(\\mathbf{V}=\\mathbf{W}^{*}\\mathbf{H}^{*}\\) with the number of columns of \\(\\mathbf{W}^{*}\\) being at most \\(K\\). Furthermore, to handle noisy data, a perturbation analysis of their algorithm is presented. The authors assumed that \\(\\mathbf{V}\\) is normalized such that every row of it has unit \\(\\ell_{1}\\) norm and \\(\\mathbf{V}\\) has a separable nonnegative factorization \\(\\mathbf{V}=\\mathbf{W}\\mathbf{H}\\). In addition, each row of \\(\\mathbf{V}\\) is perturbed by adding a vector of small \\(\\ell_{1}\\) norm to obtain a new data matrix \\(\\mathbf{V}^{\\prime}\\).
With additional assumptions on the noise and \\(\\mathbf{H}\\), their algorithm leads to an approximate nonnegative matrix factorization \\(\\mathbf{W}^{\\prime}\\mathbf{H}^{\\prime}\\) of \\(\\mathbf{V}^{\\prime}\\) with a provable error bound for the \\(\\ell_{1}\\) norm of each row of \\(\\mathbf{V}^{\\prime}-\\mathbf{W}^{\\prime}\\mathbf{H}^{\\prime}\\). To develop more efficient algorithms and to extend the basic formulation to more general noise models, a collection of elegant papers dealing with NMF under various separability conditions has emerged [15, 16, 17, 18, 19].

### _Main Contributions_

#### I-A1 Theoretical Contributions

We introduce a geometric assumption on the data matrix \\(\\mathbf{V}\\) that allows us to correctly group columns of \\(\\mathbf{V}\\) into disjoint subsets. This naturally suggests an algorithm that first clusters the columns and subsequently uses a rank-one approximate NMF algorithm [20] to obtain the final decomposition. We analyze the error performance and provide a deterministic upper bound on the relative error. We also consider a random data generation model and provide a probabilistic relative error bound. Our geometric assumption can be considered as a special case of the separability (or, more precisely, the near-separability) assumption [13]. However, there are certain key differences: First, because our assumption is based on a notion of clusterability [21], our proof technique is different from those in the literature that leverage the separability condition. Second, unlike most works that assume separability [15, 16, 17, 18, 19], we exploit the \\(\\ell_{2}\\) norm of vectors instead of the \\(\\ell_{1}\\) norm of vectors/matrices. Third, \\(\\mathbf{V}\\) does not need to be assumed to be normalized. As pointed out in [17], normalization, especially in the \\(\\ell_{1}\\)-norm for the rows of data matrices, may deteriorate the clustering performance for text datasets significantly. Fourth, we provide an upper bound on the relative error instead of the absolute error. Our work is the first to provide theoretical analyses of the relative error for near-separable-type NMF problems. Finally, we assume all the samples can be approximately represented by certain special samples (e.g., centroids) instead of using a small set of salient features to represent all the features. Mathematically, these two approximations may appear to be equivalent. However, our assumption and analysis techniques enable us to provide an efficient algorithm and tight probabilistic relative error bounds for the NMF approximation (cf. Theorem 6).

#### I-A2 Experimental Evaluations

Empirically, we show that this algorithm performs well in practice. When applied to data matrices generated from our statistical model, our algorithm yields comparable relative errors vis-a-vis several classical NMF algorithms including the multiplicative algorithm, the alternating nonnegative least-squares algorithm with block pivoting, and the hierarchical alternating least squares algorithm. However, our algorithm is _significantly faster_ as it simply involves calculating rank-one SVDs. It is also well-known that NMF is sensitive to initializations. The authors in [22, 23] use spherical k-means and an SVD-based technique to initialize NMF. We verify on several image and hyperspectral datasets that our algorithm, when combined with several classical NMF algorithms, achieves the best convergence rates and/or the smallest final relative errors.
We also provide intuition for why our algorithm serves as an effective initializer for other NMF algorithms. Finally, combinations of our algorithm and several NMF algorithms achieve the best clustering performance for several face and text datasets. These experimental results substantiate that our algorithm can be used as a good initializer for standard NMF techniques.

### _Related Work_

We now describe some works that are related to ours.

#### I-B1 Near-Separable NMF

Arora et al. [14] provide an algorithm that runs in time polynomial in \\(F\\), \\(N\\) and \\(K\\) to find the correct factor matrices under the separability condition. Furthermore, the authors consider the near-separable case and prove an approximation error bound when the original data matrix \\(\\mathbf{V}\\) is slightly perturbed from being separable. The algorithm and the theorem for the near-separable case are also presented in [16]. The main ideas behind the theorem are as follows: first, \\(\\mathbf{V}\\) must be normalized such that every row of it has unit \\(\\ell_{1}\\) norm; this assumption simplifies the conical hull for exact NMF to a convex hull. Second, the rows of \\(\\mathbf{H}\\) need to be robustly simplicial, i.e., every row of \\(\\mathbf{H}\\) should not be contained in the convex hull of all other rows, and the largest perturbation of the rows of \\(\\mathbf{V}\\) should be bounded by a function of the smallest distance from a row of \\(\\mathbf{H}\\) to the convex hull of all other rows. Later we will show in Section II that our geometric assumption stated in inequality (2) is similar to this key idea in [14]. Although we impose a clustering-type generative assumption on the data matrix, we do not need the normalization assumption in [14], which, as stated in [17], may lead to poor clustering performance for text datasets. In addition, because we do not impose this normalization assumption, instead of providing an upper bound on the approximation error, we provide an upper bound on the relative error, which is arguably more natural.

#### I-B2 Initialization Techniques for NMF

Similar to k-means, NMF can easily be trapped at bad local optima and is sensitive to initialization. We find that our algorithm is particularly amenable to providing good initial factor matrices for subsequently applying standard NMF algorithms. Thus, here we mention some works on initialization for NMF. Spherical k-means (spkm) is a simple clustering method and it is shown to be one of the most efficient algorithms for document clustering [24]. The authors in [22] consider using spkm for initializing the left factor matrix \\(\\mathbf{W}\\) and observe a better convergence rate compared to random initialization. Other clustering-based initialization approaches for NMF include divergence-based k-means [25] and fuzzy clustering [26]. It is also natural to consider using the singular value decomposition (SVD) to initialize NMF. In fact, if there is no nonnegativity constraint, we can obtain the best rank-\\(K\\) approximation of a given matrix directly using the SVD, and there are strong relations between NMF and SVD. For example, we can obtain the best rank-one NMF from the best rank-one SVD (see Lemma 3), and if the best rank-two approximation matrix of a nonnegative data matrix is also nonnegative, then we can also obtain the best rank-two NMF [20].
Moreover, for a general positive integer \\(K\\), it is shown in [23] that nonnegative double singular value decomposition (nndsvd), a deterministic SVD-based approach, can be used to enhance the initialization of NMF, leading to a faster reduction of the approximation error of many NMF algorithms. The CUR decomposition-based initialization method [27] is another factorization-based initialization approach for NMF. We compare our algorithm to widely-used algorithms for initializing NMF in Section VI-B3.

### _Notations_

We use upper case boldface letters to denote matrices and we use lower case boldface letters to denote vectors. We use Matlab-style notation for indexing, e.g., \\(\\mathbf{V}(i,j)\\) denotes the entry of \\(\\mathbf{V}\\) in the \\(i\\)-th row and \\(j\\)-th column, \\(\\mathbf{V}(i,:)\\) denotes the \\(i\\)-th row of \\(\\mathbf{V}\\), \\(\\mathbf{V}(:,j)\\) denotes the \\(j\\)-th column of \\(\\mathbf{V}\\) and \\(\\mathbf{V}(:,\\mathscr{K})\\) denotes the columns of \\(\\mathbf{V}\\) indexed by \\(\\mathscr{K}\\). \\(\\|\\mathbf{V}\\|_{\\mathrm{F}}\\) represents the Frobenius norm of \\(\\mathbf{V}\\) and \\([N]:=\\{1,2,\\ldots,N\\}\\) for any positive integer \\(N\\). Inequalities \\(\\mathbf{v}\\geq 0\\) or \\(\\mathbf{V}\\geq 0\\) denote element-wise nonnegativity. Given \\(\\mathbf{V}_{1}\\in\\mathbb{R}^{F\\times N_{1}}\\) and \\(\\mathbf{V}_{2}\\in\\mathbb{R}^{F\\times N_{2}}\\), we denote by \\([\\mathbf{V}_{1},\\mathbf{V}_{2}]\\) the horizontal concatenation of the two matrices. Similarly, given \\(\\mathbf{V}_{1}\\in\\mathbb{R}^{F_{1}\\times N}\\) and \\(\\mathbf{V}_{2}\\in\\mathbb{R}^{F_{2}\\times N}\\), we denote by \\([\\mathbf{V}_{1};\\mathbf{V}_{2}]\\) the vertical concatenation of the two matrices. We use \\(\\mathbb{R}_{+}\\) and \\(\\mathbb{R}_{++}\\) to represent the set of nonnegative and positive numbers respectively. We denote the nonnegative orthant \\(\\mathbb{R}_{+}^{F}\\) as \\(\\mathcal{P}\\). We use \\(\\xrightarrow{\\mathcal{P}}\\) to denote convergence in probability.

## II Problem Formulation

In this section, we first present our geometric assumption and prove that the exact clustering can be obtained for the normalized data points under the geometric assumption. Next, we introduce several useful lemmas in preparation for the proofs of the main theorems in subsequent sections.

### _Our Geometric Assumption on \\(\\mathbf{V}\\)_

We assume the columns of \\(\\mathbf{V}\\) lie in \\(K\\) circular cones which satisfy a geometric assumption presented in (2) to follow. We define _circular cones_ as follows:

**Definition 1**: _Given \\(\\mathbf{u}\\in\\mathbb{R}_{+}^{F}\\) with unit \\(\\ell_{2}\\) norm and an angle \\(\\alpha\\in(0,\\pi/2)\\), the circular cone with respect to (w.r.t.) \\(\\mathbf{u}\\) and \\(\\alpha\\) is defined as_ \\[\\mathcal{C}(\\mathbf{u},\\alpha):=\\Big{\\{}\\mathbf{x}\\in\\mathbb{R}^{F}\\setminus \\{0\\}:\\frac{\\mathbf{x}^{T}\\mathbf{u}}{\\|\\mathbf{x}\\|_{2}}\\geq\\cos\\alpha\\Big{\\}}. \\tag{1}\\] _In other words, \\(\\mathcal{C}(\\mathbf{u},\\alpha)\\) contains all \\(\\mathbf{x}\\in\\mathbb{R}^{F}\\setminus\\{0\\}\\) for which the angle between \\(\\mathbf{u}\\) and \\(\\mathbf{x}\\) is not larger than \\(\\alpha\\). We say that \\(\\alpha\\) and \\(\\mathbf{u}\\) are respectively the size angle and basis vector of the circular cone.
In addition, the corresponding truncated circular cone in the nonnegative orthant is \\(\\mathcal{C}(\\mathbf{u},\\alpha)\\cap\\mathcal{P}\\)._

We assume that there are \\(K\\) truncated circular cones \\(C_{1}\\cap\\mathcal{P},\\ldots,C_{K}\\cap\\mathcal{P}\\) with corresponding basis vectors and size angles, i.e., \\(C_{k}:=\\mathcal{C}\\left(\\mathbf{u}_{k},\\alpha_{k}\\right)\\) for \\(k\\in[K]\\). Let \\(\\beta_{ij}:=\\arccos\\left(\\mathbf{u}_{i}^{T}\\mathbf{u}_{j}\\right)\\). We make the geometric assumption that the columns of our data matrix \\(\\mathbf{V}\\) lie in \\(K\\) truncated circular cones which satisfy \\[\\min_{i,j\\in[K],i\\neq j}\\beta_{ij}>\\max_{i,j\\in[K],i\\neq j}\\{\\max\\{\\alpha_{i}+3\\alpha_{j},3\\alpha_{i}+\\alpha_{j}\\}\\}. \\tag{2}\\] If we sort \\(\\alpha_{1},\\ldots,\\alpha_{K}\\) as \\(\\hat{\\alpha}_{1},\\ldots,\\hat{\\alpha}_{K}\\) such that \\(\\hat{\\alpha}_{1}\\geq\\hat{\\alpha}_{2}\\geq\\ldots\\geq\\hat{\\alpha}_{K}\\), then (2) is equivalent to \\[\\min_{i,j\\in[K],i\\neq j}\\beta_{ij}>3\\hat{\\alpha}_{1}+\\hat{\\alpha}_{2}. \\tag{3}\\] The size angle \\(\\alpha_{k}\\) is a measure of the perturbation in the \\(k\\)-th circular cone and \\(\\beta_{ij},i\\neq j\\), is a measure of the distance between the \\(i\\)-th basis vector and the \\(j\\)-th basis vector. Thus, (2) is similar to the second idea in [14] (cf. Section I-B1), namely, that the largest perturbation of the rows of \\(\\mathbf{V}\\) is bounded by a function of the smallest distance from a row of \\(\\mathbf{H}\\) to the convex hull of all other rows. This assumption is realistic for datasets whose samples can be clustered into distinct types; for example, image datasets in which images either contain a distinct foreground (e.g., a face) embedded on a background, or they only comprise a background. See Figure 1 for an illustration of the geometric assumption in (2) and refer to Figure 1 in [16] for an illustration of the separability condition.

Now we discuss the relation between our geometric assumption and the separability and near-separability [14, 16] conditions that have appeared in the literature (and discussed in Section I). Consider a data matrix \\(\\mathbf{V}\\) generated under the extreme case of our geometric assumption that all the size angles of the \\(K\\) circular cones are zero. Then every column of \\(\\mathbf{V}\\) is a nonnegative multiple of a basis vector of a circular cone. This means that all the columns of \\(\\mathbf{V}\\) can be represented as nonnegative linear combinations of \\(K\\) columns, i.e., the \\(K\\) basis vectors \\(\\mathbf{u}_{1},\\ldots,\\mathbf{u}_{K}\\). This can be considered as a special case of the separability assumption. When the size angles are not all zero, our geometric assumption can be considered as a special case of the near-separability assumption. In Lemma 1, we show that Algorithm 1, which has time complexity \\(O(KFN)\\), correctly clusters the columns of \\(\\mathbf{V}\\) under the geometric assumption.

**Lemma 1**: _Under the geometric assumption on \\(\\mathbf{V}\\), if Algorithm 1 is applied to \\(\\mathbf{V}\\), then the columns of \\(\\mathbf{V}\\) are partitioned into \\(K\\) subsets, such that the data points in the same subset are generated from the same truncated circular cone._

We normalize \\(\\mathbf{V}\\) to obtain \\(\\mathbf{V}^{\\prime}\\), such that all the columns of \\(\\mathbf{V}^{\\prime}\\) have unit \\(\\ell_{2}\\) norm.
From the definition, we know that if a data point is in a truncated circular cone, then the normalized data point is also in the truncated circular cone. Then for any two columns \\(\\mathbf{x}\\), \\(\\mathbf{y}\\) of \\(\\mathbf{V}^{\\prime}\\) that are in the same truncated circular cone \\(C_{k}\\cap\\mathcal{P},k\\in[K]\\), the largest possible angle between them is \\(\\min\\{2\\alpha_{k},\\pi/2\\}\\), and thus the distance \\(\\|\\mathbf{x}-\\mathbf{y}\\|_{2}\\) between these two data points is not larger than \\(\\sqrt{2\\left(1-\\cos\\left(2\\alpha_{k}\\right)\\right)}\\). On the other hand, for any two columns \\(\\mathbf{x}\\), \\(\\mathbf{y}\\) of \\(\\mathbf{V}^{\\prime}\\) that are in two truncated circular cones \\(C_{i}\\cap\\mathcal{P},C_{j}\\cap\\mathcal{P},i\\neq j\\), the smallest possible angle between them is \\(\\beta_{ij}-\\alpha_{i}-\\alpha_{j}\\), and thus the smallest possible distance between them is \\(\\sqrt{2\\left(1-\\cos\\left(\\beta_{ij}-\\alpha_{i}-\\alpha_{j}\\right)\\right)}\\). Then under the geometric assumption (2), the distance between any two unit data points in distinct truncated circular cones is larger than the distance between any two unit data points in the same truncated circular cone. Hence, Algorithm 1 returns the correct clusters.

Now we present the following two useful lemmas. Lemma 2 provides an upper bound for perturbations of singular values. Lemma 3 shows that we can directly obtain the best rank-one nonnegative matrix factorization from the best rank-one SVD.

**Lemma 2** (Perturbation of singular values [28]): _If \\(\\mathbf{A}\\) and \\(\\mathbf{A}+\\mathbf{E}\\) are in \\(\\mathbb{R}^{F\\times N}\\), then_ \\[\\sum_{p=1}^{P}\\left(\\sigma_{p}(\\mathbf{A}+\\mathbf{E})-\\sigma_{p}(\\mathbf{A})\\right)^{2}\\leq\\|\\mathbf{E}\\|_{\\mathrm{F}}^{2}, \\tag{5}\\] _where \\(P=\\min\\{F,N\\}\\) and \\(\\sigma_{p}(\\mathbf{A})\\) is the \\(p\\)-th largest singular value of \\(\\mathbf{A}\\). In addition, we have_ \\[|\\sigma_{p}(\\mathbf{A}+\\mathbf{E})-\\sigma_{p}(\\mathbf{A})|\\leq\\sigma_{1}(\\mathbf{E})=\\|\\mathbf{E}\\|_{2} \\tag{6}\\] _for any \\(p\\in[P]\\)._

**Lemma 3** (Rank-One Approximate NMF [20]): _Let \\(\\sigma\\mathbf{u}\\mathbf{v}^{T}\\) be the rank-one singular value decomposition of a matrix \\(\\mathbf{V}\\in\\mathbb{R}_{+}^{F\\times N}\\). Then \\(\\mathbf{u}^{\\prime}:=\\sigma|\\mathbf{u}|\\), \\(\\mathbf{v}^{\\prime}:=|\\mathbf{v}|\\) solves_ \\[\\min_{\\mathbf{x}\\in\\mathbb{R}_{+}^{F},\\mathbf{y}\\in\\mathbb{R}_{+}^{N}}\\|\\mathbf{V}-\\mathbf{x}\\mathbf{y}^{T}\\|_{\\mathrm{F}}. \\tag{7}\\]

## III Non-Probabilistic Theorems

In this section, we first present a deterministic theorem concerning an upper bound for the relative error of NMF. Subsequently, we provide several extensions of this theorem.

**Theorem 4**: _Suppose all the data points in the data matrix \\(\\mathbf{V}\\in\\mathbb{R}_{+}^{F\\times N}\\) are drawn from \\(K\\) truncated circular cones \\(C_{1}\\cap\\mathcal{P},\\ldots,C_{K}\\cap\\mathcal{P}\\), where \\(C_{k}:=\\mathcal{C}\\left(\\mathbf{u}_{k},\\alpha_{k}\\right)\\) for \\(k\\in[K]\\). Then there is a pair of factor matrices \\(\\mathbf{W}^{*}\\in\\mathbb{R}_{+}^{F\\times K}\\), \\(\\mathbf{H}^{*}\\in\\mathbb{R}_{+}^{K\\times N}\\), such that_ \\[\\frac{\\|\\mathbf{V}-\\mathbf{W}^{*}\\mathbf{H}^{*}\\|_{\\mathrm{F}}}{\\|\\mathbf{V}\\|_{\\mathrm{F}}}\\leq\\max_{k\\in[K]}\\{\\sin\\alpha_{k}\\}.
\\tag{8}\\]

Proof:: Define \\(\\mathscr{I}_{k}:=\\{n\\in[N]:\\mathbf{v}_{n}\\in C_{k}\\cap\\mathcal{P}\\}\\) (if a data point \\(\\mathbf{v}_{n}\\) is contained in more than one truncated circular cone, we arbitrarily assign it to one of them). Then \\(\\mathscr{I}_{1},\\mathscr{I}_{2},\\ldots,\\mathscr{I}_{K}\\subseteq[N]\\) are disjoint index sets and their union is \\([N]\\). Any two data points \\(\\mathbf{V}\\left(:,j_{1}\\right)\\) and \\(\\mathbf{V}\\left(:,j_{2}\\right)\\) are in the same circular cone if \\(j_{1}\\) and \\(j_{2}\\) are in the same index set. Let \\(\\mathbf{V}_{k}=\\mathbf{V}\\left(:,\\mathscr{I}_{k}\\right)\\) and without loss of generality, suppose that \\(\\mathbf{V}_{k}\\in C_{k}\\) for \\(k\\in[K]\\). For any \\(k\\in[K]\\) and any column \\(\\mathbf{z}\\) of \\(\\mathbf{V}_{k}\\), suppose the angle between \\(\\mathbf{z}\\) and \\(\\mathbf{u}_{k}\\) is \\(\\beta\\); then \\(\\beta\\leq\\alpha_{k}\\) and \\(\\mathbf{z}=\\|\\mathbf{z}\\|_{2}(\\cos\\beta)\\mathbf{u}_{k}+\\mathbf{y}\\), with \\(\\|\\mathbf{y}\\|_{2}=\\|\\mathbf{z}\\|_{2}(\\sin\\beta)\\leq\\|\\mathbf{z}\\|_{2}(\\sin\\alpha_{k})\\). Thus \\(\\mathbf{V}_{k}\\) can be written as the sum of a rank-one matrix \\(\\mathbf{A}_{k}\\) and a perturbation matrix \\(\\mathbf{E}_{k}\\). By Lemma 3, we can find the best rank-one approximate NMF of \\(\\mathbf{V}_{k}\\) from the singular value decomposition of \\(\\mathbf{V}_{k}\\). Suppose \\(\\mathbf{w}_{k}^{*}\\in\\mathbb{R}_{+}^{F}\\) and \\(\\mathbf{h}_{k}\\in\\mathbb{R}_{+}^{\\mathscr{I}_{k}}\\) solve the best rank-one approximate NMF. Let \\(\\mathbf{S}_{k}:=\\mathbf{w}_{k}^{*}\\mathbf{h}_{k}^{T}\\) be the best rank-one approximation matrix of \\(\\mathbf{V}_{k}\\). Let \\(P_{k}=\\min\\{F,|\\mathscr{I}_{k}|\\}\\); then by Lemma 2, we have \\[\\|\\mathbf{V}_{k}-\\mathbf{S}_{k}\\|_{\\mathrm{F}}^{2}=\\sum_{p=2}^{P_{k}}\\sigma_{p}^{2}\\left(\\mathbf{V}_{k}\\right)=\\sum_{p=2}^{P_{k}}\\sigma_{p}^{2}\\left(\\mathbf{A}_{k}+\\mathbf{E}_{k}\\right)\\leq\\|\\mathbf{E}_{k}\\|_{\\mathrm{F}}^{2}. \\tag{9}\\] From the previous result, we know that \\[\\frac{\\|\\mathbf{E}_{k}\\|_{\\mathrm{F}}^{2}}{\\|\\mathbf{V}_{k}\\|_{\\mathrm{F}}^{2}}=\\frac{\\sum_{\\mathbf{z}\\in\\mathbf{V}_{k}}\\|\\mathbf{z}\\|_{2}^{2}\\sin^{2}\\beta_{\\mathbf{z}}}{\\sum_{\\mathbf{z}\\in\\mathbf{V}_{k}}\\|\\mathbf{z}\\|_{2}^{2}}\\leq\\sin^{2}\\alpha_{k}, \\tag{10}\\] where \\(\\beta_{\\mathbf{z}}\\) denotes the angle between \\(\\mathbf{z}\\) and \\(\\mathbf{u}_{k}\\), \\(\\beta_{\\mathbf{z}}\\leq\\alpha_{k}\\), and \\(\\mathbf{z}\\in\\mathbf{V}_{k}\\) runs over all columns of the matrix \\(\\mathbf{V}_{k}\\). Define \\(\\mathbf{h}_{k}^{*}\\in\\mathbb{R}_{+}^{N}\\) as \\(\\mathbf{h}_{k}^{*}(n)=\\mathbf{h}_{k}(n)\\) if \\(n\\in\\mathscr{I}_{k}\\) and \\(\\mathbf{h}_{k}^{*}(n)=0\\) if \\(n\\notin\\mathscr{I}_{k}\\).
Let \\(\\mathbf{W}^{*}:=\\left[\\mathbf{w}_{1}^{*},\\mathbf{w}_{2}^{*},\\ldots,\\mathbf{w}_{K}^{*}\\right]\\) and \\(\\mathbf{H}^{*}:=\\left[\\left(\\mathbf{h}_{1}^{*}\\right)^{T};\\left(\\mathbf{h}_{2}^{*}\\right)^{T};\\cdots;\\left(\\mathbf{h}_{K}^{*}\\right)^{T}\\right]\\); then we have \\[\\frac{\\|\\mathbf{V}-\\mathbf{W}^{*}\\mathbf{H}^{*}\\|_{\\mathrm{F}}^{2}}{\\|\\mathbf{V}\\|_{\\mathrm{F}}^{2}} =\\frac{\\sum_{k=1}^{K}\\|\\mathbf{V}_{k}-\\mathbf{w}_{k}^{*}\\mathbf{h}_{k}^{T}\\|_{\\mathrm{F}}^{2}}{\\|\\mathbf{V}\\|_{\\mathrm{F}}^{2}} \\tag{11}\\] \\[\\leq\\frac{\\sum_{k=1}^{K}\\|\\mathbf{V}_{k}\\|_{\\mathrm{F}}^{2}\\sin^{2}\\alpha_{k}}{\\sum_{k=1}^{K}\\|\\mathbf{V}_{k}\\|_{\\mathrm{F}}^{2}}. \\tag{12}\\] Thus we have (8) as desired.

In practice, to obtain the tightest possible upper bound in (8), we need to solve the following optimization problem: \\[\\min\\max_{k\\in[K]}\\alpha(\\mathbf{V}_{k}), \\tag{13}\\] where \\(\\alpha(\\mathbf{V}_{k})\\) represents the smallest possible size angle corresponding to \\(\\mathbf{V}_{k}\\) (defined in (18)) and the minimization is taken over all possible clusterings of the columns of \\(\\mathbf{V}\\). We consider finding an optimal size angle and a corresponding basis vector for any data matrix, which we hereby write as \\(\\mathbf{X}:=[\\mathbf{x}_{1},\\ldots,\\mathbf{x}_{M}]\\in\\mathbb{R}_{+}^{F\\times M}\\) where \\(M\\in\\mathbb{N}_{+}\\). This is solved by the following optimization problem: \\[\\begin{array}{ll}\\text{minimize}&\\alpha\\\\ \\text{subject to}&\\mathbf{x}_{m}^{T}\\mathbf{u}\\geq\\cos\\alpha,\\quad m\\in[M],\\\\ &\\mathbf{u}\\geq 0,\\quad\\|\\mathbf{u}\\|_{2}=1,\\quad\\alpha\\geq 0,\\end{array} \\tag{14}\\] where the decision variables are \\((\\alpha,\\mathbf{u})\\). Alternatively, consider \\[\\begin{array}{ll}\\text{maximize}&\\cos\\alpha\\\\ \\text{subject to}&\\mathbf{x}_{m}^{T}\\mathbf{u}\\geq\\cos\\alpha,\\quad m\\in[M],\\\\ &\\mathbf{u}\\geq 0,\\quad\\|\\mathbf{u}\\|_{2}=1.\\end{array} \\tag{15}\\] Similar to the primal optimization problem for linearly separable support vector machines [29], we can obtain the optimal \\(\\mathbf{u}\\) and \\(\\alpha\\) for (15) by solving \\[\\begin{array}{ll}\\text{minimize}&\\frac{1}{2}\\|\\mathbf{u}\\|_{2}^{2}\\\\ \\text{subject to}&\\mathbf{x}_{m}^{T}\\mathbf{u}\\geq 1,\\quad m\\in[M],\\quad\\mathbf{u}\\geq 0,\\end{array} \\tag{16}\\] where the decision variable here is only \\(\\mathbf{u}\\). Note that (16) is a quadratic programming problem and can be easily solved by standard convex optimization software. Suppose \\(\\hat{\\mathbf{u}}\\) is the optimal solution of (16); then \\(\\mathbf{u}^{*}:=\\hat{\\mathbf{u}}/\\|\\hat{\\mathbf{u}}\\|_{2}\\) and \\(\\alpha^{*}:=\\arccos\\left(1/\\|\\hat{\\mathbf{u}}\\|_{2}\\right)\\) are the optimal basis vector and size angle, respectively.

We now state and prove a relative error bound of the proposed approximate NMF algorithm detailed in Algorithm 2 under our geometric assumption. We see that if the size angles of all circular cones are small compared to the angle between the basis vectors of any two circular cones, then exact clustering is possible, and thus the relative error of the best approximate NMF of an arbitrary nonnegative matrix generated from these circular cones can be appropriately controlled by these size angles. Note that the rank-one SVD can be implemented efficiently by the power method [28].
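To make the construction in the proof of Theorem 4 concrete, the following NumPy sketch assembles \\((\\mathbf{W}^{*},\\mathbf{H}^{*})\\) from a given partition of the columns by computing the best rank-one approximate NMF of each \\(\\mathbf{V}_{k}\\) as in Lemma 3. It is only an illustrative sketch (the function and variable names are ours), not the authors' Matlab implementation: the partition is assumed to be supplied, e.g., by a clustering step such as Algorithm 1 (whose pseudocode is not reproduced in this excerpt), and a dense SVD routine is used in place of the power method for simplicity.

```python
import numpy as np

def rank_one_nmf(Vk):
    """Best rank-one approximate NMF of a nonnegative matrix Vk (Lemma 3):
    take the leading singular triple (sigma, u, v) and return sigma*|u|, |v|."""
    U, s, Vt = np.linalg.svd(Vk, full_matrices=False)
    return s[0] * np.abs(U[:, 0]), np.abs(Vt[0, :])

def assemble_factors(V, labels, K):
    """Build (W*, H*) as in the proof of Theorem 4 from a column partition.

    `labels` is a length-N array with entries in {0, ..., K-1}; column n is
    assigned to the truncated circular cone labels[n].  H* is zero outside
    each cluster's index set, so its columns are 1-sparse.
    """
    F, N = V.shape
    W = np.zeros((F, K))
    H = np.zeros((K, N))
    for k in range(K):
        idx = np.flatnonzero(labels == k)
        if idx.size == 0:
            continue
        w_k, h_k = rank_one_nmf(V[:, idx])
        W[:, k] = w_k
        H[k, idx] = h_k
    return W, H

# Example: relative error ||V - W*H*||_F / ||V||_F for a random partition.
rng = np.random.default_rng(0)
V = rng.random((50, 200))
labels = rng.integers(0, 4, size=200)
W, H = assemble_factors(V, labels, K=4)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```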
Recall that, as mentioned in Section II-A, Theorem 5 is similar to the corresponding theorem for the near-separable case in [14] in terms of the geometric condition imposed.

**Theorem 5**: _Under the geometric assumption given in Section II-A for generating \\(\\mathbf{V}\\in\\mathbb{R}_{+}^{F\\times N}\\), Algorithm 2 outputs \\(\\mathbf{W}^{*}\\in\\mathbb{R}_{+}^{F\\times K}\\), \\(\\mathbf{H}^{*}\\in\\mathbb{R}_{+}^{K\\times N}\\), such that_ \\[\\frac{\\|\\mathbf{V}-\\mathbf{W}^{*}\\mathbf{H}^{*}\\|_{\\mathrm{F}}}{\\|\\mathbf{V}\\|_{\\mathrm{F}}}\\leq\\max_{k\\in[K]}\\{\\sin\\alpha_{k}\\}. \\tag{17}\\]

Proof:: From Lemma 1, under the geometric assumption in Section II-A, we can obtain a set of non-empty, pairwise disjoint index sets \\(\\mathscr{I}_{1},\\mathscr{I}_{2},\\ldots,\\mathscr{I}_{K}\\subseteq[N]\\) such that their union is \\([N]\\) and two data points \\(\\mathbf{V}\\left(:,j_{1}\\right)\\) and \\(\\mathbf{V}\\left(:,j_{2}\\right)\\) are in the same circular cone if and only if \\(j_{1}\\) and \\(j_{2}\\) are in the same index set. Then from Theorem 4, we can obtain \\(\\mathbf{W}^{*}\\) and \\(\\mathbf{H}^{*}\\) with the same upper bound on the relative error.

## IV Probabilistic Theorems

We now provide a tighter relative error bound by assuming a probabilistic model. For simplicity, we assume a straightforward and easy-to-implement statistical model for the sampling procedure. We first present the proof of the tighter relative error bound corresponding to the probabilistic model in Theorem 6 to follow; then, in Theorem 8, we show that the upper bound on the relative error is tight if we assume all the circular cones are contained in the nonnegative orthant. We assume the following generating process for each column \\(\\mathbf{v}\\) of \\(\\mathbf{V}\\) in Theorem 6 to follow.

1. sample \\(k\\in[K]\\) with equal probability \\(1/K\\);
2. sample the squared length \\(l\\) from the exponential distribution \\(\\mathrm{Exp}(\\lambda_{k})\\) with parameter (inverse of the expectation) \\(\\lambda_{k}\\);
3. uniformly sample a unit vector \\(\\mathbf{z}\\in C_{k}\\) w.r.t. the angle between \\(\\mathbf{z}\\) and \\(\\mathbf{u}_{k}\\);
4. if \\(\\mathbf{z}\\notin\\mathbb{R}_{+}^{F}\\), set all negative entries of \\(\\mathbf{z}\\) to zero, and rescale \\(\\mathbf{z}\\) to be a unit vector;
5. let \\(\\mathbf{v}=\\sqrt{l}\\,\\mathbf{z}\\).

Footnote 1: \\(\\mathrm{Exp}(\\lambda)\\) is the function \\(x\\mapsto\\lambda\\exp(-\\lambda x)1\\{x\\geq 0\\}\\).

Footnote 2: This means we first uniformly sample an angle \\(\\beta\\in[0,\\alpha_{k}]\\) and subsequently uniformly sample a vector \\(\\mathbf{z}\\) from the set \\(\\{\\mathbf{x}\\in\\mathbb{R}^{F}:\\|\\mathbf{x}\\|_{2}=1,\\mathbf{x}^{T}\\mathbf{u}_{k}=\\cos\\beta\\}\\).

**Theorem 6**: _Suppose the \\(K\\) truncated circular cones \\(C_{k}\\cap\\mathcal{P}\\) with \\(C_{k}:=\\mathcal{C}(\\mathbf{u}_{k},\\alpha_{k})\\subseteq\\mathbb{R}^{F}\\) for \\(k\\in[K]\\) satisfy the geometric assumption given by (2). Let \\(\\boldsymbol{\\lambda}:=(\\lambda_{1};\\lambda_{2};\\ldots;\\lambda_{K})\\in\\mathbb{R}_{++}^{K}\\). We generate the columns of a data matrix \\(\\mathbf{V}\\in\\mathbb{R}_{+}^{F\\times N}\\) from the above generative process.
Let \\(f(\\alpha):=\\frac{1}{2}-\\frac{\\sin 2\\alpha}{4\\alpha}\\); then for a small \\(\\epsilon>0\\), with probability at least \\(1-8\\exp(-\\xi N\\epsilon^{2})\\), one has_ \\[\\frac{\\|\\mathbf{V}-\\mathbf{W}^{*}\\mathbf{H}^{*}\\|_{\\mathrm{F}}}{\\|\\mathbf{V}\\|_{\\mathrm{F}}}\\leq\\sqrt{\\frac{\\sum_{k=1}^{K}f(\\alpha_{k})/\\lambda_{k}}{\\sum_{k=1}^{K}1/\\lambda_{k}}}+\\epsilon, \\tag{22}\\] _where the constant \\(\\xi>0\\) depends only on \\(\\lambda_{k}\\) and \\(f(\\alpha_{k})\\) for all \\(k\\in[K]\\)._

**Remark 1**: _The assumption in Step 1 of the generating process that the data points are generated from the \\(K\\) circular cones with equal probability can be easily generalized to unequal probabilities. The assumption in Step 2 that the square of the length of a data point is sampled from an exponential distribution can be easily extended to any nonnegative sub-exponential distribution (cf. Definition 2 below), or equivalently, the length of a data point is sampled from a nonnegative sub-gaussian distribution (cf. Definition 3 in Appendix A)._

The relative error bound produced by Theorem 6 is better than that of Theorem 5, i.e., the former is smaller (less conservative). This can be seen from (26) to follow, or from the inequality \\(\\alpha\\leq\\tan\\alpha\\) for \\(\\alpha\\in[0,\\pi/2)\\). We also observe this in the experiments in Section VI-A1. Before proving Theorem 6, we define sub-exponential random variables and present a useful lemma.

**Definition 2**: _A sub-exponential random variable \\(X\\) is one that satisfies one of the following equivalent properties: 1. Tails: \\(\\mathbb{P}(|X|>t)\\leq\\exp(1-t/K_{1})\\) for all \\(t\\geq 0\\); 2. Moments: \\(\\left(\\mathbb{E}|X|^{p}\\right)^{1/p}\\leq K_{2}p\\) for all \\(p\\geq 1\\); 3. \\(\\mathbb{E}\\left[\\exp(X/K_{3})\\right]\\leq e\\); where \\(K_{i},i=1,2,3\\), are positive constants. The sub-exponential norm of \\(X\\), denoted \\(\\|X\\|_{\\Psi_{1}}\\), is defined to be_ \\[\\|X\\|_{\\Psi_{1}}:=\\sup_{p\\geq 1}p^{-1}\\left(\\mathbb{E}|X|^{p}\\right)^{1/p}. \\tag{23}\\]

**Lemma 7**: _(Bernstein-type inequality) [30] Let \\(X_{1},\\ldots,X_{N}\\) be independent sub-exponential random variables with zero expectations, and \\(M=\\max_{i}\\|X_{i}\\|_{\\Psi_{1}}\\). Then for every \\(\\epsilon\\geq 0\\), we have_ \\[\\mathbb{P}\\!\\left(\\Big{|}\\sum_{i=1}^{N}X_{i}\\Big{|}\\!\\geq\\!\\epsilon N\\right)\\!\\leq\\!2\\exp\\left[-c\\cdot\\min\\left(\\frac{\\epsilon^{2}}{M^{2}},\\frac{\\epsilon}{M}\\right)N\\right], \\tag{24}\\] _where \\(c>0\\) is an absolute constant._

Theorem 6 is proved by combining the large deviation bound in Lemma 7 with the deterministic bound on the relative error in Theorem 5.

Proof:: From (9) and (10) in the proof of Theorem 4, to obtain an upper bound for the square of the relative error, we consider the following random variable \\[D_{N}:=\\frac{\\sum_{n=1}^{N}L_{n}^{2}\\sin^{2}B_{n}}{\\sum_{n=1}^{N}L_{n}^{2}}, \\tag{25}\\] where \\(L_{n}\\) is the random variable corresponding to the length of the \\(n\\)-th point, and \\(B_{n}\\) is the random variable corresponding to the angle between the \\(n\\)-th point and \\(\\mathbf{u}_{k}\\) for some \\(k\\in[K]\\) such that the point is in \\(C_{k}\\cap\\mathcal{P}\\).
We first consider estimating the above random variable under the assumption that all the data points are generated from a single truncated circular cone \\(C\\cap\\mathcal{P}\\) with \\(C:=\\mathcal{C}(\\mathbf{u},\\alpha)\\) (i.e., assume \\(K=1\\)), and the squares of the lengths are generated according to the exponential distribution \\(\\operatorname{Exp}(\\lambda)\\). Because we assume each angle \\(\\beta_{n}\\) for \\(n\\in[N]\\) is sampled from a uniform distribution on \\([0,\\alpha]\\), the expectation of \\(\\sin^{2}B_{n}\\) is \\[\\operatorname{\\mathbb{E}}\\left[\\sin^{2}B_{n}\\right]=\\int_{0}^{\\alpha}\\frac{1}{\\alpha}\\sin^{2}\\beta\\,\\mathrm{d}\\beta=\\frac{1}{2}-\\frac{\\sin 2\\alpha}{4\\alpha}=f(\\alpha). \\tag{26}\\] Here we only need to consider vectors \\(\\mathbf{z}\\in\\mathbb{R}_{+}^{F}\\) whose angles with \\(\\mathbf{u}\\) are not larger than \\(\\alpha\\). Otherwise, we have \\(\\operatorname{\\mathbb{E}}[\\sin^{2}B_{n}]\\leq f(\\alpha)\\), and our probabilistic upper bound also holds in this case. Since the length and the angle are independent, we have \\[\\operatorname{\\mathbb{E}}\\left[D_{N}\\right]=\\operatorname{\\mathbb{E}}\\left[\\operatorname{\\mathbb{E}}\\left[D_{N}|L_{1},\\ldots,L_{N}\\right]\\right]=f(\\alpha), \\tag{27}\\] and we also have \\[\\operatorname{\\mathbb{E}}\\left[L_{n}^{2}\\sin^{2}B_{n}\\right]=\\operatorname{\\mathbb{E}}\\left[L_{n}^{2}\\right]\\operatorname{\\mathbb{E}}\\left[\\sin^{2}B_{n}\\right]=\\frac{f(\\alpha)}{\\lambda}. \\tag{28}\\] Define \\(X_{n}:=L_{n}^{2}\\) for all \\(n\\in[N]\\). Let \\[H_{N}:=\\frac{\\sum_{n=1}^{N}X_{n}}{N},\\ \\ \\text{and}\\ G_{N}:=\\frac{\\sum_{n=1}^{N}X_{n}\\sin^{2}B_{n}}{N}. \\tag{29}\\] We have for all \\(n\\in[N]\\), \\[\\operatorname{\\mathbb{E}}[X_{n}^{p}]=\\lambda^{-p}\\Gamma(p+1)\\leq\\lambda^{-p}p^{p},\\qquad\\forall\\,p\\geq 1, \\tag{30}\\] where \\(\\Gamma(\\cdot)\\) is the gamma function. Thus \\(\\|X_{n}\\|_{\\Psi_{1}}\\leq\\lambda^{-1}\\), and \\(X_{n}\\) is sub-exponential. By the triangle inequality, we have \\(\\|X_{n}-\\operatorname{\\mathbb{E}}X_{n}\\|_{\\Psi_{1}}\\leq\\|X_{n}\\|_{\\Psi_{1}}+\\|\\operatorname{\\mathbb{E}}X_{n}\\|_{\\Psi_{1}}\\leq 2\\|X_{n}\\|_{\\Psi_{1}}\\). Hence, by Lemma 7, for all \\(\\epsilon>0\\), we have (24) where \\(M\\) can be taken as \\(M=2/\\lambda\\). Because \\[\\left(\\operatorname{\\mathbb{E}}\\!\\left[\\left(X_{n}\\sin^{2}B_{n}\\right)^{p}\\right]\\right)^{1/p}\\leq\\lambda^{-1}p\\sin^{2}\\alpha\\leq\\lambda^{-1}p, \\tag{31}\\] we have a similar large deviation result for \\(G_{N}\\). On the other hand, for all \\(\\epsilon>0\\), \\[\\operatorname{\\mathbb{P}}\\left(|D_{N}-f(\\alpha)|\\geq\\epsilon\\right)=\\operatorname{\\mathbb{P}}\\left(\\left|\\frac{G_{N}}{H_{N}}-f(\\alpha)\\right|\\geq\\epsilon\\right) \\tag{32}\\] \\[\\leq\\operatorname{\\mathbb{P}}\\left(|\\lambda G_{N}\\!-\\!f(\\alpha)|\\!\\geq\\!\\frac{\\epsilon}{2}\\right)+\\operatorname{\\mathbb{P}}\\left(\\left|\\frac{G_{N}}{H_{N}}\\!-\\!\\lambda G_{N}\\right|\\!\\geq\\!\\frac{\\epsilon}{2}\\right).
\\tag{33}\\] For the second term, by fixing small constants \\(\\delta_{1},\\delta_{2}>0\\), we have \\[\\operatorname{\\mathbb{P}}\\left(\\left|\\frac{G_{N}}{H_{N}}-\\lambda G_{N}\\right|\\geq\\frac{\\epsilon}{2}\\right)=\\operatorname{\\mathbb{P}}\\left(\\frac{|1-\\lambda H_{N}|G_{N}}{H_{N}}\\geq\\frac{\\epsilon}{2}\\right) \\tag{34}\\] \\[\\leq\\operatorname{\\mathbb{P}}\\left(\\frac{|1\\!-\\!\\lambda H_{N}|G_{N}}{H_{N}}\\geq\\frac{\\epsilon}{2},H_{N}\\geq\\frac{1}{\\lambda}-\\delta_{1},G_{N}\\!\\leq\\!\\frac{f(\\alpha)}{\\lambda}\\!+\\!\\delta_{2}\\right)\\] \\[\\qquad\\quad+\\operatorname{\\mathbb{P}}\\left(H_{N}<\\frac{1}{\\lambda}-\\delta_{1}\\right)+\\operatorname{\\mathbb{P}}\\left(G_{N}>\\frac{f(\\alpha)}{\\lambda}+\\delta_{2}\\right). \\tag{35}\\] Combining the large deviation bounds for \\(H_{N}\\) and \\(G_{N}\\) in (24) with the above inequalities, if we set \\(\\delta_{1}=\\delta_{2}=\\epsilon\\) and take \\(\\epsilon\\) sufficiently small, we obtain \\[\\operatorname{\\mathbb{P}}\\left(|D_{N}-f(\\alpha)|\\geq\\epsilon\\right)\\leq 8\\exp\\left(-\\xi N\\epsilon^{2}\\right), \\tag{36}\\] where \\(\\xi\\) is a positive constant depending on \\(\\lambda\\) and \\(f(\\alpha)\\).

Now we turn to the general case in which \\(K\\in\\mathbb{N}\\). We have \\[\\operatorname{\\mathbb{E}}\\left[X_{n}\\right]=\\frac{\\sum_{k=1}^{K}1/\\lambda_{k}}{K},\\ \\ \\text{and} \\tag{37}\\] \\[\\operatorname{\\mathbb{E}}\\left[X_{n}\\sin^{2}B_{n}\\right]=\\frac{\\sum_{k=1}^{K}f(\\alpha_{k})/\\lambda_{k}}{K}, \\tag{38}\\] and for all \\(p\\geq 1\\), \\[(\\operatorname{\\mathbb{E}}[X_{n}^{p}])^{1/p}=\\left(\\frac{\\sum_{k=1}^{K}\\lambda_{k}^{-p}\\Gamma(p+1)}{K}\\right)^{1/p}\\leq\\frac{p}{\\min_{k}\\lambda_{k}}. \\tag{39}\\] Similar to (36), we have \\[\\operatorname{\\mathbb{P}}\\left(\\left|D_{N}\\!-\\!\\frac{\\sum_{k=1}^{K}f(\\alpha_{k})/\\lambda_{k}}{\\sum_{k=1}^{K}1/\\lambda_{k}}\\right|\\!\\geq\\!\\epsilon\\right)\\leq 8\\exp\\left(-\\xi N\\epsilon^{2}\\right), \\tag{40}\\] and thus, if we let \\(\\Delta:=\\sqrt{\\frac{\\sum_{k=1}^{K}f(\\alpha_{k})/\\lambda_{k}}{\\sum_{k=1}^{K}1/\\lambda_{k}}}\\), we have \\[\\operatorname{\\mathbb{P}}\\left(\\left|\\sqrt{D_{N}}-\\Delta\\right|\\leq\\epsilon\\right)\\geq\\operatorname{\\mathbb{P}}\\left(\\left|D_{N}-\\Delta^{2}\\right|\\leq\\Delta\\epsilon\\right) \\tag{41}\\] \\[\\geq 1-8\\exp\\left(-\\xi N\\Delta^{2}\\epsilon^{2}\\right). \\tag{42}\\] This completes the proof of (22).

Furthermore, if the \\(K\\) circular cones \\(\\mathcal{C}_{1},\\ldots,\\mathcal{C}_{K}\\) are contained in the nonnegative orthant \\(\\mathcal{P}\\), we do not need to project the data points not in \\(\\mathcal{P}\\) onto \\(\\mathcal{P}\\). Then we can prove that the upper bound in Theorem 6 is asymptotically tight, i.e., \\[\\frac{\\|\\mathbf{V}-\\mathbf{W}^{*}\\mathbf{H}^{*}\\|_{\\mathrm{F}}}{\\|\\mathbf{V}\\|_{\\mathrm{F}}}\\xrightarrow{\\mathcal{P}}\\sqrt{\\frac{\\sum_{k=1}^{K}f(\\alpha_{k})/\\lambda_{k}}{\\sum_{k=1}^{K}1/\\lambda_{k}}},\\ \\text{as}\\ N\\to\\infty.
\\tag{43}\\]

**Theorem 8**: _Suppose the data points of \\(\\mathbf{V}\\in\\mathbb{R}_{+}^{F\\times N}\\) are generated as given in Theorem 6 with all the circular cones being contained in the nonnegative orthant. Then Algorithm 2 produces \\(\\mathbf{W}^{*}\\in\\mathbb{R}_{+}^{F\\times K}\\) and \\(\\mathbf{H}^{*}\\in\\mathbb{R}_{+}^{K\\times N}\\) with the property that for any \\(\\epsilon\\in(0,1)\\) and \\(t\\geq 1\\), if \\(N\\geq c(t/\\epsilon)^{2}F\\), then with probability at least \\(1-6K\\exp(-t^{2}F)\\) one has_ \\[\\left|\\frac{\\|\\mathbf{V}-\\mathbf{W}^{*}\\mathbf{H}^{*}\\|_{\\mathrm{F}}}{\\|\\mathbf{V}\\|_{\\mathrm{F}}}-\\sqrt{\\frac{\\sum_{k=1}^{K}f(\\alpha_{k})/\\lambda_{k}}{\\sum_{k=1}^{K}1/\\lambda_{k}}}\\right|\\leq c\\epsilon \\tag{44}\\] _where \\(c\\) is a constant depending on \\(K\\) and \\(\\alpha_{k}\\), \\(\\lambda_{k}\\) for \\(k\\in[K]\\)._

Proof:: Since the proof of Theorem 8 is somewhat similar to that of Theorem 6, we defer it to Appendix A.

## V Automatically Determining \\(K\\)

Automatically determining the latent dimensionality \\(K\\) is an important problem in NMF. Unfortunately, the usual and popular approach for determining the latent dimensionality of nonnegative data matrices based on Bayesian automatic relevance determination by Tan and Fevotte [31] does not work well for data matrices generated under the geometric assumption given in Section II-A. This is because, under our geometric assumption, \\(\\mathbf{H}\\) is close to a matrix with columns that are \\(1\\)-sparse (i.e., each column contains only one non-zero entry). Thus, \\(\\mathbf{W}\\) and \\(\\mathbf{H}\\) have very different statistics. While there are many approaches [32, 33, 34] to learn the number of clusters in clustering problems, most methods lack strong theoretical guarantees. By assuming the generative procedure for \\(\\mathbf{V}\\) proposed in Theorem 6, we consider a simple approach for determining \\(K\\) based on the maximum of the ratios between adjacent singular values. We provide a theoretical result for the correctness of this approach.

Our method consists in estimating the correct number of circular cones \\(\\hat{K}\\) as follows: \\[\\hat{K}:=\\operatorname*{arg\\,max}_{k\\in\\{K_{\\min},\\ldots,K_{\\max}\\}}\\frac{\\sigma_{k}(\\mathbf{V})}{\\sigma_{k+1}(\\mathbf{V})}. \\tag{45}\\] Here \\(K_{\\min}>1\\) and \\(K_{\\max}<\\operatorname{rank}(\\mathbf{V})\\) are selected based on domain knowledge. The main ideas that underpin (45) are (i) the expression for the approximation error of the best rank-\\(k\\) approximation of a data matrix in the Frobenius norm and (ii) the so-called elbow method [35] for determining the number of clusters. More precisely, let \\(\\mathbf{V}_{k}\\) be the best rank-\\(k\\) approximation of \\(\\mathbf{V}\\). Then \\(\\|\\mathbf{V}-\\mathbf{V}_{k}\\|_{\\mathrm{F}}^{2}=\\sum_{j=k+1}^{r}\\sigma_{j}^{2}(\\mathbf{V})\\), where \\(r\\) is the rank of \\(\\mathbf{V}\\). If we increase \\(k\\) to \\(k+1\\), the square of the best approximation error decreases by \\(\\sigma_{k+1}^{2}(\\mathbf{V})\\). The elbow method chooses a number of clusters \\(k\\) so that the decrease in the objective function value from \\(k\\) clusters to \\(k+1\\) clusters is small compared to the decrease in the objective function value from \\(k-1\\) clusters to \\(k\\) clusters. Although this approach seems to be simplistic, interestingly, Theorem 9 below tells us that under appropriate assumptions, we can correctly find the number of circular cones with high probability.
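Before stating that theorem, we note that the estimator (45) is simple to implement. The following NumPy sketch is one possible realization (an illustration of ours, not code from the paper): it computes the full singular value spectrum, which is wasteful for large matrices but keeps the sketch short, and it assumes \\(K_{\\min}>1\\) and \\(K_{\\max}<\\operatorname{rank}(\\mathbf{V})\\) so that all the ratios are well defined.

```python
import numpy as np

def estimate_num_cones(V, k_min, k_max):
    """Estimate the number of circular cones via (45): return the k in
    {k_min, ..., k_max} that maximizes sigma_k(V) / sigma_{k+1}(V),
    with singular values indexed from 1 in decreasing order."""
    s = np.linalg.svd(V, compute_uv=False)   # sigma_1 >= sigma_2 >= ...
    ratios = [s[k - 1] / s[k] for k in range(k_min, k_max + 1)]
    return k_min + int(np.argmax(ratios))
```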
**Theorem 9**: _Suppose that the data matrix \\(\\mathbf{V}\\in\\mathbb{R}_{+}^{F\\times N}\\) is generated according to the generative process given in Theorem 6, where \\(K\\) is the true number of circular cones. Further assume that the size angles for the \\(K\\) circular cones are all equal to \\(\\alpha\\), the angles between distinct basis vectors of the circular cones are all equal to \\(\\beta\\), and the parameters (inverse expectations) for the exponential distributions are all equal to \\(\\lambda\\). In addition, we assume all the circular cones are contained in the nonnegative orthant \\(\\mathcal{P}\\) (cf. Theorem 8) and \\(K\\in\\{K_{\\min},\\ldots,K_{\\max}\\}\\) with \\(K_{\\min}>1\\) and \\(K_{\\max}<\\operatorname{rank}(\\mathbf{V})\\). Then, for any \\(t\\geq 1\\) and sufficiently small \\(\\epsilon\\) satisfying (92) in Appendix B, if \\(N\\geq c(t/\\epsilon)^{2}F\\) (for a constant \\(c>0\\) depending only on \\(\\lambda\\), \\(\\alpha\\) and \\(\\beta\\)), with probability at least \\(1-2\\left(K_{\\max}-K_{\\min}+1\\right)\\exp\\left(-t^{2}F\\right)\\),_ \\[\\frac{\\sigma_{K}(\\mathbf{V})}{\\sigma_{K+1}(\\mathbf{V})}=\\max_{j\\in\\{K_{\\min},\\ldots,K_{\\max}\\}}\\frac{\\sigma_{j}(\\mathbf{V})}{\\sigma_{j+1}(\\mathbf{V})}. \\tag{46}\\]

Proof:: Please refer to Appendix B for the proof.

In Section VI-A2, we show numerically that the proposed method in (45) works well even when the geometric assumption is only _approximately satisfied_ (see Section VI-A2 for a formal definition), assuming that \\(N\\) is sufficiently large. This shows that the determination of the correct number of clusters is robust to noise.

**Remark 2**: _The conditions of Theorem 9 may appear to be rather restrictive. However, we make them only for the sake of convenience in presentation. We do not need to assume that the parameters of the exponential distribution are equal if, instead of \\(\\sigma_{j}(\\mathbf{V})\\), we consider the singular values of a normalized version of \\(\\mathbf{V}\\). The assumptions that all the size angles are the same and the angles between distinct basis vectors are the same can also be relaxed. The theorem continues to hold even when the geometric assumption in (2) is not satisfied, i.e., \\(\\beta\\leq 4\\alpha\\). However, we empirically observe in Section VI-A2 that if \\(\\mathbf{V}\\) satisfies the geometric assumption (even approximately), the results are superior compared to the scenario when the assumption is significantly violated._

**Remark 3**: _We may replace the assumption that the circular cones are contained in the nonnegative orthant by removing Step 4 (projection onto \\(\\mathcal{P}\\)) from the generative procedure in Theorem 6. Because we are concerned with finding the number of clusters (or circular cones) rather than determining the true latent dimensionality of an NMF problem (cf. [31]), we can discard the nonnegativity constraint. The number of clusters serves as a proxy for the latent dimensionality of NMF._

## VI Numerical Experiments

### _Experiments on Synthetic Data_

To verify the correctness of our bounds, to observe the computational efficiency of the proposed algorithm, and to check if the procedure for estimating \\(K\\) is effective, we first perform numerical simulations on synthetic datasets. All the experiments were executed on a Windows machine whose processor is an Intel(R) Core(TM) i5-3570, the CPU speed is 3.40 GHz, and the installed memory (RAM) is 8.00 GB. The Matlab version is 7.11.0.584 (R2010b).
The Matlab codes for running the experiments can be found at [https://github.com/zhaoqiangliu/cr1-mmf](https://github.com/zhaoqiangliu/cr1-mmf).

#### VI-A1 Comparison of Relative Errors and Running Times

To generate the columns of \\(\\mathbf{V}\\), given an integer \\(k\\in[K]\\) and an angle \\(\\beta\\in[0,\\alpha_{k}]\\), we uniformly sample a vector \\(\\mathbf{z}\\) from \\(\\{\\mathbf{x}:\\|\\mathbf{x}\\|_{2}=1,\\mathbf{x}^{T}\\mathbf{u}_{k}=\\cos\\beta\\}\\), i.e., \\(\\mathbf{z}\\) is a unit vector such that the angle between \\(\\mathbf{z}\\) and \\(\\mathbf{u}_{k}\\) is \\(\\beta\\). To achieve this, note that if \\(\\mathbf{u}_{k}=\\mathbf{e}_{f},f\\in[F]\\) (\\(\\mathbf{e}_{f}\\) is the vector with only the \\(f\\)-th entry being \\(1\\)), this uniform sampling can easily be achieved. For example, we can take \\(\\mathbf{x}=(\\cos\\beta)\\mathbf{e}_{f}+(\\sin\\beta)\\mathbf{y}\\), where \\(y(f)=0\\), \\(y(i)=s(i)/\\sqrt{\\sum_{j\\neq f}s(j)^{2}},i\\neq f\\), and \\(s(i)\\sim\\mathcal{N}(0,1),i\\neq f\\). We can then use a Householder transformation [36] to map the unit vector generated from the circular cone with basis vector \\(\\mathbf{e}_{f}\\) to the unit vector generated from the circular cone with basis vector \\(\\mathbf{u}_{k}\\). The corresponding Householder transformation matrix is (if \\(\\mathbf{u}_{k}=\\mathbf{e}_{f}\\), \\(\\mathbf{P}_{k}\\) is set to be the identity matrix \\(\\mathbf{I}\\)) \\[\\mathbf{P}_{k}=\\mathbf{I}-2\\mathbf{z}_{k}\\mathbf{z}_{k}^{T},\\quad\\text{where}\\quad\\mathbf{z}_{k}=\\frac{\\mathbf{e}_{f}-\\mathbf{u}_{k}}{\\|\\mathbf{e}_{f}-\\mathbf{u}_{k}\\|_{2}}. \\tag{47}\\]

In this set of experiments, we set the size angles \\(\\alpha\\) to be the same for all the circular cones. The angle between any two basis vectors is set to be \\(4\\alpha+\\Delta\\alpha\\) where \\(\\Delta\\alpha:=0.01\\). The parameter vector for the exponential distributions is \\(\\boldsymbol{\\lambda}:=1./(1:K)\\). We increase \\(N\\) from \\(10^{2}\\) to \\(10^{4}\\) logarithmically. We fix the parameters \\(F=1600\\), \\(K=40\\) and \\(\\alpha=0.2\\) or \\(0.3\\). The results are shown in Figure 2. In the left plot of Figure 2, we compare the relative errors of Algorithm 2 (cr1-nmf) with the derived relative error bounds. In the right plot, we compare the relative errors of our algorithm with the relative errors of three classical algorithms: (i) the multiplicative update algorithm [3] (mult); (ii) the alternating nonnegative least-squares algorithm with block-pivoting (nnlsb), which is reported to be one of the best alternating nonnegative least-squares-type algorithms for NMF in terms of both running time and approximation error [6]; and (iii) the hierarchical alternating least squares algorithm [8] (hals). In contrast to these three algorithms, our algorithm is not iterative. The iteration numbers for mult and hals are set to 100, while the iteration number for nnlsb is set to 20, which is sufficient (in our experiments) for approximate convergence. For statistical soundness of the results of the plots on the left, \\(50\\) data matrices \\(\\mathbf{V}\\in\\mathbb{R}_{+}^{F\\times 1000}\\) are independently generated and for each data matrix \\(\\mathbf{V}\\), we run our algorithm for \\(20\\) runs. For the plots on the right, \\(10\\) data matrices \\(\\mathbf{V}\\) are independently generated and all the algorithms are run \\(10\\) times for each \\(\\mathbf{V}\\).
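The column-generation procedure described at the beginning of this subsection (uniform angle, random direction orthogonal to \\(\\mathbf{e}_{f}\\), the Householder map (47), projection onto \\(\\mathcal{P}\\), and exponentially distributed squared lengths) can be sketched in NumPy as follows. This is an illustrative re-implementation under the stated generative model, not the authors' Matlab code; here \\(f=1\\) is fixed, the function and parameter names are ours, and \\(\\lambda\\) is the inverse of the expected squared length.

```python
import numpy as np

def sample_column(u, alpha, lam, rng):
    """Draw one column of V from the truncated circular cone C(u, alpha) in P,
    following the generative process of Theorem 6 and the Householder map (47)."""
    F = u.size
    e1 = np.zeros(F)
    e1[0] = 1.0
    # Unit vector at a uniformly drawn angle beta from e_1.
    beta = rng.uniform(0.0, alpha)
    s = rng.standard_normal(F)
    s[0] = 0.0
    y = s / np.linalg.norm(s)
    x = np.cos(beta) * e1 + np.sin(beta) * y
    # Householder reflection mapping e_1 to u (identity if u equals e_1).
    if np.allclose(u, e1):
        v = x
    else:
        z = (e1 - u) / np.linalg.norm(e1 - u)
        v = x - 2.0 * z * (z @ x)          # (I - 2 z z^T) x
    # Project onto the nonnegative orthant and rescale to unit length.
    v = np.maximum(v, 0.0)
    v = v / np.linalg.norm(v)
    # Squared length ~ Exp(lam); NumPy's exponential takes the mean 1/lam.
    return np.sqrt(rng.exponential(1.0 / lam)) * v

# Example: 100 columns from a single cone with basis vector (1, ..., 1)/sqrt(F).
rng = np.random.default_rng(0)
F = 20
u = np.ones(F) / np.sqrt(F)
V = np.column_stack([sample_column(u, 0.2, 1.0, rng) for _ in range(100)])
```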
We also compare the running times of these algorithms when they first achieve an approximation error smaller than or equal to the approximation error of Algorithm 2. The running times are shown in Table I. Because the running times for \\(\\alpha=0.2\\) and \\(\\alpha=0.3\\) are similar, we only present the running times for the former.

From Figure 2, we observe that the relative errors obtained from Algorithm 2 are smaller than the theoretical relative error bounds. When \\(\\alpha=0.2\\), the relative error of Algorithm 2 appears to converge to the probabilistic relative error bound as \\(N\\) becomes large, but when \\(\\alpha=0.3\\), there is a gap between the relative error and the probabilistic relative error bound. From Theorems 6 and 8, we know that this difference is due to the projection of the cones to the nonnegative orthant. If there is no projection (this may violate the nonnegativity constraint), the probabilistic relative error bound is tight as \\(N\\) tends to infinity. We conclude that when the size angle \\(\\alpha\\) is large, the projection step causes a larger gap between the relative error and the probabilistic relative error bound. We observe from Figure 2 that there are large oscillations for mult. Other algorithms achieve similar approximation errors. Table I shows that classical NMF algorithms require significantly more time (at least an order of magnitude for large \\(N\\)) to achieve the same relative error compared to our algorithm.

Fig. 2: Errors and performances of various algorithms. On the left plot, we compare the empirical performance to the theoretical non-probabilistic and probabilistic bounds given by Theorems 5 and 6 respectively. On the right plot, we compare the empirical performance to other NMF algorithms.

#### VI-A2 Automatically Determining \\(K\\)

We now verify the efficacy and the robustness of the proposed method in (45) for automatically determining the correct number of circular cones. We generated the data matrix \\(\\hat{\\mathbf{V}}:=[\\mathbf{V}+\\delta\\mathbf{E}]_{+}\\), where each entry of \\(\\mathbf{E}\\) is sampled i.i.d. from the standard normal distribution, \\(\\delta>0\\) corresponds to the noise magnitude, and \\([\\cdot]_{+}\\) represents the projection onto the nonnegative orthant. We generated the nominal/noiseless data matrix \\(\\mathbf{V}\\) by setting \\(\\alpha=0.3\\), the true number of circular cones \\(K=40\\), and other parameters similarly to the procedure in Section VI-A1. The noise magnitude \\(\\delta\\) is set to be either \\(0.1\\) or \\(0.5\\); the former simulates a relatively clean setting in which the geometric assumption is approximately satisfied, while in the latter, \\(\\hat{\\mathbf{V}}\\) is far from a matrix that satisfies the geometric assumption, i.e., a very noisy scenario. We generated \\(1000\\) perturbed data matrices \\(\\hat{\\mathbf{V}}\\) independently. From Figure 3, in which the true \\(K=40\\), we observe that, as expected, the method in (45) works well if the noise level is small. Somewhat surprisingly, it also works well even when the noise level is relatively high (e.g., \\(\\delta=0.5\\)) if the number of data points \\(N\\) is also commensurately large (e.g., \\(N\\geq 5\\times 10^{3}\\)).

Fig. 3: Estimated number of circular cones \\(K\\) with different noise levels. The error bars denote one standard deviation away from the mean.

### _Experiments on Real Datasets_

#### VI-B1 Initialization Performance in Terms of the Relative Error

Because real datasets do not, in general, strictly satisfy the geometric assumption, our algorithm cr1-nmf does not achieve as low a relative error as other NMF algorithms.
However, similar to the popular spherical k-means (spkm; we use \\(10\\) iterations to produce its initial left factor matrix \\(\\mathbf{W}\\)) algorithm [22], our algorithm may be used as an _initialization method_ for NMF. In this section, we compare cr1-nmf to other classical and popular initialization approaches for NMF. These include random initialization (rand), spkm, and the nndsvd initialization method [23] (nndsvd). We empirically show that our algorithm, when used as an initializer, achieves the best performance when combined with classical NMF algorithms. The specifications of the real datasets and the running times for the initialization methods are presented in Tables II and III respectively. The difference between the results obtained from using spkm as the initialization method and the corresponding results obtained from using our initialization approach appears to be rather insignificant. However, from Table V, which reports the running time to _first_ achieve specified relative errors \\(\\epsilon>0\\) for the initialization methods combined with nnlsb (note that nnlsb only needs to use the initial left factor matrix, and thus we can compare the initial estimated basis vectors obtained by spkm and cr1-nmf directly), we see that our initialization approach is clearly faster than spkm.

In addition, consider the scenario where there are duplicate or near-duplicate samples. Concretely, assume the data matrix \\(\\mathbf{V}:=\\begin{bmatrix}1&1&0\\\\ 0&0&1\\end{bmatrix}\\in\\mathbb{R}_{+}^{2\\times 3}\\) and \\(K=1\\). Then the left factor matrix produced by rank-one NMF is \\(\\mathbf{w}=[1;0]\\) and the normalized mean vector (centroid for spkm) is \\(\\bar{\\mathbf{u}}:=[\\frac{2}{\\sqrt{5}};\\frac{1}{\\sqrt{5}}]\\). The approximation error w.r.t. \\(\\mathbf{w}\\) is \\(\\|\\mathbf{V}-\\mathbf{w}\\mathbf{w}^{T}\\mathbf{V}\\|_{\\mathrm{F}}=1\\), while the approximation error w.r.t. \\(\\bar{\\mathbf{u}}\\) is \\(\\|\\mathbf{V}-\\bar{\\mathbf{u}}\\bar{\\mathbf{u}}^{T}\\mathbf{V}\\|_{\\mathrm{F}}\\approx 1.0954\\). Note that spkm is more constrained since it implicitly outputs a binary right factor matrix \\(\\mathbf{H}\\in\\{0,1\\}^{K\\times N}\\) while rank-one NMF (cf. Lemma 3) does not impose this stringent requirement. Hence cr1-nmf generally leads to a smaller relative error compared to spkm (see the short numerical check further below).

#### VI-B3 Initialization Performance in Terms of Clustering

We now compare clustering performances using various initialization methods. To obtain a comprehensive evaluation, we use three widely-used evaluation metrics, namely, the normalized mutual information [37] (nmi), the Dice coefficient [38] (Dice) and the purity [39, 40]. The clustering results for the CK and tr11 datasets are shown in Tables VI and VII respectively. Clustering results for other datasets are shown in the supplementary material (for space considerations). We run the standard k-means and spkm clustering algorithms for at most \\(1000\\) iterations and terminate the algorithm if the cluster memberships do not change. All the classical NMF algorithms are terminated if the variation of the product of the factor matrices is small over \\(10\\) iterations. Note that nndsvd is a deterministic initialization method, so its clustering results are the same across different runs.
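Returning to the duplicate-samples example in Section VI-B1 above (the \\(2\\times 3\\) matrix \\(\\mathbf{V}\\) with \\(K=1\\)), the following short NumPy check reproduces the two approximation errors quoted there; it is a quick numerical illustration of ours and not part of the paper's Matlab code.

```python
import numpy as np

# Duplicate-samples example: rank-one NMF basis w versus the spkm centroid u_bar.
V = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
w = np.array([1.0, 0.0])                      # left factor from rank-one NMF
u_bar = np.array([2.0, 1.0]) / np.sqrt(5.0)   # normalized mean of the columns

err_w = np.linalg.norm(V - np.outer(w, w) @ V)          # exactly 1
err_u = np.linalg.norm(V - np.outer(u_bar, u_bar) @ V)  # approximately 1.0954
print(err_w, err_u)
```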
We observe from Tables VI and VII and those in the supplementary material that our initialization approach almost always outperforms all others (under all the three evaluation metrics). Footnote 7: The tr11 dataset can be found at [http://glaros.dtc.umn.edu/gkhome/fetch/sw/cluto/datasets.tar.gz](http://glaros.dtc.umn.edu/gkhome/fetch/sw/cluto/datasets.tar.gz). It is a canonical example of a text dataset and contains \\(6429\\) terms and \\(414\\) documents. The number of clusters/topics is \\(K=9\\). choosing the number of clusters (i.e., the number of circular cones) \\(K\\). We showed experimentally on synthetic datasets that satisfy the geometric assumption that our algorithm performs exceedingly well in terms of accuracy and speed. Our method also serves a fast and effective initializer for running NMF on real datasets. Finally, it outperforms other competing methods on various clustering tasks. ### _Future Work and Open Problems_ We plan to explore the following extensions. 1. First, we hope to prove theoretical guarantees for the scenario when \\(\\mathbf{V}\\) only satisfies an _approximate version_ of the geometric assumption, i.e., we only have access to \\(\\hat{\\mathbf{V}}:=[\\mathbf{V}+\\delta\\mathbf{E}]_{+}\\) (cf. Section VI-A2) where \\(\\delta\\approx 0\\). 2. Second, here we focused on upper bounds on the relative error. To assess the tightness of these bounds, we hope to prove _minimax lower_ bounds on the relative error similarly to Jung et al. [41]. 3. Third, as mentioned in Section I-A1, our geometric assumption in (2) can be considered as a special case of the near-separability assumption for NMF [13]. To the best of our knowledge, there is no theoretical guarantee for the relative error under the near-separability assumption. 4. For large-scale data, it is often desirable to perform NMF in an _online_ fashion [42, 43], i.e., each data point \\(\\mathbf{v}_{n}\\) arrives in a sequential manner. We would like to develop online versions of the algorithm herein. 5. It would be fruitful to leverage the theoretical results for \\(k\\)-means++ [44] to provide guarantees for a probabilistic version of our initialization method. Note that our method is deterministic while \\(k\\)-means++ is probabilistic, so a probabilistic variant of Algorithm 2 may have to be developed for fair comparisons with \\(k\\)-means++. 6. We may also extend our Theorem 9 to near-separable data matrices, possibly with additional assumptions. ## Appendix A Proof of Theorem 8 To prove Theorem 8, we first provide a few definitions and lemmas. Consider the following condition that ensures that the circular cone \\(C(\\mathbf{u},\\alpha)\\) is entirely contained in the non-negative orthant \\(\\mathcal{P}\\). **Lemma 10**: _If \\(\\mathbf{u}=(u(1),u(2),\\ldots,u(F))\\) is a positive unit vector and \\(\\alpha>0\\) satisfies_ \\[\\alpha\\leq\\arccos\\sqrt{1-u_{\\min}^{2}}, \\tag{48}\\] Figure 6: Basis images of 3 individuals in Georgia Tech dataset obtained at the \\(20^{\\mathrm{th}}\\) iteration. The first to fourth rows pertain to rand, crl-mff, spkm, and nndsvd initializations respectively. where \\(u_{\\min}:=\\min_{f}u(f)\\), then \\(\\mathcal{C}(\\mathbf{u},\\alpha)\\subseteq\\mathcal{P}\\). 
Proof:: Because any nonnegative vector \\(\\mathbf{x}\\) is spanned by basis vectors \\(\\mathbf{e}_{1},\\mathbf{e}_{2},\\ldots,\\mathbf{e}_{F}\\), given a positive unit vector \\(\\mathbf{u}\\), to find the largest size angle, we only need to consider the angle between \\(\\mathbf{u}\\) and \\(\\mathbf{e}_{f},f\\in[F]\\). Take any \\(f\\in[F]\\), if the angle \\(\\beta\\) between \\(\\mathbf{u}\\) and \\(\\mathbf{e}_{f}\\) is not larger than \\(\\pi/4\\), we can obtain the unit vector symmetric to \\(\\mathbf{e}_{f}\\) w.r.t. \\(\\mathbf{u}\\) in the plane spanned by \\(\\mathbf{u}\\) and \\(\\mathbf{e}_{f}\\) is also nonnegative. In fact, the vector is \\(2(\\cos\\beta)\\mathbf{u}-\\mathbf{e}_{f}\\). Because \\(u(f)=\\cos\\beta\\) and \\(\\beta\\leq\\pi/4\\), we have \\(2\\cos^{2}\\beta\\geq 1\\) and the vector is nonnegative. If \\(\\beta>\\pi/4\\), i.e., \\(u(f)<1/\\sqrt{2}\\), we can take the extreme nonnegative unit vector \\(\\mathbf{z}\\) in the span of \\(\\mathbf{u}\\) and \\(\\mathbf{e}_{f}\\), i.e., \\[\\mathbf{z}=\\frac{\\mathbf{u}-u(f)\\mathbf{e}_{f}}{\\|\\mathbf{u}-u(f)\\mathbf{e}_{ f}\\|_{2}}, \\tag{49}\\] and it is easy to see \\(\\mathbf{u}^{T}\\mathbf{z}=\\sqrt{1-u(f)^{2}}\\). Hence the angle between \\(\\mathbf{z}\\) and \\(\\mathbf{u}\\) is \\(\\pi/2-\\beta<\\pi/4\\). Therefore, the largest size angle \\(\\alpha_{\\mathbf{e}_{f}}\\) w.r.t. \\(\\mathbf{e}_{f}\\) is \\[\\alpha_{\\mathbf{e}_{f}}:=\\left\\{\\begin{array}{cl}\\arccos u(f),&\\mbox{if }u(f) \\geq 1/\\sqrt{2}\\\\ \\arccos\\sqrt{1-u(f)^{2}},&\\mbox{if }u(f)<1/\\sqrt{2}\\end{array}\\right. \\tag{50}\\] or equivalently, \\(\\alpha_{\\mathbf{e}_{f}}=\\min\\{\\arccos u(f),\\arccos\\sqrt{1-u(f)^{2}}\\}\\). Thus, the largest size angle corresponding to \\(\\mathbf{u}\\) is \\[\\min_{f}\\big{\\{}\\min\\{\\arccos u(f),\\arccos\\sqrt{1-u(f)^{2}}\\}\\big{\\}} \\tag{51}\\] Let \\(u_{\\max}:=\\max_{f}u(f)\\) and \\(u_{\\min}:=\\min_{f}u(f)\\). Then the largest size angle corresponding to \\(\\mathbf{u}\\) is \\[\\min\\big{\\{}\\arccos u_{\\max},\\arccos\\sqrt{1-u_{\\min}^{2}}\\big{\\}}. \\tag{52}\\] Because \\(u_{\\max}^{2}+u_{\\min}^{2}\\leq 1\\) for \\(F>1\\), the expression in (52) equals \\(\\arccos\\sqrt{1-u_{\\min}^{2}}\\) and this completes the proof. **Lemma 11**: _Define \\(f(\\beta):=\\frac{1}{2}-\\frac{\\sin(2\\beta)}{4\\beta}\\) and \\(g(\\beta):=\\frac{1}{2}+\\frac{\\sin(2\\beta)}{4\\beta}\\) for \\(\\beta\\in\\big{(}0,\\frac{\\pi}{2}\\big{]}\\). Let \\(\\mathbf{e}_{f}\\), \\(f\\in[F]\\) be the unit vector with only the \\(f\\)-th entry being 1, and \\(C\\) be the circular cone with basis vector \\(\\mathbf{u}=\\mathbf{e}_{f}\\), size angle being \\(\\alpha\\), and the inverse expectation parameter for the exponential distribution being \\(\\lambda\\). 
Then if the columns of the data matrix \\(\\mathbf{V}\\in\\mathbb{R}^{F\\times N}\\) are generated as in Theorem 6 from \\(C\\) (\\(K=1\\)) and with no projection to the nonnegative orthant (Step 4 in the generating process), we have_ \\[\\mathbb{E}\\left(\\frac{\\mathbf{V}\\mathbf{V}^{T}}{N}\\right)=\\frac{\\mathbf{D}_{f} }{\\lambda} \\tag{53}\\] _where \\(\\mathbf{D}_{f}\\) is a diagonal matrix with the \\(f\\)-th diagonal entry being \\(g(\\alpha)\\) and other diagonal entries being \\(f(\\alpha)/(F-1)\\)._ Proof:: Each column \\(\\mathbf{v}_{n}\\), \\(n\\in[N]\\) can be generated as follows: First, uniformly sample a \\(\\beta_{n}\\in[0,\\alpha]\\) and sample a positive scalar \\(l_{n}\\) from the exponential distribution \\(\\mathrm{Exp}(\\lambda)\\), then we can write \\(\\mathbf{v}_{n}=\\sqrt{l_{n}}\\left[\\cos\\beta_{n}\\mathbf{e}_{f}+\\sin\\beta_{n} \\mathbf{y}_{n}\\right]\\), where \\(\\mathbf{y}_{n}\\) can be generated from sampling \\(y_{n}(1),\\ldots,y_{n}(f-1),y_{n}(f+1),\\ldots,y_{n}(F)\\) from the standard normal distribution \\(\\mathcal{N}(0,1)\\), and setting \\(y_{n}(j)=y_{n}(j)/\\sqrt{\\sum_{i\ eq f}y_{n}(i)^{2}}\\), \\(j\ eq f\\), \\(y_{n}(f)=0\\). Then \\[\\mathbb{E}\\left[v_{n}(f_{1})v_{n}(f_{2})\\right] \\tag{54}\\] \\[=\\left\\{\\begin{array}{cl}0,&f_{1}\ eq f_{2},\\\\ g(\\alpha)/\\lambda,&f_{1}=f_{2}=f,\\\\ f(\\alpha)/\\left((F-1)\\lambda\\right),&f_{1}=f_{2}\ eq f.\\end{array}\\right., \\tag{55}\\] where \\(e_{f}(f_{1})=1\\{f=f_{1}\\}\\) is the \\(f_{1}\\)-th entry of the vector \\(\\mathbf{e}_{f}\\). Thus \\(\\mathbb{E}\\left(\\mathbf{V}\\mathbf{V}^{T}/N\\right)=\\mathbb{E}\\left(\\mathbf{v}_{n} \\mathbf{v}_{n}^{T}\\right)=\\mathbf{D}_{f}/\\lambda\\). **Definition 3**: _A sub-gaussian random variable \\(X\\) is one that satisfies one of the following equivalent properties 1. Tails: \\(\\mathbb{P}(|X|>t)\\leq\\exp\\left(1-t^{2}/K_{1}^{2}\\right)\\) for all \\(t\\geq 0\\); 2. Moments: \\((\\mathbb{E}|X|^{p})^{1/p}\\leq K_{2}\\sqrt{p}\\) for all \\(p\\geq 1\\); 3. \\(\\mathbb{E}\\left[\\exp\\left(X^{2}/K_{3}^{2}\\right)\\right]\\leq e\\); where \\(K_{i},i=1,2,3\\) are positive constants. The sub-gaussian norm of \\(X\\), denoted \\(\\|X\\|_{\\Psi_{2}}\\), is defined to be_ \\[\\|X\\|_{\\Psi_{2}}:=\\sup_{p\\geq 1}p^{-1/2}\\left(\\mathbb{E}|X|^{p}\\right)^{1/p}. \\tag{56}\\] _A random vector \\(X\\in\\mathbb{R}^{F}\\) is called sub-gaussian if \\(X^{T}\\mathbf{x}\\) is a sub-gaussian random variable for any constant vector \\(\\mathbf{x}\\in\\mathbb{R}^{F}\\). The sub-gaussian norm of \\(X\\) is defined as_ \\[\\|X\\|_{\\Psi_{2}}=\\sup_{\\|\\mathbf{x}\\|_{2}=1}\\|X^{T}\\mathbf{x}\\|_{\\Psi_{2}}. \\tag{57}\\] **Lemma 12**: _A random variable \\(X\\) is sub-gaussian if and only if \\(X^{2}\\) is sub-exponential. Moreover, it holds that_ \\[\\|X\\|_{\\Psi_{2}}^{2}\\leq\\|X\\|_{\\Psi_{1}}\\leq 2\\|X\\|_{\\Psi_{2}}^{2}. \\tag{58}\\] _See Vershynin [30] for the proof._ **Lemma 13** (Covariance estimation of sub-gaussian distributions [30]): _Consider a sub-gaussian distribution \\(\\mathbb{P}\\) in \\(\\mathbb{R}^{F}\\) with covariance matrix \\(\\Sigma\\). Let \\(\\epsilon\\in(0,1)\\) and \\(t\\geq 1\\). 
If \\(N\\geq c(t/\\epsilon)^{2}F\\), then with probability at least \\(1-2\\exp(-t^{2}F)\\),_ \\[\\|\\Sigma_{N}-\\Sigma\\|_{2}\\leq\\epsilon, \\tag{59}\\] _where \\(\\|\\cdot\\|_{2}=\\sigma_{1}(\\cdot)\\) is the spectral norm, \\(\\Sigma_{N}:=\\sum_{n=1}^{N}X_{n}X_{n}^{T}/N\\) is the empirical covariance matrix, and \\(the corresponding data points in \\(C_{\\ell_{k}}^{0}\\). In addition, denoting \\(N_{k}\\) as the number of data points in \\(\\mathbf{V}_{k}\\), we have \\[\\frac{\\sigma_{1}^{2}\\left(\\mathbf{V}_{k}\\right)}{N_{k}}=\\frac{\\sigma_{1}^{2} \\left(\\mathbf{V}_{k}^{T}\\right)}{N_{k}}=\\lambda_{\\max}\\left(\\frac{\\mathbf{V}_{k }\\mathbf{V}_{k}^{T}}{N_{k}}\\right) \\tag{63}\\] where \\(\\lambda_{\\max}\\left(\\mathbf{V}_{k}\\mathbf{V}_{k}^{T}/N_{k}\\right)\\) represents the largest eigenvalue of \\(\\mathbf{V}_{k}\\mathbf{V}_{k}^{T}/N_{k}\\). Take any \\(\\mathbf{v}\\in\\mathbf{V}_{k}\\). Note that \\(\\mathbf{v}\\) can be written as \\(\\mathbf{v}=\\mathbf{P}_{k}\\mathbf{x}\\) with \\(\\mathbf{x}\\) being generated from \\(C_{\\ell_{k}}^{0}\\). Now, for all unit vectors \\(\\mathbf{z}\\in\\mathbb{R}^{F}\\), we have \\[\\|\\mathbf{v}\\|_{\\Psi_{2}} =\\|\\mathbf{P}_{k}\\mathbf{x}\\|_{\\Psi_{2}}=\\|\\mathbf{x}\\|_{\\Psi_{2}} \\tag{64}\\] \\[=\\sup_{\\|\\mathbf{z}\\|_{2}=1}\\sup_{p\\geq 1}p^{-1/2}\\left( \\mathbb{E}\\left(|\\mathbf{x}^{T}\\mathbf{z}|^{p}\\right)\\right)^{1/p}\\] (65) \\[\\leq\\sup_{p\\geq 1}p^{-1/2}\\mathbb{E}\\left(\\|\\mathbf{x}\\|_{2}^{p }\\right)^{1/p}\\] (66) \\[=\\|\\|\\mathbf{x}\\|_{2}\\|_{\\Psi_{2}}\\leq\\sqrt{\\|\\|\\mathbf{x}\\|_{2} ^{2}\\|_{\\Psi_{1}}}\\leq 1/\\sqrt{\\lambda_{k}}. \\tag{67}\\] That is, all columns are sampled from a sub-gaussian distribution. By Lemma 11, \\[\\mathbb{E}\\left(\\mathbf{v}\\mathbf{v}^{T}\\right)=\\mathbb{E}\\left(\\mathbf{P}_{ k}\\mathbf{x}\\mathbf{x}^{T}\\mathbf{P}_{k}^{T}\\right)=\\mathbf{P}_{k}\\mathbf{D}_{f_{k}} \\mathbf{P}_{k}^{T}/\\lambda_{k}. \\tag{68}\\] By Lemma 13, we have for \\(\\epsilon\\in(0,1),t\\geq 1\\) and if \\(N_{k}\\geq\\xi_{k}(t/\\epsilon)^{2}F\\) (\\(\\xi_{k}\\) is a positive constant depending on \\(\\lambda_{k}\\)), with probability at least \\(1-2\\exp(-t^{2}F)\\), \\[\\left|\\lambda_{\\max}\\left(\\mathbf{V}_{k}\\mathbf{V}_{k}^{T}/N_{k} \\right)-\\lambda_{\\max}\\left(\\mathbb{E}\\left(\\mathbf{v}\\mathbf{v}^{T}\\right) \\right)\\right|\\] \\[\\leq\\|\\mathbf{V}_{k}\\mathbf{V}_{k}^{T}/N_{k}-\\mathbb{E}\\left( \\mathbf{v}\\mathbf{v}^{T}\\right)\\|_{2}\\leq\\epsilon, \\tag{69}\\] where the first inequality follows from Lemma 2. Because \\(\\lambda_{\\max}\\left(\\mathbb{E}\\left(\\mathbf{v}\\mathbf{v}^{T}\\right)\\right)=g( \\alpha_{k})/\\lambda_{k}\\), we can obtain that with probability at least \\(1-4K\\exp(-t^{2}F)\\), \\[\\left|\\sum_{k=1}^{K}\\frac{\\sigma_{1}^{2}\\left(\\mathbf{V}_{k} \\right)}{N}-\\sum_{k=1}^{K}\\frac{g(\\alpha_{k})}{K\\lambda_{k}}\\right|\\] \\[=\\left|\\sum_{k=1}^{K}\\lambda_{\\max}\\left(\\frac{\\mathbf{V}_{k} \\mathbf{V}_{k}^{T}}{N_{k}}\\right)\\frac{N_{k}}{N}-\\sum_{k=1}^{K}\\frac{g(\\alpha _{k})}{K\\lambda_{k}}\\right| \\tag{70}\\] \\[\\leq 2K\\epsilon, \\tag{71}\\] where the final inequality follows from the triangle inequality and (69). From the proof of Theorem 6, we know that with probability at least \\(1-2\\exp(-c_{1}N\\epsilon^{2})\\), \\[\\left|\\frac{\\|\\mathbf{V}\\|_{\\mathrm{F}}^{2}}{N}-\\frac{\\sum_{k=1}^{K}1/\\lambda _{k}}{K}\\right|\\leq\\epsilon. 
\\tag{72}\\] Taking \\(N\\) to be sufficiently large such that \\(t^{2}F\\leq c_{1}N\\epsilon^{2}\\), we have with probability at least \\(1-6K\\exp(-t^{2}F)\\), \\[\\frac{\\sum_{k=1}^{K}g(\\alpha_{k})/\\lambda_{k}}{\\sum_{k=1}^{K}1/ \\lambda_{k}}-c_{2}\\epsilon \\leq\\frac{\\sum_{k=1}^{K}\\sigma_{1}^{2}\\left(\\mathbf{V}_{k}\\right)} {\\sum_{k=1}^{K}\\|\\mathbf{V}_{k}\\|_{\\mathrm{F}}^{2}} \\tag{73}\\] \\[\\leq\\frac{\\sum_{k=1}^{K}g(\\alpha_{k})/\\lambda_{k}}{\\sum_{k=1}^{K} 1/\\lambda_{k}}+c_{3}\\epsilon. \\tag{74}\\] Note that \\(g(\\alpha_{k})+f(\\alpha_{k})=1\\). As a result, we have \\[\\frac{\\sum_{k=1}^{K}f(\\alpha_{k})/\\lambda_{k}}{\\sum_{k=1}^{K}1/ \\lambda_{k}}-c_{3}\\epsilon \\leq\\frac{\\|\\mathbf{V}-\\mathbf{W}^{*}\\mathbf{H}^{*}\\|_{\\mathrm{F}}^ {2}}{\\|\\mathbf{V}\\|_{\\mathrm{F}}^{2}} \\tag{75}\\] \\[\\leq\\frac{\\sum_{k=1}^{K}f(\\alpha_{k})/\\lambda_{k}}{\\sum_{k=1}^{K} 1/\\lambda_{k}}+c_{2}\\epsilon. \\tag{76}\\] Thus, with probability at least \\(1-6K\\exp(-t^{2}F)\\), we have \\[\\left|\\frac{\\|\\mathbf{V}-\\mathbf{W}^{*}\\mathbf{H}^{*}\\|_{\\mathrm{F}}}{\\| \\mathbf{V}\\|_{\\mathrm{F}}}-\\sqrt{\\frac{\\sum_{k=1}^{K}f(\\alpha_{k})/\\lambda_{k} }{\\sum_{k=1}^{K}1/\\lambda_{k}}}\\right|\\leq c_{4}\\epsilon, \\tag{77}\\] where \\(c_{4}\\) depends on \\(K\\) and \\(\\{(\\alpha_{k},\\lambda_{k}):k\\in[K]\\}\\). ## Appendix B Proof of Theorem 9 We first state and prove the following lemma. **Lemma 14**: _Suppose data matrix \\(\\mathbf{V}\\) is generated as in Theorem 6 with all the circular cones being contained in \\(\\mathcal{P}\\), then the expectation of the covariance matrix \\(\\mathbf{v}_{1}\\mathbf{v}_{1}^{T}\\) is_ \\[\\mathbb{E}\\left[\\mathbf{v}_{1}\\mathbf{v}_{1}^{T}\\right]=\\frac{\\sum _{k=1}^{K}f(\\alpha_{k})/\\lambda_{k}}{K(F-1)}\\mathbf{I}\\] \\[\\quad+\\frac{1}{K}\\sum_{k=1}^{K}\\frac{g(\\alpha_{k})-f(\\alpha_{k})/(F -1)}{\\lambda_{k}}\\mathbf{u}_{k}\\mathbf{u}_{k}^{T}, \\tag{78}\\] _where \\(\\mathbf{v}_{1}\\) denotes the first column of \\(\\mathbf{V}\\)._ Proof:: From the proof in Lemma 11, we know if we always take \\(\\mathbf{e}_{1}\\) to be the original vector for the Householder transformation, the corresponding Householder matrix for the \\(k\\)-th circular cone \\(\\mathcal{C}_{k}\\) is given by (47) and we have \\[\\mathbb{E}\\left[\\mathbf{v}_{1}\\mathbf{v}_{1}^{T}\\right]=\\frac{1}{K}\\sum_{k=1}^{K }\\frac{\\mathbf{P}_{k}\\mathbf{D}_{k}\\mathbf{P}_{k}^{T}}{\\lambda_{k}}, \\tag{79}\\] where \\(\\mathbf{D}_{k}\\) is a diagonal matrix with the first diagonal entry being \\(g(\\alpha_{k}):=\\frac{1}{2}+\\frac{\\sin(2\\alpha_{k})}{4\\alpha_{k}}\\) and other diagonal entries are \\[\\frac{f(\\alpha_{k})}{F-1}=\\frac{\\frac{1}{2}-\\frac{\\sin(2\\alpha_{k})}{4\\alpha_{k}}}{ F-1}. \\tag{80}\\] We simplify \\(\\mathbf{P}_{k}\\mathbf{D}_{k}\\mathbf{P}_{k}^{T}\\) using the property that all the \\(F-1\\) diagonal entries of \\(\\mathbf{D}_{k}\\) are the same. Namely, we can write \\[\\mathbf{P}_{k} =\\mathbf{I}-2\\mathbf{z}_{k}\\mathbf{z}_{k}^{T}=\\mathbf{I}-\\frac {(\\mathbf{e}_{1}-\\mathbf{u}_{k})(\\mathbf{e}_{1}-\\mathbf{u}_{k})^{T}}{1-u_{k}(1)} \\tag{81}\\] \\[=\\begin{bmatrix}u_{k}(1)&u_{k}(2)&\\cdots&u_{k}(F)\\\\ u_{k}(2)&1-\\frac{u_{k}(2)^{2}}{1-u_{k}(1)}&\\cdots&-\\frac{u_{k}(2)u_{k}(F)}{1-u_{k }(1)}\\\\ \\vdots&\\vdots&\\ddots&\\vdots\\\\ u_{k}(F)&-\\frac{u_{k}(F)u_{k}(2)}{1-u_{k}(1)}&\\cdots&1-\\frac{u_{k}(F)^{2 }}{1-u_{k}(1)}\\end{bmatrix}. 
\\tag{82}\\] Note that \\(\\mathbf{P}_{k}=\\left[\\mathbf{p}_{1}^{k},\\mathbf{Thus, we obtain (78) as desired. We are now ready to prove Theorem 9. Proof:: Define \\[a :=\\frac{\\sum_{k=1}^{K}f(\\alpha)/\\lambda}{K(F-1)}=\\frac{f(\\alpha)/ \\lambda}{F-1},\\ \\ \\text{and} \\tag{87}\\] \\[b :=\\frac{g(\\alpha)-f(\\alpha)/(F-1)}{K\\lambda}. \\tag{88}\\] By exploiting the assumption that all the \\(\\alpha_{k}\\)'s and \\(\\lambda_{k}\\)'s are the same, we find that \\[\\mathbb{E}\\left[\\mathbf{v}_{1}\\mathbf{v}_{1}^{T}\\right]=a\\mathbf{I}+b\\sum_{k= 1}^{K}\\mathbf{u}_{k}\\mathbf{u}_{k}^{T}. \\tag{89}\\] Let \\(\\mathbf{U}=[\\mathbf{u}_{1},\\mathbf{u}_{2_{i}}\\ldots,\\mathbf{u}_{K}]\\). We only need to consider the eigenvalues of \\(\\sum_{k=1}^{K}\\mathbf{u}_{k}\\mathbf{u}_{k}^{T}=\\mathbf{U}\\mathbf{U}^{T}\\). The matrix \\(\\mathbf{U}^{T}\\mathbf{U}\\) has same non-zero eigenvalues as that of \\(\\mathbf{U}\\mathbf{U}^{T}\\). Furthermore, \\[\\mathbf{U}^{T}\\mathbf{U} =\\begin{bmatrix}1&\\cos\\beta&\\cdots&\\cos\\beta\\\\ \\cos\\beta&1&\\cdots&\\cos\\beta\\\\ \\vdots&\\vdots&\\ddots&\\vdots\\\\ \\cos\\beta&\\cos\\beta&\\cdots&1\\end{bmatrix} \\tag{90}\\] \\[=(\\cos\\beta)\\mathbf{e}\\mathbf{e}^{T}+(1-\\cos\\beta)\\mathbf{I} \\tag{91}\\] where \\(\\mathbf{e}\\in\\mathbb{R}^{K}\\) is the vector with all entries being 1. Therefore, the eigenvalues of \\(\\mathbf{U}^{T}\\mathbf{U}\\) are \\(1+(K-1)\\cos\\beta,1-\\cos\\beta,\\ldots,1-\\cos\\beta\\). Thus, the vector of eigenvalues of \\(\\mathbb{E}\\left[\\mathbf{v}_{1}\\mathbf{v}_{1}^{T}\\right]\\) is \\([a+b(1+(K-1)\\cos\\beta),a+b(1-\\cos\\beta),\\ldots,a+b(1-\\cos\\beta),a,a,\\ldots,a]\\). By Lemmas 2 and 13, we deduce that for any \\(t\\geq 1\\) and a sufficiently small \\(\\epsilon>0\\), such that \\[\\frac{a+\\epsilon}{a-\\epsilon}<\\frac{a+b(1-\\cos\\beta)-\\epsilon}{a+\\epsilon}, \\tag{92}\\] then if \\(N\\geq c(t/\\epsilon)^{2}F\\) (where \\(c>0\\) depends only on \\(\\lambda\\), \\(\\alpha\\), and \\(\\beta\\)), then with probability at least \\(1-2\\left(K_{\\text{max}}-K_{\\text{min}}+1\\right)\\exp\\left(-t^{2}F\\right)\\), Eqn. (46) holds. ## Appendix C Invariance of \\(\\left(\\mathbf{W}^{*},\\mathbf{H}^{*}\\right)\\) **Lemma 15**: _The \\(\\left(\\mathbf{W}^{*},\\mathbf{H}^{*}\\right)\\) pair generated by Algorithm 2 remains unchanged in the iterations of standard multiplicative update algorithm [3] for NMF._ Proof:: There is at most one non-zero entry in each column of \\(\\mathbf{H}^{*}\\). When updating \\(\\mathbf{H}^{*}\\), the zero entries remain zero. For the non-zero entries of \\(\\mathbf{H}^{*}\\), we consider partitioning \\(\\mathbf{V}\\) into \\(K\\) submatrices corresponding to the \\(K\\) circular cones. Clearly, \\[\\|\\mathbf{V}-\\mathbf{W}^{*}\\mathbf{H}^{*}\\|_{\\mathrm{F}}^{2}=\\sum_{k=1}^{K}\\| \\mathbf{V}_{k}-\\mathbf{w}_{k}\\mathbf{h}_{k}^{T}\\|_{\\mathrm{F}}^{2}, \\tag{93}\\] where \\(\\mathbf{V}_{k}\\in\\mathbb{R}^{F\\times|\\mathcal{I}_{k}|}\\) and \\(\\mathbf{h}_{k}\\in\\mathbb{R}_{+}^{|\\mathcal{I}_{k}|}\\). Because of the property of rank-one NMF (Lemma 3), for any \\(k\\), when \\(\\mathbf{w}_{k}\\) is fixed, \\(\\mathbf{h}_{k}\\in\\mathbb{R}_{+}^{|\\mathcal{I}_{k}|}\\) minimizes \\(\\|\\mathbf{V}_{k}-\\mathbf{w}_{k}\\mathbf{h}^{T}\\|_{\\mathrm{F}}^{2}\\). Also, for the standard multiplicative update algorithm, the objective function is non-increasing for each update [3]. Thus \\(\\mathbf{h}_{k}\\) for each \\(k\\in[K]\\) (i.e., \\(\\mathbf{H}^{*}\\)) will remain unchanged. 
A completely symmetric argument holds for \\(\\mathbf{W}^{*}\\). ### Acknowledgements The authors would like to thank the three anonymous reviewers for their excellent and detailed comments that helped to improve the presentation of the results in the paper. ## References * [1] A. Cichocki, R. Zdunek, A. Phan, and S. Amari. _Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multi-Way Data Analysis and Blind Source Separation_. John Wiley & Sons, 2009. * [2] I. Buciu. Non-negative matrix factorization, a new tool for feature extraction: theory and applications. _Int. J. Comput. Commun._, 3:67-74, 2008. * [3] D. D. Lee and H. S. Seung. Algorithms for non-negative matrix factorization. In _Proc. NIPS_, pages 556-562, 2000. * [4] M. Chu, F. Diele, R. Plemmons, and S. Ragni. Optimality, computation, and interpretation of nonnegative matrix factorizations. _SIAM J. Matrix Anal._, 2004. * [5] H. Kim and H. Park. Nonnegative matrix factorization based on alternating nonnegativity constrained least squares and active set method. _SIAM J. Matrix Anal. A._, 30(2):713-730, 2008. * [6] J. Kim and H. Park. Toward faster nonnegative matrix factorization: A new algorithm and comparisons. In _Proc. ICDM_, pages 353-362, Dec 2008. * [7] J. Kim and H. Park. Fast nonnegative matrix factorization: An active-set-like method and comparisons. _SIAM J. Sci. Comput._, 33(6):3261-3281, 2011. * [8] A. Cichocki, R. Zdunek, and S. I. Amari. Hierarchical ALS algorithms for nonnegative matrix and 3d tensor factorization. In _Proc. ICA_, pages 169-176, Sep 2007. * [9] N.-D. Ho, P. Van Dooren, and V. D. Blondel. _Descent methods for Nonnegative Matrix Factorization_. Springer Netherlands, 2011. * [10] S. A. Vavasis. On the complexity of nonnegative matrix factorization. _SIAM J. Optim._, 20:1364-1377, 2009. * [11] C.-J. Lin. Projected gradient methods for nonnegative matrix factorization. _Neural Comput._, 19(10):2756-2779, Oct. 2007. * [12] C.-J. Lin. On the convergence of multiplicative update algorithms for nonnegative matrix factorization. _IEEE Trans. Neural Netw._, 18(6):1589-1596, Nov 2007. * [13] D. Donoho and V. Stodden. When does non-negative matrix factorization give correct decomposition into parts? In _Proc. NIPS_, pages 1141-1148. MIT Press, 2004. * [14] S. Arora, R. Ge, R. Kannan, and A. Moitra. Computing a nonnegative matrix factorization-provably. In _Proc. STOC_, pages 145-162, May 2012. * [15] N. Gillis and S. A. Vavasis. Fast and robust recursive algorithms for separable nonnegative matrix factorization. _IEEE Trans. Pattern Anal. Mach. Intell._, 36(4):698-714, 2014. * [16] V. Bittorl, B. Recht, C. Re, and J. A. Tropp. Factoring nonnegative matrices with linear programs. In _Proc. NIPS_, pages 1214-1222, 2012. * [17] A. Kumar, V. Sindhwani, and P. Kambadur. Fast conical hull algorithms for near-separable non-negative matrix factorization. In _Proc. ICML_, pages 231-239, Jun 2013. * [18] A. Benson, J. Lee, B. Rajwa, and D. Gleich. Scalable methods for nonnegative matrix factorizations of near-separable tall-and-skinny matrice. In _Proc. NIPS_, pages 945-953, 2014. * [19] N. Gillis and R. Luce. Robust near-separable nonnegative matrix factorization using linear optimization. _J. Mach. Learn. Res._, 15(1):1249-1280, 2014. * [20] E. F. Gonzalez. _Efficient alternating gradient-type algorithms for the approximate non-negative matrix factorization problem_. PhD thesis, Rice University, Houston, Texas, 2009. * [21] M. Ackermann and S. Ben-David. Clusterability: A theoretical study. 
In _Proc. AISTATS_, volume 5, pages 1-8, 2009. * [22] S. Wild, J. Curry, and A. Dougherty. Improving non-negative matrix factorizations through structured initialization. _Pattern Recognit._, 37:2217-2232, 2004. * [23] C. Boutsidis and E. Gallopoulos. SVD based initialization: A head start for nonnegative matrix factorization. _Pattern Recognit._, 41:1350-1362, 2008. * [24] I. S. Dhillon and D. S. Modha. Concept decompositions for large sparse text data using clustering. _Mach. Learn._, 42:143-175, 2001. * [25] Y. Xue, C. S. Chen, Y. Chen, and W. S. Chen. Clustering-based initialization for non-negative matrix factorization. _Appl. Math. Comput._, 205:525-536, 2008. * [26] Z. Zheng, J. Yang, and Y. Zhu. Initialization enhancer for non-negative matrix factorization. _Eng. Appl. Artif. Intell._, 20:101-110, 2007. * [27] A. N. Langville, C. D. Meyer, and R. Albright. Initializations for the nonnegative matrix factorization. In _Proc. SIGKDD_, pages 23-26, Aug 2006. * [28] G. H. Golub and C. F. Van Loan. _Matrix computations_. JHU Press, 1989. * [29] B. E. Boser, I. M. Guyon, and V. N. Vapnik. A training algorithm for optimal margin classifiers. In _Proc. COLT_, pages 144-152, Jul 1992. * [30] R. Vershynin. Introduction to the non-asymptotic analysis of random matrices, 2010. arXiv:1011.3027. * [31] V. Y. F. Tan and C. Fevotte. Automatic relevance determination in nonnegative matrix factorization with the \\(\\beta\\)-divergence. _IEEE Trans. on Pattern Anal. Mach. Intell._, 35:1592-1605, 2013. * [32] P. J. Rousseeuw. Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. _J. Comput. Appl. Math._, 20:53-65, 1987. * [33] D. Pelleg and A. Moore. X-means: Extending k-means with efficient estimation of the number of clusters. In _Proc. ICML_, 2000. * [34] R. Tibshirani, G. Walther, and T. Hastie. Estimating the number of clusters in a data set via the gap statistic. _J. R. Stat. Soc. Series B Stat. Methodol._, 63:411-423, 2001. * [35] R. L. Thorndike. Who belongs in the family. _Psychometrika_, pages 267-276, 1953. * [36] R. L. Burden and J. D. Faires. _Numerical Analysis_. Thomson/Brooks/Cole, 8 edition, 2005. * [37] A. Strehl and J. Ghosh. Cluster ensembles-a knowledge reuse framework for combining multiple partitions. _J. Mach. Learn. Res._, pages 583-617, 2002. * [38] G. Salton. _Automatic text processing : the transformation, analysis, and retrieval of information by computer_. Reading: Addison-Wesley, 1989. * [39] C. D. Manning, P. Raghavan, and H. Schutze. _Introduction to information retrieval_. Cambridge University Press, 2008. * [40] Y. Li and A. Ngom. The non-negative matrix factorization toolbox for biological data mining. _Source Code Biol. Med._, 8, 2013. * [41] A. Jung, Y. C. Eldar, and N. Gortz. On the minimax risk of dictionary learning. _IEEE Trans. Inform. Theory_, 62(3):1501-1515, 2016. * [42] R. Zhao and V. Y. F. Tan. Online nonnegative matrix factorization with outliers. _IEEE Trans. Signal Process._, 65(3):555-570, 2017. * [43] R. Zhao, V. Y. F. Tan, and H. Xu. Online nonnegative matrix factorization with general divergences. In _Proc. AISTATS_, 2017. arXiv:1608.00075. * [44] D. Arthur and S. Vassilvitskii. \\(k\\)-means++: The advantages of careful seeding. In _Proc. SODA_, pages 1027-1035, 2007. \\begin{tabular}{c c} & Zhaoqiang Liu was born in China in 1991. He is currently a Ph.D. candidate in the Department of Mathematics at National University of Singapore (NUS). He received the B.Sc. 
degree in Mathematics from the Department of Mathematical Sciences at Tsinghua University (THU) in 2013. His research interests are in machine learning, including unsupervised learning such as matrix factorization and deep learning. \\\\ \\end{tabular} \\begin{tabular}{c c} & Vincent Y. F. Tan (S'07-M'11-SM'15) was born in Singapore in 1981. He is currently an Assistant Professor in the Department of Electrical and Computer Engineering (ECE) and the Department of Mathematics at the National University of Singapore (NUS). He received the B.A. and M.Eng. degrees in Electrical and Information Sciences from Cambridge University in 2005 and the Ph.D. degree in Electrical Engineering and Computer Science (EECS) from the Massachusetts Institute of Technology in 2011. He was a postdoctoral researcher in the Department of ECE at the University of Wisconsin-Madison and a research scientist at the Institute for Infocomm Research (I\\({}^{2}\\)R), A\\({}^{*}\\)STAR, Singapore. His research interests include network information theory, machine learning, and statistical signal processing. Dr. Tan received the MIT EECS Jin-Au Kong outstanding doctoral thesis prize in 2011 and the NUS Young Investigator Award in 2014. He has authored a research monograph on _"Asymptotic Estimates in Information Theory with Non-Vanishing Error Probabilities"_ in the Foundations and Trends in Communications and Information Theory Series (NOW Publishers). He is currently an Editor of the IEEE Transactions on Communications and the IEEE Transactions on Green Communications and Networking. \\\\ \\end{tabular}
We propose a geometric assumption on nonnegative data matrices such that under this assumption, we are able to provide upper bounds (both deterministic and probabilistic) on the relative error of nonnegative matrix factorization (NMF). The algorithm we propose first uses the geometric assumption to obtain an exact clustering of the columns of the data matrix; subsequently, it employs several rank-one NMFs to obtain the final decomposition. When applied to data matrices generated from our statistical model, we observe that our proposed algorithm produces factor matrices with comparable relative errors vis-a-vis classical NMF algorithms but with much faster speeds. On face image and hyperspectral imaging datasets, we demonstrate that our algorithm provides an excellent initialization for applying other NMF algorithms at a low computational cost. Finally, we show on face and text datasets that the combinations of our algorithm and several classical NMF algorithms outperform other algorithms in terms of clustering performance.

_Index terms_: Nonnegative matrix factorization, Relative error bound, Clusterability, Separability, Initialization, Model selection
# Delay Bounds for Multiclass FIFO Yuming Jiang Norwegian University of Science and Technology Vishal Misra Columbia University [email protected] [email protected] ## 1 Introduction Multiclass FIFO refers to the scheduling discipline where customers are served in the first-in-first-out (FIFO) manner and the services required by different classes may differ. Compared to single-class FIFO where all customers typically have the same service requirements, multiclass FIFO is more general, providing a more natural way to model the system for scenarios where the service requirements of customers from different classes may differ. One example is downlink-sharing in wireless networks, where a wireless base station, shared by multiple users, sends packets to them in the FIFO manner. Since the characteristics of the wireless channel seen by these users may differ, the data rates to them may also be different. Another example is input-queueing on a switch, where packets are FIFO-queued at the input port before being forwarded to the output ports that pick packets from the FIFO queue and serve at possibly different rates. A third example is video conferencing, where the video part stream and the audio part stream have highly different characteristics. However, the two streams are synchronized when they are generated. In addition, inside the network, they may share the same FIFO queues, e.g. in network interface cards and in switches. Surprisingly, while there are a lot of results for single-class FIFO, few such results exist for multiclass FIFO. The existing results for multiclass FIFO are mostly under the classic queueing theory (e.g. [3]). However, those available results are rather limited and their focus has been mainly on the queue stability condition. In the context of input-queueing in packet-switched systems, multiclass FIFO has also been studied (e.g. [4]). However, in these studies, the focus has been on the throughput of the switch, assuming saturated traffic on each input port. None of these studies has focused on the delay performance of multiclass FIFO. In terms of delay bounds for FIFO, available results are almost all for single-class FIFO. Among them are the two well-known delay bound results: one by Kingman [1] for the stochastic case \\(GI/GI/1\\), and one by Cruz [2] for the deterministic case \\(D/D/1\\). For other or more general arrival and service processes, various delay bounds have also been derived, mainly in the context of network calculus (e.g. [5][6][7]). In particular, for some multiclass settings studied in this work, analytical bounds on their single-class counterparts can be found in the literature (e.g. [8][9]). With those results, one might expect that they could be readily or easily extended to multi-class FIFO. Unfortunately, such an extension is surprisingly difficult and a direct extension may result in rather limited applicability of the obtained results. This work is devoted, as an initial try, to bridging the gap. The focus is on finding bounds (or approximations) for the tail distribution of delay or waiting time in multiclass FIFO. The rest is organized as follows. In Section 2, the difficulties in extending or applying single-class FIFO delay bound results to multiclass FIFO are first discussed, using the two well-known delay bounds as examples. Then in Section 3, we prove delay bounds for multiclass FIFO, considering both deterministic and stochastic cases. 
Specifically, delay analysis is performed and delay bounds are derived for \\(D/D/1\\), \\(GI/GI/1\\) and \\(G/G/1\\), all under multiclass FIFO. In Section 4, examples are further provided for several basic settings to demonstrate the obtained bounds in more explicit forms and compare the obtained bounds with simulation results. ## 2 Difficulties Below we use the Kingman's bound and the Cruz's bound as examples to discuss the difficulties or limitations in applying the single-class delay bounds to multiclass FIFO. ### Kingman's Bound For a \\(GI/GI/1\\) queue, the following delay bound on waiting time by Kingman [1] is well-known: \\[P\\{W\\geq\\tau\\}\\leq e^{-\\vartheta\\tau} \\tag{1}\\] where \\(W\\) denotes the steady-state waiting time in queue, \\[\\vartheta=\\sup\\{\\theta>0:M_{Y(1)-X(1)}(\\theta)<1\\}\\] with \\(X(1)\\) being the interarrival time and \\(Y(1)\\) the servicetime of a customer, and \\(M_{Z}(\\theta)\\) denotes the moment generating function of random variable \\(Z\\), i.e., \\(M_{Z}(\\theta)\\equiv E[e^{\\theta Z}]\\). The i.i.d. condition on interarrival times and the i.i.d. condition on service times imply that the bound (1) is mostly applicable only for single-class FIFO. Extending it or directly applying it to multiclass FIFO can be difficult. For example, for a multiclass FIFO system, even under that the two i.i.d. conditions hold for each class, the applicability of (1) may still be limited due to three main reasons: * For the aggregate of all classes, the two i.i.d. conditions for the aggregate may not hold. For example, for a two-class system with one class being \\(M/M\\) and the other being \\(D/D\\), the i.i.d. conditions, particularly the _identical_ part, for the aggregate do not hold. * There are cases where, even though within each class, customers are independent, there is dependence between classes. For example, in video conferencing, the video stream and the audio stream are synchronized when generated. As a result, the _independent_ part of the i.i.d. condition, for the aggregate, may not be met. * For cases where the two i.i.d. conditions for the aggregate also hold, it can still be _challenging_ to find the probability distribution or characteristic functions of the interarrival times and the service times of the aggregate class of customers, which are needed by (1). We remark that the above reasons also make other related single-class results (e.g. [8][9]), which rely on similar i.i.d. conditions, difficult to apply to multiclass FIFO. ### Cruz's Bound For a \\(D/D/1\\) system in communication networks, the following delay bound was initially shown by Cruz [2] \\[D\\leq\\frac{\\sigma}{C} \\tag{2}\\] where \\(D\\) denotes the system delay of any packet, \\(C\\) (in bps) the service rate of the system, and \\(\\sigma\\) (in bits) the traffic burstiness parameter. The conditions of the Cruz's delay bound are \\(r\\leq C\\) and that the input traffic during any time interval \\([s,s+t]\\), denoted as \\(A(s,s+t)\\), for all \\(s,t\\geq 0\\), is upper-constrained by \\(A(s,s+t)\\leq r\\cdot t+\\sigma\\). Unlike the Kingman's bound, the Cruz's bound may be readily used for multiclass FIFO. To illustrate this, consider a FIFO queue with \\(N\\) classes, where the traffic of each class \\(n\\) is upper-constrained by \\(A_{n}(s,s+t)\\leq r_{n}\\cdot t+\\sigma_{n}\\) and the service rate of the class is \\(C_{n}\\) with \\(r_{n}\\leq C_{n}\\). Without difficulty, (2) can be extended to this multiclass queue. 
Specifically, for the aggregate, there holds \\(A(s,s+t)\\equiv\\sum_{n}A_{n}(s,s+t)\\leq\\sum_{n}r_{n}\\cdot t+\\sum_{n}\\sigma_{n}\\). Then, if there holds \\[\\sum_{n}r_{n}\\leq\\min_{n}\\{C_{n}\\},\\] the delay of any packet is upper-bounded by \\[D\\leq\\frac{\\sum_{n}\\sigma_{n}}{\\min_{n}\\{C_{n}\\}}.\\] Unfortunately, the condition \\(\\sum_{n}r_{n}\\leq\\min_{n}\\{C_{n}\\}\\) can be too restrictive, particularly when the service rates differ much. As a result, the bound can be highly loose or even it cannot be used due to that the condition to use the bound is not met, as to be exemplified later in Section 4. ## 3 Main Results ### System Model and Notation Consider a multiclass queueing system. There is only one queue that is initially empty. Customers are served in the FIFO manner. If multiple customers arrive at the same time, the tie is broken arbitrarily. The size of the queue is unlimited. The serving part of the system is characterized by a work-conserving server. There are \\(N(\\geq 1)\\) classes of customers (e.g. packets in a communication network). Let \\(p_{n}^{j}\\) denote the \\(j\\)th customer of class \\(n\\), with \\(n\\in[1,N]\\) and \\(j=1,2,\\cdots\\). Each customer \\(p_{n}^{j}\\) is characterized by a traffic parameter \\(l_{n}^{j}\\) that denotes the amount of traffic (in the number counted on a defined traffic unit, e.g. bits in the communication network setting) carried by the customer. To customers of class \\(n\\), the service rate (in traffic units per second, e.g. bps - bits per second) of the server is constant, denoted by \\(C_{n}\\). For each class \\(n\\), let \\(A_{n}(0,t)\\equiv A(t)\\) denote the amount of traffic (in traffic units, e.g. bits) that arrives within the time period \\([0,t)\\), and \\(A_{n}(s,t)\\equiv A_{n}(t)-A_{n}(s)\\) the traffic in \\([s,t)\\). For the aggregate traffic of all classes, \\(A(s,t)\\equiv\\sum_{n}A_{n}(s,t)\\) and \\(A(t)\\) are similarly defined. Also for each class \\(n\\), we use \\(\\lambda_{n}\\) to denote the average customer arrival rate and \\(\\mu_{n}\\) the average customer service rate, and define \\(\\rho_{n}=\\frac{\\lambda_{n}}{\\mu_{n}}\\). For any customer \\(p_{n}^{j}\\), let \\(a_{n}^{j}\\) and \\(d_{n}^{j}\\) respectively denote its arrival time and departure time. By convention, we let \\(a_{n}^{0}=d_{n}^{j}=0\\). The delay in system of the customer is then \\(D_{n}^{j}=d_{n}^{j}-a_{n}^{j}\\), and the waiting time in queue is \\(W_{n}^{j}=D_{n}^{j}-l_{n}^{j}/C_{n}\\). In addition, corresponding to the notation used in Section 2.1, we use \\(X_{n}(j)\\) to denote the interarrival time between \\(p_{n}^{j-1}\\) and \\(p_{n}^{j}\\), and \\(Y_{n}(j)\\) the service time of \\(p_{n}^{j}\\). By definition, \\(X_{n}(j)=a_{n}^{j}-a_{n}^{j-1}\\), for \\(j=1,2,\\dots\\), and \\(Y_{n}(j)=l_{n}^{j}/C_{n}\\). In this paper, we assume that for each class \\(n\\), the processes \\(X_{n}(j)\\) and \\(Y_{n}(j)\\) are both stationary. Then by definition, we can also write \\(\\lambda_{n}=1/E[X_{n}(1)]\\), \\(\\mu_{n}=1/E[Y_{n}(1)]\\), and \\(\\rho_{n}=E[X_{n}(1)]^{-1}E[Y_{n}(1)]\\). 
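As a concrete companion to this notation, the following sketch (an illustrative Python implementation, not taken from the paper; the function name and the example parameters are choices of this sketch) simulates a multiclass FIFO queue from the arrival times, traffic amounts and per-class service rates, using the standard FIFO recursion that is formalized as (3) in the next paragraph.

```python
import numpy as np

def simulate_multiclass_fifo(arrivals, sizes, classes, rates):
    """Departure times, delays and waiting times in a multiclass FIFO queue.

    arrivals : arrival times a^j (seconds), one entry per customer
    sizes    : traffic amounts l^j (e.g. bits)
    classes  : class index n(j) of each customer, in {0, ..., N-1}
    rates    : per-class service rates C_n (e.g. bits per second)
    """
    arrivals, sizes = np.asarray(arrivals), np.asarray(sizes)
    classes, rates = np.asarray(classes), np.asarray(rates)
    order = np.argsort(arrivals, kind="stable")    # FIFO: serve in arrival order
    departures = np.empty(len(arrivals))
    d_prev = 0.0
    for j in order:
        service = sizes[j] / rates[classes[j]]     # l^j / C^j
        d_prev = max(arrivals[j], d_prev) + service
        departures[j] = d_prev
    delays = departures - arrivals                 # D^j
    waits = delays - sizes / rates[classes]        # W^j = D^j - l^j / C^j
    return departures, delays, waits

# Small check with two periodic classes (the Case 1 parameters of Section 4).
a1 = np.arange(1, 101) * 1e-4                      # class 1: period 0.1 ms
a2 = np.arange(1, 11) * 1e-3                       # class 2: period 1 ms
arrivals = np.concatenate([a1, a2])
sizes = np.concatenate([np.full(100, 800.0), np.full(10, 10000.0)])  # bits
classes = np.concatenate([np.zeros(100, int), np.ones(10, int)])
rates = np.array([20e6, 100e6])                    # C_1, C_2 in bps
_, delays, _ = simulate_multiclass_fifo(arrivals, sizes, classes, rates)
print(delays.max())  # about 1.4e-4 s = l_1/C_1 + l_2/C_2 for these parameters
```

The same routine, fed with Poisson arrivals and random traffic amounts, can be used to produce empirical delay and waiting-time distributions of the kind reported in Section 4.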
Like single-class FIFO, the dynamics of the multiclass FIFO system are also described by, for all \\(j=1,2,\\dots\\), \\[d^{j}=\\max(a^{j},d^{j-1})+l^{j}/C^{j} \\tag{3}\\] where \\(j=1,2,\\dots\\) denotes the aggregate sequence of all customers ordered according to their arrival times, and \\(p^{j},a^{j},d^{j},l^{j}\\) and \\(C^{j}\\) respectively denote the \\(j\\)th customer in this ordered aggregate sequence, its arrival time, departure time, carried traffic amount and received service rate. Similarly, we denote the delay of \\(p^{j}\\) as \\(D^{j}=d^{j}-a^{j}\\), and its waiting time in queue as \\(W^{j}=D^{j}-l^{j}/C^{j}\\). ### Delay Bound for Multiclass \\(D/d/1\\) Suppose that the traffic of each class \\(n\\) is constrained by \\(A_{n}(s,s+t)\\leq r_{n}t+\\sigma_{n}\\) for all \\(s,t\\geq 0\\). For this multiclass \\(D/D/1\\) queue, we have: **Theorem 1**: _If \\(\\sum_{n}\\frac{r_{n}}{C_{n}}\\leq 1\\), the delay of any customer \\(p^{j}\\) is bounded by:_ \\[D^{j}\\leq\\sum_{n}\\frac{\\sigma_{n}}{C_{n}} \\tag{4}\\] _and the delay bound is tight._ For any \\(p^{j}\\), there exists some time \\(t^{0}\\) that starts the busy period where it is in. Note that such a busy period always exists, since in the extreme case, the period is only the service time period of \\(p^{j}\\) and in this case, \\(t^{0}=a^{j}\\). Since the system is work-conserving, it is busy with serving customers in \\([t^{0},d^{j}]\\). So, \\(d^{j}=t^{0}+\\sum_{n=1}^{N}Y_{n}(t^{0},d^{j})\\), where \\(Y_{n}(t^{0},d^{j})\\) denotes the total service time of class \\(n\\) customers that are served in \\([t^{0},d^{j}]\\). Because of FIFO and that the system is empty at \\(t^{0}_{-}\\), \\(Y_{n}(t^{0},d^{j})\\) is hence limited by the amount of traffic that arrives in \\([t^{0},a^{j}]\\), i.e., \\(A(t^{0},a^{j}_{+})\\), where \\(x_{-}\\equiv x-\\epsilon\\) and \\(x_{+}\\equiv x+\\epsilon\\) with \\(\\epsilon\\to 0\\). Specifically, \\(Y_{n}(t^{0},d^{j})\\leq\\frac{A_{n}(t^{0},a^{j}_{+})}{C_{n}}\\), so we then have \\(d^{j}\\leq t^{0}+\\sum_{n=1}^{N}\\frac{A_{n}(t^{0},a^{j}_{+})}{C_{n}}\\). Under the condition \\(\\sum_{n=1}^{N}\\frac{r_{n}}{C_{n}}\\leq 1\\), we then obtain: \\[D^{j} \\leq \\sum_{n=1}^{N}\\frac{A_{n}(t^{0},a^{j}_{+})}{C_{n}}+t^{0}-a^{j}\\] \\[\\leq \\sum_{n=1}^{N}\\frac{r_{n}\\cdot(a^{j}-t^{0})+\\sigma_{n}}{C_{n}}-(a^ {j}-t^{0})\\leq\\sum_{n=1}^{N}\\frac{\\sigma_{n}}{C_{n}}\\] where the second last step is due to the traffic constraint and the last step is from \\(\\sum_{n}\\frac{r_{n}}{C_{n}}\\leq 1\\) and \\(a^{j}\\geq t^{0}\\). Note that, for the system, consider that immediately after time \\(0\\), every traffic class generates a burst with size \\(\\sigma_{n}\\). In this case, the customer in these bursts, which receives service last, will experience delay \\(\\sum_{n=1}^{N}\\frac{\\sigma_{n}}{C_{n}}\\) that equals the delay bound. So, the bound is tight. ### Delay Bounds for Multiclass \\(Gi/gi/1\\) To assist proving results for the ordinary multiclass \\(GI/GI/1\\) system, we first consider a discrete time counterpart of the system, where time is indexed by \\(t=0,1,2,\\dots\\). The length of unit time is \\(\\delta\\). The discrete time system becomes the former by letting \\(\\delta\\to 0\\). In the discrete time system, depending on the length of the unit time \\(\\delta\\), it could happen that multiple customers arrive at the same (discrete) time. Because of this, in addition to waiting time, we introduce the concept of virtual waiting time. 
The virtual waiting time at time \\(t\\) is defined to be the time that a virtual customer, which arrives immediately before time \\(t\\), would experience: _All arrivals at \\(t\\) are excluded in the calculation of the virtual waiting time at \\(t\\)_. More specifically, for any \\(p^{j}\\), the corresponding virtual waiting time \\(V^{j}\\) can be written as: \\[V^{j}\\equiv\\sup_{0\\leq s\\leq a^{j}}\\left[\\sum_{n=1}^{N}\\frac{A_{n}(s,a^{j})}{ C_{n}}-(a^{j}-s)\\right]. \\tag{6}\\] Note that the definition of virtual waiting time also applies to continuous time systems. If in a (continuous time or discrete time) system, there is at most one arrival at a time, then \\(V^{j}\\) equals \\(W^{j}\\), i.e. \\(W^{j}=V^{j}\\). For the discrete-time counterpart of multiclass \\(GI/GI/1\\), we have the following result. Its proof is in the Appendix. **Lemma 1**: _For the discrete time system, if there exists small \\(\\theta>0\\) such that \\(E[e^{\\theta(\\sum_{n=1}^{N}\\frac{A_{n}(1)}{C_{n}}-1)}]\\leq 1\\), then, for any \\(p^{j}\\) and for all such \\(\\theta\\), the virtual waiting time at \\(a^{j}\\) and the delay of the customer are respectively bounded by,_ \\[P\\{V^{j}\\geq\\tau\\} \\leq M_{\\sum_{n}\\frac{A_{n}(1)}{C_{n}}-1}(\\theta)e^{-\\theta\\tau} \\tag{7}\\] \\[P\\{D^{j}\\geq\\tau\\} \\leq 1-F_{A_{1}(1)}*\\dots*F_{\\frac{A_{N}(1)}{C_{N}}}*F_{V}(\\tau) \\tag{8}\\] _where \\(F_{X}\\) denotes the CDF (or a lower bound on CDF) of \\(X\\), \\(F_{V}(\\tau)\\equiv 1-M_{\\sum_{n}\\frac{A_{n}(1)}{C_{n}}-1}(\\theta)e^{-\\theta\\tau}\\), and \\(*\\) denotes the convolution operation._ For Lemma 1, we highlight that, though in the term \\(F_{\\frac{A_{1}(1)}{C_{1}}}*\\dots*F_{\\frac{A_{N}(1)}{C_{N}}}\\) of (8), the time index \\(1\\) is used, the term is actually contributed by concurrent arrivals of the considered \\(p^{j}\\), specifically by the total service time of all arrivals at time \\(a^{j}\\), as shown in the proof. For the ordinary continuous time multiless FIFO system, if there is at most one customer arrival at a time, the rest of this part disappears and the remaining is the contribution by the considered customer \\(p^{j}\\), which is the service time of \\(p^{j}\\). Now by letting \\(\\delta\\to 0\\), the following result immediately follows from Lemma 1. **Theorem 2**: _For a multiclass \\(GI/GI/1\\) system with no concurrent arrivals at any time, if there exists small \\(\\theta>0\\) such that \\(E[e^{\\theta(\\sum_{n=1}^{N}\\frac{A_{n}(1)}{C_{n}}-1)}]\\leq 1\\), then, the waiting time and delay of any customer \\(p^{j}\\) are respectively bounded by,_ \\[P\\{W^{j}\\geq\\tau\\} \\leq e^{-\\theta^{\\tau}\\tau} \\tag{9}\\] \\[P\\{D^{j}\\geq\\tau\\} \\leq 1-F_{Y(j)}*F_{W}(\\tau) \\tag{10}\\] _where \\(F_{Y(j)}\\) is the CDF (or a lower bound on CDF) of the service time of the customer, \\(F_{W}=1-e^{-\\theta^{\\tau}\\tau}\\) and_ \\[\\theta^{*}=\\sup\\{\\theta>0:E[e^{\\theta(\\sum_{n}\\frac{A_{n}(1)}{C_{n}}-1)}]\\leq 1\\}.\\] ### Delay Bounds for Multiclass \\(G/g/i\\) As discussed in Section 2.1, when there are multiple classes, even though the i.i.d. conditions may hold for each class, such i.i.d. conditions do not necessarily carry over to be-tween classes. As a consequence, Theorem 2 may not be applicable. To deal with this, we present the following bounds. 
**Theorem 3**: _Suppose the traffic of each class has generalized Stochastically Bounded Burstiness (gSBB) [10][11], satisfying for some \\(R_{n}>0\\) and \\(\\forall t>0\\),_ \\[P\\{\\sup_{0\\leq s\\leq t}[A_{n}(s,t)-R_{n}\\cdot(t-s)]>\\sigma\\}\\leq\\bar{F}_{n}(R_{n},\\sigma)\\] _for all \\(\\sigma\\geq 0\\). Then, for any \\((R_{1},\\dots,R_{N})\\), under the condition_ \\[\\sum_{n}\\frac{R_{n}}{C_{n}}\\leq 1,\\] _the delay of any customer \\(p^{j}\\) is bounded by (a.s.):_ \\[P\\{D^{j}>\\tau\\}\\leq\\inf_{p_{1}+\\dots+p_{N}=1}\\sum_{n=1}^{N}\\bar{F}_{n}(R_{n},p_{n}\\cdot C_{n}\\cdot\\tau).\\] Proof:: Following the same argument as in the proof of Theorem 1, let \\(t^{0}\\) denote the start of the busy period containing \\(p^{j}\\); under the condition \\(\\sum_{n}\\frac{R_{n}}{C_{n}}\\leq 1\\), we have \\[D^{j}\\leq\\sum_{n=1}^{N}\\frac{A_{n}(t^{0},a_{+}^{j})-R_{n}\\cdot(a^{j}-t^{0})}{C_{n}}. \\tag{11}\\] Note that in (11), \\(t^{0}\\) is a random variable. Taking all sample paths into consideration, with \\(A_{n}(t^{0},a_{+}^{j})-R_{n}\\cdot(a^{j}-t^{0})\\leq\\sup_{0\\leq s\\leq a^{j}}[A_{n}(s,a_{+}^{j})-R_{n}\\cdot(a^{j}-s)]\\), we get: \\[D^{j} \\leq \\sum_{n=1}^{N}\\frac{\\sup_{0\\leq s\\leq a^{j}}[A_{n}(s,a_{+}^{j})-R_{n}\\cdot(a^{j}-s)]}{C_{n}}. \\tag{12}\\] Since the traffic of each class has gSBB, with simple manipulation on the definition and applying \\(\\epsilon\\to 0\\), we have, \\[P\\{\\frac{\\sup_{0\\leq s\\leq a^{j}}[A_{n}(s,a_{+}^{j})-R_{n}\\cdot(a_{+}^{j}-s)]+R_{n}\\epsilon}{C_{n}}>\\tau\\}\\] \\[\\leq \\bar{F}_{n}(R_{n},C_{n}\\tau).\\] The theorem then follows from probability theory results on sums of random variables. We remark that it is easily verified that the deterministic traffic model is a special case of the gSBB model with \\(\\bar{F}(x)=0\\) for all \\(x\\geq\\sigma_{n}\\) and \\(\\bar{F}(x)=1\\) otherwise. In addition, a wide range of traffic processes have been proved to have gSBB [11]. ## 4 Examples To illustrate the obtained delay bounds, this section presents examples for some basic settings, whose single-class counterparts have been extensively studied in the literature, with more explicit expressions for the delay bounds. In addition, the obtained bounds are compared with simulation results and discussed. ### Multiclass \\(D/D/1\\) Consider a multiclass FIFO queue in a communication network. Assume that there are two traffic classes, and for each class, packets have constant size \\(l_{n}\\), which arrive periodically with \\(X_{n}\\) being the period length. It is easily verified that for each class, its traffic arrival process satisfies \\(A_{n}(s,t)\\leq r_{n}\\cdot(t-s)+\\sigma_{n}\\), with \\(r_{n}=l_{n}/X_{n}\\) and \\(\\sigma_{n}=l_{n}\\). Applying them to Theorem 1, a delay bound is found as: If \\(\\frac{r_{1}}{C_{1}}+\\frac{r_{2}}{C_{2}}\\leq 1\\), the delay of any packet satisfies: \\[D\\leq\\frac{l_{1}}{C_{1}}+\\frac{l_{2}}{C_{2}}. \\tag{13}\\] To compare, recall the delay bound directly from single-class \\(D/D/1\\) network calculus analysis discussed in Section 2.2, which is, if \\(r_{1}+r_{2}\\leq\\min\\{C_{1},C_{2}\\}\\), \\[D\\leq\\frac{l_{1}+l_{2}}{\\min\\{C_{1},C_{2}\\}}. \\tag{14}\\] To illustrate the two bounds, Figure 1 is presented, where two cases, Case 1 and Case 2, are considered. For Case 1, \\(C_{1}=20Mbps\\) and \\(C_{2}=100Mbps\\); \\(l_{1}=100\\) bytes and \\(l_{2}=1250\\) bytes; \\(X_{1}=0.1ms\\) and \\(X_{2}=1ms\\). For Case 2, the other settings are the same except that \\(C_{1}=10Mbps\\). For both cases, \\(r_{1}=8Mbps\\) and \\(r_{2}=10Mbps\\). It is easily verified that while for both cases, the condition \\(\\frac{r_{1}}{C_{1}}+\\frac{r_{2}}{C_{2}}\\leq 1\\) is satisfied, the condition \\(r_{1}+r_{2}\\leq\\min\\{C_{1},C_{2}\\}\\) is only met for Case 1.
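Before turning to the simulation results in Figure 1, the following short calculation (plain Python, illustrative only; the helper name is a choice of this sketch) evaluates the two bounds (13) and (14) for the parameters of Case 1 and Case 2.

```python
def dd_fifo_bounds(C, l, X):
    """Multiclass bound (13) and single-class bound (14) for periodic classes.

    C : per-class service rates (bps), l : packet sizes (bits), X : periods (s).
    Returns (bound_13, bound_14) in seconds; an entry is None when the
    corresponding condition fails.
    """
    r = [li / Xi for li, Xi in zip(l, X)]                 # r_n = l_n / X_n
    ok_13 = sum(ri / Ci for ri, Ci in zip(r, C)) <= 1     # sum_n r_n / C_n <= 1
    ok_14 = sum(r) <= min(C)                              # sum_n r_n <= min_n C_n
    bound_13 = sum(li / Ci for li, Ci in zip(l, C)) if ok_13 else None
    bound_14 = sum(l) / min(C) if ok_14 else None
    return bound_13, bound_14

l = [100 * 8, 1250 * 8]        # packet sizes in bits
X = [0.1e-3, 1e-3]             # periods in seconds
print(dd_fifo_bounds([20e6, 100e6], l, X))   # Case 1: (~1.4e-4, ~5.4e-4)
print(dd_fifo_bounds([10e6, 100e6], l, X))   # Case 2: (~1.8e-4, None)
```

For Case 1 the single-class bound (14) is almost four times larger than (13), and for Case 2 it is not applicable since \\(r_{1}+r_{2}>C_{1}\\), matching the observations above.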
Figure 1 shows that, for both cases, the bound by Theorem 1 is not only able to bound the delays of all simulated packets but also tight, i.e., some packets can indeed experience delay equal to the bound. However, (14) can be highly conservative as shown by Figure 1(a) for Case 1, and may even not be applicable (N.A.) due to that its required condition is not met as indicated by Figure 1(b) for Case 2. ### Multiclass \\(Gi/Gi/1\\) In this subsection, we give two examples for multiclass \\(GI/GI/1\\), which are multiclass \\(M/D/1\\) and multiclass \\(M/M/1\\). In both examples, customers of each class arrive according to a Poisson process with average interarrival time \\(X_{n}\\). In addition, customers of the same class have the same expected service time \\(Y_{n}\\). However, while for \\(M/M/1\\), the service time of each customer is exponentially distributed, it is constant for \\(M/D/1\\). Similar to the \\(D/D/1\\) example, for each of them, we try to give an expression for the tail of delay / waiting time in a closed-form format to help the use of the related bounds. In particular, we have the following corollaries, for which we assume the stability condition is met, i.e. \\(\\rho\\equiv\\sum_{n}\\frac{r_{n}}{C_{n}}<1\\). Their proofs are included in the Appendix. **Corollary 1**: _For multiclass \\(M/D/1\\), if all classes are independent, then for any customer \\(p_{n}^{j}\\), its waiting time satisfies: \\(P\\{W_{n}^{j}>\\tau\\}\\leq e^{-\\theta^{*}\\tau}\\) with_ \\[\\theta^{*}=\\sup\\{\\theta>0:\\sum_{n}\\lambda_{n}(e^{\\theta Y_{n}}-1)- \\theta\\leq 0\\} \\tag{15}\\] _an approximation of which is_ \\[\\theta^{*}=2(1-\\rho)\\tau/(\\sum_{n=1}^{N}X_{n}^{-1}Y_{n}^{2}). \\tag{16}\\] **Corollary 2**: _For multiclass \\(M/M/1\\), if all classes are independent, then for any customer \\(p_{n}^{j}\\), its waiting time satisfies \\(P\\{W_{n}^{j}>\\tau\\}\\leq e^{-\\theta^{*}\\tau}\\) with_ \\[\\theta^{*}=\\sup\\{0<\\theta<\\min_{n}\\mu_{n}:\\sum_{n}\\frac{\\lambda_{n}}{\\mu_{n}- \\theta}\\leq 1\\} \\tag{17}\\] _an approximation of which is_ \\[\\theta^{*}=(1-\\rho)\\tau/(\\sum_{n=1}^{N}X_{n}^{-1}Y_{n}^{2}). \\tag{18}\\] We remark that, with the bound on waiting time, a bound on delay can be easily obtained, e.g. for \\(M/D/1\\), \\(P\\{D_{n}^{j}>Y_{n}+\\tau\\}=P\\{W_{n}^{j}>\\tau\\}\\). In addition, we remark that the single class version of the bounds in the above two corollaries resemble closely with the literature approximations for tail of delay / waiting time distribution in single class \\(M/G/1\\), e.g., (2.9), (124) and (2.143) in [12]. Figure 1: Multiclass \\(D/D/1\\) To illustrate the bounds, Figure 2 is presented, where two cases, Case 3 and Case 4, are respectively considered. For Case 3, the other settings are the same as for Case 2, except that packets of each class arrive according to an independent Poisson process, i.e. Case 3 is multiclass \\(M/D/1\\). For Case 4, the other settings are the same as for Case 3, except that the traffic of each packet results in an exponentially distributed service time, i.e. Case 4 is multiclass \\(M/M/1\\). In Figure 2(a), the delay CCDF simulation results of the 1st, the 10th and the 100th packet of Class 1, are included, in addition to the steady state delay CCDF and the analytical bound. Figure 2(a) shows that the delays of packets are stochastically increasing as the packet number goes higher and they converge to the steady state distribution. This is as expected and it is a proven phenomenon for single-class FIFO (e.g. [12]). 
For this reason, in later figures, only steady-state delay or waiting time distribution will be focused. In Figure 2(b), the curves are for Class 2, which include the simulated steady-state waiting time CCDF, the analytical bound based on (17) and the approximate analytical bound (18). Figure 2 shows that the analytical bounds are fairly tight and provide good approximations of the corresponding steady-state delay CCDF for both cases. ### Multiclass \\(G/g/1\\) In this subsection, we consider two examples where, even though within each class, the interarrival times and the service times are still respectively i.i.d., the overall FIFO system no more has i.i.d. interarrival times or i.i.d. service times. To ease expression, only two classes are considered. Corresponding to the two examples are Case 5 and Case 6. The settings of Case 5 are the same as Case 3 except that some dependence1 is introduced in the two Poisson arrival processes. To denote this case, we use \\(M^{*}/D/1\\), where \\({}^{*}\\) indicates that some dependence exists among classes. Footnote 1: The same series of pseudo random numbers have been used in generating the interarrival times for both classes. In Case 6, Class 1 has the same settings of Class 1 in Case 2, while Class 2 has the same settings of Class 2 in Case 4. In other words, the system has two classes, where one is \\(D/D\\) and the other is \\(M/M\\). In Figure 3, we denote the system by \\(DM/DM/1\\). As discussed in Section 2.1, in this \\(DM/DM/\\)system, customers do not identical service time distribution. For the two cases, the following corollaries are obtained by directly applying Theorem 3 with the characteristics of the corresponding processes. The detailed proofs are omitted. **Corollary 3**: _For the \\(M^{*}/D/1\\) example, we have for any customer \\(p_{n}^{j}\\),_ \\[P\\{W_{n}^{j}>\\tau\\} \\lessapprox Ne^{-\\theta^{*}\\tau/N}. \\tag{19}\\] _with \\(\\theta^{*}\\) as shown in (16)._ **Corollary 4**: _For the \\(DM/DM/1\\) example, we have for any customer \\(p_{1}^{j}\\),_ \\[P\\{W_{1}^{j}>\\tau\\}\\leq e^{-(\\mu_{2}-\\frac{\\lambda_{2}}{1-\\rho_{1}})\\tau}, \\tag{20}\\] _and for any customer \\(p_{2}^{j}\\)_ \\[P\\{W_{2}^{j}-Y_{1}>\\tau\\}\\leq e^{-(\\mu_{2}-\\frac{\\lambda_{2}}{1-\\rho_{1}}) \\tau}. \\tag{21}\\] To illustrate the two bounds in Corollary 3 and Corollary 4, Figure 3 presents results for the two cases, both for Class 2. Figure 3(a) indicates that when there is dependence between the two classes, the analytical bound assuming independent classes is no more an upper-bound. However, the general analytical bound (19) holds, even though there is a noticeable gap from the actual distribution. Note that the general bound holds for any possible dependence structure between the classes. By making use of the dependence information in the analysis, the analytical bound could be improved, but this is out of the scope of the present paper and we leave it for future investigation. Figure 3(b) indicates an interesting phenomenon, which is that the (steady state) waiting time distributions of the two classes are different. Note that, here, the waiting time distribution has been intentionally used. In Case 3 - Case 5, where both classes have Poisson arrivals, the waiting time distribution is the same for both classes. However, in Case 6, while Class 2 still has Poisson arrivals, Class 1 has periodic arrivals. 
This arrival process difference results in the waiting time distribution difference: The waiting time observed by a Poisson inspector is different from that by a periodic inspector. Nevertheless, the bounds (20) and (21) are valid and are fairly good. Though improvement might be further made, the bounds provide an initial step towards the analysis of similar problems. * [3] Hong Chen and Hanqin Zhang. Stability of multiclass queueing networks under FIFO service discipline. _Mathematics of Operations Research_, 22(3), 1997. * [4] Mark J. Karol, M. G. Hluchyj, and S. P. Morgan. Input versus output queueing on a space-division packet switch. _IEEE Trans. Commun._, 35(12), 1987. * [5] C.-S. Chang. _Performance Guarantees in Communication Networks_. Springer-Verlag, 2000. * [6] J.-Y. Le Boudec and P. Thiran. _Network Calculus: A Theory of Deterministic Queueing Systems for the Internet_. Springer-Verlag, 2001. * [7] Yuming Jiang and Yong Liu. _Stochastic Network Calculus_. Springer-Verlag, 2008. * [8] Florin Ciucu. Network calculus delay bounds in queueing networks with exact solutions. _ITC_, 2007. * [9] Yuming Jiang. Network calculus and queueing theory: Two sides of one coin. _Valuetools_, 2009. * [10] Q. Yin, Y. Jiang, S. Jiang, and P. Y. Kong. Analysis on generalized stochastically bounded bursty traffic for communication networks. _IEEE LCN_, 2002. * [11] Y. Jiang, Q. Yin, Y. Liu, and S. Jiang. Fundamental calculus on generalized stochastically bounded bursty traffic for communication networks. _Computer Networks_, 53(12):2011-2021, 2009. * [12] L. Kleinrock. _Queueing Systems, Volume II: Computer Applications_. John Wiley & Sons, 1976. ## Appendix A Proof of Lemma 1 Our starting point is (5). From (5) and following the same argument as for (12), the following inequality is readily obtained, which holds for all sample paths: \\[D^{j} \\leq \\sup_{0\\leq s\\leq a^{j}}\\left[\\sum_{n=1}^{N}\\frac{A_{n}(s,a_{+}^{ j})}{C_{n}}-(a^{j}-s)\\right] \\tag{22}\\] \\[= V^{j}+\\sum_{n=1}^{N}\\frac{A_{n}(a^{j},a_{+}^{j})}{C_{n}} \\tag{23}\\] Define \\(Z(k)=e^{\\theta[\\sum_{n=1}^{N}\\frac{A_{n}(a^{j}-k,a^{j})}{C_{n}}-k]}\\), \\(k=1,2,\\ldots,a^{j}\\), where \\(\\theta>0\\) is a constant. Then, under the condition \\(E[e^{\\theta(\\sum_{n=1}^{N}\\frac{A_{n}(1)}{C_{n}}-1)}]\\leq 1\\), it can be proved that \\(\\{Z(k)\\}\\) forms a supermartingale. We now have, \\[P\\{V^{j}>\\tau\\} = P\\{e^{\\theta V^{j}}>e^{\\theta\\tau}\\} \\tag{24}\\] \\[\\leq P\\{e^{\\theta\\sup_{s\\leq a^{j}}|\\sum_{n=1}^{N}\\frac{A_{n}(s,a^{j} )}{C_{n}}-(a^{j}-s)}]>e^{\\theta\\tau}\\}\\] \\[= P\\{\\sup_{1\\leq k\\leq a^{j}}Z(k)>e^{\\theta\\tau}\\}\\] \\[\\leq E[Z(1)]e^{-\\theta\\tau}\\] where the last step follows from the Doob's maximal inequality for supermartingale. Since \\(E[Z(1)]=E[e^{\\theta(\\sum_{n=1}^{N}\\frac{A_{n}(1)}{C_{n}}-1)}]\\equiv M_{\\frac {A_{n}(1)}{C_{n}}-1}(\\theta)\\), the first part is proved. For the second part, since \\(V^{j}\\) and \\(\\frac{A_{n}(a^{j},a^{j}+1)}{C_{n}}\\), \\(n=1,\\cdots,N\\), are independent, it follows from elementary probability theory results on sum of independent random variables and that \\(A_{n}(a^{j},a_{+}^{j})\\leq A_{n}(a^{j},a^{j}+1)=_{st}A_{n}(1)\\). ## Appendix B Proof of Corollary 1 Note that, \\(A_{n}(1)\\) is a compound Poisson process with \\(A_{n}(1)=\\sum_{\\mu=1}^{\\mathcal{N}_{n}(1)}l_{n}^{i}=\\mathcal{N}_{n}(1)\\times l _{n}\\), where \\(\\mathcal{N}_{n}(1)\\) denotes the number of Class \\(n\\) packets that arrive within a unit time. 
In addition, since \\(C_{n}\\) is constant, we can write \\(\\frac{A_{n}(1)}{C_{n}}=\\mathcal{N}_{n}(1)\\times\\frac{l_{n}}{C_{n}}\\), which is also a compound Poisson process with MGF \\[E[e^{\\theta\\frac{A_{n}(1)}{C_{n}}}]=e^{\\lambda_{n}(e^{\\theta l_{n}/C_{n}}-1)}.\\] Then, \\(M_{\\sum_{n}\\frac{A_{n}(1)}{C_{n}}-1}=e^{\\sum_{n}\\lambda_{n}(e^{\\theta l_{n}/C_{n}}-1)-\\theta}\\), which implies that solving \\(M_{\\sum_{n}\\frac{A_{n}(1)}{C_{n}}-1}\\leq 1\\) to get \\(\\theta\\) is equivalent to finding \\(\\theta\\) from \\[\\sum_{n}\\lambda_{n}(e^{\\theta l_{n}/C_{n}}-1)-\\theta\\leq 0, \\tag{25}\\] which proves the first part. With \\(e^{\\theta l_{n}/C_{n}}\\approx 1+\\theta l_{n}/C_{n}+\\frac{1}{2}\\theta^{2}(l_{n}/C_{n})^{2}\\) from the Taylor expansion and \\(\\theta>0\\), (25) can be rewritten as \\[\\theta\\sum_{n}\\lambda_{n}Y_{n}+\\frac{\\theta^{2}}{2}\\sum_{n}\\lambda_{n}Y_{n}^{2}-\\theta\\leq 0.\\] Since \\(\\theta>0\\), \\(\\rho=\\sum_{n}\\lambda_{n}Y_{n}\\) and \\(\\lambda_{n}=X_{n}^{-1}\\), we then get \\[\\theta\\lessapprox 2(1-\\rho)/(\\sum_{n=1}^{N}X_{n}^{-1}Y_{n}^{2}).\\] Taking \\(\\theta^{*}=2(1-\\rho)/(\\sum_{n=1}^{N}X_{n}^{-1}Y_{n}^{2})\\), the second part is proved. ## Appendix C Proof of Corollary 2 Note that \\(A_{n}(1)\\) is again a compound Poisson process, and so is \\(A_{n}(1)/C_{n}=\\sum_{i=1}^{\\mathcal{N}_{n}(1)}Y_{n}(i)\\). Since each \\(Y_{n}(i)=l_{n}^{i}/C_{n}\\) has an exponential distribution, the MGF of \\(A_{n}(1)/C_{n}\\) can be written as, with \\(0<\\theta<\\min_{n}\\mu_{n}\\), \\[E[e^{\\theta A_{n}(1)/C_{n}}]=e^{\\frac{\\lambda_{n}}{\\mu_{n}-\\theta}\\theta}.\\] Then, solving \\(M_{\\sum_{n}\\frac{A_{n}(1)}{C_{n}}-1}\\leq 1\\) to get \\(\\theta\\) is equivalent to finding \\(\\theta\\) from \\(\\sum_{n}(\\frac{\\lambda_{n}}{\\mu_{n}-\\theta}\\theta)-\\theta\\leq 0\\), and with simple manipulation, it becomes \\[\\sum_{n}\\frac{\\lambda_{n}}{\\mu_{n}-\\theta}\\leq 1, \\tag{26}\\] which proves the first part. While (26) looks neat, finding an explicit expression for \\(\\theta\\) is not easy. In the following, we adopt an approximation approach. In particular, \\[\\frac{\\lambda_{n}}{\\mu_{n}-\\theta}=\\frac{\\rho_{n}}{1-\\theta/\\mu_{n}}\\approx\\rho_{n}(1+\\theta/\\mu_{n}),\\] applying which to (26) gives \\(\\rho+\\theta\\sum_{n}\\frac{\\rho_{n}}{\\mu_{n}}\\lessapprox 1\\), i.e., \\[\\theta\\lessapprox(1-\\rho)/(\\sum_{n=1}^{N}X_{n}^{-1}Y_{n}^{2}),\\] since \\(\\mu_{n}=Y_{n}^{-1}\\) and \\(\\lambda_{n}=X_{n}^{-1}\\). Then, taking \\(\\theta^{*}=(1-\\rho)/(\\sum_{n=1}^{N}X_{n}^{-1}Y_{n}^{2})\\), the second part is proved. ## Appendix D Proof of Corollary 3 Our starting point is (12). Without loss of generality, suppose \\(p^{j}\\) is a customer of class \\(n\\). Since all customers of the same class have the same service time \\(Y_{n}\\), and at time \\(a_{+}^{j}\\) there is only one arrival, namely \\(p^{j}\\), we have \\(W^{j}=D^{j}-Y_{n}\\) and hence \\[W^{j} \\leq \\sum_{n=1}^{N}\\frac{\\sup_{0\\leq s\\leq a^{j}}[A_{n}(s,a_{+}^{j})-R_{n}\\cdot(a^{j}-s)]}{C_{n}}-Y_{n} \\tag{27}\\] \\[= \\sum_{n=1}^{N}\\frac{\\sup_{0\\leq s\\leq a^{j}}[A_{n}(s,a^{j})-R_{n}\\cdot(a^{j}-s)]}{C_{n}}\\] The right-hand side of (27) has \\(N\\) terms. Denote each term as \\[\\tilde{W}^{j}_{n}=\\frac{\\sup_{0\\leq s\\leq a^{j}}[A_{n}(s,a^{j})-R_{n}\\cdot(a^{j}-s)]}{C_{n}}.
\\tag{28}\\] Following the proof of the first part of Theorem 3, we have the following inequality, which holds without any assumption on the potential dependence among classes: \\[P\\{W^{j}>\\tau\\}\\leq\\inf_{\\sum_{n}p_{n}=1}\\sum_{n}P\\{\\tilde{W}^{j}_{n}>p_{n}\\cdot\\tau\\}. \\tag{29}\\] We highlight that (28) has a form similar to (6). Then, following the same approach as for the proof of Lemma 1 and Theorem 2, we get \\[P\\{\\tilde{W}^{j}_{n}\\geq\\tau\\}\\leq e^{-\\theta^{*}_{\\omega_{n}}\\cdot\\tau}\\] where \\(\\omega_{n}\\equiv\\frac{R_{n}}{C_{n}}\\), and \\(\\theta^{*}_{\\omega_{n}}\\) is the solution of \\[E[e^{\\theta(\\frac{A_{n}(1)}{C_{n}}-\\omega_{n})}]=1.\\] For \\(M/D\\), using a similar approximation as for Corollary 1, \\[\\theta^{*}_{\\omega_{n}}=2(\\omega_{n}-\\rho_{n})/(X_{n}^{-1}Y_{n}^{2}). \\tag{30}\\] Finding the solution for \\((\\omega_{1},\\ldots,\\omega_{N})\\) from \\[2(\\omega_{1}-\\rho_{1})/(X_{1}^{-1}Y_{1}^{2})=\\cdots=2(\\omega_{N}-\\rho_{N})/(X_{N}^{-1}Y_{N}^{2}),\\] under the conditions \\(\\omega_{n}\\geq\\rho_{n}\\) and \\(\\sum_{n}\\omega_{n}\\leq 1\\), the resultant \\(\\theta^{*}_{\\omega_{n}}\\) becomes \\(\\theta^{*}\\). Finally, (19) is obtained by directly applying the resultant \\(P\\{\\tilde{W}^{j}_{n}>\\tau\\}\\) to (29). ## Appendix E Proof of Corollary 4 If \\(p^{j}\\) belongs to Class 2, we can also start from (12). Following the same argument as for (27), we get \\[W^{j} \\leq \\sum_{n=1}^{N}\\frac{\\sup_{0\\leq s\\leq a^{j}}[A_{n}(s,a^{j})-R_{n}\\cdot(a^{j}-s)]}{C_{n}}. \\tag{31}\\] Note that for Class 1, its customers arrive at \\(Y_{1},2Y_{1},3Y_{1},\\ldots,\\) so, for any time period \\([s,t]\\), \\(A_{1}(s,t)\\leq r_{1}\\cdot(t-s)+l_{1}\\). Applying this to (31), together with letting \\(R_{1}=r_{1}\\), gives \\[W^{j} \\leq \\frac{\\sup_{0\\leq s\\leq a^{j}}[A_{2}(s,a^{j})-R_{2}\\cdot(a^{j}-s)]}{C_{2}}+\\frac{l_{1}}{C_{1}}.\\] Letting \\(R_{2}=(1-\\frac{R_{1}}{C_{1}})C_{2}=(1-\\rho_{1})C_{2}\\), we have \\[W^{j}-Y_{1} \\leq \\sup_{0\\leq s\\leq a^{j}}[\\frac{A_{2}(s,a^{j})}{C_{2}}-(1-\\rho_{1})\\cdot(a^{j}-s)]. \\tag{32}\\] Following the same approach, a bound on \\(W^{j}-Y_{1}\\) can be found as \\[P\\{W^{j}-Y_{1}\\geq\\tau\\}\\leq e^{-\\theta^{*}\\tau},\\] where \\(\\theta^{*}\\) is the maximum \\(\\theta\\) satisfying \\[E[e^{\\theta(A_{2}(1)/C_{2}-(1-\\rho_{1}))}]\\leq 1.\\] Since Class 2 is \\(M/M\\), applying the MGF of \\(A_{2}(1)/C_{2}\\) gives \\[\\frac{\\lambda_{2}}{\\mu_{2}-\\theta}-(1-\\rho_{1})\\leq 0,\\] from which we further get \\(\\theta^{*}=\\mu_{2}-\\lambda_{2}/(1-\\rho_{1})\\). However, if \\(p^{j}\\) belongs to Class 1, we can start from (5) and apply \\(A_{1}(s,t)\\leq r_{1}\\cdot(t-s)+l_{1}\\) to it directly. What we then get is: \\[D^{j} \\leq \\sum_{n=1}^{2}\\frac{A_{n}(t^{0},a_{+}^{j})}{C_{n}}+t^{0}-a^{j}\\] \\[\\leq \\frac{r_{1}\\cdot(a^{j}-t^{0})+l_{1}}{C_{1}}+\\frac{A_{2}(t^{0},a_{+}^{j})}{C_{2}}-(a^{j}-t^{0})\\] \\[= \\frac{A_{2}(t^{0},a_{+}^{j})}{C_{2}}-(1-\\rho_{1})(a^{j}-t^{0})+\\frac{l_{1}}{C_{1}}\\] \\[\\leq \\sup_{0\\leq s\\leq a^{j}}[\\frac{A_{2}(s,a^{j})}{C_{2}}-(1-\\rho_{1})\\cdot(a^{j}-s)]+\\frac{l_{1}}{C_{1}}.\\] Since \\(p^{j}\\) belongs to Class 1, the above then gives \\[W^{j} \\leq \\sup_{0\\leq s\\leq a^{j}}[\\frac{A_{2}(s,a^{j})}{C_{2}}-(1-\\rho_{1})\\cdot(a^{j}-s)]. \\tag{34}\\] Comparing (34) with (32), one can see that the only difference is the \\(Y_{1}\\) term on the left-hand side of (32).
Following the same approach, the waiting time distribution bound for Class 1 is obtained.
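To make the bound in Corollary 4 concrete, the following short sketch simulates a two-class FIFO queue of the \\(DM/DM/1\\) type and compares the empirical CCDF of the Class 2 waiting time (minus \\(Y_{1}\\)) with the exponential bound (21). It is a minimal illustration: the parameter values are assumptions for demonstration only, not the settings of Case 6, and the simulation uses the standard Lindley recursion for a single-server FIFO queue.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) parameters: Class 1 is D/D, Class 2 is M/M.
X1, Y1 = 2.5, 1.0          # Class 1: period X1, constant service time Y1 (rho1 = 0.4)
lam2, mu2 = 0.3, 1.0       # Class 2: Poisson rate lam2, exponential service rate mu2
rho1 = Y1 / X1
T = 2e5                    # simulation horizon

t1 = np.arange(X1, T, X1)                                 # periodic Class 1 arrivals
t2 = np.sort(rng.uniform(0, T, rng.poisson(lam2 * T)))    # Poisson Class 2 arrivals
arr = np.concatenate([t1, t2])
srv = np.concatenate([np.full(t1.size, Y1), rng.exponential(1 / mu2, t2.size)])
cls = np.concatenate([np.full(t1.size, 1), np.full(t2.size, 2)])
order = np.argsort(arr)
arr, srv, cls = arr[order], srv[order], cls[order]

# FIFO waiting times via the Lindley recursion W_{k+1} = max(0, W_k + S_k - (a_{k+1} - a_k)).
W = np.zeros(arr.size)
for k in range(1, arr.size):
    W[k] = max(0.0, W[k - 1] + srv[k - 1] - (arr[k] - arr[k - 1]))

theta = mu2 - lam2 / (1 - rho1)          # decay rate appearing in (20)-(21)
W2 = W[cls == 2]
for tau in (1.0, 2.0, 4.0, 8.0):
    emp = np.mean(W2 - Y1 > tau)         # empirical CCDF of W_2 - Y_1
    print(f"tau={tau:4.1f}  simulated={emp:.4f}  bound={np.exp(-theta * tau):.4f}")
```

A comparison of this kind, between simulated CCDFs and the analytical bounds, underlies the curves reported in Figure 3.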
FIFO is perhaps the simplest scheduling discipline. For single-class FIFO, its delay performance has been extensively studied: The well-known results include an upper bound on the tail of waiting time distribution for \\(GI/GI/1\\) by Kingman [1] and a deterministic delay bound for \\(D/D/1\\) by Cruz [2]. However, for multiclass FIFO, few such results are available. To bridge the gap, we prove delay bounds for multiclass FIFO in this work, considering both deterministic and stochastic cases. Specifically, delay bounds are presented for multiclass \\(D/D/1\\), \\(GI/GI/1\\) and \\(G/G/1\\). In addition, examples are provided for several basic settings to demonstrate the obtained bounds in more explicit forms, which are also compared with simulation results.
# Deep Learning for Precision Agriculture: Post-Spraying Evaluation and Deposition Estimation Harry Rogers (), Tahmina Zebin (), Grzegorz Cielniak (), Beatriz De La Iglesia (), Ben Magri This work is supported by the Engineering and Physical Sciences Research Council [EP/S023917/1]. This work is also supported by Syngenta as the Industrial partner. Harry Rogers and Beatriz De La Iglesia, University of East Anglia, United Kingdom, [email protected], [email protected], Tahmina Zebin, Brunel University London, United Kingdom, [email protected], Grzegorz Cielniak, University of Lincoln, United Kingdom, [email protected], Ben Magri, Syngenta, United Kingdom, [email protected]. ## I Introduction Automated precision spraying in precision agriculture requires efficient evaluation post-spraying to locate and quantify spray deposits. Current methods involve human intervention, leading to increased costs and potential inaccuracies. The predominant techniques utilized are Water Sensitive Papers (WSPs) and tracers, each accompanied by its own set of limitations. While these methods have been instrumental in assessing spray deposition, they fall short in providing automated results. Moreover, the reliance on human intervention introduces the possibility of subjective errors and delays in the decision-making processes. Hence, there is a pressing need for innovative solutions that can autonomously evaluate spray deposition accurately in a timely manner. We have previously worked on spraying evaluation post-spray without the use of traditional methods with great success within the classification of pre- or post-spraying [1, 2, 3]. However, there were limitations such as knowing what object is sprayed and how much has been deposited. To address these issues, using the same dataset, semantic segmentation annotations are added to generate a ground truth for each object class. There are 7 classes: background, lettuce, chickweed, meadowgrass, sprayed lettuce, sprayed chickweed, and sprayed meadowgrass. The dataset has been made publicly available alongside this paper. Further information on this dataset is in Section III. Additionally, a domain-specific Weakly Supervised Deposition Estimation (WSDE) task has been added to the dataset. To estimate deposition values, Class Activation Maps (CAMs), that highlight regions of interest from Deep Learning models using the last convolutional layer, are combined with the Deep Learning model prediction itself to create deposition values in a class-wise manner. This is not possible with traditional methods. For this new dataset and task, an eXplainable Artificial Intelligence (XAI) pipeline is proposed to complete semantic segmentation of crops and weeds pre- and post-spraying. The developed pipeline can be used to evaluate precision sprayers without the need for traditional manual agricultural methods. To improve model accuracy, inference-only feature fusion has been developed combining auxiliary outputs with the traditional output. In the XAI pipeline two CAM methodologies are compared and evaluated using CAM metrics to understand how representative a CAM is of model predictions. Inference-only feature fusion is compared to a baseline to identify if it could be more interpretable. We are assessing if more of the early stage layers within Deep Learning models, included in the inference-only feature fusion, could help understand model predictions. 
The core contributions of this paper are: * An automated process to create deposition values in a class wise manner without WSPs or tracers. * Inference-Only feature fusion to improve from baseline model in interpretability and segmentation metrics. * An open dataset for post-spraying evaluation. The remainder of this paper is organized as follows. Section II introduces related work on precision spraying systems and how systems are evaluated post-spray, and XAI methods for generating and evaluating CAMs. Section III introduces the AI semi-automatic annotation methodology and relevant coverage rate information for the dataset. Section IV provides details of the XAI pipeline workflow, Deep Learning architectures utilized, and evaluation metrics for segmentation, CAMs, and the WSDE task. The results of the segmentation models, CAMs, and WSDE scores are reported in Section V. Finally, conclusions and future work are presented in Section VI. ## II Related Work We review current precision spraying systems and evaluation methods to demonstrate the need for an automated method. CAMs for semantic segmentation and evaluation are also investigated. ### _Precision Spraying evaluation_ Precision spraying in agriculture is crucial for the sustainable and efficient application of chemicals over large areas of land. Sprayers must be precisely calibrated to ensure that chemicals are accurately deposited on target areas, thereby minimizing waste and environmental impact. According to the 2019 European Union (EU) Green Deal [4], modern precision sprayers will need to undergo further regulatory assessment to confirm their ability to minimize chemical usage by ensuring accurate application. Therefore, developing robust methodologies to evaluate the effectiveness of these systems, particularly in terms of accurately landing spray deposits on desired targets, is of paramount importance. Currently there are two primary approaches for precision spraying evaluation. Namely, traditional agricultural or sprayer specific. Proposed precision sprayers typically use one of these to evaluate post-spraying. One popular approach within traditional agricultural methods, the most common within the literature, is WSPs. WSPs are yellow pieces of paper that are used as targets for precision sprayers. When sprayed WSPs change color and make deposition values possible with computer vision [5]. Many types of aerial and ground precision spraying systems have been developed and evaluated with WSPs. Applications within the literature vary widely and include weed spraying in corn fields, cabbage fields, and cereal fields [6, 7, 8, 9], pest control [10], disease detection in potatoes [11, 12], orchard tree spraying [13, 14, 15], and vineyard spraying [16, 17]. Testing can also be for systems that are static and are under development [18, 19]. Aerial spraying has been primarily explored with usage of WSPs [20, 21, 22, 23, 24]. Despite this, WSPs have several drawbacks. Firstly, human intervention is required to place and retrieve the WSPs for analysis post-spraying, making the process labor intensive. Secondly, the texture of WSPs differs from the actual targets, potentially leading to differences between actual and estimated spray deposits. Lastly, no deployed spraying system can perfectly replicate the same spray deposit therefore deposits on WSPs will not be the same as the deposits on real targets. Tracers are another traditional agricultural method developed to improve upon WSPs. 
They are typically dyes that color the chemical used for spraying. Tracers make the location of deposits clearly visible when sprayed on target or non-targets in error. This means that spray deposits from each system can be evaluated directly. Similarly to WSPs, tracers have been explored within a wide variety of applications. For example, pesticide spraying in Maize fields, orchards, and rice fields [25, 26, 27]. Weed control has also been explored in field and in controlled greenhouse environments [19, 28, 29, 30, 31]. However, not all systems can use the same tracer, as different tracers may be incompatible with specific chemical applications. For example, Gao et al. [27] use Allura red food dye as a tracer for pesticide spraying, but the drawback of the developed system is that to analyse the spray deposits, the target must be harvested and then tested. Zheng et al. [25] use Rhodamine dye for pesticide spraying, whereas Liu et al. [26] use Tartrazine dye for pesticide spraying. Thus, the type of tracer is dependent on what exact chemical is being sprayed as there are no generalisable tracers that work with all types of chemical and spraying application. There are some systems in the literature that are evaluated without traditional agricultural methods. These are sprayer specific methodologies. Some precision spraying systems are evaluated with human intervention, where humans manually count the number of weeds sprayed [32, 33]. Some other systems calculate the volume of chemical sprayed [34]. Some systems just assume that they hit the target, [35, 36]. Furthermore, some state the evaluation of spraying post-spray is future work [37]. However, these evaluation methods create ambiguity as it is unclear how effective precision sprayers truly are. Furthermore, these type of evaluation systems are typically system specific, and do not generalise well to other systems. From this review, it can be stated that precision spraying systems need an automated process that can evaluate post-spraying without the usage of WSPs or tracers. The automated method also needs to generate deposition values in a class-wise manner to improve from traditional methods. Therefore, a XAI pipeline has been developed that can do so by using Deep Learning and inference-only feature fusion. ### _XAI methodologies_ XAI has several methodologies focused on identifying regions of interest in images generated by Deep Learning model predictions. These approaches, utilizing either gradient or activations from the predictions, concentrate on the model's final layer before classification to compute CAMs. These representations are typically visualized for interpretation. Gradient-based methods like GradCAM, GradCAM++, and FullGrad have shown effectiveness in visualizing these regions of interest [38, 39, 40]. However, recent trends are moving towards gradient-free techniques to better accommodate models with gradients that are either negative or non-differentiable, typically used within Deep Learning for more complex tasks than classification such as segmentation. A leading gradient-free approach, AblationCAM, evaluates the significance of activations by measuring the impact of their removal on the model output [41]. This method has shown excellent performance over GradCAM for Convolutional Neural Networks (CNNs) trained on the ImageNet dataset [42]. 
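To make the ablation idea concrete, the sketch below weights each channel of a chosen convolutional layer by the drop in the class score observed when that channel is zeroed out. It is a simplified, from-scratch rendering of the gradient-free principle behind AblationCAM [41], not the exact formulation of [41] nor the implementation used later in this paper; the torchvision-style segmentation model returning a dictionary with an "out" key, and the example layer handle in the final comment, are assumptions.

```python
import torch

@torch.no_grad()
def ablation_style_cam(model, layer, image, class_idx):
    """Gradient-free CAM sketch: channel importance = relative drop in the class
    score when that activation channel is ablated (set to zero)."""
    store = {}
    hook = layer.register_forward_hook(lambda m, i, o: store.update(act=o.detach()))
    base = model(image)["out"][0, class_idx].sum()        # unablated class score
    hook.remove()

    act = store["act"]                                    # activations, shape (1, C, h, w)
    weights = torch.zeros(act.shape[1])
    for c in range(act.shape[1]):
        def ablate(m, i, o, ch=c):                        # zero one channel during the forward pass
            o = o.clone()
            o[:, ch] = 0
            return o
        hook = layer.register_forward_hook(ablate)
        score = model(image)["out"][0, class_idx].sum()
        hook.remove()
        weights[c] = (base - score) / (base.abs() + 1e-8) # importance of channel c

    cam = torch.relu((weights[None, :, None, None] * act).sum(dim=1))[0]
    return cam / (cam.max() + 1e-8)                       # heat-map normalised to [0, 1]

# Hypothetical usage with a torchvision segmentation model:
# heat = ablation_style_cam(seg_model, seg_model.backbone.layer4, img_tensor, class_idx=5)
```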
Other gradient-free CAM strategies have also emerged: EigenCAM, which computes the principal components of the network activations, has set new performance benchmarks on ImageNet whilst being more efficient [43]. ScoreCAM, another approach, employs a unique two-step process that combines activation maps as masks on the original image to generate a combined output [44]. These are the three leading CAM methods that use activations only. To evaluate CAM effectiveness, several metrics are used, including Deletion, Insertion, and a weakly supervised task [45]. Deletion quantifies the change in confidence when different regions of an image are removed by setting pixels to 0. Insertion measures the confidence change when the region of interest is added to an image with no surrounding context. The literature shows differing Deletion and Insertion methodologies with varying perturbation methods. Specifically, the Most Relevant First (MoRF) imputation strategy introduced by Tomsett et al. [46] involves removing the most relevant pixels first; these regions will later be used in the WSDE task. The classical versions of Deletion and Insertion mentioned previously, where pixels are set to 0, are used here, as opposed to the blurring of Rong et al. [47]. This is to get a true baseline for the comparison of CAM methods. In our weakly supervised task presented in this paper, CAMs are converted to key points, taking inspiration from Ryou and Perona's work [48], and are then evaluated using the pointing game from Zhang et al. [49]. These are further explained in Section IV. From the literature, this paper will compare AblationCAM and ScoreCAM using Deletion and Insertion on our collected precision spraying evaluation dataset. EigenCAM could be used, but as it is class-indiscriminate it visualizes poorly when there are multiple classes in each image, unlike ScoreCAM and AblationCAM, which provide class-specific information. After evaluating the effectiveness of both CAM methods and identifying the best one, a domain-specific WSDE as described in Section IV will be completed. Inference-only feature fusion is proposed to identify whether adding earlier information from Deep Learning models to CAM generation is more interpretable than using just the traditional output. Further details on the semantic segmentation architectures and inference-only feature fusion are given in Section IV. ## III Dataset The dataset is publicly available at [https://github.com/Harry-Rogers/PSIE](https://github.com/Harry-Rogers/PSIE). ### _XY spot sprayer_ The dataset contains images of trays of 4 evenly spaced lettuce with randomly sown chickweed and meadowgrass that are commonly found in fields. The precision sprayer evaluated in this paper is called the XY spot sprayer. Syngenta developed this as an experimental Agri-robot precision spraying system. The system uses an XY gantry system with an adjustable floor to spray at differing heights and locations. Added to the system is a Canon 500D camera to capture images pre- and post-spraying. Syngenta recommended a spraying height of 30 cm from the tray bed with a 3-bar pressure and a spray time of 8 milliseconds. The dataset is made up of 176 images of trays that contain lettuce, chickweed, and meadowgrass. A tray was placed into the precision spraying system and an image was taken from the attached camera. After image capture, the precision sprayer, controlled by an expert, sprayed the target chickweed and meadowgrass once. After spraying was complete another image was captured.
Therefore, this dataset looks at the accuracy of the targeting within the precision spraying system and can be used to identify coverage and deposition values. All trays within the dataset were stored in a greenhouse with misting on, this means that lettuce, chickweed, or meadowgrass may look sprayed but are just wet and are not actually sprayed by the precision sprayer. This adds a layer of complexity to the data which may resemble real world situations and may ensure precision spraying deposits can be identified in real world scenarios. ### _Semi-automatic segmentation annotation_ The collected data was annotated for a semantic segmentation task as an efficient way to label the data collected. Due to the granularity of the images as shown in Figure 1, segmentation of individual droplets is difficult or not possible for smaller droplets, but segmentation of sprayed objects for this task is more effective. The classes include background, lettuce, chickweed, meadowgrass, sprayed lettuce, sprayed chickweed, and sprayed meadowgrass. A background class has been added as it allows for the semantic segmentation model to assign a class to all pixels and generalize better. From the ground truth labels several statistics can be identified. For example, there are a total of 9542 annotations with 398 instances of Lettuces, 3098 instances of Chickweed, 3529 instances of Meadowgrass, 310 instances of sprayed Lettuces, 1732 instances of Sprayed Chickweed, 299 instances of Sprayed Meadowgrass, and 176 background instances. The data is split into a 160 images for training, and 16 images for test. As images are pre- and post-spraying there can be images where all class instances exist. From the dataset spray deposition and targeting statistics can be identified. Shown in Table I are the coverage values for the entire dataset with the hit rate and miss rate as a percentage. It can be seen that on average lettuce covers an area of 41.1 \\(CM^{2}\\), chickweed covers 19.5 \\(CM^{2}\\), and meadowgrass covers Fig. 1: Example of a single spray deposit. 13.3 \\(CM^{2}\\). With the dataset collected lettuce is hit 93.1% of the time (in error as it should not be sprayed), chickweed is hit 76.1% of the time, and meadowgrass is hit 16.9% of the time. These are calculated using the number of instances of sprayed objects in the given class. The dataset has an unbalanced class representation, like real-world scenarios where systems are designed to spray weeds in general. It also includes misfires and hits on non-target lettuce, mimicking the challenges a real system would face, such as inertia and other real-world issues. Data was annotated semi-automatically as semantic segmentation labels are labor intensive. First a human labeler created bounding boxes around the lettuce and clusters of weeds regardless of type as shown in Figure 2 in the top left. Clusters are defined as any weeds that overlap with each other. Using these bounding boxes objects were segmented out of the image, shown in the top right of Figure 2, and passed to a Segment Anything Model (SAM) [50] that was then able to segment each target with a higher granularity than when given the original image, an example is shown in Figure 2 in the bottom left. When using SAM, due to the nature of the image, most of the segmentation's when not using the clusters of weeds or individual lettuce lead to unsatisfactory results, as shown in Figure 2 in the bottom right. These labels were checked by the human labeler. 
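As an illustration of this box-prompted SAM step, the following minimal sketch segments human-drawn boxes with the publicly released segment-anything package. The checkpoint name follows the public SAM release, while the image path, boxes, and class ids are placeholder assumptions rather than the actual annotation tooling used for this dataset.

```python
import numpy as np
import cv2
from segment_anything import SamPredictor, sam_model_registry

# Placeholder inputs: an RGB tray image and human-drawn (x0, y0, x1, y1) boxes.
image = cv2.cvtColor(cv2.imread("tray_post_spray.jpg"), cv2.COLOR_BGR2RGB)
boxes = {1: (120, 80, 340, 290),      # e.g. a lettuce
         2: (400, 150, 520, 260)}     # e.g. a cluster of chickweed

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)
predictor.set_image(image)

label_map = np.zeros(image.shape[:2], dtype=np.uint8)    # 0 = background
for class_id, box in boxes.items():
    masks, scores, _ = predictor.predict(box=np.array(box), multimask_output=True)
    best = masks[np.argmax(scores)]                      # highest-scoring candidate mask
    label_map[best] = class_id                           # provisional label, later checked by hand
```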
This method and pipeline of labelling is effective as demonstrated by the ground truth shown in Figure 3. In the labelled image it can be seen not all chickweed or meadowgrass is sprayed despite being targeted and all lettuce are sprayed in error. ### _Deposition Estimation_ To be able to estimate deposition values the quantification of a single spray actuation needs to be validated. Therefore, quantifying a single spray actuation from the precision spraying system was completed by spraying into a container that was weighed by an analytical balance. The system sprayed 100 times into 10 identical containers. From these findings the average quantity sprayed per deposit is 20.9 \\(\\mu\\)L, standard deviation is 0.16\\(\\mu\\)L, standard error is 0.05\\(\\mu\\)L and variance is 0.02\\(\\mu\\)L thus, the average weight will be used. Visually, a single deposit on a target chickweed can be shown in Figure 1. Following this, a WSDE task has been added to the test set of the data. Key point annotations have been added. These locations are sprayed locations and are center points of spray actuations that are a minimum appropriate distance from each other considering the XY spot sprayer specifications. This task is used to be able to find an estimation of deposition values in a class-wise manner without labelling all instances in the dataset. Figure 4 shows a labelled example with a number of points on lettuces that are sprayed in error by the XY spraying system. To calculate the ground truth spraying weight for each image the number of keypoints is multiplied by the average spray actuation weight for each class. Therefore, in the example Figure 4, lettuce (in red) has been sprayed with 229.9 \\(\\mu\\)L in error, chickweed (in blue) has been sprayed with 209.0 \\(\\mu\\)L, and meadowgrass (in purple) has been sprayed with 20.9 \\(\\mu\\)L. In total the test set contains 1212.2 \\(\\mu\\)L for lettuce, 1630.2 \\(\\mu\\)L for chickweed, and 188.1 \\(\\mu\\)L for meadowgrass creating a total of 3030.5 \\(\\mu\\)L for the test set. Fig. 4: Image labelled with keypoints for Weakly Supervised Deposition Estimation using the centre of spray actuations. Fig. 3: RGB image Figure 3a against ground truth semantic segmentation Figure 3b. Fig. 2: Semi automatic process for segmentation annotation using SAM. ## IV XAI Pipeline To be able to interpret and complete a domain-specific WSDE task an XAI pipeline has been developed. The pipeline includes the usage of inference-only feature fusion. Within the pipeline each model and feature fusion methodology is thoroughly evaluated using segmentation and CAM metrics. ### _Segmentation Architectures_ Experiments have been conducted with two semantic segmentation Deep Learning architectures, DeepLabV3 [51], and Fully Convolutional Network (FCN) [52]. The choice of these semantic segmentation architectures is informed from their successful deployment within agriculture [53]. These architectures use differing CNN backbones, EfficientNet-B0 [54], MobileNetV3 [55], and ResNet50 [56]. Pretrained weights from PyTorch for DeepLabv3 ResNet50, MobileNetV3, and FCN ResNet50 architectures are publicly available and have been used. Using the same publicly available recipe as PyTorch we train the DeepLabV3 EfficientNet-B0, FCN EfficientNet-B0, and FCN MobileNetV3 on the Pascal class labels in the COCO dataset [57, 58]. These architectures are also trained with an auxiliary loss. 
When constructing the auxiliary loss for each architecture, that is not pretrained, it is noteworthy to mention where the auxiliary stems from. Therefore, in the EfficientNet-B0 architecture the auxiliary stem branches from stage 3 of the backbone, the MobileNetV3 and ResNet50 have the same stem points as the pretrained architectures for the pretrained DeepLabV3 and FCN at the second convolutional layer and layer 3, respectively. ### _Inference-Only feature fusion_ In this paper inference-only feature fusion is explored with XAI. The semantic segmentation architectures used, utilize an auxiliary loss. Therefore, these are fused together during inference only. This means that during training, the model is optimized without the need of additional layers or computation. Two primary fusion techniques are explored: concatenation and multiplication. Through concatenation, the auxiliary and main architecture outputs are combined along the channel dimension, allowing for the integration of information from both sources. Alternatively, multiplication involves element-wise multiplication of corresponding feature maps from the auxiliary and main outputs, facilitating a more intricate interaction between the two sets of features. These techniques are compared not only with the segmentation metrics but also with CAM metrics. Therefore, this allows for the identification of which method is the most interpretable as well as best at the domain specific WSDE task. Essentially, more of the architecture that is typically not used will be used. As we believe this will have a positive impact on the overall performance. This will be compared to the baseline of the main architecture output in Section V. ### _Weakly Supervised Deposition Estimation_ To enable the domain-specific WSDE task, key points that are the center points of spray actuations have been labelled. Each key point labelled is an appropriate distance from each other considering the precision spraying specifications, meaning key points must be a minimum distance from each other. This task is used to be able to find an estimation of deposition values in a class-wise manner with the average spray actuation weight. To generate predictions for WSDE the Deep Learning model prediction and the top 10% region of interest from the CAM for the specified class are multiplied. Semantic segmentation CAMs can have pixels that are not included in the desired class, therefore multiplying the CAM with the prediction allows for a better prediction. In Figure 4(a) is an example model prediction, next to it is an example of a CAM for sprayed chickweed in Figure 4(b), finally the resulting islands from the multiplication of these is in Figure 4(c). After island creation, three clustering methods are compared to find the most accurate points on the resulting image. The resulting regions of interest within the image could have several points for one cluster of weeds as shown in Figure 4(c). As weeds were sprayed once this means clustering points that are close to each other to ensure only one prediction is made. To cluster the islands three methods are used: a baseline method using the centre points of all islands as this is not technically a clustering method, Affinity Propagation, and a thresholding methodology where the centre points of islands are removed if they are within a specified distance [59]. The threshold distance is prior information considering the precision spraying specifications that can be altered to fit differing precision spraying systems. 
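The following sketch illustrates this prediction step: the class CAM is thresholded to its top 10%, intersected with the model's predicted mask, the connected islands are reduced to centre points, and the points are then clustered either with Affinity Propagation or with the distance threshold. The number of surviving points multiplied by the average actuation weight gives the estimated deposition. The minimum-distance value is a placeholder for the sprayer-specific figure, and this is an illustrative rendering of the described procedure rather than the exact code of the pipeline.

```python
import numpy as np
from scipy import ndimage
from sklearn.cluster import AffinityPropagation

SPRAY_UL = 20.9      # average weight of a single spray actuation, in microlitres
MIN_DIST = 40.0      # assumed minimum pixel distance between actuations (sprayer-specific)

def estimate_deposition(pred_mask, cam, class_id, method="affinity"):
    """Return the estimated deposition (in microlitres) for one class of one image."""
    top10 = cam >= np.quantile(cam, 0.90)                # top 10% region of the class CAM
    islands = (pred_mask == class_id) & top10            # agreement between CAM and prediction
    labelled, n = ndimage.label(islands)
    if n == 0:
        return 0.0
    centres = np.array(ndimage.center_of_mass(islands, labelled, range(1, n + 1)))
    if method == "affinity" and len(centres) > 1:
        centres = AffinityPropagation(random_state=0).fit(centres).cluster_centers_
    elif method == "threshold":
        kept = []
        for c in centres:                                # drop centres closer than MIN_DIST to a kept one
            if all(np.linalg.norm(c - k) >= MIN_DIST for k in kept):
                kept.append(c)
        centres = np.array(kept)
    return len(centres) * SPRAY_UL                       # one actuation assumed per remaining point
```

Summing such per-class estimates over an image and comparing them with the keypoint-derived ground truth weights yields the absolute differences reported in Section V.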
WSDE evaluation is based on the original pointing game which can be used to evaluate if a CAM overlaps with objects in an image [49]. The pointing game uses an accuracy measure as shown: \\[Accuracy=\\frac{\\#\\text{Hits}}{\\#\\text{Hits}+\\#\\text{Misses}} \\tag{1}\\] where a \\(hit\\) is counted if the maximum value in a CAM lies inside of a bounding box annotation of the object class, if not a \\(miss\\) is counted. However, for WSDE the original pointing game is not particularly useful as WSDE contains key points and not bounding boxes. Therefore, a bounding box is created for each point with precision sprayer specifications to count Fig. 5: Prediction from Deep Learning model Figure 4(a) with CAM for sprayed chickweed Figure 4(b) with resulting island images after thresholding and combination in Figure 4(c). hits. This will be recorded as a mean hit rate across all classes in the results. After calculating the mean hit rate, deposition values can be calculated by multiplying the number of predictions by 20.9\\(\\mu\\)L. This will then be evaluated against the ground truth weight using an absolute difference. The absolute difference is used as this illustrates the difference regardless of if the prediction is larger than the ground truth or lower as both are undesirable. The average of the absolute differences across each class for the entire test set will be reported in a class-wise manner in Section V with the best clustering methodology for each model and inference-only fusion method. This means that the best scoring models have a high mean hit rate and a low absolute difference. Using the best inference-only feature fusion method for each model there will be an image-wise test to find the best WSDE prediction on a single tray. Using the best test prediction result, the predictions will be converted into a coverage value to show that coverage can also be created. ### _Segmentation Metrics_ To evaluate semantic segmentation several different metrics are used. The metrics used are Class-wise Dice score, Mean Intersection over Union (MIoU), Pixel-wise accuracy, and Pixel Micro F1 score. Whilst there is a background class, results for it are not reported. * **Class-wise Dice Score**: The Class-wise Dice score measures the similarity between the predicted segmentation and the ground truth for each class individually. It calculates the overlap between the predicted and ground truth masks for each class, showing segmentation accuracy for each specific class. This can be described as follows: \\[Dice=\\frac{2\\times|A\\cap B|}{|A|+|B|}\\] (2) Where A and B are the prediction and ground truth segmentation's, respectively. * **Mean Intersection over Union (mIoU)**: mIoU computes the average IoU across all classes. IoU measures the overlap between the predicted and ground truth segmentation masks, providing an overall assessment of segmentation accuracy across all classes. IoU can be calculated as: \\[IoU(A,B)=\\frac{A\\cap B}{A\\cup B}\\] (3) Where A and B are the prediction and ground truth segmentation's, respectively. From this an average across all classes is taken. When compared to Dice score mIoU treats all errors (False Negative, or False Positive) more symmetrically whereas Dice score is more sensitive to class imbalances as True positives are weighted more heavily. * **Pixel-wise Accuracy**: Pixel-wise accuracy measures the proportion of correctly classified pixels in the entire image for each class. 
This metric can be computed as: \\[PixelAccuracy=\\frac{\\#\\text{Correctly Classified Pixels}}{\\#\\text{Pixels}}\\] (4) This metric is important when converting the model predictions to real world coverage as the shape of the class does not need to be considered. * **Micro F1 Score**: The micro F1 score across multiple classes is calculated by first determining True Positives (TP), False Positives (FP), and False Negatives (FN) for each pixel. TP is the number of pixels classified correctly by the model; FP is counted when the model incorrectly predicts a pixel as belonging to a certain class, but that pixel actually belongs to a different class, FN is counted when the model fails to correctly predict a pixel as belonging to a certain class, instead predicting it as belonging to a different class. Using these, precision (\\(P\\)) and recall (\\(R\\)) are computed for each class using the formulas: \\[P=\\frac{TP}{TP+FP}\\] (5) \\[R=\\frac{TP}{TP+FN}\\] (6) Subsequently, the F1 score for each class is obtained using the formula: \\[F1=2*\\frac{(P*R)}{(P+R)}\\] (7) To calculate the micro F1 score, we aggregate the TP, FP, and FN scores across all classes before computing precision and recall. This means summing TP, FP, and FN for all classes and then using these aggregated counts to compute a single precision and recall value: \\[P_{micro}=\\frac{\\sum TP}{\\sum TP+\\sum FP}\\] (8) \\[R_{micro}=\\frac{\\sum TP}{\\sum TP+\\sum FN}\\] (9) The micro F1 score is then calculated using these micro-averaged precision and recall values: \\[F1_{micro}=2*\\frac{(P_{micro}*R_{micro})}{(P_{micro}+R_{micro})}\\] (10) Micro F1 is selected as the overall performance of models is imperative. ### _CAM Metrics_ CAMs are evaluated using Deletion, Insertion, and the domain-specific WSDE task. For Deletion and Insertion, the confidence from a Deep Learning model prediction is recorded. Using MoRF areas from a CAM are removed or inserted increasing the area by 1% for both until the entire image is deleted or inserted. After plotting the confidence values from the Deep Learning model against the amount of each image that is deleted or inserted, the Area Under the Curve (AUC) is calculated using the Trapezoidal Rule as follows: \\[AUC=\\frac{h}{2}\\left[y_{0}+2\\left(y_{1}+y_{2}+y_{3}+\\cdots+y_{n-1}\\right)+y_ {n}\\right] \\tag{11}\\]where y is the prediction confidence, n is equal to the number of plotted points, and h is equal to the increase in deletion or insertion change. Therefore, Deletion scores that are lower are better and Insertion scores that are higher are better. An average across all classes for each is taken and reported in Section V. ## V Results When comparing the outcomes of our experiments, we have designated the feature fusion method for each model as follows: OUT is the main traditional output (the baseline), AUX is the auxiliary output, ADD is the concatenation of the output and auxiliary, and MULTI indicates the multiplication of both the output and auxiliary. These designations are capitalized for clarity. The results are split into semantic segmentation scores, inference-only feature fusion interpretability, and WSDE. ### _Semantic Segmentation_ Table II presents the class-wise pixel accuracies and micro F1 scores, Table III displays the class-wise Dice scores and mIoU scores. Notably, feature fusion scores that surpass the baseline are highlighted in bold for enhanced clarity for both tables. 
A significant observation from the results concerns the AUX feature fusion's inability to predict the final class, sprayed meadowgrass, across all architectures tested, with all evaluation metrics. This suggests that the layers between the auxiliary output and the traditional output are crucial in the learning process of the final class across both architectures tested with the EfficientNet-B0, MobileNetV3, and ResNet50 backbones. The DeepLabV3 (EfficientNet-B0) with the ADD feature fusion achieved the highest micro F1 score across all models and backbones at 98.81%. When comparing with other feature fusion methods within the same backbone and architecture, the ADD fusion outperforms all other fusion methods across most classes (except sprayed meadowgrass) in terms of pixel accuracies. Furthermore, the ADD and MULTI feature fusions exhibit higher mIoU scores compared to the baseline, at 67.0%, 65.9%, and 64.9%, respectively. Moreover, both ADD and MULTI feature fusion methods yield higher Dice scores across most classes, with exceptions for lettuce in the case of MULTI and sprayed meadowgrass for ADD and MULTI. For the DeepLabV3 (MobileNetV3), the ADD feature fusion demonstrates better pixel accuracies across most classes compared to other fusion methods and the baseline in Table II. The improvement in micro F1 score with the ADD feature fusion is marginal compared to the baseline: ADD scores 98.47% and the baseline scores 98.23%. There is also improvement in the Dice scores for the ADD feature fusion across the majority of classes when compared to the baseline. In the case of the DeepLabV3 (ResNet50), the baseline achieves high accuracy, with no inference-only feature fusion method showing improvement in terms of segmentation metrics. This is also reflected in the mIoU score, which reaches 76.7%, the highest among all architectures and backbones tested. The FCN (EfficientNet-B0) architecture, using the ADD feature fusion, shows improvements in pixel accuracies, Dice scores, and micro F1 scores compared to the baseline. The largest increase in pixel accuracy is for meadowgrass, at 10.11%. Micro F1 increases from 97.91% to 98.61%, and Dice for sprayed chickweed increases by 7.3% when using the ADD feature fusion. Similarly, the MULTI feature fusion improves in terms of pixel accuracies and micro F1 score. Micro F1 increases from 97.91% to 98.23% and the pixel accuracy for meadowgrass increases by 9.7% when using the MULTI feature fusion. The FCN (MobileNetV3) improves from the baseline with the ADD feature fusion when considering pixel accuracies and Dice scores. Pixel accuracy increases by 10.56% for chickweed, and Dice increases by 7.1% for sprayed chickweed with the ADD feature fusion. The MULTI feature fusion also improves from the baseline, with micro F1 scores rising from 98.16% to 98.40%. Similar to the DeepLabV3 (ResNet50), the FCN (ResNet50) baseline demonstrates high accuracy, with no notable improvements with inference-only feature fusion methods. These results suggest that with the ResNet50 backbone, the best outcomes are achieved when feature fusion is not employed, possibly due to the ResNet50 backbone being larger than the other tested backbones. ### _Inference-only feature fusion interpretability_ Figure 6 shows that AblationCAM generally produces more interpretable CAMs than ScoreCAM. It can be seen that all models for AblationCAM are interpretable, as Deletion is lower than Insertion, except for the DeepLabV3 (MobileNetV3) using the baseline.
ScoreCAM has six models that are not interpretable. To compare CAM methods effectively the difference between Deletion and Insertion is used, larger differences are better. When considering the DeepLabV3 (EfficientNet-B0), using AblationCAM, the easiest feature fusion to interpret is the AUX which has a difference of 16.6%, this is the highest across all models tested. The next easiest to interpret is the MULTI fusion with a difference of 8.5%. Next the baseline which scores a difference of 5.5%, then finally ADD scores a difference of 0.2%. When using ScoreCAM, the fusion differences are 5.9% for MULTI, 4.3% for the baseline, 4.0% for ADD, and 3.0% for AUX. This means with the DeepLabV3 (EfficientNet-B0) it is easier to interpret the CAMs with AblationCAM as the differences for the baseline, AUX, and MULTI are larger. Moving to the DeepLabV3(MobileNetV3) with AblationCAM the only model that is not interpretable appears when using the baseline. This means that models are easier to interpret when using inference-only feature fusion as the AUX scores a difference of 6.5%, MULTI scores a difference of 5.2%, and ADD scores 0.4%. Whereas the baseline has the same score for Deletion and Insertion thus not interpretable. For ScoreCAM, however, each method is interpretable with AUX scoring a difference of 2.5%, ADD scoring 2.1%, the baseline scoring 2.0%, and MULTI scoring 1.7%. The DeepLabV3 (ResNet50) is interpretable with both AblationCAM and ScoreCAM as all fusion methods have a lower average Deletion than Insertion. The score differences, when considering AblationCAM, are 12.6% for AUX, 9.1% for MULTI, 7.1% for ADD, and 6.7% for the baseline. ScoreCAM has differences of 6.0% for the ADD fusion, 4.3% for the baseline, 1.8% for MULTI, and 1.3% for AUX. This would suggest that as the differences are much larger for AblationCAM when compared to ScoreCAM for all backbones in the DeepLabV3 architecture AblationCAM is better at generating representative CAMs for semantic segmentation. Similarly, the FCN (EfficientNet-B0) with AblationCAM has interpretable CAMs where the difference between Deletion and Insertion for fusion methods is better than ScoreCAM methods. Specifically, with AblationCAM, the FCN (EfficientNet-B0) with the MULTI fusion has a difference of 13.9%, ADD has a difference of 6.5%, AUX has a difference of 3.9%, and the baseline has a difference of 2.0%. This means that all feature fusion methods are more interpretable than the baseline with AblationCAM. However, when considering ScoreCAM, the baseline has a difference of 5.7%, and the other tested fusion methods are not interpretable. The FCN (MobileNetV3) with AblationCAM has all fusion methods as interpretable. Notably, the MULTI fusion scores a difference of 13.9%, ADD scores 6.5%, AUX scores 3.9%,and the baseline has a difference of 2.0%. Again, showing that AblationCAM is more interpretable with the proposed inference-only feature fusion. Furthermore, not all methods are interpretable with ScoreCAM, AUX scores a difference of 3.5%, ADD scores 2.6%, and the baseline and MULTI are not interpretable. Finally, the FCN (ResNet50) with AblationCAM has interpretable CAMs for all methods. In particular, the MULTI scores a difference of 12.1%, AUX has a difference of 8.0%, ADD scores 6.2%, and the baseline scores 6.0%. ScoreCAM CAMs are not all interpretable but the baseline scores a difference of 4.8%, ADD has a score of 3.8%, and MULTI has a score of 1.0%. 
Thus, it can be concluded that for the FCN architecture, feature fusion and AblationCAM have a positive impact, with CAMs being more interpretable compared to the baseline. As AblationCAM is easier to interpret using Deletion and Insertion in comparison to ScoreCAM, AblationCAM will be exclusively used in the WSDE task. ### _Weakly Supervised Deposition Estimation_ In Table IV the AblationCAM average \\(\\mu\\)L absolute differences and mean hit rate are reported. The best results are in bold. As shown in Table IV, the best-performing model for mean hit rate and total average absolute difference is the FCN (EfficientNet-B0) with the ADD feature fusion using Affinity propagation. The total average absolute difference is 156.8 \\(\\mu\\)L and the mean hit rate is 38.3%. When looking at class-specific average absolute differences, the ADD fusion scores 71.0 \\(\\mu\\)L for lettuce, 66.0 \\(\\mu\\)L for chickweed, and 20.9 \\(\\mu\\)L for meadowgrass. The best score for the backbone and model when considering the lettuce class is the baseline scoring 63.3 \\(\\mu\\)L, and the lowest for the meadowgrass class is from the MULTI fusion with a score of 18.3 \\(\\mu\\)L. For the DeepLabV3 (EfficientNet-B0), the best total absolute difference is with the AUX feature fusion at 164.6 \\(\\mu\\)L using Affinity propagation. When looking at class-specific average absolute differences, the model scores 39.8 \\(\\mu\\)L for lettuce, 110.3 \\(\\mu\\)L for chickweed, and 18.3 \\(\\mu\\)L for meadowgrass. Fig. 6: AblationCAM average Deletion and Insertion (Figure 6a) against ScoreCAM average Deletion and Insertion (Figure 6b). The lowest score for the backbone and model when considering chickweed is the AUX feature fusion scoring 50.3 \\(\\mu\\)L. However, the best mean hit rate is with the ADD feature fusion at 36.1% using the center points. This shows the deposition estimation from the AUX fusion method is closer, but it is not as accurate in its spatial location when compared to the ADD feature fusion. The DeepLabV3 (MobileNetV3) has the best total absolute difference with the ADD feature fusion using centers at 182.9 \\(\\mu\\)L. When considering lettuce, the average absolute difference with the ADD fusion is 94.2 \\(\\mu\\)L, 66.0 \\(\\mu\\)L for chickweed, and 23.5 \\(\\mu\\)L for meadowgrass. However, the best mean hit rate is from the baseline with the center points at 33.8%. The DeepLabV3 (ResNet50) performs best with the ADD feature fusion using the center points, at 214.2 \\(\\mu\\)L and 37.4% for the total absolute difference and mean hit rate, respectively. Considering class-specific absolute difference scores, the AUX fusion scores best for lettuce with 52.6 \\(\\mu\\)L, the MULTI scores best for chickweed at 100.0 \\(\\mu\\)L, and the AUX scores best with 18.3 \\(\\mu\\)L for meadowgrass. With the FCN (MobileNetV3), the best results are with the AUX feature fusion for both total absolute difference and mean hit rate, at 292.6 \\(\\mu\\)L and 34.4%, respectively, using the center points. However, in a class-specific absolute difference comparison, none of the AUX results are best. The best lettuce absolute difference is 65.7 \\(\\mu\\)L with the MULTI, for chickweed the MULTI scores best at 126.1 \\(\\mu\\)L, and the ADD scores best for meadowgrass with 23.5 \\(\\mu\\)L. Finally, the FCN (ResNet50) performs best with a total absolute difference of 245.6 \\(\\mu\\)L with the baseline using Affinity propagation. However, the baseline does not perform best in all classes.
It only does so for meadowgrass, with a difference of 20.9 \\(\\mu\\)L. The best-performing fusion for the lettuce and chickweed classes is ADD, with 94.4 \\(\\mu\\)L and 118.3 \\(\\mu\\)L, respectively. The best mean hit rate for the model uses the ADD fusion and the center points, scoring 32.6%. Following this, an exploration into the best absolute difference using image-specific tests was completed and the results are detailed in Table V. The results highlight that the baseline is not as effective as the feature fusion methods AUX, ADD, or MULTI. It is also evident from Table V that Affinity Propagation is the predominant clustering method associated with the majority of the best outcomes. The results from Table V also indicate that the most common model is the FCN (ResNet50) with the MULTI feature fusion method. All scores are within 100 \\(\\mu\\)L and exhibit good mean hit rates. The highest mean hit rate, 66.6%, is achieved by the FCN (EfficientNet-B0) with AUX feature fusion using center points. However, this model does not record the lowest absolute difference. The smallest absolute difference, 20.9 \\(\\mu\\)L, is observed in the FCN (ResNet50) using MULTI feature fusion and Affinity propagation. Using the model with the lowest absolute difference, further analysis is carried out. Table VI includes calculations of predicted coverage, predicted deposition estimation, and comparisons to the ground truth for that particular image. These results are visually represented in Figure 7. The FCN (ResNet50) with MULTI feature fusion provides very accurate coverage rates for sprayed lettuce, within 0.02 \\(CM^{2}\\). The coverage for chickweed, meadowgrass, and sprayed chickweed remains quite precise, within 0.35 \\(CM^{2}\\), 1.26 \\(CM^{2}\\), and 13.27 \\(CM^{2}\\), respectively. Visually, the coverage results in Figure 7a appear consistent across all classes. However, it is worth noting that the sprayed meadowgrass class poses prediction challenges. Moreover, the deposition values for the lettuce and chickweed classes are correct, despite the key points not being precisely identified, achieving mean hit rates of 49.8% and 90.8%, respectively. Finally, the model does not predict sprayed meadowgrass, so no deposition estimate is produced for that class. ## VI Conclusion This research marks an initial exploration into evaluating post-spraying effectiveness in precision agriculture without traditional tracers or WSPs for deposition quantification. We have demonstrated the capability to accurately distinguish and classify sprayed weeds and lettuces, and to estimate spray weights on these targets. Among the models evaluated, the DeepLabV3 (ResNet50) without feature fusion was the most accurate for segmenting lettuces and weeds across all metrics. While there is not a definitive model for interpretability, AblationCAM notably outperformed ScoreCAM. For precise deposition estimation, the FCN (ResNet50) with ADD feature fusion was the most effective. [Online]. Available: [https://www.mdpi.com/2504-4490/6/1/14](https://www.mdpi.com/2504-4490/6/1/14) * [19] R. Raja, D. C. Slaughter, S. A. Fennimore, and M. C. Siemens, \"Real-time control of high-resolution micro-net sprayer integrated with machine vision for precision weed control,\" _Biosystems Engineering_, vol. 228, pp. 31-48, 2023. [Online]. Available: [https://www.sciencedirect.com/science/article/pii/S153751102300375](https://www.sciencedirect.com/science/article/pii/S153751102300375) * [20] A. S. Hanif, X. Han, and S.-H.
Yu, \"Independent control spraying system for uav-based precise variable sprayer: A review,\" _Drones_, vol. 6, no. 12, 2022. [Online]. Available: [https://www.mdpi.com/2504-446X/6/1/2/383](https://www.mdpi.com/2504-446X/6/1/2/383) * [21] L. Wang, W. Song, Y. Lan, H. Wang, X. Yue, X. Yin, E. Luo, B. Zhang, Y. Lu, and Y. Tang, \"A smart droplet detection approach with vision sensing technique for agricultural aviation application,\" _IEEE Sensors Journal_, vol. 21, no. 16, pp. 17 508-17 516, 2021. * [22] G. Wang, Y. Lan, H. Qi, P. Chen, A. Hewitt, and Y. Han, \"Field evaluation of an unmanned aerial vehicle (uav) sprayer: effect of spray volume on deposition and the control of pests and disease in wheat,\" _Pest Management Science_, vol. 75, no. 6, pp. 1546-1555, 2019. [Online]. Available: [https://onlinelibrary.wiley.com/doi/abs/10.1002/ps.5321](https://onlinelibrary.wiley.com/doi/abs/10.1002/ps.5321) * [23] G. Wang, Y. Lan, H. Yuan, H. Qi, P. Chen, F. Ouyang, and Y. Han, \"Comparison of spray deposition, control efficacy on wheat aphids and working efficiency in the wheat field of the unmanned aerial vehicle with boom spayer and two conventional knapsack spravers,\" _Applied Sciences_, vol. 9, no. 2, 2019. [Online]. Available: [https://www.mdpi.com/2076-3417/9/2/218](https://www.mdpi.com/2076-3417/9/2/218) * [24] J. Martinez-Guanter, P. Aguera, J. Aguera, and M. Perez-Ruiz, \"Sparay and economics assessment of a uav-based ultra-low-volume application in olive and citrus orchards,\" _Precision Agriculture_, vol. 21, no. 1, pp. 226-243, Feb 2020. [Online]. Available: [https://doi.org/10.1007/s11119-09965-7](https://doi.org/10.1007/s11119-09965-7) * [25] K. Zheng, X. Zhao, C. Han, Y. He, C. Zhai, and C. Zhao, \"Design and experiment of an automatic row-oriented spraying system based on machine vision for early-stage maize corps,\" _Arginature_, vol. 13, no. 3, 2023. [Online]. Available: [https://www.mdpi.com/20777-0472/13/3/69/1](https://www.mdpi.com/20777-0472/13/3/69/1) * [26] L. Liu, Y. Liu, X. He, and W. Liu, \"Precision variable-rate spraying robot by using single 3d lidar in orchards,\" _Agronomy_, vol. 12, no. 10, 2022. [Online]. Available: [https://www.mdpi.com/2073-4395/1/2/01/2509](https://www.mdpi.com/2073-4395/1/2/01/2509) * [27] S. Gao, G. Wang, Y. Zhou, M. Wang, D. Yang, H. Yuan, and X. Yan, \"Water-soluble food dye of alura red as a tracer to determine the spray deposition of pesticide on target crops,\" _Pest management science_, vol. 75, no. 10, pp. 2592-2597, 2019. * [28] R. Raja, T. T. Nguyen, D. C. Slaughter, and S. A. Fennimore, \"Real-time weed-crop classification and localisation technique for robotic weed control in lettuce,\" _Biosystems Engineering_, vol. 192, pp. 257-274, 2020. * [29] J. Liu, I. Abbas, and R. S. Noor, \"Development of deep learning-based variable rate agrochemical spraying system for targeted wes control in strawberry crop,\" _Agronomy_, vol. 11, no. 8, p. 1480, 2021. * [30] Omer Barns Ozlojoymak, \"Development and assessment of a novel camera-integrated spraying needle nozzle design for targeted microscope spraying in precision weed control,\" _Computers and Electronics in Agriculture_, vol. 199, p. 107134, 2022. [Online]. Available: [https://www.sciencedirect.com/science/article/pii/S0168169922004513](https://www.sciencedirect.com/science/article/pii/S0168169922004513) * [31] L. Xun, J. Campos, B. Salas, F. X. Fabregas, H. Zhu, and E. 
Gil, \"Advanced spraying systems to improve pesticide saving and reduce spray drift for apple orchards,\" _Precision Agriculture_, vol. 24, no. 4, pp. 1526-1546, Aug 2023. [Online]. Available: [https://doi.org/10.1007/s11119-023-10007-x](https://doi.org/10.1007/s11119-023-10007-x) * [32] V. Partel, S. Charan Kakatz, and Y. Panaptizdis, \"Development and evaluation of a low-cost and smart technology for precision weed management utilizing artificial intelligence,\" _Computers and Electronics in Agriculture_, vol. 157, pp. 339-350, 2019. [Online]. Available: [https://www.sciencedirect.com/science/article/pii/S0168169918316612](https://www.sciencedirect.com/science/article/pii/S0168169918316612) * [33] T. Ruigrok, E. van Henten, J. Booij, K. van Boheemen, and G. Koostras, \"Application-specific evaluation of a weed-detection algorithm for plant-specific spraying,\" _Sensors_, vol. 20, no. 24, 2020. [Online]. Available: [https://www.mdpi.com/1424-8220/20/24/7262](https://www.mdpi.com/1424-8220/20/24/7262) * [34] P. R. Sanchez and H. Zhang, \"Evaluation of a cmu-based modular precision sprayer in broadcast-seeded field,\" _Sensors_, vol. 22, no. 24, 2022. [Online]. Available: [https://www.mdpi.com/1424-8220/22/22/4/9723](https://www.mdpi.com/1424-8220/22/22/4/9723) * [35] F. E. Nasir, M. Tufail, M. Haris, J. Iqbal, S. Khan, and M. T. Khan, \"Precision agricultural robotic sprayer with real-time tobacco recognition and spraying system based on deep learning,\" _PLOS ONE_, vol. 18, no. 3, pp. 1-22, 03 2023. [Online]. Available: [https://doi.org/10.1371/journal.pone.0283801](https://doi.org/10.1371/journal.pone.0283801) * [36] A. Salazar-Gomez, M. Darbyshire, J. Gao, E. I. Sklar, and S. Parsons, \"Towards practical object detection for weed spraying in precision agriculture,\" 2021. * [37] B. Salas, R. Salcedo, F. Garcia-Ruiz, and E. Gil, \"Design, implementation and validation of a sensor-based precise airblast sprayer to improve pesticide applications in orchards,\" _Precision Agriculture_, vol. 25, no. 2, pp. 8658-888, Apr 2024. [Online]. Available: [https://doi.org/10.1007/s11119-023-10097-7](https://doi.org/10.1007/s11119-023-10097-7) * [38] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, \"Grad-CAR: Visual explanations from deep networks via gradient-based localization,\" _International Journal of Computer Vision_, vol. 128, no. 2, pp. 336-359, oct 2019. [Online]. Available: [https://doi.org/10.1007/s2811263-019-01228-7](https://doi.org/10.1007/s2811263-019-01228-7) * [39] A. Chattopadhyay, A. Sarkar, P. Howlader, and V. N. Balasubramanian, \"Grad-cam++: Generalized gradient-based visual explanations for deep convolutional networks,\" in _2018 IEEE winter conference on applications of computer vision (WACV)_. IEEE, 2018, pp. 839-847. * [40] S. Srinivas and F. Fleuret, \"Full-gradient representation for neural network visualization,\" 2019. [Online]. Available: [https://arxiv.org/abs/1905.00780](https://arxiv.org/abs/1905.00780) * [41] H. G. Ramaswamy _et al._, \"Ablation-cam: Visual explanations for deep convolutional network via gradient-free localization,\" in _proceedings of the IEEE/CVF winter conference on applications of computer vision_, 2020, pp. 983-991. * [42] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, \"Imagenet: A large-scale hierarchical image database,\" in _2020 IEEE Conference on Computer Vision and Pattern Recognition_, 2009, pp. 248-255. * [43] M. Bany Muhammad and M. 
Yeasin, \"Eigen-cam: Visual explanations for deep convolutional neural networks,\" _SN Computer Science_, vol. 2, pp. 1-14, 2021. * [44] H. Wang, Z. Wang, M. Du, F. Yang, Z. Zhang, S. Ding, P. Mardziel, and X. Hu, \"Score-cam: Score-weighted visual explanations for convolutional neural networks,\" in _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops_, 2020, pp. 24-25. [45] V. Petisuk, A. Das, and K. Saenko, \"Rise: Randomized input sampling for explanation of black-box models,\" 2018. [Online]. Available: [https://arxiv.org/abs/1806.07421](https://arxiv.org/abs/1806.07421) * [46] R. Tornsett, D. Harborne, S. Chakraborty, P. Gurram, and A. Preece, \"Sanity checks for saliency metrics,\" 2019. * [47] Y. Rong, T. Leemann, V. Borisov, G. Kasneci, and E. Kasneci, \"A consistent and efficient evaluation strategy for attribution methods,\" 2022. * [48] S. Ryou and P. Perona, \"Weakly supervised keypoint discovery,\" 2021. * [49] J. Zhang, S. A. Bargal, Z. Lin, J. Brandt, X. Shen, and S. Sclaroff, \"Top-down neural attention by excitation backprop,\" _International Journal of Computer Vision_, vol. 126, no. 10, pp. 1084-1102, Oct 2018. [Online]. Available: [https://doi.org/10.1007/s11263-017-1059-x](https://doi.org/10.1007/s11263-017-1059-x) * [50] A. Kirillov, E. Mintun, N. Ravi, H. Mao, C. Rolland, L. Gustafson, T. Xiao, S. Whitehead, A. C. Berg, W.-Y. Lo, P. Dollar, and R. Girshick, \"Segment anything,\" 2023. * [51] L.-C. Chen, G. Papandreou, F. Schroff, and H. Adam, \"Rethinking atrous convolution for semantic image segmentation,\" 2017. * [52] E. Shelhamer, J. Long, and T. Darrell, \"Fully convolutional networks for semantic segmentation,\" 2016. * [53] Z. Luo, W. Yang, Y. Yuan, R. Gou, and X. Li, \"Semantic segmentation of agricultural images: A survey,\" _Information Processing in Agriculture_, 2023. [Online]. Available: [https://www.sciencedirect.com/science/article/pii/S2214317323000112](https://www.sciencedirect.com/science/article/pii/S2214317323000112) * [54] M. Tan and Q. Le, \"Efficient: Rethinking model scaling for convolutional neural networks,\" in _International conference on machine learning_. PMLR, 2019, pp. 6105-6114. * [55] A. Howard, M. Sandler, G. Chu, L.-C. Chen, B. Chen, M. Tan, W. Wang, Y. Zhu, R.
Precision spraying evaluation requires automation, primarily in the analysis of post-spraying imagery. In this paper we propose an eXplainable Artificial Intelligence (XAI) computer vision pipeline that evaluates a precision spraying system after spraying, without the need for traditional agricultural assessment methods. The developed system can semantically segment potential targets such as lettuce, chickweed, and meadowgrass, and correctly identify whether targets have been sprayed. Furthermore, the pipeline performs evaluation via a domain-specific Weakly Supervised Deposition Estimation task, allowing class-specific quantification of spray deposit weights in \(\mu\)L. Estimating class-wise coverage rates of spray deposition gives further insight into the effectiveness of precision spraying systems. Our study compares different Class Activation Mapping techniques, namely AblationCAM and ScoreCAM, to determine which is more effective and interpretable for these tasks. In the pipeline, inference-only feature fusion is used to allow for further interpretability and to enable the automation of post-spray evaluation. Our findings indicate that a Fully Convolutional Network with an EfficientNet-B0 backbone and inference-only feature fusion achieves an average absolute difference in deposition values of 156.8 \(\mu\)L across the three classes in our test set. The dataset curated in this paper is publicly available at [https://github.com/Harry-Rogers/PSIE](https://github.com/Harry-Rogers/PSIE).

Keywords: Agri-Robotics, Computer Vision, XAI.
# Stable Isotopes and Debris in Basal Glacier Ice, South Georgia, Southern Ocean

David E. Sugden, Department of Geography, University of Edinburgh, Edinburgh EH8 9XP, Scotland; Chalmers M. Clapperton, Department of Geography, University of Aberdeen, Old Aberdeen AB9 2UF, Scotland; J. Campbell Gemmel, Christ Church College, University of Oxford, Oxford OX1 1DP, England; and Peter G. Knight, Department of Geography, University of Keele, Keele ST5 5BG, England

## Introduction

The aim is to characterize the basal ice sequence exposed in the snouts of some glaciers in South Georgia and to establish its origin. More specifically, we combine a study of the entrained rock debris with an analysis of the δD/δ\({}^{18}\)O characteristics of the ice in order to test the hypothesis that the basal ice sequence has accreted through freezing at the glacier bed. The results are of interest for two main reasons. First, detailed descriptions of rock debris in glaciers are relatively rare, especially in the sub-Antarctic, and this hinders the development of firmly constrained models of glacier sliding, erosion, and deposition. Secondly, co-isotopic analysis of basal glacier ice has not previously been explored in the sub-Antarctic and Antarctic domains.

The approach to the study was to select three representative glaciers in South Georgia. Representing large glaciers with a high altitudinal range is Nordenskjold Glacier (Fig. 1), which is 12 km long and 3.5 km across at its calving snout in Cumberland East Bay. The glacier is nourished in seven cirques etched into the Allardyce Range, which includes the highest mountain in South Georgia, Mount Paget (2936 m). A great deal of the input to the glacier surface comes from snow and ice avalanches from surrounding peaks. Lyell Glacier represents the population of medium-sized glaciers and extends 7.75 km to sea-level from an altitude of 1050 m (Fig. 2). The snout is 2.5 km across; 1 km terminates on land, the remainder is grounded in shallow water. Representing small, low-altitude glaciers is Hodges Glacier, a small cirque glacier approximately 1 km in length and 1 km across. Its upper limit is delimited by the crest of the cirque backwall at 550-600 m, while its snout lies at 340 m.

We examined basal ice sequences at the western side of Nordenskjold Glacier where it terminates on land, on both sides of Lyell Glacier, and at the snout of Hodges Glacier. Field work involved study of structural and stratigraphical relationships and of large-calibre rock debris, and collection of debris-bearing ice samples (c. 50 cm³) for laboratory analysis. The latter were weighed, melted in a sealed polythene bag, and immediately bottled for isotopic analysis. The bottles were sealed with wax until processed by Dr J. Jouzel at the Centre d'Etudes Nucleaires de Saclay in France. The rock debris was dried, weighed, and analysed for its size and lithology in the laboratory.

Figure 1: The location of the three glaciers in South Georgia.

## The Debris-Bearing Sequences

### Nordenskjold Glacier

The debris-bearing sequences exposed on a 26° slope at the margin of Nordenskjold Glacier consist of a series of debris bands separated by layers of ice with thin debris laminations (Figs 3 and 4); they dip up-glacier at angles of 36-56°. The debris bands are 4-15 cm thick and consist mainly of debris or of a cluster of discrete debris bands separated by millimetre-thick ice layers.
The debris bands frequently contain sub-angular, abraded, and striated clasts 0.5-3 cm in size with occasional clasts up to 15 cm. The ice between the debris bands is clear, contains few bubbles, and has crystals 1-2 cm in size. Fine debris occurs within crystals, along crystal boundaries, and in millimetre-thick laminations which can be followed laterally for several metres. At site N11, about 150 separate debris laminations separated by clear ice layers 1-2 cm thick occur in a 2 m thick band. Debris concentrations in the laminated ice vary between 2.6% and 8.8% by weight. Debris <32 mm has a bimodal grain-size distribution with peaks at 16 mm and 63-125 μm; fine silt and clay (<2 μm) is also present (Fig. 5). There are two exceptions: one example of laminated ice (N4) has no material coarser than 4 mm, while one debris band (N5) has no fine material. The ice above the top debris layer is quite different. It is white and bubbly (crystal size 2-4 cm), and contains angular fragments of debris similar to that present as a rust-coloured veneer on the glacier surface.

Figure 3: The basal debris-bearing sequence at the western edge of the snout of Nordenskjold Glacier, February 1985. The person is at the upper junction of the sequence. White bubbly ice above is covered with a veneer of surface rock-fall material.

Figure 4: Profile and sample sites, Nordenskjold Glacier, February 1985. The debris contents (by weight) of the debris bands were: N3, 83.5%; N5, 62.8%; N7, 69.2%; N10, 91.8%. The debris contents of the intervening laminated ice layers were: N4, 8.2%; N6, 2.6%; N8, 8.8%; N9, 4.9%.

### Lyell Glacier

On Lyell Glacier, debris-bearing sequences 6-8 m thick are exposed in frontal ice cliffs of 30-50° along the land-bound eastern and western margins of the glacier (Fig. 2). The debris contains abraded and striated clasts, 82% of which are sub-rounded to rounded in shape. The debris-rich zones are separated by clearer ice (crystals 2-4 cm) containing debris thinly distributed both within and between crystals. Twenty-eight samples were collected from the basal debris at the eastern margin of the glacier. Table 1 shows that the mean debris concentration in the debris-rich zone (lower 2 m) is 11% by weight and in the clearer ice (2-5 m above the bed) it is 0.3% by weight. Overlying the basal sequence, and separated from it by a sharp discontinuity, is 0.5 m of banded white and blue ice; it consists of layers of white bubbly ice 30 cm thick and layers of blue ice 10 cm thick. This foliation is horizontal in mid-glacier and slopes up towards the margin. Within the ice there are low concentrations of rust-coloured angular rock debris (the mean of ten samples being 0.05% by weight) similar to that on the glacier surface.

Figure 2: Lyell Glacier looking south towards Paulsen Peak, March 1982, showing the widespread cover of surface rock-fall material. The basal ice was sampled at the land-bound parts of the snout to the left (east) and the right (west) of the photograph.

### Hodges Glacier

The basal sequence in Hodges Glacier is only 1.3 m thick. It is visible in melt-water tunnels and in the lee of rock bumps beneath the 23° slope of the ice margin. The sequence consists of discontinuous lenses of debris 1 mm to 1 cm in thickness separated by clear ice with sporadic or no bubbles; the debris lenses are often folded on a scale of 10 cm. Abraded, striated, and angular clasts occur throughout the basal sequence.
Debris concentrations are highest near the base (46% by weight) and fall to 15% by weight 120 cm above the base (Fig. 6). The grain-size distribution of debris <32 mm reveals a sharp peak at 8 mm with smaller peaks in the fine sand sizes (Fig. 5b). A sharp discontinuity separates the basal sequence from overlying ice that is bubbly and white with a crystal size of c. 1 cm. The white bubbly ice is stratified; clear sedimentary layers c. 60 cm thick dip up-glacier and occur up to the firn line at c. 450 m (Fig. 7).

Figure 5: Grain-size distribution of debris finer than 32 mm in (a) the Nordenskjold Glacier profile, (b) Hodges Glacier, and (c) artificially crushed greywacke and volcaniclastic rocks. Samples of the latter were passed through spring-loaded rollers five times. The main features of the curves shown were achieved on the second pass. Thereafter there was a more gradual reduction of grain-size, especially of the coarser fraction.

Figure 6: The location of debris and isotopic samples on Hodges Glacier. (a) The glacier as a whole, and (b) details of the basal ice sequence near the snout. Concentration of debris is given in brackets.

Figure 7: Sedimentary layers exposed in a longitudinal crevasse on Hodges Glacier, February 1985. The camera on the ice measures 15 cm × 15 cm × 10 cm.

## ORIGIN OF THE DEBRIS

There are two distinct populations of debris, the characteristics of which indicate their probable origin. The angular, rust-stained debris sporadically disseminated within the white bubbly ice is identical to weathered rock-fall material around and on the glaciers (Gordon and others, 1978; Birnie, unpublished). It is mostly derived from the dominant rock type on this part of South Georgia, a fissile greywacke. This debris has undergone passive transport in the glaciers (Boulton, 1978) and has not been modified. Debris in the basal sequences is quite different in form, size, and colour, although it is mostly derived from the same parent material. Individual clasts are more rounded, abraded, and striated. Unaffected by surface weathering, the material is grey in colour. The particle-size distribution reveals multi-modal peaks characteristic of subglacial crushing and abrasion (Drewry, 1986). Peaks in the gravel size reflect rock fragments, while peaks in the smaller size range indicate mineral grains. This conclusion was confirmed by examining the grain-size of the two main component rocks in the area (Fig. 5c). The greywacke sample shows peaks at 0.4 mm and 4-6 μm, while the volcaniclastic sample shows peaks at 0.5 mm and 63 μm. Similar peaks can be picked out on the curves for the Nordenskjold Glacier debris bands. Here, the clay component in one sample is finer than the characteristic mineral grain-size and this is likely to be due to abrasion (Haldorsen, 1981). All these characteristics are convincing evidence that the debris in the basal ice sequences has undergone crushing and abrasion at the ice-rock interface. The nature of the ice associated with the two types of debris is also distinctive. The white bubbly ice with intercalated layers of blue ice is widely accepted to be glacier ice unmodified by refreezing or by contact with the glacier bed (Lawson, 1979). On the other hand, clear ice with few or no bubbles and with debris contained in or between crystal boundaries or in thin laminations is characteristic of basal freezing.
## δD/δ\({}^{18}\)O Characteristics

Co-isotopic analyses were undertaken to test the conclusions derived from the debris studies. In particular, it was hoped to differentiate ice originating by basal freezing from glacier ice which had not touched the bottom. Samples were taken from the basal ice sequence and from the white bubbly ice immediately above. In addition, samples were obtained from one basal sequence in Nordenskjold Glacier to establish the nature of the relationship between δD and δ\({}^{18}\)O in the basal ice. It is this relationship which can be used to establish the occurrence and amount of regelation (Jouzel and Souchez, 1982) as well as the characteristics of the parent water (Souchez and de Groote, 1985). On Nordenskjold Glacier, the samples were taken from debris bands, thinly laminated ice, and white bubbly glacier ice, as shown on Figure 4. On Lyell Glacier, four samples were taken from the basal series 200 m from the sea (L3, L4, L5, and L6, sampled 6 m, 3 m, 1 m, and 0.5 m above the base, respectively). Of these, only L6 came from a debris layer (30% by weight). Two samples (L1 and L2) came from white bubbly ice 1.5 m and 1 m above the basal ice series, respectively. On Hodges Glacier, three samples came from the basal ice series, five from successive white bubbly ice layers between the snout and the firn line, and one from old snow just above the firn line (Fig. 6).

The results from all three glaciers are shown in Figure 8. The δD and δ\({}^{18}\)O values obtained are expressed in per mil versus SMOW (Standard Mean Ocean Water, with D/H and \({}^{18}\)O/\({}^{16}\)O respectively equal to 155.76 and 2005.2 ppm). Precision of the measurements is ±0.5‰ in δD and ±0.1‰ in δ\({}^{18}\)O. For convenience, they are drawn with δ\({}^{18}\)O on the abscissa.

Figure 8: δD/δ\({}^{18}\)O values obtained from the three South Georgia glaciers. They are grouped by glacier and by ice type.

With increasing altitude of the source area, snowfall becomes increasingly negative. Nordenskjold Glacier, which backs into mountains over 2000 m high, has values similar to those of Glacier de Tsanfleuron in the European Alps (Jouzel and Souchez, 1982). Lyell Glacier, at an intermediate altitude, has values intermediate between those of Hodges Glacier and Nordenskjold Glacier.

The contrast between the δD/δ\({}^{18}\)O characteristics of the basal ice and the pure glacier ice of Nordenskjold Glacier is shown in Figure 9. Samples N1 and N12 are white bubbly ice and, as expected, the δD/δ\({}^{18}\)O relationship of these samples is aligned along a slope of 8, a direct relationship related to the isotopic fractionation of water occurring during condensation or sublimation (Dansgaard, 1964). The remaining samples all involve ice from the basal series and they lie along a line with a slope calculated by linear regression to be 6.4. This is significantly different from the precipitation slope and reflects fractionation during refreezing. This value is in the range of theoretical predictions which would be expected from the freezing of parent water derived from pure Nordenskjold Glacier ice. Thus, it seems reasonable to accept the isotopic values in South Georgia as convincing evidence that the debris-bearing ice has accreted by freezing-on.
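For readers unfamiliar with co-isotopic plots, the slope comparison above amounts to a simple least-squares fit of δD against δ\({}^{18}\)O. The sketch below illustrates this calculation; the numerical values are made-up placeholders chosen only to mimic a shallow freezing slope, and are not the South Georgia measurements.

```python
import numpy as np

# Placeholder values only -- NOT the measured South Georgia samples.
d18O = np.array([-11.2, -10.8, -10.1, -9.6, -9.0])    # per mil vs. SMOW
dD   = np.array([-82.0, -79.5, -75.0, -71.8, -68.0])  # per mil vs. SMOW

# Least-squares fit of dD on d18O. The fitted slope is what the text compares
# with the precipitation slope of 8: refrozen (regelation) ice plots on a
# shallower slope, reported as 6.4 for the Nordenskjold Glacier basal series.
slope, intercept = np.polyfit(d18O, dD, 1)
r = np.corrcoef(d18O, dD)[0, 1]
print(f"fitted slope = {slope:.2f}, r = {r:.2f}  (precipitation slope = 8)")
```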
It is interesting that the absolute values of the three debris-band samples are less negative than those of the rest of the basal ice. This might indicate that the debris bands represent an early stage in the freezing of the parent water and the laminated ice a later stage of freezing from the same reservoir. The total range of the refrozen ice samples above the level of the least negative white bubbly ice samples is more than 3‰ in δ\({}^{18}\)O. This suggests that more than one cycle of melting and freezing may have occurred, since 3‰ is the maximum enrichment from the parent water that can occur during a single event.

The Hodges Glacier values are too few to calculate reliable slopes. However, Figure 8 shows that the basal ice samples are isotopically heavier than those of the pure glacier ice. This effect is characteristic of regelation ice formed by partial refreezing of a reservoir derived from a glacial parent water (Jouzel and Souchez, 1982) and agrees with the stratigraphical and debris evidence. The isotopic values of the Lyell Glacier samples are again too few in number to calculate reliable slopes. Nonetheless, it can be seen from Figure 8 that the pure glacier ice samples lie close to the precipitation slope of 8 and that the four basal ice samples would fit a shallower slope; the basal ice samples are also isotopically heavier. Again, the relationships agree with an origin by refreezing and support the stratigraphical and sedimentological evidence.

## THE ORIGIN OF BASAL ICE SEQUENCES

Both the debris and isotopic analyses confirm the subglacial origin of the basal ice sequences and point to accretion by freezing-on. It remains to discuss why the thicknesses and stratigraphical characteristics of the sequences vary from glacier to glacier. The 1.3 m basal ice layer beneath Hodges Glacier, with its discontinuous lenses of debris, is likely to relate to processes of regelation associated with the roughness of the glacier bed. For example, a bedrock bump of 1.3 m amplitude with no basal ice layer above it but with basal ice streaming round it lies immediately up-glacier of the sampling site. The discontinuous nature of the layer and the entrained debris point to a discontinuous process of refreezing such as would be associated with flow over and round bumps. The overall layer is apparently not being thickened by compressive flow near the snout, perhaps because the glacier has been in retreat since 1968 and terminates as a feather edge. The thick Nordenskjold Glacier section represents the opposite extreme. Here, strong compressive flow is indicated by folds visible in marginal crevasses and by shear discontinuities (Fig. 4). Presumably, the debris sequence has been thickened considerably by such processes. Compressive flow is favoured by the position of the site at a pinning point beside a glacier whose snout is afloat and calving rapidly. Also, seasonal freezing of the landward margin may favour compressive flow.

The detailed characteristics of the debris throw some light on subglacial processes. Debris in the basal sequences of all three glaciers reflects the effects of crushing and abrasion, with clasts, rock fragments, mineral grains, and a fine clay/silt fraction present. There is an interesting contrast between the longer Nordenskjold Glacier and the shorter Hodges Glacier. Debris beneath the shorter glacier has presumably experienced less transport at the glacier base, and it is notable that it contains proportionately more rock fragments, less silt/clay fraction, and more angular clasts than that of the longer Nordenskjold Glacier.
The varied stratigraphy of the incorporated basal debris reflects different processes of entrainment. It is likely that the debris exposed at the snout of Nordenskjold Glacier represents the wholesale freezing-on of the glacier bed, as in the case of the compact debris-dominant layers, or the periodic freezing-on of the bed, as in the cases of debris bands separated by thin ice layers. Debris bands which lack fines (such as N5; Fig. 5a) could represent the freezing-on of a part of the bed where melt water is flushing out the fine material. The laminated ice with millimetre-thick layers of debris is difficult to explain. One can suggest that the 1-2 cm thick "clear-ice" zones freeze on with small amounts of debris being incorporated within and between crystals. The problem is to explain the extensive millimetre-thick debris layers. Perhaps these represent the freezing-on of a thin subglacial film of water which is heavily charged with debris. In support of such a notion is the striking lack of debris coarser than 4 mm in the laminated ice samples (Fig. 5a). Perhaps coarser material is simply too big to pass along the basal water layer and is thus not available for freezing-on. The apparent continuity of the thin debris layers is also a problem. Unlike the case of Hodges Glacier, the continuity suggests a process operating over many square metres of the glacier bed. In this context, the role of tidal flexure of the glacier could be important, in that the rise and fall of the tide will modify basal pressures in the immediately adjacent grounded part of the glacier and allow instantaneous freezing of a subglacial water film over a relatively wide area.

Figure 9: The regression line drawn through the δD/δ\({}^{18}\)O values for the debris-bearing ice, Nordenskjold Glacier (correlation coefficient = 0.98), compared to a precipitation slope of 8. The slope of 6.4 differs from the precipitation slope of pure glacier ice and demonstrates that it has undergone refreezing.

## CONCLUSIONS

1. The debris and co-isotopic characteristics of basal ice sequences in three South Georgia glaciers are best explained by freezing-on at the glacier base.
2. The basal ice sequences may be thickened by folding and shearing associated with compressive ice flow at the ice margin.
3. The basal debris in the sequences reflects processes of crushing and abrasion. The former process results in a characteristic multi-modal size distribution of the fraction smaller than 16 mm, while the latter process produces clay-sized material. These characteristics are most clearly developed in the longer glaciers.
4. δD/δ\({}^{18}\)O ratios of glacier ice in South Georgia demonstrate an altitudinal effect between glaciers.

## ACKNOWLEDGEMENTS

We are pleased to acknowledge the support of the U.K. Natural Environment Research Council (grant GR3/5199) for a field visit in 1985. We are grateful to the Royal Navy and especially HMS _Endurance_ for travel to and within the island, the British Antarctic Survey for logistical help, R.A. Souchez and J. Jouzel for the co-isotopic analyses, and M. Lamb for the mechanical analysis. P.G.K. and J.C.G. were both funded by U.K. Natural Environment Research Council studentships at the time they were involved in the work.

## REFERENCES

Birnie, R.V. Unpublished. Rock debris transport and deposition by glaciers in South Georgia. [Ph.D. thesis, University of Aberdeen, 1978.]

Boulton, G.S. 1978.
Boulder shapes and grain-size distributions of debris as indicators of transport paths through a glacier and till genesis. _Sedimentology_, Vol. 25, p. 773-99.

Dansgaard, W. 1964. Stable isotopes in precipitation. _Tellus_, Vol. 16, No. 4, p. 436-68.

Drewry, D. 1986. _Glacial geologic processes_. London, Edward Arnold.

Gordon, J.E., and others. 1978. A major rockfall and debris slide on the Lyell Glacier, South Georgia, by J.E. Gordon, R.V. Birnie, and R. Timmis. _Arctic and Alpine Research_, Vol. 10, No. 1, p. 49-60.

Haldorsen, S. 1981. Grain-size distribution of subglacial till and its relation to glacial crushing and abrasion. _Boreas_, Vol. 10, No. 1, p. 91-105.

Jouzel, J., and Souchez, R.A. 1982. Melting-refreezing at the glacier sole and the isotopic composition of the ice. _Journal of Glaciology_, Vol. 28, No. 98, p. 35-42.

Lawson, D.E. 1979. Sedimentological analysis of the western terminus region of the Matanuska Glacier, Alaska. _CRREL Report 79-9_.

Souchez, R.A., and Groote, J.M. de. 1985. δD-δ\({}^{18}\)O relationships in ice formed by subglacial freezing: paleoclimatic implications. _Journal of Glaciology_, Vol. 31, No. 109, p. 229-32.
This paper combines a study of the rock debris and δD/δ\({}^{18}\)O isotopic characteristics of basal ice sequences in three representative glaciers in South Georgia and concludes that the debris and ice have been entrained mainly by basal freezing. The size distribution of the rock debris is typical of crushing and abrasion, and reflects transport at the ice-rock interface. The δD/δ\({}^{18}\)O relationships show that the clear ice associated with the debris has accreted through freezing. The white bubbly glacier ice has δD/δ\({}^{18}\)O relationships typical of precipitation, which demonstrate an altitudinal effect between glaciers. (_Journal of Glaciology_, Vol. 33, No. 115, 1987)
# Implicit Assimilation of Sparse In Situ Data for Dense & Global Storm Surge Forecasting

Patrick Ebel\({}^{*}\) [email protected] Brandon Victor\({}^{\dagger}\) [email protected] Peter Naylor\({}^{*}\) [email protected] Gabriele Meoni\({}^{*,\diamond}\) [email protected] Federico Serva\({}^{\ddagger}\) [email protected] Rochelle Schneider\({}^{*}\) [email protected] \({}^{*}\)European Space Agency, \(\Phi\)-lab \({}^{\dagger}\)La Trobe University \({}^{\diamond}\)European Space Agency, ACT \({}^{\ddagger}\)Consiglio Nazionale delle Ricerche

## 1 Introduction

Space-borne Earth observation allows for large-scale monitoring of our planet, its atmosphere and events such as natural hazards that may pose a significant threat to human life. While the strength of satellite imagery is its broad spatial extent, its spatio-temporal resolution is inferior to that of on-site measurements. In contrast, in situ sensors may provide (sub-)hourly recordings at the highest accuracy, yet they are sparsely deployed and thus lack spatial coverage. Fusing both kinds of data at a global scale holds promise, but harmonizing the sensors in a manner suitable for neural networks to process is an open research direction. A well-established paradigm tackling this issue in the context of weather analysis is that of data assimilation [10, 13]. However, it is computationally costly and not easily approachable. In this work, we address the challenge of global storm surge forecasting by implicitly assimilating sparse and raw tide gauge data with coarse weather and ocean state reanalysis products.

Figure 1: **Overview:** Our approach provides densified high-resolution storm surge forecasts (top) by implicitly assimilating inputs of sparse in situ tide gauge time series (top) with paired sequences of ocean (center) and weather state (bottom) re-analysis products. For additional supervision, coarse ocean state reanalysis maps (bottom) at coinciding lead time are also predicted.

Storm surges are extreme weather-driven ocean dynamics superimposed on the mean sea level and tidal rhythms, which can cause coastal floods. Scientific consensus is that the coming decades will bring a sharp increase of coastal hazards due to the climate change-caused rise in mean sea levels [17, 22, 36], aggravated by land subsidence [39] and more intense extreme storm events [2, 8]. Our work on storm surge forecasting is motivated to address such hazards, aligning with the _United Nations Sustainable Development Goals_ 11.5 & 13 [28, 31]. In particular, our approach is inspired by recent advances in weather forecasting that allow generalization to previously unencountered or ungauged sites, which may especially benefit under-served communities with less access to well-maintained in situ measurement infrastructure. To empower worldwide AI-driven surge forecasting, we curate a novel, global and multi-decadal dataset of in situ tidal gauge records, paired with weather and ocean state reanalysis, all preprocessed according to the best domain-specific practices. We highlight the dataset's worth by benchmarking a diverse landscape of approaches including conventional forecasting techniques, an operational numerical model, state-of-the-art deep neural networks, a recent vision transformer for weather forecasting and our enhanced adaptation of a popular lightweight temporal attention network.
While we evince the competitiveness of the latter model, the main objective of our experiments is to demonstrate the predictability of storm surges at previously unencountered or altogether ungauged locations, an aim whose feasibility has been questioned in prior work [4, 37]. In sum, our main contributions are two-fold:

* We introduce a novel, global and multi-decadal dataset of in situ ocean surge time series, paired with atmospheric and ocean state reanalysis products, spurring and facilitating further research on this critical matter.
* We demonstrate that precise and dense storm surge forecasts can be obtained by fusing sparse in situ data of coastal tide gauges with coarse atmosphere and ocean reanalysis. Critically, our forecasts extend to previously unseen gauges and entirely ungauged locations, which may benefit under-served communities.

## 2 Related Work

### Short-to-medium Range Weather Forecasting

Recent work marked notable progress on numerical weather prediction [3, 20, 26, 34]. Such models typically rely on atmospheric initial values from a reanalysis product such as ERA5 [10], i.e. best estimates derived by updating prior knowledge with multi-source weather observations. In contrast, the contribution of [1] proposes an implicit assimilation approach to fuse in situ weather radar station data with coarse-resolution reanalysis products, yielding dense and skillful rainfall forecasts over the United States. While we draw inspiration from this approach, our focus is on the global coastlines to model marine dynamics. Technically, our approach deviates from the aforementioned ones by processing local patches of data instead of a coarsely resolved global context in a single forward pass of the network, and is thus significantly more lightweight.

### Storm Surge Forecasting

Operational storm surge forecasting pre-dates deep learning, with early techniques explicitly modeling the physics of maritime dynamics with a focus on particular ocean basins [21, 35]. Of particular interest is the Global Tide and Surge Model (GTSM), a hydrodynamic model forced with ERA5 to globally predict surge on an irregular grid. In this study, we build upon the coarsely resolved GTSM ocean state analysis [23] to drive the assimilation of raw in situ data. That is, we fuse the reanalysis product with accurate but sparse in situ records for improved and densified surge predictions. Initial efforts for global storm surge modeling via deep learning are given by [4, 37]. Both studies are limited to temporal generalization, i.e. they evaluate on gauges trained upon and solely generalize to future time points of these gauges. In contrast, our data and approach enable generalization in both time and space. Relatedly, recent work [25] proved the feasibility of river streamflow predictions at ungauged basins, in the spirit of which we generalize coastal storm surge forecasting to unseen shores. This defeats the prevailing wisdom that ocean modeling necessitates at least 6-7 prior years of training data at any site of interest [4, 37].

## 3 Data

We collect a new global multi-decadal dataset combining co-registered atmosphere reanalysis, ocean state reanalysis and pre-processed in situ tide gauge measurements. All data is sampled to an hourly frequency with dates ranging from the beginning of \(1979\) to the start of \(2019\), and gridded at \(0.025^{\circ}\) spatial resolution.
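To make the co-registration step concrete, the sketch below shows one way such heterogeneous sources could be brought onto a shared hourly, 0.025° grid with xarray. The file names, variable layout and the small example window are assumptions for illustration only; they are not the released dataset's actual structure.

```python
import numpy as np
import pandas as pd
import xarray as xr

# Hypothetical input files; the released dataset may be organized differently.
era5   = xr.open_dataset("era5_msl_u10_v10.nc")     # ~30 km atmospheric reanalysis
gtsm   = xr.open_dataset("gtsm_surge_residual.nc")  # rasterized coarse ocean state
gauges = xr.open_dataset("gesla3_gauges.nc")        # sparse in situ surge series

# Shared hourly time axis over the study period.
hours = pd.date_range("1979-01-01", "2018-12-31 23:00", freq="1h")

# Shared 0.025 degree grid over a small example coastal window
# (assumes the datasets use coordinate names "lat"/"lon").
lat = np.arange(50.0, 56.4, 0.025)
lon = np.arange(-6.4, 0.0, 0.025)

era5_hr = era5.interp(time=hours).interp(lat=lat, lon=lon, method="linear")
gtsm_hr = gtsm.interp(time=hours).interp(lat=lat, lon=lon, method="nearest")
gauges_hr = gauges.resample(time="1h").mean()  # gauges stay sparse in space

cube = xr.merge([era5_hr, gtsm_hr])  # dense gridded inputs; gauges rasterized later
```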
The atmosphere reanalysis provides best estimates of historical mean sea level pressure as well as 10 metre U and V wind components ('msl', 'u10' and 'v10', respectively) at about \(30\) km resolution, as given by the ERA5 catalogue [10]. The ocean state is provided as the storm surge residual of the irregularly gridded Deltares Global Tide and Surge Model (GTSM), forced via the aforementioned ERA5 inputs and distributed by the Copernicus Climate Change Service (C3S) Climate Data Store (CDS) [23]. Furthermore, a global land-sea mask [15, 40] resampled to circa \(3\) km resolution is provided. Finally, precise storm surge measurements are derived from in situ tide gauge records collected in GESLA-3 [9] and spatially distributed as shown in Fig. 2. Our pre-processing pipeline principally follows the established workflow of [37], featuring mean sea level de-trending, harmonic decomposition [5] and de-noising steps. A key difference to the prior work is that [37] filters out any in situ sites with records shorter than seven years, whereas we take inspiration from recent progress on (un)gauged river streamflow forecasting [25] and keep such data. While shorter record durations pose a greater challenge to learning site-wise dynamics, this drastically increases the overall amount of valuable in situ data from a total of \(736\) tidal gauges in [37] to \(3553\) locations in our work, yielding an almost five-fold increase of valuable in situ data compared to preceding efforts.

## 4 Methods

The problem tackled herein is that of forecasting the highly non-linear dynamics of storm surges on a short lead time: Every sample \(i=0,1,\ldots,|\mathcal{D}|\) of the dataset \(\mathcal{D}\) denotes a pair \((\mathbf{X}^{i},Y^{i})\), with \(\mathbf{X}^{i}=[X^{i}_{1},\cdots,X^{i}_{T}]\) being the input time series of size \([T\times C_{in}\times H\times W]\) featuring in situ plus atmospheric reanalysis and model-based surge data, and \(Y^{i}\) the target image of shape \([C_{out}\times H\times W]\) at a lead time of \(L\) hours. \(T\) is the temporal length of the input series, \(C_{in}\) and \(C_{out}\) denote the number of input and output channels, and \(H\times W\) the images' two spatial dimensions. For convenience, the \(i\) superscript is omitted in the remainder of the paper. Unless stated otherwise, we set \(T=12\), \(C_{in}=5\), \(C_{out}=2\), \(H=W=256\) and \(L=8\) hours, which has been a common choice in prior short-term forecasting works [34]. Example data are illustrated in Fig. 3, showcasing the time series dynamics, the diversity in context and the presence of extreme weather events in the data.
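The following minimal sketch assembles one local training sample with the tensor shapes stated above. The channel ordering (sparse gauge raster, 'msl', 'u10', 'v10', coarse GTSM) is our own assumption for illustration; the paper only fixes the counts \(C_{in}=5\) and \(C_{out}=2\).

```python
import numpy as np

T, C_IN, C_OUT, H, W = 12, 5, 2, 256, 256   # values stated in the text
LEAD_TIME = 8                               # hours ahead (default L)

def make_sample(gauge_raster, msl, u10, v10, gtsm_coarse, surge_target, gtsm_target):
    """Assemble one (X, Y) pair.

    gauge_raster, msl, u10, v10, gtsm_coarse: arrays of shape [T, H, W]
    surge_target, gtsm_target:                arrays of shape [H, W]
    The channel order below is an illustrative assumption, not from the paper.
    """
    X = np.stack([gauge_raster, msl, u10, v10, gtsm_coarse], axis=1)  # [T, C_in, H, W]
    Y = np.stack([surge_target, gtsm_target], axis=0)                 # [C_out, H, W]
    assert X.shape == (T, C_IN, H, W) and Y.shape == (C_OUT, H, W)
    return X, Y
```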
### Models

We demonstrate the feasibility of our implicit assimilation and densification approach by adapting a representative variety of models. Besides highlighting our paradigm's effectiveness for sparse coastal observations, these evaluations may serve as a benchmark for future research. For each considered network, we follow the respective architecture's best practice in terms of hyperparameters given in the referenced literature, unless otherwise specified. The models are as follows.

**Conventional baselines.** As a simple baseline, we consider the seasonal average surge based on historic values at the gauge of interest and the given target time. This necessitates historical data at the target gauge, but is expected to provide a solid baseline for sites experiencing seasonal cyclone activity [32]. Second, we consider the mean surge at the gauge of interest over the input time period. While again requiring access to the gauge's records, this would prove beneficial whenever the surge at target time does not deviate too much from that of the input period. Third, we consider the linear extrapolation of the surge time series inputs to the target time. Finally, the global physical storm surge predictions of GTSM forced with ERA5 data [23] are reported and compared against. As GTSM is numerically simulated on an irregular grid [6, 14], we perform nearest neighbor extrapolation to obtain coarse forecasts at any site of interest, specifically for predictions at unknown test split gauges, and then extrapolate these to the target time.

**LSTM-based.** Long short-term memory (LSTM) networks [11] are the state-of-the-art architectures for temporal modeling of fluvial [19, 25] and coastal dynamics [37]. Accordingly, we consider classical as well as convolutional LSTM (ConvLSTM) [33] for storm surge forecasting. We use the architectures from [37] and minimally adapt them, for the sake of comparability, to accept our more comprehensive data, including time series of preceding gauge values. As such, both models receive a 5×5 context window around the gauge, and predict a single point in the future.

Figure 2: **Data.** Green & orange dots denote storm surge time series locations with records in 1979-2019, as pre-processed from the GESLA-3 collection of tide gauges [9]. Dark lines indicate hurricane tracks in 2014-2019 as indexed by IBTrACS [18]. Pink markers highlight test split gauges, biased to points of landfall. Visualizations of the ERA5 grid and the irregular GTSM grid are omitted for brevity.

**Attention-based.** We consider spatio-temporal transformer models, both of which principally share a common structure of inputs and outputs as depicted in Fig. 1. First, we evaluate a MaxViT U-Net backbone [38] as recently proposed for weather forecasting in [1], adapted to our problem statement. The network collates temporal information into the channel dimension, but is conditioned on the lead time of the target via Feature-wise Linear Modulation (FiLM) [27]. Finally, we consider the U-TAE of [7], originally proposed for panoptic segmentation. We adjust U-TAE by introducing FiLM at each of its convolution blocks, such that, in addition to temporal embeddings at the input time points, our adaptation of the model is conditioned to forecast at a variable target time. Notably, the key difference between the last two models is that [38] resolves the input time series into the channel dimension and does not model temporal dynamics explicitly, whereas [7] processes the time series explicitly and applies lightweight temporal attention but does not explicitly model global spatial interactions.

### Densification

Central to our approach of generalizing storm surge forecasting to previously unseen or ungauged sites is the concept of densification. For any model that outputs a two-dimensional storm surge forecast map \(\mathbf{\hat{y}}_{s}\), we implement densification via the built-in spatial parameter sharing of the convolution operator. Specifically, we utilize \(1\times 1\) convolution kernels with a stride of \(1\) at the final network layer to broadcast from sparsely populated to non-observed pixel coordinates in the spatial dimensions. Auxiliary supervision and input data dropout are also used to further encourage the networks to learn densification, as proposed by Andrychowicz et al. [1].
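The sketch below illustrates these two ingredients, the densifying 1×1 output head and gauge dropout, in PyTorch. It is not the authors' released implementation; the feature width, mask conventions and module names are illustrative assumptions, and the second output channel anticipates the auxiliary coarse forecast described in the next paragraphs.

```python
import torch
import torch.nn as nn

class DensifyingHead(nn.Module):
    """Final 1x1 convolution mapping shared features to two output maps:
    a densified surge forecast y_s and an auxiliary coarse ocean-state map y_c.
    Because the kernel is 1x1 with stride 1, the same regression weights are
    applied at every pixel, broadcasting from gauge pixels to unobserved ones."""
    def __init__(self, in_channels: int = 64):
        super().__init__()
        self.proj = nn.Conv2d(in_channels, 2, kernel_size=1, stride=1)

    def forward(self, feats: torch.Tensor):
        out = self.proj(feats)              # [B, 2, H, W]
        y_s, y_c = out[:, :1], out[:, 1:]   # dense surge / coarse GTSM forecast
        return y_s, y_c

def gauge_dropout(gauge_raster: torch.Tensor, valid_mask: torch.Tensor, p: float = 0.25):
    """Randomly remove gauges from the *input* (targets are untouched), so the
    network must learn to extrapolate to the dropped sites.
    gauge_raster: [B, T, H, W]; valid_mask: float {0,1} mask of shape [B, 1, H, W]."""
    keep = ((torch.rand_like(valid_mask) > p) | (valid_mask == 0)).float()
    return gauge_raster * keep, valid_mask * keep
```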
**Auxiliary supervision.** Complementary to learning a densified forecast of the sparse in situ time series, the networks additionally predict a forecast \(\mathbf{\hat{y}}_{c}\) of the coarse GTSM ocean state at the same lead time \(L\), as depicted in Fig. 1. This way, the models receive additional feedback at pixels which would otherwise not be populated, and the preceding shared layers learn to implicitly assimilate the sparse observations with the coarse reanalysis. To evaluate the coarse ocean state predictions over valid locations only, we mask the loss computation with a land-sea mask \(\mathds{1}_{lsm}\).

**In situ dropout.** To further encourage the densifying networks to predict non-trivial outputs at unpopulated pixels within the sparse input time series, we perform data dropout. Specifically, we randomly remove in situ tide gauges from the input with a probability \(p\) but keep all sites within the target patch, such that the network is forced to learn to extrapolate to the dropped sites. We set \(p=0.25\) and include a binary validity mask \(\mathds{1}_{val}\) in the network inputs, as proposed in a weather prediction context by [1].

In sum, the densifying network architectures output two maps \(\mathbf{\hat{y}}=[\mathbf{\hat{y}}_{s},\mathbf{\hat{y}}_{c}]\). Map \(\mathbf{\hat{y}}_{s}\) densely predicts the sparse GESLA gauges, and \(\mathbf{\hat{y}}_{c}\) predicts the spatially interpolated future coarse GTSM values. Thus, they are trained via a weighted combination of two masked L1 cost functions

\[\mathcal{L}_{s}(\mathbf{\hat{y}}_{s},\mathbf{y}_{GESLA})=\frac{1}{n}\sum_{j=1}^{n}\mathds{1}_{val}(j)\cdot\|\mathbf{\hat{y}}_{j}-\mathbf{y}_{j}\|_{1}\;, \tag{1}\]

\[\mathcal{L}_{c}(\mathbf{\hat{y}}_{c},\mathbf{y}_{GTSM})=\frac{1}{n}\sum_{j=1}^{n}\mathds{1}_{lsm}(j)\cdot\|\mathbf{\hat{y}}_{j}-\mathbf{y}_{j}\|_{1}\;, \tag{2}\]

masked via \(\mathds{1}_{val}\) and \(\mathds{1}_{lsm}\), resulting in the combined loss

\[\mathcal{L}(\mathbf{\hat{y}},\mathbf{y})=\mathcal{L}_{s}(\mathbf{\hat{y}}_{s},\mathbf{y}_{GESLA})+\lambda\mathcal{L}_{c}(\mathbf{\hat{y}}_{c},\mathbf{y}_{GTSM}). \tag{3}\]

We set the hyperparameter \(\lambda=\frac{1}{100}\) to account for the sparseness of in situ data in comparison to the pixel-wise coarse evaluations, and to compensate for the resulting differences in magnitude of supervision frequency across both domains as well as of their respective loss terms.

Figure 3: **Example data**, one local sample per row. Inputs grouped at the left, targets at the right. Inputs: time series of target (blue) and context gauges (grey), with target history only given in the hyperlocal setting; gauge locations and surge values; ERA5 pressure at mean sea level; wind at 10 m in u and v directions; coarse GTSM input. Targets: surge at target time, GTSM at target time.

### Lead time conditioning

To enable flexible forecasting, accommodating varying hours of look-ahead predictions at inference time and via a single forward pass, we implement lead time conditioning via Feature-wise Linear Modulation (FiLM) [27]. Specifically, a shallow encoder projects the queried lead time \(L\) into a low-dimensional feature space and linearly modulates the convolutional feature maps via a learned scale and bias offset. We utilize lead time conditioning for the two considered densifying networks, and have one shallow encoder for each of their U-Net backbone's convolutional blocks.
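A compact sketch of the masked losses in Eqs. (1)-(3) and of one possible FiLM lead-time module is given below. It mirrors the equations above, but layer widths, the hidden size and the exact modulation form are our assumptions rather than the paper's specification.

```python
import torch
import torch.nn as nn

def masked_l1(pred: torch.Tensor, target: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Masked L1 term as in Eqs. (1)-(2): pixels outside the mask contribute zero,
    and the sum is averaged over all n pixels."""
    return ((pred - target).abs() * mask).mean()

def combined_loss(y_s, y_c, y_gesla, y_gtsm, valid_mask, lsm_mask, lam: float = 0.01):
    """Weighted combination of the sparse-gauge and coarse-GTSM terms, cf. Eq. (3)."""
    return masked_l1(y_s, y_gesla, valid_mask) + lam * masked_l1(y_c, y_gtsm, lsm_mask)

class LeadTimeFiLM(nn.Module):
    """Shallow encoder mapping the queried lead time L (in hours) to per-channel
    scale and bias, applied to one convolutional feature map; the paper places
    one such encoder at each convolution block of the U-Net backbone."""
    def __init__(self, num_channels: int, hidden: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * num_channels))

    def forward(self, feats: torch.Tensor, lead_time: torch.Tensor) -> torch.Tensor:
        # feats: [B, C, H, W]; lead_time: [B, 1]
        gamma, beta = self.mlp(lead_time).chunk(2, dim=-1)
        # scale is parameterized around 1 for stability (an implementation choice)
        return feats * (1 + gamma[..., None, None]) + beta[..., None, None]
```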
## 5 Experiments

**Splits.** For our experiments, we set up splits by defining holdout data in terms of both the spatial and temporal dimensions: a globally distributed 20% of coastal gauges across all ocean basins are reserved for the test split, whose temporal extent starts from April 2014 and is thus well observed via satellites, in the interest of follow-up works. The remaining 70% and 10% of locations are utilized as training and validation splits, with records ranging from 1979 to 2014. This amounts to a total of 2561, 284 and 708 gauges for our train, validation and test split, respectively. The resulting map of in situ recordings (color-coded according to their splits) and hurricane trajectories is depicted in Fig. 2. The distribution of accessible and gauged sites is biased towards developed countries, underlining the need for machine learning solutions to serve under-represented regions.

**Sampling.** Having assigned a subset of gauges to the holdout splits, we determine the test dates by identifying coincidences of in situ records with storm tracks as given by IBTrACS [18]. When a storm passes within 100 km of a holdout tide gauge, we set such a date as the sample's target time. If no storm tracks pass by, then we resort to sampling target times at outlier surge values deviating by more than 2 train split standard deviations from the train split's mean surge. At train time, we likewise perform outlier sampling with a probability of 0.5, roughly mirroring the distribution of (non-)storm events at holdout gauges in the validation and test splits. If no such sample exists for the current gauge, then a random target date is drawn instead.

### Implementation details

To enable fast online sampling of data and efficient training of deep networks, we represent all spatio-temporal data as netCDF files [29] via xarray [12], either loaded directly into memory or read in parallel from disk [30]. This is especially critical for the in situ GESLA-3 records, which are re-processed into a compact format as part of the preprocessing pipeline and are released with this publication. In contrast to weather forecasting models [3, 20, 26] that process a global context window at approximately \(30\) km resolution, the networks we consider are significantly less resource-demanding and digest local patches of \((256\text{ px})^{2}\) at a finer pixel resolution of circa \(3\) km to capture local variations of surge. All data features are z-standardized via their sufficient statistics calculated on the training split.

**Training.** For training, one epoch is defined by iterating over all train split gauges in a random order. At each gauge, a Gaussian is drawn around its location to randomly sample what is treated as the local patch's centroid \(c\). The target date and time \(t\) are drawn randomly from the current gauge's records. Next, a lead time \(L\) is drawn randomly from \(\{0,1,2,\ldots,12\}\), as in [1]. The hourly input time sequence of length \(T\) is then given by the time interval \([t-L-T,t-L]\). Note that the random sampling of \(c\), \(t\) and \(L\) effectively acts as data augmentation. For auto-regressive methods we evaluate the prediction at time \(t\), whereas single-forward-pass approaches based on FiLM are conditioned on \(L\) to directly generate a forecast at time \(t\).

Figure 4: **Experimental setup.** Design of the densification and hyperlocal evaluation schemes, conceptualizing their respective inputs and outputs.
The hyperlocal protocol focuses on the forecasting of novel dynamics encountered at inference time, predicting surge at holdout target (green) and context gauges (blue) \(L\) hours ahead. The generalization setup quantifies the ability of models to broadcast predictions to ungauged locations, i.e. unknown gauges not contained in the input and solely used for evaluation.

We use the Adam optimizer [16] at a batch size of 16, with initial learning rates tuned over magnitudes \(10^{-1}\) to \(10^{-4}\) for each model individually. All networks train for 50 epochs with an exponential learning rate decay of 0.9. Models are evaluated on the validation split each epoch and the checkpoint with the best validation loss is used for testing.

### Evaluation

All network predictions at target time are compared against their respective test split in situ tide gauge values. Prediction goodness is evaluated in terms of Mean Absolute Error (MAE) as well as Mean Squared Error (MSE), reported in units of meters and with error-wise standard deviations (std) across the set of test split gauges denoted in brackets. Finally, we report each method's Normalized Nash-Sutcliffe Efficiency (NNSE) [24], which conceptually relates to the coefficient of determination (\(R^{2}\)) and takes values between \(0\) and \(1\). Similar to prior weather forecasting work [1], we evaluate in two experimental setups. The concepts of both setups are depicted in Fig. 4 and given as follows:

**i. Hyperlocal evaluation.** In this experimental paradigm, the time series of holdout gauges not trained upon are included in the model's inputs at test time. Therefore, the challenge becomes to integrate previously unseen dynamics at novel locations and to assimilate newly encountered tidal gauges at inference time.

**ii. Densification evaluation.** Predictions at previously unseen test split gauges are obtained via densification, i.e. a model's ability to predict at unknown locations is quantified here. Importantly, test split gauges are not part of the input time series and are only used as targets. Note that this setup cannot be accomplished via previously established approaches for storm surge forecasting that do not implement densification, e.g. the conventional baselines and LSTM-based models.

## 6 Results

### Main experiments

To demonstrate the feasibility of our problem statement and the benefits of our curated dataset, we evaluate all considered approaches according to their applicability in the hyperlocal and densification experimental schemes. Outcomes are reported in Table 1.

\begin{table} \begin{tabular}{l l l l l l l} \hline \hline **Model** & \multicolumn{3}{c}{**Hyperlocal**} & \multicolumn{3}{c}{**Densification**} \\ \cline{2-7} & \(\downarrow\) MAE (std) & \(\downarrow\) MSE (std) & \(\uparrow\) NNSE & \(\downarrow\) MAE (std) & \(\downarrow\) MSE (std) & \(\uparrow\) NNSE \\ \hline seasonal average & 0.281 (0.313) & 0.177 (0.539) & 0.424 & — & — & — \\ input average & 0.267 (0.295) & 0.158 (0.452) & 0.452 & — & — & — \\ input extrapolation & 0.182 (0.239) & 0.090 (0.342) & 0.593 & — & — & — \\ GTSM extrapolation [23] & — & — & — & 0.351 (0.643) & 0.536 (4.744) & 0.195 \\ \hline LSTM [11, 37] & 0.166 (0.282) & 0.107 (0.759) & 0.595 & — & — & — \\ ConvLSTM [33, 37] & 0.162 (0.267) & 0.098 (0.691) & 0.618 & — & — & — \\ \hline FiLM U-TAE [7, 27] & **0.158 (0.209)** & **0.069 (0.248)** & **0.655** & 0.190 (0.260) & **0.104 (0.535)** & **0.556** \\ MaxViT U-Net [1, 38] & 0.160 (0.212) & 0.070 (0.263) & 0.649 & **0.178 (0.273)** & 0.106 (0.587) & 0.552 \\ \hline \hline \end{tabular} \end{table} Table 1: **Experimental evaluation.** We benchmark forecasting skills for \(T=12\) and \(L=8\) on a global holdout set of previously unseen gauges. Point-wise storm surge predictions at known gauges are evaluated in the hyperlocal setting (left), whereas the densification protocol (right) tests predictions at unknown or ungauged sites. FiLM U-TAE is the most competitive approach, followed by MaxViT U-Net.

The results show that FiLM U-TAE performs best in the hyperlocal setting, forecasting surge at newly encountered gauges with a mean absolute accuracy of circa \(16\) cm. The seasonal average and input average predictions are more erroneous, particularly in terms of squared error, validating the presence of storm-driven extremes in the test data. Overall, all neural networks outperform competing approaches in the hyperlocal setting. In the densification experiment, FiLM U-TAE performs best, closely followed by MaxViT U-Net. Notably, both densifying models denote a substantial improvement over the GTSM baseline, whose prediction goodness exhibits elevated standard deviations across gauges. Altogether, FiLM U-TAE tends to outperform MaxViT U-Net, implying that temporal self-attention is of greater benefit than visual attention for our spatio-temporal forecasting task. To convey a better understanding of the spatial dependency of errors, Fig.
5 analyzes our best model's performance across the globe. The analysis shows that forecasting in the tropics is particularly hard, confirming that the curated benchmark provides a challenging problem. ### Ablation experiments In order to further investigate the challenges of short-term storm surge forecasting and explore which design choices determine the quality of predictions, we conduct a series of ablation studies following the densification experimental protocol. All ablations are run with the FiLM U-TAE network, which the preceding main experiments identified as the best performing backbone for our approach. \\begin{table} \\begin{tabular}{l l l l l l l l} \\hline \\hline **Model** & \\multicolumn{4}{c}{**Hyperlocal**} & \\multicolumn{4}{c}{**Densification**} \\\\ \\cline{2-7} & \\(\\downarrow\\) MAE (std) & \\(\\downarrow\\) MSE (std) & \\(\\uparrow\\) NNSE & \\(\\downarrow\\) MAE (std) & \\(\\downarrow\\) MSE (std) & \\(\\uparrow\\) NNSE \\\\ \\hline seasonal average & 0.281 (0.313) & 0.177 (0.539) & 0.424 & — & — & — \\\\ input average & 0.267 (0.295) & 0.158 (0.452) & 0.452 & — & — & — \\\\ input extrapolation & 0.182 (0.239) & 0.090 (0.342) & 0.593 & — & — & — \\\\ GTSM extrapolation [23] & — & — & — & 0.351 (0.643) & 0.536 (4.744) & 0.195 \\\\ \\hline LSTM [11, 37] & 0.166 (0.282) & 0.107 (0.759) & 0.595 & — & — & — \\\\ ConvLSTM [33, 37] & 0.162 (0.267) & 0.098 (0.691) & 0.618 & — & — & — \\\\ \\hline FiLM U-TAE [7, 27] & **0.158 (0.209)** & **0.069 (0.248)** & **0.655** & 0.190 (0.260) & **0.104 (0.535)** & **0.556** \\\\ MaxVIT U-Net [1, 38] & 0.160 (0.212) & 0.070 (0.263) & 0.649 & **0.178 (0.273)** & 0.106 (0.587) & 0.552 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 1: **Experimental evaluation.** We benchmark forecasting skills for \\(T=12\\) and \\(L=8\\) on a global holdout set of previously unseen gauges. Point-wise storm surge predictions at known gauges are evaluated in the hyperlocal setting (left), whereas the densification protocol (right) tests predictions at unknown or ungauged sites. FiLM U-TAE is the most competitive approach, followed by MaxVIT U-Net. **Accuracy vs. sequence length** To evaluate the effect of the number of input time points \\(T\\) on performances, we run FiLM U-TAE on input time series of lengths \\(T=6,12,18,24\\) hours. Table 2 shows that longer sequences drive better forecasts, although gains saturate. This confirms the intuition that more context and data regarding maritime dynamics facilitates short-term forecasting, but that observations further in the past become less informative. Accuracy vs. lead timeTo evaluate the effect of the lead time \\(L\\) on performances, we perform inference by varying \\(L=4,6,8,10,12\\) hours ahead. Note that the lead time can be systematically varied in a single forward pass thanks to the network being directly conditioned on \\(L\\). The outcomes in Table 3 validate the intuition that longer lead times exacerbate the prediction problem, affirming its non-linear dynamics and the challenges of medium-term forecasting as encountered by numerical weather prediction models. Ablation studiesWe systematically ablate over input information and supervision extent to investigate each element's significance in driving the prediction goodness. The results are reported in Tables 4 and 5, respectively. Input ablations justify the provisioning of coarse GTSM plus ERA5 auxiliary inputs, and show that the network can effectively translate atmospheric reanalysis to ocean states. 
Furthermore, data dropout is critical for enabling densification and the introduction of flexible lead time conditioning is beneficial. The output ablations confirm that coarse GTSM supervision provides valuable guidance, yet there is additional gains for the densified storm surge output. In sum, the outcomes validate our overall approach as depicted in Fig. 1. \\begin{table} \\begin{tabular}{l l l l} \\hline \\hline input ablation & \\(\\downarrow\\) MAE (std) & \\(\\downarrow\\) MSE (std) & \\(\\uparrow\\) NMSE \\\\ \\hline full model & 0.190 (0.260) & **0.104 (0.535)** & **0.556** \\\\ no GTSM input & 0.207 (0.284) & 0.124 (0.543) & 0.513 \\\\ no ERA5 input & 0.189 (0.273) & 0.110 (0.545) & 0.542 \\\\ no data dropout & 0.217 (0.289) & 0.130 (0.539) & 0.500 \\\\ no FiLM, \\(L=8\\) fixed & **0.183 (0.273)** & 0.108 (0.567) & 0.547 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 4: **Input ablations.** Evaluation of our models with varying inputs. The outcomes underline the relevance of each modality. \\begin{table} \\begin{tabular}{l l l l} \\hline \\hline input length T & \\(\\downarrow\\) MAE (std) & \\(\\downarrow\\) MSE (std) & \\(\\uparrow\\) NMSE \\\\ \\hline 6 & 0.194 (0.282) & 0.115 (0.587) & 0.551 \\\\ 12 & 0.190 (0.260) & 0.104 (0.535) & 0.556 \\\\ 18 & **0.180 (0.230)** & **0.085 (0.510)** & **0.573** \\\\ 24 & **0.180 (0.230)** & **0.085 (0.510)** & 0.571 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 2: **Repeated Measures.** Evaluation of FiLM U-TAE with varying numbers of input time points \\(T\\), flexibly accommodated for via temporal self-attention. Longer inputs tend to be beneficial. Figure 5: **Location** of holdout gauges impacts prediction performance. Average absolute errors in meters are color-coded and binned according to each sites’ longitude and latitude coordinates, with gauge counts overlayed for each spatial dimension. Particularly challenging regions are the Gulf of Mexico, the Caribbean Sea and the Indian Ocean due to their extreme climate and resulting outsized surge dynamics. \\begin{table} \\begin{tabular}{l l l l} \\hline \\hline output ablation & \\(\\downarrow\\) MAE (std) & \\(\\downarrow\\) MSE (std) & \\(\\uparrow\\) NMSE \\\\ \\hline full model & **0.190 (0.260)** & **0.104 (0.535)** & **0.556** \\\\ no GTSM supervision & 0.194 (0.276) & 0.114 (0.544) & 0.534 \\\\ GTSM, instead of densification & 0.210 (0.246) & 0.105 (0.536) & 0.554 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 5: **Output ablations.** Evaluation of FiLM U-TAE with varying output channels. Ablations show all outputs’ significance. Qualitative resultsComplementary to the reported quantitative outcomes, Fig. 6 illustrates example data and the different densification models' forecasts. Notably, the networks' densified predictions show substantial differences from GTSM, as they are driven by the assimilation of in situ tidal gauge data which GTSM does not incorporate. Specifically, modifications are undertaken close to the shorelines where gauges are present and surge forecasts are most relevant. Furthermore, the classifications often differ from the auxiliary coarse predictions, underlining the functional differences across the two kinds of outputs and highlighting once more the importance of integrating the sparse in situ data. Finally, the appearance of gridding in the spatio-temporal networks' outputs evidences the impact of the ERA5 atmospheric information, which is integrated in the storm surge forecasts. 
## 7 Conclusion To tackle the aggravating hazard of coastal floods, we introduce a novel dataset and framework forecasting storm surges. Our curated data makes the posed challenge more accessible to the remote sensing community and may serve as a benchmark to fuel future research. Our approach is influenced by recent progress in weather forecasting, and shows that neural networks can implicitly assimilate sparse in situ measurements with coarse weather and ocean state reanalysis products to provide densified forecasts. In a follow-up, we'll explore the operational potential of our approach and replacing retrospective reanalysis products with recently developed forecasting models. Further directions may be the incorporation of satellite altimetry, the modeling of impact at landfall and the translation from storm surges to predicting flood maps. Data and code are given at [https://github.com/PatrickESA/StormSurgeCastNet](https://github.com/PatrickESA/StormSurgeCastNet). Figure 6: **Exemplary data and densified storm surge forecasts** in the densification experimental setup. Rows: Four samples from the test split. Columns: Sampled gauge locations. Dense surge forecasts of GTSM, MaxVit U-Net, FiLM U-TAE and coarse auxiliary FiLM U-TAE predictions. All illustrated outputs are in the densification setup without the target gauge provided, at a lead time of \\(L=8\\) hours. ## References * [1]M. Andrychowicz, L. Espeholt, D. Li, S. Merchant, A. Merose, F. Zyda, S. Agrawal, and N. Kalchbrenner (2023) Deep learning for day forecasts from sparse observations. arXiv preprint arXiv:2306.06079. Cited by: SS1. * [2]E. Bevacqua, M. I. Vousdoukas, G. Zappa, K. Hodges, T. G. Shepherd, D. Maraun, L. Mentaschi, and L. Feyen (2020) More meteorological events that drive compound coastal flooding are projected under climate change. Communications Earth & Environment1 (1), pp. 47. Cited by: SS1. * [3]K. Bi, L. Xie, H. Zhang, X. Chen, X. Gu, and Q. Tian (2023) Accurate medium-range global weather forecasting with 3D neural networks. Nature619 (7970), pp. 533-538. Cited by: SS1. * [4]N. Bruneau, J. Polton, J. Williams, and J. Holt (2020) Estimation of global coastal sea level extremes using neural networks. Environmental Research Letters15 (7), pp. 074030. Cited by: SS1. * [5]D. L. Codiga (2011) Unified tidal analysis and prediction using the UTide Matlab functions. Cited by: SS1. * [6]D. L. Codiga (2011) Unified tidal analysis and prediction using the UTide Matlab functions. Cited by: SS1. * [7]V. S. F. Garnot and L. Landrieu (2021) Panoptic segmentation of satellite image time series with convolutional temporal attention networks. In Proceedings of the IEEE International Conference on Computer Vision, Cited by: SS1. * [8]A. Gori, N. Lin, D. Xi, and K. Emanuel (2022) Tropical cyclone climatology change greatly exacerbates US extreme rainfall-surge hazard. Nature Climate Change12 (2), pp. 171-178. Cited by: SS1. * [9]I. D. Haigh, M. Marcos, S. A. Talke, P. L. Woodworth, J. R. Hunter, B. S. Hague, A. Arns, E. Bradshaw, and P. Thompson (2023) GESLA version 3: a major update to the global higher-frequency sea-level dataset. Geoscience Data Journal10 (3), pp. 293-314. Cited by: SS1. * [10]H. Hersbach, B. Bell, P. Berrisford, S. Hirahara, A. Horanyi, J. Munoz-Sabater, J. Nicolas, C. Peubey, R. Radu, D. Schepers, et al. (2020) The ERA5 global reanalysis. Quarterly Journal of the Royal Meteorological Society146 (730), pp. 1999-2049. Cited by: SS1. * [11]S. Hochreiter and J. Schmidhuber (1997) Long short-term memory. 
Neural computation9 (8), pp. 1735-1780. Cited by: SS1. * [12]S. Hoyer and J. Hamman (2017) xarray: n-d labeled arrays and datasets in Python. Journal of Open Research Software5 (1). Cited by: SS1. * [13]E. Kalnay (2003) Atmospheric modeling, data assimilation and predictability. Cambridge university press. Cited by: SS1. * [14]H. W. Kernkamp, A. V. Dam, G. S. Stelling, and E. D. de Goede (2011) Efficient scheme for the shallow water equations on unstructured grids with application to the continental shelf. Ocean Dynamics61, pp. 1175-1188. Cited by: SS1. * [15]G. E. Kindermann (2022) Global land / water mask map with 0.3 arc sec. (10 m) resolution ResearchGate. Cited by: SS1. * [16]D. P. Kingma and J. Ba (2015) ADAM: a method for stochastic optimization. In International Conference on Learning Representations, Cited by: SS1. * [17]E. Kirezci, I. R. Young, R. Ranasinghe, S. Muis, R. J. Nicholls, D. Lincke, and J. Hinkel (2020) Projections of global-scale extreme sea levels and resulting episodic coastal flooding over the 21st century. Scientific reports10 (1), pp. 11629. Cited by: SS1. * [18]K. R. Knapp, M. C. Kruk, D. H. Levinson, H. J. Diamond, and C. J. Neumann (2010) The international best track archive for climate stewardship (IBTrACS) unifying tropical cyclone data. Bulletin of the American Meteorological Society91 (3), pp. 363-376. Cited by: SS1. * [19]F. Kratzert, D. Klotz, C. Brenner, K. Schulz, and M. Herrnegger (2018) Rainfall-runoff modelling using long short-term memory (LSTM) networks. Hydrology and Earth System Sciences22 (11), pp. 6005-6022. Cited by: SS1. * [20]R. Lam, A. Sanchez-Gonzalez, M. Willson, P. Wirnsberger, M. Fortunato, F. Alet, S. Ravuri, T. Ewalds, Z. Eaton-Rosen, W. Hu, et al. (2023) Learning skillful medium-range global weather forecasting. Science382 (6677), pp. 1416-1421. Cited by: SS1. * [21]G. Madec and N. team (2008) NEMO ocean engine: note du pole de modelisation de l'institut pierre-simon laplace no 27. Cited by: SS1. * [22]A. K. Magnan, M. Oppenheimer, M. Garschagen, M. K. Buchanan, V. KE Duvat, D. L. Forbes, J. D. Ford, E. Lambert, J. Petzold, F. G. Renaud, et al. (2022) Sea level rise risks and societal adaptation benefits in low-lying coastal areas. Scientific reports12 (1), pp. 10677. Cited by: SS1. * [23]S. Muis, M. Apecechea, J. Alvarez, M. Verlaan, K. Yan, J. Dullaart, J. Aerts, T. Duong, R. Ranasinghe, D. Bars, R. Haarsma, and M. Roberts (2022) Global sea level change time series from 1950 to 2050 derived from reanalysis and high resolution CMIP6 climate projections. In Copernicus Climate Change Service (C3S) Climate Data Store (CDS), Cited by: SS1. * [24]J. Nash and J. V. Sutcliffe (1970) River flow forecasting through conceptual models: part I -- a discussion of principles. Journal of Hydrology10 (3), pp. 282-290. Cited by: SS1. * [25]G. Nearing, D. Cohen, V. Dube, M. Gauch, O. Gilon, S. Harrigan, A. Hassidim, F. Kratzert, A. Metzger, S. Nevo, et al. (2023) AI increases global access to reliable flood forecasts. arXiv preprint arXiv:2307.16104. Cited by: SS1. * [26]J. Pathak, S. Subramanian, P. Harrington, S. Raja, A. Chattopadhyay, M. Mardani, T. Kurth, D. Hall, Z. Li, K. Azizzadenesheli, et al. (2022) FourCastNet: a global data-driven high-resolution weather model using adaptive Fourier neural operators. arXiv preprint arXiv:2202.11214. Cited by: SS1. * [27]E. Perez, F. Strub, H. De Vries, V. Dumoulin, and A. Courville (2018) FiLM: visual reasoning with a general conditioning layer. 
In Proceedings of the AAAI conference on artificial intelligence, Vol. 32. Cited by: SS1. * [28]C. Persello, J. D. Wegner, R. Hansch, D. Tuia, P. Ghamisi, M. Koeva, and G. Camps-Valls (2021) Deep learning and Earth observation to support the Sustainable Development Goals: current approaches, open challenges, and future opportunities. IEEE Geoscience andRemote Sensing Magazine_, 10(2):172-200, 2022. * [29] Russ Rew and Glenn Davis. NetCDF: an interface for scientific data access. _IEEE computer graphics and applications_, 10(4):76-82, 1990. * [30] Matthew Rocklin et al. Dask: Parallel computation with blocked algorithms and task scheduling. In _Proceedings of the 14th python in science conference_, volume 130, page 136. SciPy Austin, TX, 2015. * [31] Jeffrey D Sachs, Christian Kroll, Guillame Lafortune, Grayson Fuller, and Finn Woelm. _Sustainable Development Report 2022_. Cambridge University Press, 2022. * [32] Kaiyue Shan, Yanluan Lin, Pao-Shin Chu, Xiping Yu, and Fengfei Song. Seasonal advance of intense tropical cyclones in a warming climate. _Nature_, 623(7985):83-89, 2023. * [33] Xingjian Shi, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-Kin Wong, and Wang-chun Woo. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. _Advances in neural information processing systems_, 28, 2015. * [34] Casper Kaae Sonderby, Lasse Espeholt, Jonathan Heek, Mostafa Dehghani, Avital Oliver, Tim Salimans, Shreya Agrawal, Jason Hickey, and Nal Kalchbrenner. MetNet: A neural weather model for precipitation forecasting. _arXiv preprint arXiv:2003.12140_, 2020. * [35] MG Sotillo, S Cailleau, P Lorente, B Levier, R Aznar, G Reffray, A Amo-Baladron, J Chanut, M Benkiran, and E Alvarez-Fanjul. The MyOcean IBI ocean forecast and reanalysis systems: operational products and roadmap to the future copernicus service. _Journal of Operational Oceanography_, 8(1):63-79, 2015. * [36] Mohsen Taherkhani, Sean Vitousek, Patrick L Barnard, Neil Frazer, Tiffany R Anderson, and Charles H Fletcher. Sea-level rise exponentially increases coastal flood frequency. _Scientific reports_, 10(1):6466, 2020. * [37] Timothy Tiggeloven, Anais Couasnon, Chiem van Straaten, Sanne Muis, and Philip J Ward. Exploring deep learning capabilities for surge predictions in coastal areas. _Scientific reports_, 11(1):17224, 2021. * [38] Zhengzhong Tu, Hossein Talebi, Han Zhang, Feng Yang, Peyman Milanfar, Alan Bovik, and Yinxiao Li. Maxvit: Multi-axis vision transformer. In _European conference on computer vision_, pages 459-479. Springer, 2022. * [39] Jun Wang, Si Yi, Mengya Li, Lei Wang, and Chengcheng Song. Effects of sea level rise, land subsidence, bathymetric change and typhoon tracks on storm flooding in the coastal areas of shanghai. _Science of the total environment_, 621:228-234, 2018. * [40] Daniele Zanaga, Ruben Van De Kerchove, Dirk Daems, Wanda De Keersmaecker, Carsten Brockmann, Grit Kirches, Jan Wevers, Oliver Cartus, Maurizio Santoro, Steffen Fritz, et al. ESA WorldCover 10 m 2021 v200, 2022.
# Field Testing of a Stochastic Planner for ASV Navigation Using Satellite Images Philip (Yizhou) Huang Robotics Institute Carnegie Mellon University Pittsburgh, USA [email protected] &Tony (Qiao) Wang Division of Engineering Science University of Toronto Toronto, Canada [email protected] &Florian Shkurti Department of Computer Science University of Toronto Toronto, Canada [email protected] &Timothy D. Barfoot Institute for Aerospace Studies University of Toronto Toronto, Canada [email protected] Corresponding author, worked done during Master's degree at the University of Toronto. ## 1 Introduction Autonomous Surface Vessels (ASVs) have seen increasing attention as a technology to monitor rivers, lakes, coasts, and oceans in recent years (Ang et al., 2022; Cao et al., 2020; Dash et al., 2021; Dunbabin and Marques, 2012; Ferri et al., 2015; Madeo et al., 2020; MahmoudZadeh et al., 2022; Odetti et al., 2020). A fundamental challenge to the wide adoption of ASVs is the ability to navigate safely and autonomously in uncertain environments, especially for long durations. For example, many existing ASV systems require the user to precompute a waypoint sequence. The robot then visits these target locations on a map andattempts to execute the path online (Tang et al., 2020; Vasilj et al., 2017). However, disturbances such as strong winds, waves, unseen obstacles, aquatic plants that may or may not be traversable, and even simply changing visual appearances in a water environment are challenging for ASV navigation (Fig. 1). Many potential failures in robot perception and control systems may also undermine the mission's overall success. Engineering challenges such as power and computational budget also make it challenging to implement many robot autonomy modules and integrate them with onboard sensors. To ensure the safety of the overall operation, users of the ASV system may also wish to understand its high-level behaviour and any decisions made during the mission. Our long-term goal is to use an ASV to monitor lake environments and collect water samples for scientists. A requirement for achieving this, and the primary focus of this paper, is to ensure robust global and safe local navigation. To enhance the robustness of the overall system, we identify waterways that are prone to local blockage as stochastic edges and plan mission-level policies offline on our high-level map. Uncertainties that arise during policy execution are handled by the local planner. One planning framework that is suitable for modelling uncertain paths is the Canadian Traveller Problem (CTP) (Papadimitriou and Yannakakis, 1991), a variant of the shortest-path planning problem for an uncertain road network. The most significant feature in a CTP graph is the stochastic edge, which has a probability of being blocked. The state of any stochastic edge can be disambiguated by visiting the edge. Once the state has been visited and classified as traversable or not, it remains the same. Separating planning into a high-level mission planner and a local online planner offers several advantages. The high-level planner creates a global policy with contingencies that can be adjusted online for any unmapped obstacles. Offline planning improves interpretability and reduces the computational resources required online. This allows the user to easily inspect the planned paths before deploying the robot. With offline planning, more time can be allocated to find the optimal global paths. 
The local planner can then focus on accurately tracking the global path and adjusting for any unmapped obstacles and environmental uncertainties. In our prior work (Y. Huang et al., 2023), we proposed a navigation framework -- the Partial Covering Canadian Traveller Problem (PCCTP) -- to solve a mission-planning problem in an uncertain environment. The framework used a stochastic graph derived from coarse satellite images to plan an adaptive policy that visits all reachable target locations. Stochasticity in the graph represents possible events where a water passage between two points is blocked due to changing water levels, strong wind, and other unmapped obstacles. The optimal policy is computed offline with a best-first tree-search algorithm. We evaluated our solution method on 1052 Canadian lakes selected from the _CanVec Series_ Ontario dataset (Natural Resources Canada, 2019) and showed it can reduce the total distance to visit all targets and return. In our past field tests, we found that completing the global mission fully autonomously, even for a 5-node policy, was very challenging. A total of seven manual interventions were required for reasons other than battery replacement in the two old field trials conducted. The failure to detect unmapped local obstacles directly led to collisions. We observed that the previous local planner experienced edge cases where it could not find a valid path around local obstacles while tracking the global plan. In addition, the local navigation system had many intermittent errors that temporarily stopped the robot due to false positives from obstacle detection. These past field experiences highlight the need to improve the previous system and conduct more field tests in new environments. This article extends our previous work as described by Y. Huang et al. (2023) in two ways. First, we made significant improvements to our local planner responsible for tracking the global path and handling any locally occurring uncertainties such as obstacles. Our ASV system estimates the waterline using a learned network and a stereo camera and detects underwater obstacles using a mechanically scanning sonar. We fuse both sensors into an occupancy grid map, facilitating a sampling-based local motion planner to compute a pathway to track the global path while avoiding local obstacles. As in our previous research, we use a timer to distinguish stochastic edges and select appropriate policy branches based on the traversability assessment of the stochastic edges. Secondly, we have validated the overall system on three distinct missions, two of which are new. Our field trials show that our ASV reliably and autonomously executes precomputed policies from the mission planner under varying operating conditions and amid unmapped obstacles, even when the local planner does not perfectly map the local environment or optimally steer the ASV. We have also tested the local planner through an ablation study to identify bottlenecks in localization, mapping, and sensor fusion in the field. Our lessons learned from our field tests are detailed, and we believe this work will serve as a beneficial reference for any future ASV systems developed for environmental monitoring. ## 2 Related Works Autonomous ASV navigation for environmental monitoring requires domain knowledge from multiple fields, such as perception, planning, and overall systems engineering. 
In this section, we present a brief survey of all these related fields and discuss the relationship to our methods and any remaining challenges. **Satellite Imagery Mapping** First, mission planning in robotics often requires a global, high-level map of the operating environment. Remote sensing is a popular technique to build maps and monitor changes in water bodies around the world because of its efficiency (C. Huang et al., 2018; X. Yang et al., 2017). The _JRC_ Global Surface Water dataset (Pekel et al., 2016) maps changes in water coverage from 1984 to 2015 at a 30 m by 30 m resolution, produced using _Landsat_ satellite imagery. Since water has a lower reflectance in the infrared channel, an effective method is to calculate water indices, such as Normalized Difference Water Index (NDWI) (McFeeters, 1996) or MNDWI (Xu, 2006), from two or more optical bands (e.g., green and near-infrared). However, extracting water data using a threshold in water indices can be nontrivial due to variations introduced by clouds, seasonal changes, and sensor-related issues. To address this, Li and Sheng (2012) and Feyisa et al. (2014) have developed techniques to select water-extraction thresholds adaptively. Our approach aggregates water indices from historical satellite images to estimate probabilities of water coverage (see Sec. 3.3). Overall, we argue that it is beneficial to build stochastic models of surface water Figure 1: Real-world challenges that motivate the use of stochastic edges in our planning setup. bodies due to their dynamic nature and imperfect knowledge derived from satellite images. **Global Mission Planning** The other significant pillar of building an ASV navigation system is mission planning. First formulated in the 1930s, the Travelling Salesman Problem (TSP) (Laporte, 1992) studies how to find the shortest path in a graph that visits every node once and returns to the starting node. Modern TSP solvers such as the _Google_ OR-tools (Perron and Furnon, 2023) can produce high-quality approximate solutions for graphs with about 20 nodes in a fraction of a second. Other variants have also been studied in the optimization community, such as the Travelling Repairman Problem (Afrati et al., 1986) that minimizes the total amount of time each node waits before the repairman arrives, and the Vehicle Routing Problem (Toth and Vigo, 2002) for multiple vehicles. In many cases, the problem graphs are built from real-world road networks, and the edges are assumed to be always traversable. In CTP (Papadimitriou and Yannakakis, 1991), however, edges can be blocked with some probability. The goal is to compute a policy that has the shortest expected path to travel from a start node to a single goal node. CTP can also be formulated as a Markov Decision Process (Bellman, 1957) and solved optimally with dynamic programming (Polychronopoulos et al., n.d.) or heuristic search (Aksakalli et al., 2016). The robotics community has also studied ways in which the CTP framework can be best used in path planning (Ferguson et al., 2004; Guo and Barfoot, 2019). Our problem setting, the Partial Covering Canadian Traveler Problem (PCCTP), lies at the intersection of TSP and CTP, where the goal is to visit a partial set of nodes on a graph with stochastic edges. 
A similar formulation, known as the Covering Canadian Traveler Problem (CCTP) (Liao and Huang, 2014), presents a heuristic, online algorithm named Cyclic Routing (CR) to visit every node in a complete \\(n\\)-node graph with at most \\(n-2\\) stochastic edges. A key distinction between CCTP and our setting is that CCTP assumes all nodes are reachable, whereas, in PCCTP the robot may give up on unreachable nodes located behind an untraversable edge. In our work, the user specifies the target locations of the planner, and the ASV can visit these predefined Figure 2: A high-level overview of our navigation framework for water sampling. Given a set of user-selected target locations (red icons), our algorithm identifies stochastic edges from coarse satellite images and plans a mission-level policy for ASV navigation. Aerial views of two stochastic edges from real-world experiments are shown here. locations and collect water samples for off-site analysis. Modern ASVs can also be equipped with scientific instruments, such as the YSI Sonde used in (Jeong et al., 2020), for in situ analysis of a spatial area. In this case, robotics path planning can also consider scientific values in addition to efficiency, traversability, and time constraints. Complete coverage planning first determines all areas that can be traversed and then selects either a set motion primitive (Garneau et al., 2013) or a lawnmower pattern (Karapetyan et al., 2018) to encompass a surveying region. The survey region can also be decomposed into many distinct, non-overlapping cells, and the visitation order can be optimized with the TSP algorithm (Choset, 2001). As an alternative, informative path planning adaptively plans the robot's path and goal based on real-time sensor data to maximize scientific value (Bai et al., 2021). A common approach builds a probabilistic model of the environment online with Bayesian models (W. Chen et al., 2022; Marchant Ramos, 2012) and then identifies the next target location that maximizes the information gain (Bai et al., 2016; Marchant Ramos, 2014). Informative path planning can also be performed in continuous space with a sampling-based planner, which finds the best path in the sampled tree or roadmap that maximizes information gain subject to a budget constraint (Hollinger and Sukhatme, 2014). These techniques have also been applied specifically to ASVs (Flaspohler et al., 2018; Kathen et al., 2021; Manjanna and Dudek, 2017; Peralta et al., 2023; Ten Kathen et al., 2023) and even underwater robots (Girdhar et al., 2014; Kemna et al., 2017) for marine environmental monitoring. **ASV Systems** In recent years, more ASV systems and algorithms for making autonomous decisions to monitor environments have been built. Schiaretti et al. (2017) classify the autonomy level for ASVs into 10 levels based on control systems, decision-making, and exception handling. Many works consider the mechanical, electrical, and control subsystems of their ASV designs (Ang et al., 2022; Ferri et al., 2015; Madeo et al., 2020). Jeong et al. (2020) optimized the ASV design to minimize the interference on sensor readings caused by the propulsion system and hull design. Dash et al. (2021) validated the use and accuracy of deploying ASVs for water-quality modelling by comparing the data collected from ASVs with independent sensors, and Roznere et al. (2021) confirmed that robotic water quality measurements were robust to sensor response time and robot motions. 
More examples of vertically integrated autonomous water-quality monitoring systems using ASVs are presented by H.-C. Chang et al. (2021), Cao et al. (2020), and Balbuena et al. (2017). The JetYak platform, introduced in Kimball et al. (2014), is a small and inexpensive ASV built for navigating and surveying in shallow or hazardous environments such as glaciers or unexplored ordnance areas. In Nicholson et al. (2018), the platform has also been retrofitted for large spatial-scale mapping of dissolved carbon dioxide and methane in a marine environment. Also modelled after JetYak, Moulton et al. (2018) proposed a more modular and flexible ASV design and discussed many valuable lessons learned to build a fleet of ASVs and their field deployments. In contrast, our main contribution is a robust mission-planning framework that is complementary to existing designs of ASV systems. **Local Motion Planning** Path planning for navigation and obstacle avoidance is a comprehensive field that has been extensively studied (Sanchez-Ibanez et al., 2021). The primary purpose of the local planner in this project is to successfully identify and follow a safe path that tracks the global path while averting locally detected obstacles in real-time. Sampling-based motion planners such as RRT* (Karaman and Frazzoli, 2011) and BIT* (Gammell et al., 2015) are favourable, owing to their probabilistically complete nature and proven asymptotic optimality given the right heuristics. Our local motion planner is based on Sehn et al. (2023), a variant of the sampling-based planner designed to follow a reference path. Using a new edge-cost metric and planning in the curvilinear space, their proposed planner can incrementally update its path to avoid new or moving obstacles without replanning from the beginning while minimizing deviation to the global reference path. Search-based algorithms, such as D* lite (Koenig and Likhachev, 2002) and Field D* (Ferguson and Stentz, 2007), commonly used in mobile robots and autonomous vehicles, operate on a discretized 2D grid and employ a heuristic to progressively locate a path from the robot's present location to the intended destination. Subsequently, the optimal solution from the path planning is submitted to a low-level controller tasked with calculating the necessary velocities or thrusts in mobile robotics systems. Parallel to the planning and control framework, other models such as direct tracking with a constrained model-predictive controller (Ji et al., 2016) and training policies for path tracking through reinforcement learning (Shan et al., 2020) have emerged as new areas of research in recent years. **Perception** Lastly, our navigation framework requires local perception modules to clarify uncertainties in our map and avoid obstacles. Vision-based obstacle detection and waterline segmentation have also received renewed attention in the marine robotics community. Recent contributions have largely focused on detecting or segmenting obstacles from RGB images using neural networks (Lee et al., 2018; Qiao et al., 2022; Steccanella et al., 2020; Tersek et al., 2023; J. Yang et al., 2019). A substantial amount of research has been dedicated to identifying waterlines (Steccanella et al., 2020; Steccanella et al., 2019; Yin et al., 2022; Zhou et al., 2022) since knowing the whereabouts of navigable waterways can often be sufficient for navigation. 
Several annotated datasets collected in different water environments, such as inland waterways (Cheng et al., 2021) and coastal waters (Bovcon et al., 2019, 2021) have been published by researchers. Foundational models for image segmentation, such as 'Segment Anything' (Kirillov et al., 2023), have also gathered increasing attention due to their incredible zero-shot generalization ability and are being used in tracking (Maalouf et al., 2023) or remote sensing tasks (K. Chen et al., 2023). Sonar is another popular sensor that measures distance and detects objects on or under water surfaces using sound waves. Heidarsson and Sukhatme (2011a) pioneered the use of a mechanical scanning sonar for ASV obstacle detection and avoidance and demonstrated that obstacles generated from sonar could serve as labels for aerial images (Heidarsson and Sukhatme, 2011b). Karoui et al. (2015) focused on detecting and tracking sea-surface objects and wakes from a forward-looking sonar image. Occupancy-grid mapping, a classic probabilistic technique for mapping the local environment, was used to fuse measurements from sonars and stereo cameras on a mobile ground robot (Elfes, 1989). For our perception pipeline, we combine the latest advances in computer vision, large datasets from the field, and traditional filtering techniques to make the system robust in real-world operating conditions. Despite advances, accurate sensor fusion of above-water stereo cameras and underwater sonar for precise mapping on an ASV remains a formidable research challenge. ## 3 Global Mission Planner In this section, we will describe the mathematical formulation of the planning problem and present a detailed breakdown of our algorithm. Most of the content in this section, including the problem formulation and the PCCTP-AO* algorithm, has been introduced in our previous work (Y. Huang et al., 2023). ### The Problem Formulation We are interested in planning on a graph representation of a lake where parts of the water are stochastic (i.e., uncertain traversability). Constructing such a graph using all pixels of satellite images is impractical since images are very high-dimensional. Thus, we extend previous works from CTP (Guo and Barfoot, 2019; Liao and Huang, 2014; Papadimitriou and Yannakakis, 1991) and distill satellite images into a high-level graph \\(G\\) where some stochastic edges \\(e\\) may be untraversable with probability \\(p\\). The state of a stochastic edge can be disambiguated only when the robot traverses the edge in question. The robot begins at the starting node \\(s\\) and is tasked to visit all reachable targets \\(J\\) specified by the user (e.g., scientists) before returning to the starting node. If some target nodes are unreachable because some stochastic edges block them from the starting node, the robot may give up on these sampling targets. We call this problem the Partial Covering Canadian Traveller Problem (PCCTP). Fig. 3 is a simplified graph representation of a lake with two stochastic edges. The state of the robot is defined as a collection of the following: a list of target nodes that it has visited, the current node it is at, and its knowledge about the stochastic edges. A policy sets the next node to visit, given the current state of the robot. The objective is to find the optimal policy \\(\\pi^{*}\\) that minimizes the expected cost to cover all reachable targets. In the example problem (Fig. 3), the robot can either disambiguate the left or right stochastic edge to reach the sampling location. 
Formally, we define the following terms: * \\(G=(V,E)\\) is an undirected graph. * \\(c:E\\rightarrow\\mathbb{R}_{\\geq 0}\\) is the cost function for an edge, which is the length of the shortest waterway between two points. * \\(p:E\\rightarrow[0,1]\\) is the blocking probability function. An edge with 0 blocking probability is deterministic; otherwise, it is stochastic. * \\(k\\) is the number of stochastic edges. * \\(s\\in V\\) is the start and return node. * \\(J\\subseteq V\\) is the subset of target nodes to visit. There are \\(|J|\\leq|V|\\) goal nodes. * \\(I=\\{\\mathrm{A},\\mathrm{T},\\mathrm{U}\\}^{k}\\) is an information vector that represents the robot's knowledge of the status of all \\(k\\) stochastic edges. A, T, and U stand for ambiguous, traversable, and untraversable, respectively. * \\(S\\subseteq J\\) is the subset of target nodes that the robot has visited. * \\(a\\) is the current node the robot is at. * \\(x=(a,S,I)\\) is the state of the robot. \\(a\\) is the current node, \\(S\\) is the set of visited targets, and \\(I\\) is the current information vector. * \\(\\pi^{*}\\) is the optimal policy that minimizes the cost \\(\\mathbb{E}_{w\\sim p(w)}\\left[\\phi\\left(\\pi\\right)\\right]\\), where \\(\\phi\\) is cost functional of the policy \\(\\pi\\) and \\(w\\) is a possible world of stochastic graph, where each stochastic edge is assigned a traversability state. ### Exactly Solving PCCTP with AO* We extend the AO* search algorithm (Aksakalli et al., 2016) used in CTP to find exact solutions to our problem. AO* is a heuristic, best-first search algorithm that iteratively builds an AO tree to explore the state space until the optimal solution is found. In this section, we will first explain how to use an AO tree to represent a PCCTP instance, then break down how to use AO* to construct the AO tree containing the optimal policy. Figure 3: A toy example graph shown on the water mask generated from _Sentinel-2_ satellite images, with the corresponding graph on an aerial view image shown on the right. The planned paths between nodes are simplified for ease of understanding. The number beside each edge of the high-level graph is the path length in km, and the number in brackets is the blocking probability, which is computed using the probability of water coverage in each pixel (represented by its shade of orange) on the path. Note that traversable and ambiguous edges are the state before any action. **AO Tree Representation of PCCTP** The construction of the AO tree is a mapping of all possible actions the robot can take and all possible disambiguation outcomes at every stochastic edge. Following Aksakalli et al. (2016), an AO tree is a rooted tree \\(T=(N,A)\\) with two types of nodes and arcs. A node \\(n\\in N\\) is either an OR node or an AND node; hence the node set \\(N\\) can be partitioned into the set of OR nodes \\(N_{O}\\) and the set of AND nodes \\(N_{A}\\). Each arc in \\(A\\) represents either an action or a disambiguation outcome and is not the same as \\(G\\)'s edges (\\(A\ eq E\\)). For all \\(n\\in N\\), a function \\(c:A\\rightarrow\\mathbb{R}_{\\geq 0}\\) assigns the cost to each arc. Also, for all \\(n\\in N_{A}\\), a function \\(p:A\\rightarrow[0,1]\\) assigns a probability to each arc. 
A function \\(f:N\\rightarrow\\mathbb{R}_{\\geq 0}\\) is the cost-to-go function if it satisfies the following conditions: * if \\(n\\in N_{A}\\), \\(f(n)=\\sum_{n^{\\prime}\\in N(n)}[p(n,n^{\\prime})\\times(f(n^{\\prime})+c(n,n^{ \\prime}))]\\), * if \\(n\\in N_{O}\\), \\(f(n)=\\min_{n^{\\prime}\\in N(n)}[f(n^{\\prime})+c(n,n^{\\prime})]\\), * if \\(n\\in N\\) is a leaf node, \\(f(n)=0\\). Now, we can map each node and edge such that the AO tree represents a PCCTP instance. Specifically, each node \\(n\\) is assigned a label \\((n.a,n.S,n.I)\\) that represents the state of the robot. \\(n.a\\) is the current node, \\(n.S\\) is the set of visited targets, and \\(n.I\\) is the information vector containing the current knowledge of the stochastic edges. The root node \\(r\\) is an OR node with the label \\((s,\\emptyset,\\text{AA A})\\), representing the starting state of the robot. An outgoing arc from an OR node \\(n\\) to its successor \\(n^{\\prime}\\) represents an action, which can be either visiting the remaining targets and returning to the start or going to the endpoint of an ambiguous edge via some target nodes along the way. An AND node corresponds to the disambiguation event of a stochastic edge, so it has two successors describing both possible outcomes. Each succeeding node of an OR node is either an AND node or a leaf node. A leaf node means the robot has visited all reachable target nodes and has returned to the start node. Each arc \\((n,n^{\\prime})\\) is assigned a cost \\(c\\), which is the length of travelling from node \\(n.a\\) to node \\(n^{\\prime}.a\\) while visiting the subset of newly visited targets \\(n^{\\prime}.S\\setminus n.S\\) along the way. For all Figure 4: The final AO tree after running PCCTP-AO* on the example in Fig. 3. The label inside each node is the current state of the robot. OR nodes are rectangles, and AND nodes are ellipses. Nodes that are part of the final policy are green, extra expanded nodes are yellow, and leaf nodes terminated early are orange. Some orange nodes that are terminated early are left out in this figure for simplicity. This figure is reproduced from Fig. 3 in Y. Huang et al. (2023). outgoing arcs of an AND node, the function \\(p\\) assigns the traversability probability for the stochastic edge. The cost of disambiguating that edge is its length. Once the complete AO tree is constructed, the optimal policy is the collection of nodes and arcs included in the calculation of the cost-to-go from the root of the tree, and the optimal expected cost is \\(f(r)\\). For example, the optimal action at an OR node \\(n\\) is the arc \\((n,n^{\\prime})\\) that minimizes the cost-to-go from \\(n\\), while the next action at an AND node depends on the disambiguation outcome. However, constructing the full AO tree from scratch is not practical since the space complexity is exponential with respect to the number of stochastic edges. Instead, we use the heuristic-based AO* algorithm, explained below. 
``` 0: Graph \\(G(V,E)\\), cost function \\(c\\), heuristic \\(h\\), blocking probability \\(p\\), target set \\(J\\), \\(k\\) stochastic nodes, start node \\(s\\) 1:\\(n.a=s\\), \\(n.S=\\emptyset\\), \\(n.I=\\{A\\}^{k}\\) 2:\\(f(n)=h(n)\\); \\(n.\\)type = OR; \\(T.\\)root = \\(n\\) 3:while\\(T.\\)root.status \\(\ eq\\) solved do 4:\\(n=\\textsc{SelectNode}(T.\\)root) 5:for\\(n^{\\prime}\\in\\textsc{Expand}(n,T)\\)do 6:\\(f(n^{\\prime})=h(n^{\\prime})\\) 7:ifReachableSet(\\(J\\), \\(n^{\\prime}.I)\\subseteq n^{\\prime}.S\\)then 8:\\(n^{\\prime}.\\)status = solved 9:endif 10:endfor 11:Backprop(\\(n,T\\)) 12:endwhile 13:if\\(T.\\)root.\\(f==\\) inf then return No Solution 14:endif 15:return\\(T\\) 16:functionBackprop(\\(n,T\\))\\(\\triangleright\\) Update the cost of the parent of \\(n\\) recursively until the root. Same as in Guo and Barfoot (2019). 17:while\\(n\ eq T.\\)root do 18:if\\(n.\\)type == OR then 19:\\(n^{*}=\\operatorname{argmin}_{n^{\\prime}\\in N(n)}[f(n^{\\prime})+c(n,n^{\\prime})]\\) 20:\\(f(n)=f(n^{*})+c(n,n^{*})\\) 21:if\\(n^{*}.\\)status == solved then\\(n.\\)status = solved 22:endif 23:endif 24:if\\(n.\\)type == AND then 25:\\(f(n)=\\sum_{n^{\\prime}\\in N(n)}[p(n,n^{\\prime})\\times(f(n^{\\prime})+c(n,n^{ \\prime}))]\\) 26:if\\(n^{\\prime}.\\)status == solved \\(\\forall n^{\\prime}\\in N(n)\\)then 27:\\(n.\\)status = solved 28:endif 29:endif 30:\\(n=n.\\)parent 31:endwhile 32:endfunction ``` **Algorithm 1** The PCCTP-AO* Algorithm **PCCTP-AO* Algorithm** Our PCCTP-AO* algorithm (Algorithm 1 and 2) is largely based on the AO* algorithm (C. L. Chang and Slagle, 1971; Martelli and Montanari, 1978). AO* utilizes an admissible heuristic \\(h:N\\rightarrow\\mathbb{R}_{\\geq 0}\\) that underestimates the cost-to-go \\(f\\) to build the AO tree incrementally from the root node until the optimal policy is found. The algorithm expands the most promising node in the current AO tree based on a heuristic and backpropagates its parent's cost recursively to the root. This expansion-backpropagation process is repeated until the AO tree includes the optimal policy. One key difference between AO* and PCCTP-AO* is that the reachability of a target node may depend on the traversability of a set of critical stochastic edges connecting the target to the root. If a target \\(j\\in J\\) is disconnected from the current node \\(a\\) when all the stochastic edges from a particular set are blocked, then this set of edges is critical. For example, the two stochastic edges in the top-right graph of Fig. 3 are critical because target node 1 would be unreachable if both edges were blocked. Thus, a simple heuristic that assumes all ambiguous edges are traversable may overestimate the cost-to-go if skipping unreachable targets reduces the overall cost. Alternatively, we can construct the following relaxed problem to calculate the heuristic. If a stochastic edge is not critical to any target, we still assume it is traversable. Otherwise, we remove the potentially unreachable target for the robot and instead disambiguate one of the critical edges of the removed target. The heuristic is the cost of the best plan that covers all definitively reachable targets and disambiguates one of the critical stochastic edges. For example, consider computing the heuristic at starting node 0 in Fig. 5. The goal is to visit both nodes 1 and 2 if they are reachable. Node 1 is always reachable; hence we assume it is traversable in the relaxed problem. 
Node 2 may be unreachable, so we remove the stochastic edge (4, 2) and ask the boat to visit Node 4 instead in the relaxed problem. This heuristic is always admissible because the path to disambiguate a critical edge is always a subset of the eventual policy. We can compute this by constructing an equivalent generalized travelling salesman problem (Noon & Bean, 1993) and solve it with any optimal TSP solver. Fig. 4 shows the result of applying PCCTP-AO* to the example problem in Fig. 3. The returned policy (coloured in green nodes) tries to disambiguate the closer stochastic edge \\((2,3)\\) to reach target node 1. Note that the AO* algorithm stops expanding as soon as the lower bound of the cost of the right branch exceeds that of the left branch. This guarantees the left branch has a lower cost and, thus, is optimal. ### Estimating Stochastic Graphs From Satellite Imagery We will now explain our procedure to estimate the high-level stochastic graph from satellite images. **Water Masking** Our first step is to build a water mask of a water area across a specific period (e.g., 30 days). We use the _Sentinel-2_ Level 2A dataset (Drusch et al., 2012), which has provided multispectral images at 10 m by 10 m resolution since 2017. Each geographical location is revisited every five days by a satellite. We then select all satellite images in the target spatiotemporal window and filter out the cloudy images using the provided cloud masks. For each image, we calculate the Normalized Difference Water Index (NDWI) (McFeeters, 1996) for every pixel using green and near-infrared bands. However, the distribution of NDWI values varies significantly across different images over time. Thus, we separate water from land in each image and aggregate the indices over time. We then fit a bimodal Gaussian Mixture Model on the histogram of NDWIs to separate water pixels from non-water ones for each image. We average all water masks over time to calculate the probabilistic water mask at the target spatiotemporal window. Each pixel on the final mask represents the probability of water coverage on this 10 m by 10 m area. If the probability of water for a pixel is greater than 90%, we treat it as a deterministic water pixel. We then classify pixels Figure 5: Example of how we relax the original problem graph to calculate the heuristic \\(h(n)\\). At a high level, we construct a relaxed problem by removing all stochastic edges and unreachable nodes from the original graph. Then, the heuristic of the original problem is the cost of the relaxed problem and is always admissible. with a probability lower than 90% but greater than 50% as stochastic water pixels. Finally, we identify the boundary of all deterministic water pixels. Fig. 7 shows an overview of these steps. Figure 6: Satellite images illustrating two types of stochastic edges. Water pixels are marked in blue, with their estimated boundaries in black. The left image demonstrates several pinch points, highlighted in orange, that represent potential paths connecting water pixels from two distinct water bodies that are otherwise far or disconnected. The image on the right visually describes the concept of a windy edge. Any water pixel at least 200 metres from the boundary falls within the yellow windy area. If an edge crosses the windy area, then it is classified as a windy edge. Figure 7: Overview of the water-masking process for deriving water probabilities from satellite imagery. 
The procedure begins with historical Sentinel-2 satellite images displayed on the left. Water pixels are individually identified in each image by first calculating the NDWI water index and then using a bimodal Gaussian Mixture Model for classification. The results of each classification are averaged to determine the probabilities of water, which are depicted on the right. Pixels with reduced water probabilities are coloured more yellow. **Stochastic Edge Detection: Pinch Points** We can now identify those stochastic water paths (i.e., narrow straits, pinch points (Ferguson et al., 2004)) that are useful for navigation. A pinch point (e.g. Fig. 6a) is a sequence of stochastic water pixels connecting two parts of topologically far (or distinct) but metrically close water areas. Essentially, this edge is a shortcut connecting two points on the water boundary that are otherwise far away or disconnected. To find all such edges, we iterate over all boundary pixels, test each shortest stochastic water path to nearby boundary pixels, and include those stochastic paths that are shortcuts. The blocking probability of a stochastic edge is one minus the minimum water probability along the path. Since this process will produce many similar stochastic edges around the same narrow passage, we run DBSCAN (Ester et al., 1996) and only choose the shortest stochastic edge within each cluster. **Stochastic Edge Detection: Windy Edges** The second type of stochastic edges are those with strong wind. In practice, when an ASV travels on a path far away from the shore, there is a higher chance of running into a strong headwind or wave, making the path difficult to traverse. Hence, we define a water pixel to be a windy area if it is at least 200m away from any points of the water boundary. An edge is then treated as a windy edge if it crosses the windy area at some point and we assign a small probability for the event where the wind blocks the edge. An example of a windy area and an associated windy edge is shown in Fig. 6b. **Path Generation** The next step is to construct the geo-tagged path and calculate all edge costs in the high-level graph. The nodes in the high-level graph are composed of all sampling targets, endpoints of stochastic edges, and the starting node. We run A* (Hart et al., 1968) on the deterministic water pixels to calculate the shortest path between every pair of nodes except for the stochastic edges found in the previous step. Since the path generated by A* connects neighbouring pixels, we smooth them by randomized shortcutting (Geraerts & Overmars, 2007). Then, we can discard any unnecessary stochastic edges if they do not reduce the distance between a pair of nodes. For every stochastic edge, we loop over all pairs of nodes and check if setting the edge traversable would reduce the distance between the pair of nodes. Finally, we check if each deterministic edge is a windy edge and obtain the high-level graph used in PCCTP. In summary, we estimate water probabilities from historical satellite images with adaptive NDWI indexing and build a stochastic graph connecting all sampling locations and pinch points. The resulting compact graph representing a PCCTP instance can be solved optimally with an AO* heuristic search. ## 4 Simulations In this chapter, we will verify the efficacy of our PCCTP planning framework in a large-scale simulation of mission planning on real lakes. The testing dataset and simulation results from this section can be reproduced from our previous work in Y. Huang et al. 
(2023). ### Testing Dataset We evaluate our mission-planning framework on Canadian lakes selected from the _CanVec Series_ Ontario dataset (Natural Resources Canada, 2019). Published by _Natural Resources Canada_, this dataset contains geospatial data of over 1.1 million water bodies in Ontario. Considering a practical mission length, lakes are filtered such that their bounding boxes are 1-10 km by 1-10 km. Then, water masks of the resulting 5190 lakes are generated using _Sentinel-2_ imagery across 30 days in June 2018-2022 (Drusch et al., 2012). We then detect any pinch points on the water masks and randomly sample five different sets of target nodes on each lake, each with a different number of targets. The starting locations are sampled near the shore to mimic real deployment conditions. Furthermore, we generate high-level graphs and windy edges from the water mask. Graphs with no stochastic edges are removed as well as any instances with more than ten stochastic edges due to long run times. Ultimately, we evaluate our algorithm on 2217 graph instances, which come from 1052 unique lakes. ### Baseline Planning Algorithms The simplest baseline is an online greedy algorithm that always goes to the nearest unvisited target node assuming all ambiguous edges are traversable. For a graph with \\(k\\) stochastic edges, we simulate all \\(2^{k}\\) possible worlds, each with a different traversability permutation, and evaluate our greedy actor on each one. The greedy actor recomputes a plan at every step and queries the simulator if it encounters a stochastic edge to disambiguate it. Also, it checks the reachability of every target node upon discovering an untraversable edge and gives up on any unreachable targets. A more sophisticated baseline is the optimistic TSP algorithm. Instead of always going to the nearest target node, it computes the optimal tour to visit all remaining targets assuming all ambiguous edges are traversable. Similar to the greedy actor, TSP recomputes a tour at every step and may change its plan after encountering an untraversable edge. The expected cost is computed via a weighted sum on all \\(2^{k}\\) possible worlds. In contrast to PCCTP, both baselines require onboard computation to update their optimistic plans, whereas PCCTP precomputes a single optimal policy that is executed online. Lastly, we modify the CR algorithm, originally a method for CCTP (Liao and Huang, 2014), to solve PCCTP. CR precomputes a cyclic sequence to visit all target nodes using the Christofides algorithm (Christofides, 1976) and tries to visit all target nodes in multiple cycles while disambiguating stochastic edges. If a target node turns out to be unreachable, we allow CR to skip this node in its traversal sequence. Figure 8: Results of PCCTP and baselines in simulation. In (a), the performance of PCCTP is compared against three baselines. Our proposed method achieves the lowest average expected regret and outperforms the next-best baseline by 1.8km in the extreme case. Note that the stochastic edges include both windy edges and pinch points. In (b), only the CPU execution time for PCCTP is shown, since all baselines are online methods. ### Results Fig. 8a compares our algorithm against all baselines. To measure the performance across various graphs of different sizes, we use the average expected regret over all graphs. 
The expected regret of a policy \\(\\pi\\) for one graph \\(G\\) is defined as \\[\\mathbb{E}_{w}[\\text{Regret}(\\pi)]=\\sum_{w}[p(w)(\\phi(\\pi,w)-\\phi(\\pi^{p},w))],\\] where \\(\\pi^{p}\\) is a privileged planner with knowledge of the states of all stochastic edges, \\(\\phi\\) is the cost functional, and \\(w\\) is a possible state of (the stochastic edges of) the graph. The cost \\(\\phi(\\pi^{p},w)\\), calculated using a TSP solver for each state, serves as a lower bound to the costs incurred by the policy \\(\\phi(\\pi,w)\\). A low expected regret indicates that the policy \\(\\pi\\) will find efficient paths to visit all target locations and disambiguate the stochastic edges without prior knowledge of their states. PCCTP precomputes the optimal policy in about 50 seconds on average in our evaluation, and there is no additional cost online. Compared to the strongest baseline (TSP), our algorithm saves the robot about 1%(50m) of travel distance on average and 15%(1.8km) in the extreme case. Although the advantage is not statistically significant on average, our planner still offers advantages in many specific scenarios and edge cases, such as those with high blocking probabilities or long stochastic edges. The performance of PCCTP may be further enhanced if the estimated blocking probabilities of the stochastic edges are refined based on historical data. We also find that the performance gap between our algorithms and baselines becomes more significant with more windy edges. In fact, if the only type of stochastic edges in our graph is pinch points (i.e., the number of windy edges is 0), the performance gap is almost negligible between PCCTP and the optimistic TSP baseline. The main reason is that most pinch points only reduce the total trip distance by hundreds of meters on a possible state of the graph. Pinch points are most likely to be found either on the edges of a lake or as the only water link connecting two water bodies. In the first case, these pinch points are unlikely to be a big shortcut. As for the latter case, if the pinch point is the only path connecting the starting location to a target node, disambiguating this edge has to be part of the policy. On the other hand, windy edges passing through the centre of a lake are often longer, and the gap between the optimal and suboptimal policy is much more significant. **Computational Complexity** The worst-case complexity of our optimal search algorithm is \\(O(|J|!\\times k!\\times 2^{k})\\), which depends on number of stochastic edges \\(k\\) and number of target nodes \\(|J|\\). The complexity is exponential in nature because there are \\(2^{k}\\) possible states of the stochastic edges, and all possible orders to visit all nodes and disambiguate stochastic edges need to be enumerated without a good heuristic in the worst case. In practice, however, our implementation performs efficiently on standard laptop CPUs. The median runtime of our algorithm, implemented in Python, is less than one second, and 99% of the instances run under 3 minutes. Nonetheless, approximately 0.5% of instances with eight nodes and 4.2% with ten nodes require more than five minutes to process. We believe the runtime can be considerably improved by rewriting in a more efficient language, such as C++. More importantly, we argue that this one-time cost can occur offline before deploying the robot into a water-sampling mission. 
Although the worst-case runtime of the AO* algorithm can increase exponentially as the graph increases in size, the number of target locations in each graph cannot grow infinitely for real-world water-sampling missions. Hence, the runtime of PCCTP is not a concern for practical applications. ## 5 Autonomous Navigation System This section will explain our local navigation framework in detail and how the robot can execute the mission-level policy and safely follow its planned trajectory. ### Stochastic Edge Disambigutaion One crucial aspect required for fully autonomous policy execution is the capacity to disambiguate stochastic edges. Our approach is to build a robust autonomy framework (Fig. 9) that relies less on lower-level components such as perception and local planning to execute a policy successfully. In more general terms, the mission planner precomputes the navigation policies from satellite images given user-designated sampling locations. During a mission, the robot will try to follow the global path published by the policy. Sensor inputs from a stereo camera and sonar scans are processed and filtered via a local occupancy-grid mapper. The local planner then tries to find a path in the local frame that tracks the global plan and avoids any obstacles detected close to the future path of the robot. When the robot is disambiguating a stochastic edge, the policy executor will independently decide the edge's traversability based on the GPS location of the robot and a timer. A stochastic edge is deemed traversable if the robot reaches the endpoint of the prescribed path of this edge within the established time limit. If it fails to do so, the edge is deemed untraversable. There is no explicit traversability check on an ambiguous stochastic edge, such as a classifier or a local map. The timer allows us to address complications we cannot directly sense, such as heavy prevailing winds or issues with the local planner. Following this, the executor branches into different policy cases depending on the outcome of the disambiguation. We made significant improvements to our local navigation framework compared to our previous work in (Y. Huang et al., 2023). Similar to before, the traversability assessment uses a timer and GPS locations to classify an edge's traversability without directly relying on the result of obstacle detections or local mapping. Instead, we made design decision changes to the local planning architecture and improvements to individual modules. Below, we will explain all the important components and highlight any changes we made. Figure 9: The autonomy modules of our navigation system. Global mission planners are coloured in green, sensors inputs are labelled in blue, localization and local mapping nodes are shaded in orange while planning and control nodes are in purple. We have also specifically indicated the modules where we made significant improvements over our previous work Y. Huang et al. (2023). ### Terrain Assessment with Stereo Camera An experienced human paddler or navigator can easily estimate the traversability of a lake by visually distinguishing water from untraversable terrains, obstacles, or any dynamic objects. In our previous work in Y. Huang et al. (2023), the video stream collected from the stereo camera is processed geometrically by estimating the water surface from point clouds and clustering point clouds above the water surface as obstacles. 
This process was prone to stereo-matching errors due to sun glare and calm water surfaces, and could not detect any obstacles on the water surface such as a shallow rock. To address these shortcomings, we use semantic information from RGB video streams and neural stereo disparity maps to estimate traversable waters in front of the robot and identify obstacles. We learn a water segmentation network and bundle it with a temporal filter to estimate the waterline in image space and remove outliers. The estimated waterline is then projected to 3D using the disparity map and used to update the occupancy grid. We provide more details in the following sections. #### 5.2.1 Water Segmentation Network The most important factor in training a robust neural network for water segmentation is a large and diverse dataset. The characteristics of water's appearance exhibit considerable variation contingent on factors such as wind, reflections, and ambient brightness, as demonstrated in Fig. 10. Yet, the stereo camera falters in difficult lighting conditions due to the lack of dynamic range, culminating in inadequately exposed images and the emergence of artifacts such as shadows, lens flare, and noise. Figure 10: Examples of challenging conditions for semantic segmentation and disparity mapping. Our image dataset is collected from previous field tests in Nine Mile Lake and a stormwater management pond at the University of Toronto. However, manual annotation of thousands of images is impractical due to its labour and time intensity. Thus, since semantic segmentation is a well-explored research area, we used a pre-trained SAM (Segment Anything Model) to automate the process of creating ground-truth labels (Kirillov et al., 2023). SAM will try to segment everything beyond just water, outputting numerous masks of different irrelevant items. While SAM cannot yet classify the regions it labels, water normally occupies the lower half of the frame and is commonly characterized by substantial area and continuity, so we can apply a heuristic that heavily favours these features, scoring regions to distinguish the water mask \(m_{\text{water}}\) from other masks with very high accuracy: \[m_{\text{water}}=\operatorname*{arg\,max}_{m_{i}}\left[\frac{A(m_{i})}{d(m_{i})+1}\right],\] where \(m_{i}\) denotes the mask of class \(i\) from SAM, \(A\) computes the total number of pixels a mask occupies, and \(d\) represents the vertical distance, in pixels, of the masked area's centroid from the image's bottom. Then, false positives within the identified mask will be filtered out. With manual checking, we found that this simple approach successfully labelled the entirety of our dataset without failure. An example of this process is shown in Fig. 11. Figure 11: The steps in the automatic process of generating ground-truth labels. Each distinct colour overlay represents a different object as segmented by SAM. The red region is the final ground-truth water mask after filtering SAM masks, which is used for training our own water segmentation network. Finally, we have a binary mask ready to be fed into training. Another important technique to improve the quality of neural networks is data augmentation. Our limited training set cannot match all the possible lighting and environmental conditions that the ASV may encounter in future missions. However, we would like the trained network to be robust against issues such as bad exposure and reflections, which significantly affect segmentation performance. To this end, we use colour-jittering and CutMix (Yun et al., 2019) during training, and we find that they greatly enhance out-of-distribution performance, yielding superior generalization in challenging weather conditions as in Fig. 10. Essentially, regions of one training image are cut and overlaid onto another, as demonstrated in Fig. 12, to encourage the model to learn more diverse and challenging features while also expanding our limited dataset. Figure 12: Example of CutMix augmented training data. A second image with a random exposure multiplier is randomly resized and placed on top of the original image. The red region highlights the pixels that are labelled as water. Our model architecture and pre-trained weights are adopted from the eWaSR maritime obstacle detection network based on the ResNet-18 backbone (Teresek et al., 2023). Our training dataset contains 4,000 images while our testing split contains 200 images. Images in the training set are sampled uniformly from video recordings of past experiments, while the images of the test set are held out from challenging scenarios that we manually identify. Fig. 13a displays some examples from the test set. In addition, 10 more labels are generated randomly using CutMix during training for every original labelled image. Inspired by the semantic segmentation community (Bovcon et al., 2019; Long et al., 2015), we use Intersection over Union (IOU, also known as the Jaccard Index) for water pixels as a metric to evaluate the performance of the trained segmentation network. Let \(n_{tp}\) be the number of correctly predicted water pixels, \(n_{p}\) be the total number of pixels classified as water by the neural network, and \(n_{g}\) be the total number of pixels labelled as water. The IOU can then be calculated as \(n_{tp}/(n_{g}+n_{p}-n_{tp})\). After training, we achieve an average IOU of 0.992 on the 200-image hold-out test set. The results of the predicted water masks after training are shown in Fig. 13b, and our lightweight yet powerful neural network consistently produces binary masks that accurately segment water. Figure 13: Example images and their predicted water mask from our test set. To better assess model capabilities, we hand-picked a diverse set of challenging scenarios, such as reflections, bad exposure, strong glares, aquatic plants, shallow areas, and windy water surfaces. Despite these challenges, our trained water segmentation network reliably produces high-quality water masks. #### 5.2.2 Waterline Estimation and Tracking There are several issues associated with the direct use of raw segmentation masks produced by a neural network. Firstly, the 2D, per-pixel water labels are not inherently suited for determining traversability in front of the robot. Secondly, depth estimations derived from the stereo camera can be severely distorted due to unfavourable conditions such as sun glare or tranquil water reflections. In Y. Huang et al. (2023), the geometry-based approach to estimating water planes from point clouds derived from depth images was highly sensitive to noise and required substantial manual adjustments. Lastly, both neural segmentation masks and depth maps can exhibit noise and inconsistency over successive timestamps. These issues necessitate avoiding the direct combination of segmentation masks with depth maps to ascertain the existence of a 3D water surface. 
Instead, we filter the segmentation masks both spatially and temporally to approximate a waterline in 2D image space, then project this line into 3D space. This projected line then forms the basis for traversability estimates in 3D based on stereo data. We approximate the 2D waterline as a vector comprising \(n\) elements, where \(n\) represents the image's width. Each element serves to indicate the waterline's position for that column. The fundamental premise here is that each column contains a clear division between water and everything else, such as the sky, trees, people, shoreline buildings, and other dynamic obstacles. Thus, we can presume that only the pixels below the waterline are navigable, while those above are impassable. This model works well because water surfaces are typically horizontal when viewed from the first-person perspective of the ASV. Therefore, for the purpose of evaluating the robot's forward navigation, we can safely disregard any water pixels higher than the defined waterline in the image space. The position of the waterline on every column is identified by scanning upwards from the column's bottom until a non-water region is detected using a small moving window. If \(s\) is the window size, the separation point is the first pixel from the bottom such that the next \(s\) pixels above are all non-water. Usually, the window size is five. Our filtering process consists of two stages: spatial filtering based on RANSAC (Fischler and Bolles, 1981) and employing a Kalman filter subsequently for temporal tracking of the waterline. We design the spatial filtering step to smooth the waterline and remove spatial outliers; to this end, we employ nearest-neighbour interpolation to fit the random samples in each iteration. RANSAC uses the squared loss function to compare the interpolated waterline and the raw waterline. Figure 14: The stereo-based waterline estimation pipeline. A waterline in the 2D image is estimated and tracked using the water segmentation masks. The 2D waterline is mapped to the depth image generated by the ZED SDK and reprojected into the 3D camera frame based on the depth of waterline pixels and camera intrinsics. The final red points in top view represent the 3D projection of the estimated waterline and separate the traversable water in front of the robot from the impassable area. Then, we apply a linear Kalman filter with outlier rejection to track each individual element (column) of the waterline temporally. The Kalman filter uses the RANSAC-filtered waterline as observations and maintains an estimated waterline as the state. Both the state transition matrix and the observation matrix are identities. We use a chi-squared test to discard outliers, which compares the normalized innovation squared to a predetermined threshold. Using both filters, we can eliminate noise in the segmentation mask and mitigate any temporal oscillation or abrupt changes in the predicted water segmentation masks. In practice, we find that the quality of filtering is not very sensitive to the parameters of RANSAC and the Kalman filter. At the end of the filtering process, we have a smoothed 2D waterline with one pixel per column that separates navigable water from everything else. We project this line back to the camera's 3D frame. As shown in Fig. 14, we use the depth coordinate of each waterline pixel and calculate their 3D positions with the intrinsics of the stereo camera. 
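As a minimal illustration of this last step, the back-projection of the filtered 2D waterline can be written with a standard pinhole model. The sketch below is a simplified stand-in for the actual pipeline (which uses the ZED SDK depth image); the function and variable names are illustrative only.

```python
import numpy as np

def project_waterline_to_3d(waterline_rows, depth_image, fx, fy, cx, cy):
    """Back-project the filtered 2D waterline into the camera frame.
    `waterline_rows[u]` is the waterline row (pixel v) for image column u,
    `depth_image` holds metric depth per pixel, and (fx, fy, cx, cy) are the
    pinhole intrinsics of the stereo camera."""
    points = []
    for u, v in enumerate(waterline_rows):
        z = depth_image[int(v), int(u)]
        if not np.isfinite(z) or z <= 0.0:     # skip columns with invalid depth
            continue
        x = (u - cx) * z / fx                  # rightward in the camera frame
        y = (v - cy) * z / fy                  # downward in the camera frame
        points.append((x, y, z))
    return np.array(points)                    # one 3D point per valid column
```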
In the end, the 3D waterline separates the traversable water in front of our ASV from any obstacles originally above the 2D waterline. If the 2D waterline is at the horizon (i.e., it separates water and the sky), the projected 3D waterline will be very far away or close to infinity, meaning that all fields of view in front of the robot are traversable water. ### Obstacle Detection with Sonar Sonar is commonly used as a sensor in maritime applications for both ships and submarines. A specific type, the Blue Robotics Ping360 mechanical scanning sonar, serves as our primary sensing module underwater. It is mounted underwater and operates by emitting an acoustic beam within a forward-facing fan-shaped cone. This beam has a consistent width (1°) and height (20°). The sonar then records the echoes reflected by objects, with the reflection strength relating directly to the target's density. By measuring the return time and factoring in the speed of sound in water, the range of these echoes can be determined. The sonar's transducer can also be rotated to control the horizontal angle of the acoustic beam. Configured to scan a 120° fan-shaped cone ahead of the boat, the sonar can complete these scans up to a range of 20m in approximately 3.5 seconds. Additionally, we have a Ping1D Sonar Echosounder from Blue Robotics that measures water depth. The echosounder is mounted underwater and is bottom-facing. Each sonar scan yields a one-dimensional vector that corresponds to the reflection's intensity along the preset range. If an obstacle impedes the path of the acoustic beam, it prevents the beam from passing beyond the obstruction, leading to an acoustic shadow. This phenomenon facilitates obstacle detection via sonar scanning. Figure 15: A sonar scan and obstacle detection result. The scan is taken from the same scene and timestamp as in Fig. 14, and the sonar successfully detected the shoreline visible from the bird’s eye view on the left. Each sonar scan is processed individually by detecting the first local maximum above a peak threshold on the smoothed data. Consecutive sonar scans are used to filter out noise in the detected obstacles. Fig. 15a illustrates a typical sonar scan cycle that detects obstacles. A single sonar scan's raw and processed data with the resulting detected obstacle are shown in Fig. 15b. The process begins with the removal of noisy reflections within a close range (\(<\)2.5m) before smoothing the scan using a moving-average filter. Following this, all local maxima above a specific peak threshold (50) are detected. An obstacle is identified at the first local maximum where the average intensity post-peak falls below the shadow threshold (5). These thresholds are tuned by hand on data collected from previous field tests in Nine Mile Lake and the stormwater management pond at the University of Toronto. A post-processing filter removes detections that do not persist across a minimum of \(n\) scans (with \(n=2\) in our configuration). This is accomplished by calculating the cosine similarity between the current intensity vector and its predecessor. If an obstacle is consistently detected \(n\) times, and the cosine similarity across these successive intensity vectors exceeds 0.9, along with spatial proximity, this detected obstacle point is included. In other words, any detections occurring in isolation, either spatially or temporally, are excluded.
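A condensed sketch of this per-scan detection logic is given below. The threshold values mirror those quoted above, but the exact smoothing and the cross-scan persistence (cosine-similarity) filter used in our implementation are omitted here; the function is illustrative rather than our production code.

```python
import numpy as np

def detect_obstacle(intensities, ranges, peak_threshold=50.0, shadow_threshold=5.0,
                    min_range=2.5, window=5):
    """Illustrative single-scan sonar obstacle detector: smooth the 1D intensity
    vector, find the first local maximum above `peak_threshold` whose trailing
    (post-peak) average intensity drops below `shadow_threshold` (an acoustic
    shadow), and return the corresponding range. Returns None if nothing found."""
    intensities = np.asarray(intensities, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    valid = ranges >= min_range                                  # drop noisy close-range returns
    smoothed = np.convolve(intensities, np.ones(window) / window, mode="same")
    for i in range(1, len(smoothed) - 1):
        if not valid[i]:
            continue
        is_peak = smoothed[i] > peak_threshold and \
                  smoothed[i] >= smoothed[i - 1] and smoothed[i] >= smoothed[i + 1]
        if is_peak and smoothed[i + 1:].mean() < shadow_threshold:   # acoustic shadow behind peak
            return ranges[i]
    return None
```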
In our previous work (Y. Huang et al., 2023), sonar was only used for data collection purposes and not for local planning or navigation. Using scanning sonar, we can significantly improve our ability to detect shallow or underwater obstacles even if sonar operates at a much lower frequency than the stereo camera. ### Sensor Fusion with Local Occupancy Grid Detections from the sonar scans and the stereo camera are fused into a coherent local representation to facilitate local path planning and robot control. We utilize the classic occupancy map (Elfes, 1989) for our local mapping representation. Unlike the 2D naive cost map used in our previous work (Y. Huang et al., 2023), an occupancy grid maintains a local map in a principled fashion and naturally filters and smooths sensor measurements temporally and spatially. The traversability of each cell is determined by naively summing the separately maintained log-odds ratios for sonar and camera. Our occupancy grid is 40m x 40m, with a cell resolution of 0.5m x 0.5m, its centre moving in sync with the robot's odometry updates. Figure 16: Example of sensor fusion with occupancy map before an extruding rock. Yellow line is the waterline estimated by the stereo camera, red dots indicate underwater obstacles detected by the scanning sonar, and white dots mean that the sonar did not detect an obstacle at that angle. The rock was successfully detected in the final occupancy grid despite being missed by the stereo camera. Waterline points, as detected by the stereo camera, are ray-traced in 3D back to the robot, thus lowering the occupied probability of cells within the ray-tracing range. Cells containing or adjacent to waterline points have their occupied probabilities increased. However, points exceeding a set maximum range do not affect occupied probabilities beyond the maximum range due to the decreasing reliability of depth measurements with increasing range. The protocol for updating the log-odds ratios for sonar is similar. Each sonar scan is ray-traced to clear the occupancy grid and marks any cells containing or close to the obstacles. The log-odds ratios of existing cells are decayed with incoming measurement updates, enhancing the map's adaptability to noisy localizations, false positives, and dynamic obstacles. Finally, we apply a median filter to the occupancy grid to smooth out and remove outliers. A limitation of this system is that the scanning sonar and the stereo camera observe different sections of the environment. The sonar may detect underwater obstacles invisible to the camera and vice versa for surface-level objects. Fig. 16 provides an example where a shallow rock in the front-right of the ASV is detected by the sonar but missed by the stereo camera. Without ample ground-truth data on the marine environment, reconciling discrepancies between these sensors proves challenging. Traversability estimation, especially in shallow water, is also complicated due to the potential presence of underwater flora (e.g. Fig. 1c) or terrain. As a solution, we opt for the simplest fusion method: directly summing the log-odds ratio in each cell. Additionally, we adjust the occupancy grid dilation based on the echosounder's water depth measurements, increasing the dilation radius when the ASV is in shallower water. The workflow of this strategy is shown in Fig. 16. While this strategy may only present a coarse traversability estimate, it still reliably detects the shoreline despite possible undetected smaller obstacles such as lily pads or weeds. 
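For reference, the per-cell fusion and depth-dependent dilation described above amount to only a few lines. The sketch below is illustrative; the specific radii and depth threshold are placeholders rather than our tuned values.

```python
import numpy as np

def fuse_cell(logodds_camera, logodds_sonar):
    """Per-cell fusion by summing the independently maintained log-odds ratios."""
    return logodds_camera + logodds_sonar

def occupancy_probability(logodds):
    """Recover an occupancy probability from a fused log-odds value."""
    return 1.0 - 1.0 / (1.0 + np.exp(logodds))

def dilation_radius(water_depth_m, base_radius_m=1.0, shallow_depth_m=1.5, shallow_radius_m=2.5):
    """Increase obstacle dilation when the echosounder reports shallow water.
    All numeric values here are placeholders, not our field-tuned parameters."""
    return shallow_radius_m if water_depth_m < shallow_depth_m else base_radius_m
```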
The dilation adjustment employed in shallow water allows the ASV to navigate safely, avoiding prevalent aquatic plants near the shore. ### Local Path Tracking and Control Local path tracking is essential to ensure that the robot adheres to the global mission plan while navigating around obstacles on the local map. The desired controller should run in real-time and work well in obstacle-rich environments. The generated local paths must also be easy for the robot to follow. In our previous work, the robot's velocities were directly controlled using the Dynamic Window Approach (DWA) (Fox et al., 1997) to avoid local obstacles. However, DWA only samples a single-step velocity and may fail in cluttered environments with obstacles or cul-de-sacs. Direct tracking of the global path with model predictive control (MPC) is another popular option (Dong et al., 2020; Ji et al., 2017), but solving the optimization problem can be costly because the obstacle avoidance constraint is nonconvex in general. In this paper, we use an alternative strategy that uses a separate path planner to find a collision-free path that connects to the global path and then tracks the collision-free path with an MPC. Figure 17: Example of the planner replanning around an obstacle and avoiding it. Blue line is the global plan (see Sec. 3.3 for details). Green is the current local plan, planned using the local occupancy grid, which tries to stay as close to the global plan as possible (see Sec. 5.5). Red is the robot's actual trajectory estimated by the GPS. The actual trajectory of the robot is jagged due to noisy GPS signals, wind, and the decaying occupancy map. We employ a modified version of Lateral BIT*, as proposed by Sehn et al. (2023), to serve as both our local planner and controller. Our approach provides a stronger guarantee as BIT* is probabilistically complete and asymptotically optimal. This optimal sampling-based planner, set within the VT&R (Furgale & Barfoot, 2010) framework, follows an arbitrary global path while veering minimally around obstacles. Lateral BIT* builds upon BIT* (Gammell et al., 2015) by implementing a weighted Euclidean edge metric in the curvilinear planning domain, with collision checks performed against the occupancy grid in the original Euclidean space. Samples are pre-seeded along the whole global path in the curvilinear coordinates before random sampling in a fixed-size sampling window around the robot. The planner operates backward from a singular goal node to the current robot location without selecting any intermediate waypoints. Lateral BIT* is also an anytime planner and can be adapted for dynamic replanning. Once an initial solution is found, an MPC tracking controller can track the solution path. The MPC optimizes the velocity commands in a short horizon to minimize the deviation from the planner solution while enforcing robot kinematic models and acceleration constraints. Adopted from Sehn et al. (2023), the MPC solves the following least-squares problem: \[\underset{\mathbf{T},\mathbf{u}}{\operatorname{argmin}}\;J(\mathbf{T},\mathbf{u})=\sum_{k=1}^{K}\ln(\mathbf{T}_{\text{ref},k}\mathbf{T}_{k}^{-1})^{\vee^{T}}\mathbf{Q}_{k}\ln(\mathbf{T}_{\text{ref},k}\mathbf{T}_{k}^{-1})^{\vee}+\mathbf{u}_{k}^{T}\mathbf{R}_{k}\mathbf{u}_{k}\] s.t. 
\\[\\mathbf{T}_{k+1}=\\exp\\left((\\mathbf{P}^{T}\\mathbf{u}_{k})^{\\wedge}h\\right) \\mathbf{T}_{k},k=1,2, K\\] \\[\\mathbf{u}_{\\min,k}\\leq\\mathbf{u}_{k}\\leq\\mathbf{u}_{\\max,k},k=1,2, K\\] where \\(\\mathbf{T}\\in SE(3)\\) are poses and \\(\\mathbf{u}=[v\\ \\omega]^{T}\\) are velocities. The objective function minimizes the pose error between the reference trajectory \\(\\mathbf{T}_{\\text{ref},k}\\) and the predicted trajectory \\(\\mathbf{T}_{k}\\) while keeping the control effort \\(\\mathbf{u}_{k}\\) minimum. The two constraints are the generalized kinematic constraint and actuation limits. We tune the cost matrices \\(\\mathbf{Q}\\) and \\(\\mathbf{R}\\) to balance the cost between different degrees of freedom. We refer readers to Sec. V of Sehn et al. (2023) for more details. If a newly detected obstacle obstructs the current best solution path, the planner will truncate its planning tree from the obstacle to the robot, triggering a replan or rewire from the truncated tree to the robot's location. Because the resolution of the satellite map is low (10m/cell), our global path could be blocked by large rocks and terrains, especially the pinch points. Hence, we adjust the maximum width and length of the sampling window and tune the parameters balancing lateral deviation and the path length. If there are no viable paths locally within the sampling window and the planner cannot find a solution after 1 second, the controller will stop the ASV and stabilize it at its current location. In practice, we cannot directly control the ASV's linear velocity due to the primary source of translational velocity estimates, GPS data, is noisy and unreliable. Consequently, we map the linear velocity commands to motor thrusts through a linear relationship and close the control loop using the MPC tracking controller. Fig. 17 illustrates an example from the field test where our robot detected obstacles and effectively replanned its trajectory. In the middle image, the lateral BIT* planner finds a smooth path around the obstacle while deviating minimally from the global path. The estimated robot path on the right appears jagged due to significant GPS noise (up to 1m), wind and current influences, and the occupancy grid's time decay, causing the ASV's heading to oscillate and repeatedly rediscover the same obstacle. However, the robot successfully bypasses the obstacle without requiring manual intervention. This highlights the robustness and adaptability of our architecture in dynamic, noisy, and unpredictable environments. Real World Experiments ### Robot Our ASV platform, as depicted in Fig. 18, consists of a modified _Clearpath Heron_ ASV equipped with a GPS, IMU, Zed2i stereo camera, Ping360 scanning sonar, and a Ping1d Sonar Echosounder Altimeter. The stereo camera is positioned in a forward-facing configuration and has a maximum depth range of 35 m. The Ping360 sonar is configured to perform a 20 m by 125deg cone scan in front of the robot every 3.5 seconds, achieving a resolution of 1.8deg. All computational tasks are handled by an Nvidia Jetson AGX Xavier and the onboard Intel Atom (E3950 @ 1.60GHz) PC on the Heron. The Jetson, stereo camera, and Ping360 sonar are powered by a lithium-ion jump-starter battery with an 88 WH capacity. To maintain the Jetson's power input at 19V, a voltage regulator is employed, allowing the Jetson to operate in its 30W mode. Additionally, a 417.6-Wh NiMH battery pack supplies power to the motors and other electronics. 
The batteries can support autonomous operations for approximately two hours. A schematic of the electrical system is presented in Fig. 18c. The additional payloads carried by the ASV have a combined mass of roughly 9 kg. Although water samplers have not been integrated into our system, they can be easily fitted in the future. The maximum speed of our ASV is approximately 1.2 m/s. Additionally, we have a remote controller available for manual mode operation, which can be utilized for safety purposes if needed. ### System Implementation Details Our system's computational load is divided into offline and online processes (Figure 9). Prior to the mission, we precompute the high-level graph and optimal policy, which are loaded onto the onboard PC. The online tasks are distributed between two onboard computers: the Atom PC and the Jetson. An Ethernet switch connects these computers, the sonar, and Heron's WiFi Radio. The GPS and IMU are connected to the Atom PC via USB, while the echo-sounder sonar and the stereo camera are connected to the Jetson via USB. The switch allows remote SSH access and data transfer between the Atom and the Jetson. We use the ROS framework (Quigley et al., 2009) to implement our autonomy modules in C++ and Python. To synchronize time between the Jetson and the Atom PC, we employ Chrony for network time protocol setup. The Atom PC acts as the ROS master, responsible for vehicle interface, localization, updating the occupancy grids, running the local planner, and MPC controller. The Jetson handles resource-intensive tasks such as depth map processing, semantic segmentation, sonar obstacle detection, and data logging. Additionally, a ROS node hosting a web visualization page is served on the Atom PC. We also provide a Rviz visualizer to display the occupancy grid and outputs of the local planner and MPC. During the mission, the web server publishes the robot's locations and policy execution states in real-time on a web page served on the local network, using pre-downloaded satellite maps. The web visualization and Rviz can be accessed in the field from a laptop connected to Heron's WiFi. The policy executor publishes the global plan to the local planner and starts a timer when navigating a stochastic edge, but importantly does not incur any additional compute cost for planning. We periodically save the status of policy execution online, enabling easy policy reloading in case of a battery change during testing. We tune the update rates and resolutions of our sensing, perception, and planning modules based on the computational capacities of our Heron and Jetson systems. Specifically, we aim to maintain a sustainable compute load within the thermal and power limits. We avoid pushing our CPUs and GPUs to their absolute limits because doing so can lead to system unreliability and sudden frame rate drops in the field. Therefore, we set the ZED stereo camera and the neural depth pipeline to publish at 5Hz with a resolution of 640 x 480. The semantic segmentation network, optimized using Nvidia's TensorRT framework, runs with a latency of less than 50ms. Sonar obstacle detection operates at 20Hz, synchronized with the arrival rate of new sonar scans. The occupancy grid map, with a resolution of 0.5m per cell, updates at 10Hz. The lateral BIT* local planner runs asynchronously in a separate thread, sampling 400 points initially and 150 points in each subsequent batch. The maximum dimensions of the sampling window are 40m in length and 30m in width. 
The MPC retrieves the planned path and calculates the desired velocity at 10Hz, using a step size of 0.1s and a 40-step lookahead horizon. ### Testing Site Our planning algorithm was evaluated at Nine Mile Lake in McDougall, Ontario, Canada. Detailed test sites and the three executed missions can be found in Fig. 19. The Lower Lake Mission in Fig. 19a repeats the field test from our prior work (Y. Huang et al., 2023), involving a 3.7 km mission with five sampling points, three of which are only reachable after navigating a stochastic edge. The stochastic edge at the bottom-left compels the ASV to manoeuvre through a thin opening amid substantial rocks not discernible in the _Sentinel-2_ satellite images. Figure 18: Our _Clearpath Heron_ ASV for monitoring water quality during a field test. The ASV is equipped with various sensors including GPS, IMU, underwater scanning sonar, sonar altimeter, and a stereo camera. It also contains an Nvidia Jetson and an Atom PC for processing the data from these sensors. The locations where the sonars are mounted are depicted in (b), while the power and communication setups are illustrated in (c). Besides repeating the old experiment from our prior work, we added two additional missions in the lake's upper areas. Also, to assess our local mapping and planning stack's capabilities, an ablation mission was executed to see if the robot could safely navigate the stochastic edge at the bottom-left of the Lower Lake Mission. The policy in Upper Lake Mission (Short) was directly generated from the Fig. 3 water mask. In fact, the high-level graph in Fig. 3 and policy in Fig. 4 are a simplified toy version of our testing policy in the Upper Lake Mission (Short). The expected length of the policy is 1.0 km. We observed that our NovAtel GPS receiver's reliability was impaired by large trees on the left stochastic edge in Fig. 19b. On the right stochastic edges of the same subfigure, shallow regions, lily pads, and weeds were numerous. Lastly, we extended this short mission to include three additional sampling sites and another stochastic edge at the lake's farthest point. The expected length of this Upper Lake Mission (Long) in Fig. 19c is approximately 3.3 km. Despite having only 5 nodes, the Upper Lake Mission (Long) is still significant due to the complexity introduced by stochastic edges, resulting in 54 contingencies and a policy tree depth of 12. This demonstrates that even small-scale missions require our proposed approach to generate a robust global policy, and the local planner must effectively manage large uncertainties in traversability to execute the mission safely. ### Results of Mission Planner The aim of our field experiments is to test if our autonomy stack can execute a global mission policy correctly and fully autonomously, without any manual interventions. Results are summarized in Table 1, and we provide an overview and analysis of our results below. **Lower Lake Mission** We undertook the lower lake mission twice: first using both sonar and camera, and second using only the camera. The ASV successfully reached 4/5 targeted locations during both trials, with the exception of the bottom-left location. This was due to the ASV's inability to autonomously navigate through large rocks within the designated time frame. When contrasted with prior experiments noted in Y. Huang et al. 
(2023), our trials showed marked improvement, with only a single manual intervention required due to algorithmic failure during the first run, and none during the second. The intervention was necessitated by the ASV's collision with a tree trunk (the one in Fig. 1b) it failed to identify, resulting in manual manoeuvring to remove the obstruction. In both trials, the policy executor deemed the bottom-left stochastic edge untraversable because the local planner did not find a path through large rocks within the time limit. The ASV was then safely directed back to the last sampling location and starting location. Moreover, these trials demonstrated a significant improvement in the stability of our navigational autonomy compared to the same field test conducted last year. These can be attributed to several factors. First, \\begin{table} \\begin{tabular}{c c c c c} \\hline Mission & Sensors Used & Node Visited & \\# of Interventions & Appeared In \\\\ \\hline Lower Lake Mission & Camera Only & 4/5 & 3 & Y. Huang et al. (2023) \\\\ Lower Lake Mission & Camera Only & 3/5 & 3 & Y. Huang et al. (2023) \\\\ \\hline Lower Lake Mission & Sonar + Camera & 4/5 & 1 & This Work \\\\ Lower Lake Mission & Camera Only & 4/5 & 0 & This Work \\\\ \\hline Upper Lake Mission (Short) & Sonar + Camera & 1/1 & 0 & This Work \\\\ Upper Lake Mission (Short) & Sonar + Camera & 1/1 & 0 & This Work \\\\ (Left Edge Blocked) & Sonar + Camera & 1/1 & 0 & This Work \\\\ Upper Lake Mission (Short) & Sonar + Camera & 0/1 & 0 & This Work \\\\ (Left Edge Blocked) & & & 0 & This Work \\\\ \\hline Upper Lake Mission (Long) & Sonar + Camera & 5/5 & 1 & This Work \\\\ \\hline \\end{tabular} \\end{table} Table 1: Summary of the results of our tests for different policies, including any interventions due to algorithmic failure (excluding battery changes). The first two rows show the results of the Lower Lake Mission from our previous work. Figure 19: Representative examples of global plans and trajectories traversed during field experiments. All stochastic edges are labelled in colour. Green line means the stochastic edge is found traversable, red means untraversable, and yellow means the edge was not explored and remained ambiguous. A battery change in (a) and a manual intervention due to large GPS noises in (c) are also labelled. the inclusion of a new semantic segmentation network for the stereo camera allowed the ASV to navigate confidently even in conditions of high sunlight glare or calm water. This was in contrast to the geometric approach in our previous work, which resulted in both numerous false positives and missing obstacles. Second, sonar detection capabilities facilitated the identification and avoidance of underwater rocks by the local planner. We were also able to fuse both sonar and stereo camera inputs with a local occupancy grid map. Third, through the incorporation of a Model Predictive Control (MPC) tracking controller, the reliance on GPS velocity estimates was removed. Lastly, the decision to use an 88Wh battery on the ASV markedly improved the Jetson's battery life, thereby negating the need for battery changes during each mission. In Table 2, we show that the Jetson and onboard PC are very power-hungry during one of the testing trials. A microcontroller inside our ASV measures the power of the onboard PC, and we use the jetson-stats tool to log the power of the Jetson. 
Although the measurement is anecdotal and the exact power consumption can depend on other factors, such as the state of the battery and operating temperatures, the 88WH battery powering the Jetson can certainly last through a two-hour-long experiment. **Upper Lake Mission (Short)** We performed four successful tests of this new policy on the upper lake to determine if our robot could execute different policy branches and navigate both sides of the central island, which were visibly passable based on aerial observations. The expected length is 1.0km. The success criteria were defined as either safely traversing the stochastic edges on either side within the assigned time limit or safely returning to the starting point without collisions. Initially, we executed the policy twice without modifications. As depicted in Fig. 19b, the policy guided the ASV to navigate and return along the left stochastic edge, which had a lower expected cost than the right edge. For the subsequent two trials, we deliberately triggered an early timeout to block the left edge in the policy, forcing the ASV to navigate the right edge. Throughout the four trials, the ASV executed the mission-level policy fully autonomously, except for a battery change. Navigating the left side was straightforward despite occasional GPS signal disruptions. On the right side, the ASV successfully reached the target area once. However, in the second attempt, it travelled too slowly in a shallow area with many aquatic plants (see Fig. 1c) and eventually reached the time limit, rendering the right stochastic edge untraversable. Despite this, the ASV safely returned to the starting node. Importantly, we considered a trial a success despite the ASV not reaching the designated target if the overall policy was executed autonomously. No collisions occurred during any trial. **Upper Lake Mission (Long)** We expanded the previous policy to a more extensive mission, covering a larger area of Nine Mile Lake's upper parts with the same starting point as the shorter mission. The expected length is significantly longer at 3.3km. First, the boat navigated the stochastic edges on the island's left side to reach the sample point, and it returned using the same path. Despite this, significantly deteriorated GPS signals were observed at the edge's end, preventing the mission-level policy executor from detecting the completion of the edge traversal due to GPS solution noise. Consequently, a manual restart of the policy executor was necessary. Thereafter, the ASV proceeded upward to the next sample point before making a left turn to go through a shortcut pinch point, visiting two more sample points. Following a brief stop for battery replacement, the ASV completed the remaining mission. In evaluation, our local perception pipeline performed commendably in this area despite having never previously collected data here. In particular, the synergy of sonar obstacle detection and the stereo camera's semantic waterline estimation showed high reliability in close-range shoreline and obstacle detection with very minimal false positives. 
Along with the previous four trials for the shorter mission, we demonstrate that our autonomous navigation architecture is effective not only in familiar environments but also in previously \\begin{table} \\begin{tabular}{c|c|c c} \\hline Device & Heron CPU & Jetson CPU & Jetson GPU \\\\ \\hline Usage(\\%) & 75.2 & 61.6 & 89.3 \\\\ Power(W) & 9.2 & 19.3 (Combined) \\\\ \\hline \\end{tabular} \\end{table} Table 2: Average usage and power consumption of our computing devices during a Lower Lake Mission. unseen conditions. ### Isolated Testing of Local Planner A main contribution of our current work is the new perception and local planner modules that can safely disambiguate stochastic edges and navigate safely and autonomously in obstacle and terrain-rich waterways without high-resolution prior maps. To verify this, we tested the local planner on a stochastic edge ten times Figure 20: Comparison of the global plan, manual traversal, and autonomous navigation through the stochastic edge. The global plan, calculated from coarse satellite images, is blocked by a rock. In (b), the ASV was able to pass the narrow opening under manual teleoperation. However, the ASV was unable to identify the opening in the local occupancy grid in autonomous mode (see. Fig. 21), so it searched for an opening in place until the time limit and returned to the start. with the exact same parameter, five times each in either direction. Success was demonstrated by either reaching the stochastic edge's other endpoint within a set time frame or returning to the starting point upon timeout of the policy executor. Without intervention, the ASV accomplished this 70% of the time. However, in three instances, it collided with or became trapped by obstacles, such as rocks and a tree trunk. The global path extracted from the _Sentinel-2_ Image was interrupted by a large rock, with only two narrow openings between the rocks, manually traversable, as demonstrated in Fig. 20 (b). One of the narrow openings is visible from the aerial view in Fig. 20 (d). Our ASV can detect these rocks; however, the over-aggressive dilation parameter obstructs the local planner from charting a path through the central Figure 21: Comparison between the robot’s occupancy grid maps and aerial image. Yellow dots are waterline estimated in 3D. Red dots are obstacles detected by sonar. Boat symbols are added to (b) and (d) for context. The global plan (blue line) is blocked by rocks, so the ASV needs to detour through the narrow opening. However, the passage is blocked on the occupancy grid due to our inaccurate detection, localization, and excessive dilation. passageway (see Fig. 21). There is another wider opening on top of the visible narrow opening, but it is over 30m away from the nearest point on the global path and thus exceeds the maximum corridor width of our local planner's curvilinear space. Relying exclusively on GPS/IMU for location and a local occupancy grid centred around the ASV poses considerable challenges in this terrain, due to imprecision in localizing obstacles relative to the robot and issues controlling tight turns and precise path tracking, escalating the collision risk in confined spaces. In order to mitigate noise and path plan conservatively, occupancy values were decayed over time, and substantial dilation was applied around occupied cells. As such, the ASV would not construct and finetune a consistent local map, but would instead overlook previously encountered obstacles. 
Consequently, the local planner oscillates between two temporarily obstacle-free paths in the occupancy grid, while the ASV stops and unsuccessfully searches for a traversable path locally until the timer limit is reached, as shown in Fig. 20 (a) and Fig. 20 (c). Another key reason for the low quality of the occupancy grid is the difficulty of fusing sonar and stereo camera measurements, especially at longer ranges. Since sensor fusion occurs solely within the occupancy map, both sensors need to detect an obstacle simultaneously at the same location in the map for accurate fusion. This can be challenging due to a variety of reasons. For instance, depth measurements produced by the stereo camera tend to be noisier over a larger range. Our camera is not capable of detecting underwater obstacles detected by sonar. Additionally, our system lacks effective uncertainty measures for updating sonar and stereo observations within the occupancy map, especially when the two sources provide conflicting data. For example, the ASV simply did not detect the tree trunk. Thus, our sensor fusion mechanism proves effective only over shorter ranges where the sonar and camera are more likely to align. If it is possible to extend the range of our perception modules, the ASV could formulate more optimized navigation paths, preventing collisions with obstacles such as rocks. ## 7 Lessons Learned In this section, we outline insights garnered from our field tests, emphasizing successful design aspects related to field-tested ASV navigation systems and suggesting potential improvements for future iterations. **Timer** Primarily, we found that using a timer to disambiguate stochastic edges was simple, robust, and practical. Integration of a timer within our ROS-based system was easy and could accommodate unexpected hindrances such as strong winds, making stochastic edges difficult to traverse. This allowed for uninterrupted policy execution even when the local planner failed to identify viable paths through a traversable stochastic edge. Essentially, the inclusion of a timer fostered independence between the execution of our mission-level policy and the selection of local planners, enabling the ASV to conduct water sampling missions irrespective of local planner errors. **Localization** A critical limitation of our system lies in the absence of precise GPS localization. Our system necessitates a seamless integration of local mapping with broader satellite maps to facilitate accurate navigation in complex scenarios, such as those illustrated in Fig. 20. A GPS alternative, such as SLAM, would introduce redundancy, bolstering navigation robustness when GPS signals become compromised due to obstructions, interferences, or adverse weather conditions. Furthermore, minimizing localization noise could enhance speed and steering control, enabling the ASV to operate more swiftly and smoothly. **Occupancy Grid** As demonstrated in the previous section, our occupancy grid map also struggles with sensor fusion - particularly over long ranges where sonar and stereo camera measurements can contradict. These inconsistencies necessitated the introduction of a time-decay factor and significant dilation around obstacles. As a result, we observed a 'drunken sailor' phenomenon, wherein the ASV constantly navigates within a confined space without any real progress. We think that semantic SLAM integration with the stereo camera could ameliorate local occupancy map issues. 
If SLAM can provide a locally consistent and metrically accurate map of higher quality, the decay factor in the occupancy grid becomes unnecessary and the planner will not oscillate. While SLAM is impractical in open water due to the absence of stationary features near the robot, it becomes viable in densely obstacle-populated scenes such as pinch points or shorelines. Localizing the robot against semantic-based local features could lead to more accurate localization and, furthermore, improve obstacle-relative pose estimation and traversability assessment. As we can store and grow the map as the robot explores unknown areas, the planner can also work with a static occupancy grid and avoid any oscillation. Furthermore, we also recommend better exploration strategies to build local maps and search for traversable paths rather than fixing the planning domain size around the precomputed global path from inaccurate satellite images. As the map can be expanded when the robot explores unknown areas, the planner can work with a fixed occupancy grid to avoid oscillation. Additionally, more effective local map building and traversable path searching strategies might provide better solutions than confining the planning domain size around inaccurate satellite images' precomputed global path. **Evasive Manoeuvres** Our system currently lacks evasive manoeuvres. Despite collisions with obstacles, the robot could feasibly retreat and navigate back to unobstructed waters. However, our local planner often fails to detect forward obstacles, continuing to chart a forward path after collisions. Both the stereo camera and sonar have minimum range limitations, resulting in undetected proximate obstacles. We could introduce the timer mechanism to prompt evasive manoeuvres. For instance, if the ASV remains stationary despite forward movement instructions from the planner and controller, it should back up and reset its local planner to circumnavigate the same area. While the ASV may struggle to self-extricate from a beach or shallow rock without human assistance, evasive manoeuvres could facilitate the avoidance of obstacles such as tree trunks or aquatic plants. **Sonar** The incorporation of sonar in our system entails both advantages and drawbacks. Positively, it enabled the detection and circumvention of underwater obstacles, beyond the stereo camera's capabilities. Conversely, the sonar's slow scanning rate (3 seconds per scan) restricts it from being the solitary onboard perception sensor. Additionally, our heuristic-based obstacle detection method fails to recognize minor obstacles, such as lilypads or weeds. While the sonar effectively gauges obstacle distances from the ASV, it cannot determine the depth of underwater obstacles since it scans horizontally. This depth ambiguity complicates traversability estimation, which relies on exact water and underwater obstacle depth knowledge. Moreover, merging sonar with the stereo camera proves challenging due to their observing different world sections. **System Integration** While designing autonomy algorithms with general marine navigation in mind, we recognize that the integration process was tailored to our particular ASV platform and test scenarios. The primary objective of system tuning is to optimize performance metrics such as speed, accuracy, tracking error, and reliability within the bounds of certain constraints, including latency, computational usage, and sensor capabilities. 
For each autonomy module, we identify key parameters that significantly impact performance. For instance, in the occupancy grid map, grid resolution, smoothing and dilation models, and measurement weights are crucial. Obstacle detection with sonar is governed by the peak threshold and the size of the smoothing window. The runtime of the Lateral BIT* planner is affected by the batch size and sampling window size. The controller's tracking performance depends on the controller cost terms and lookahead horizon. Notably, the accuracy of stereo waterline estimation is not very sensitive to the smoothing parameters but is primarily dependent on the quality and volume of the training data. Initially, we manually tuned these parameters using previously collected datasets or in simulation to establish a baseline. Subsequently, we deploy all autonomy algorithms on the real robot, testing each component individually again. Here, we utilize ROS software tools such as Rviz and the dynamic reconfigure package to evaluate each module's performance and adjust. Often, it is necessary to reduce the update rate and resolution of the algorithms to prevent performance degradation due to latency or computational constraints. We continuously record and assess our ASV's performance under varying conditions, refining them as needed before conducting field experiments with a fixed set of parameters. Our autonomy algorithms demonstrated commendable field performance, but many potential improvements from a system engineering standpoint still remain. An immediate goal is enhancing our software's efficiency to decrease computational load and power consumption on both the Atom PC and the Jetson. For instance, running semantic SLAM alongside the existing stack would require additional power and considerable software optimization to avoid straining our computers further. Aside from optimizing power use, improvements to efficiency, reliability, and usability could be advantageous, particularly for nontechnical users. Our Rviz and web interface user displays contain critical monitoring and debugging information but demand extensive navigation system familiarity. Our data logging pipeline consumes substantial storage space (about 1GB/min), imposing both storage and time cost burdens for copying and analysis. Booting up the GPS in the field was another challenge due to prolonged wait times for adequate satellite acquisition for autonomous navigation. In terms of future hardware, vegetation-proof boat hulls and propellers should be considered given the increased drag and potential damage to the propeller blades from aquatic plant interference. Furthermore, electronic connectors capable of withstanding transportation-induced vibrations and cables that shield connections from interference would enhance overall system robustness. **PCCTP Formulation** Currently, we have found PCCTP to be a robust framework for enabling longer-term autonomous environmental monitoring tasks. Our policy is designed to be resilient against environmental uncertainties, ensuring that the robot can complete its mission both safely and efficiently. In the Nine Mile Lake experiments, the ASV detected many obstacles that were missing or not clearly mapped in the satellite imagery. This demonstrates the importance of using a global mission policy when deploying autonomous robots in unfamiliar and remote marine environments, where unforeseen obstacles absent in the satellite images can halt the execution of a single task plan. 
By characterizing only pinch points and windy edges as stochastic edges, the problem becomes tractable to solve optimally, effectively capturing the uncertainties visible across different satellite images. In field tests, we found that edges assumed to be \"traversable\" were indeed always traversable. Furthermore, we could manually inspect and verify all possible global paths in the policy before field deployment since we found the optimal policy offline. This approach made it easy to understand the ASV's high-level objective during our field test, particularly when radio communication with the robot became unreliable. However, we have identified several potential enhancements to our problem formulation after concluding our field tests. First, the high-level policy could still result in a deadlock if a \"traversable edge\" becomes untraversable due to unexpected factors. This did not occur in our testing, but one way to mitigate this is to add a small blocking probability to these edges, However, the scalability of the algorithm may need to be improved to efficiently plan for additional stochastic edges. Second, the blocking probabilities of stochastic edges may be correlated in a real environment. For example, wind and water levels can simultaneously affect the traversability of all stochastic edges, and the same aquatic plants may proliferate across nearby stochastic edges. These factors could be modelled by using a joint distribution with covariance for the probabilities of all stochastic edges, or by using a Bayesian model with latent variables to represent common environmental factors. Third, power is often a significant constraint for fully autonomous execution, requiring the robot to return to the base for charging periodically. In PCCTP, we can address this by imposing a distance limit on the planner, ensuring that the robot must return to the start or designated charging locations. Lastly, PCCTP is a one-shot planning algorithm and does not replan online for new robot tasks. An interesting extension for PCCTP would be for lifelong water monitoring missions, where target locations can be added online, and the robot can replan its policy with its next targets as a new starting location. In this setting, the traversability estimates may also be updated online during the lifelong execution. In addition to these changes to the formulation, we envision ways to expand the PCCTP to incorporate scientific heuristics. A straightforward extension could involve using multispectral satellite bands, such as MODIS (Wu et al., 2009), to analyze water quality in the target area and automatically select target locations. If the robot is equipped with an online-capable water quality sensor like the YSI sonde, further opportunities arise. For instance, with a predefined set of target locations or scanning patterns, the ASV could optimize its policy to maximize both information gain and expected cost under uncertain traversability conditions. Our problem formulation would then need to be extended to a multi-objective framework, employing either a weighted sum of objectives or Pareto-based approaches (W. Chen and Liu, 2021; Salzman et al., 2023). Ifthe ASV is tasked with repeatedly patrolling the same area, an online approach might be more suitable, allowing the robot to continually update its model of traversability and the scientific value of the target area. However, efficiently solving these optimization problems remains a challenge. 
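As one possible instantiation of the correlated-traversability idea above, a shared latent factor (e.g., wind) could shift the blocking probabilities of all stochastic edges together. The sketch below is purely illustrative and is not part of PCCTP; the parameter names and the Gaussian latent model are assumptions made only for the example.

```python
import numpy as np

def sample_edge_states(base_block_probs, wind_gain, n_samples=1000, rng=None):
    """Illustrative latent-variable model for correlated traversability: a single
    environmental factor (e.g., wind) shifts the blocking probability of every
    stochastic edge in the same direction. `base_block_probs` and `wind_gain`
    are per-edge parameters; their values and the model structure are hypothetical."""
    rng = np.random.default_rng() if rng is None else rng
    base = np.asarray(base_block_probs, dtype=float)
    gain = np.asarray(wind_gain, dtype=float)
    states = []
    for _ in range(n_samples):
        wind = rng.normal(0.0, 1.0)                         # shared latent factor
        p_block = np.clip(base + gain * wind, 0.0, 1.0)     # correlated blocking probabilities
        states.append(rng.random(base.shape) < p_block)     # True = edge blocked in this sample
    return np.stack(states)
```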
**Field Logistics** Our field logistics proved successful largely due to employing a motorboat, facilitating rapid transportation of the robot, personnel, and supplies to remote testing locations on the lake. During trials, staying in close proximity to the robot or flying a drone for tracking was straightforward using a motorboat. In case of forgotten crucial equipment, swift return trips to the base camp for recovery were possible. Our field tests, spanning three days, were completed as planned, despite limited time and battery life. ## 8 Conclusions For a robot to be effective in real-world environments, it must adapt to variations and uncertainties stemming from natural or human factors, despite potential mismatches between the real world and the planning model. Therefore, a robust mission planning framework for long-term autonomy should possess several key qualities: resilience to allow continuous operation without failure, adaptability to incorporate uncertainties specific to the task and environment, and efficiency in meeting critical performance metrics such as time, throughput, and energy cost. With these criteria in mind, we have proposed a framework for planning mission-level autonomous navigation policies offline using satellite images. Our mission planner treats the uncertainty in these images as stochastic edges and formulates a solution to the Partial Covering Canadian Traveller Problem (PCCTP) on a high-level graph. We introduce PCCTP-AO*, an optimal, informed-search-based method capable of finding a policy with the minimum expected cost. Tested on thousands of simulated graphs derived from real Canadian lakes, our approach demonstrates significant reductions in travel distance--ranging from 1% (50m) to 15% (1.8km). We then developed a GPS-, vision-, and sonar-enabled ASV navigation system to execute these preplanned policies. We proposed a conceptually simple yet robust timer-based approach to disambiguate stochastic edges. Local mapping modules integrate a neurally estimated waterline from the stereo camera with underwater obstacles detected by sonar, while the local motion planner ensures obstacle avoidance in adherence to the precomputed global path. Our ASV navigation system has successfully executed three different km-scale missions a total of seven times in environments with unmapped obstacles, requiring only two interventions in total. Additionally, we achieved a 70% success rate in an isolated test of our local planner. Our findings highlight that while the system performs robustly, traversability assessment and localization continue to be bottlenecks for local mapping and motion planning. We hope that the lessons learned from this development process will foster future advances in long-term autonomy algorithms and ASV environmental monitoring systems. #### Acknowledgments We would like to acknowledge the Natural Sciences and Engineering Research Council of Canada (NSERC) for supporting this research. ## References * Afrati et al. (1986) Afrati, F., Cosmadakis, S., Papadimitriou, C. H., Papageorgiou, G., & Papakostantinou, N. (1986). The Complexity of the Travelling Repairman Problem. _RAIRO-Theoretical Informatics and Applications-Informatique Theorique et Applications_, _20_(1), 79-87. * Afrati et al. (2012)Aksakalli, V., Sahin, O. F., & Ari, I. (2016). An AO* Based Exact Algorithm for the Canadian Traveler Problem. _INFORMS Journal on Computing_, _28_(1), 96-111. * Ang et al. (2022) Ang, Y.-T., Ng, W.-K., Chong, Y.-W., Wan, J., Chee, S.-Y., & Firth, L. B. 
(2022). An Autonomous Sailboat for Environment Monitoring. _2022 Thirteenth International Conference on Ubiquitous and Future Networks (ICUFN)_, 242-246. * Bai et al. (2021) Bai, S., Shan, T., Chen, F., Liu, L., & Englot, B. (2021). Information-Driven Path Planning. _Current Robotics Reports_, _2_(2), 177-188. * Bai et al. (2016) Bai, S., Wang, J., Chen, F., & Englot, B. (2016). Information-theoretic Exploration with Bayesian Optimization. _2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_, 1816-1822. * Anchorage_, 1-8. * Bellman (1957) Bellman, R. (1957). A Markovian Decision Process. _Journal of Mathematics and Mechanics_, 679-684. * Bovcon et al. (2019) Bovcon, B., Muhovic, J., Pers, J., & Kristan, M. (2019). The MaSTr1325 Dataset for Training Deep USV Obstacle Detection Models. _2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. * A USV-Oriented Object Detection and Obstacle Segmentation Benchmark. * Cao et al. (2020) Cao, H., Guo, Z., Wang, S., Cheng, H., & Zhan, C. (2020). Intelligent Wide-Area Water Quality Monitoring and Analysis System Exploiting Unmanned Surface Vehicles and Ensemble Learning. _Water_, _12_(3), 681. * Chang & Slagle (1971) Chang, C. L., & Slagle, J. R. (1971). An Admissible and Optimal Algorithm for Searching AND/OR Graphs. _Artif. Intell._, _2_(2), 117-128. * Chang et al. (2021) Chang, H.-C., Hsu, Y.-L., Hung, S.-S., Ou, G.-R., Wu, J.-R., & Hsu, C. (2021). Autonomous Water Quality Monitoring and Water Surface Cleaning for Unmanned Surface Vehicle. _Sensors_, _21_(4). * Chen et al. (2023) Chen, K., Liu, C., Chen, H., Zhang, H., Li, W., Zou, Z., & Shi, Z. (2023). RSPrompter: Learning to Prompt for Remote Sensing Instance Segmentation based on Visual Foundation Model. * Chen & Liu (2021) Chen, W., & Liu, L. (2021). Pareto Monte Carlo Tree Search for Multi-objective Informative Planning. _arXiv preprint arXiv:2111.01825_. * Chen et al. (2022) Chen, W., Khardon, R., & Liu, L. (2022). AK: Attentive Kernel for Information Gathering. _Proceedings of Robotics: Science and Systems_. [https://doi.org/10.15607/RSS.2022.XVIII.047](https://doi.org/10.15607/RSS.2022.XVIII.047) * Cheng et al. (2021) Cheng, Y., Jiang, M., Zhu, J., & Liu, Y. (2021). Are We Ready for Unmanned Surface Vehicles in Inland Waterways? The USVInland Multisensor Dataset and Benchmark. _IEEE Robotics and Automation Letters_, _6_(2), 3964-3970. * a survey of recent results. _Ann. Math. Artif. Intell._, _31_(1), 113-126. * Christofides (1976) Christofides, N. (1976). Worst-Case Analysis of a New Heuristic for the Travelling Salesman Problem. _Operations Research Forum_, \\(3\\). * Porto_, 1-10. * Dong et al. (2020) Dong, Z., Xu, X., Zhang, X., Zhou, X., Li, X., & Liu, X. (2020). Real-time motion planning based on mpc with obstacle constraint convexification for autonomous ground vehicles. _2020 3rd International Conference on Unmanned Systems (ICUS)_, 1035-1041. [https://doi.org/10.1109/ICUS50048.2020.9274881](https://doi.org/10.1109/ICUS50048.2020.9274881) * New Opportunities for Science]. _Remote Sensing of Environment_, _120_, 25-36. [https://doi.org/https://doi.org/10.1016/j.rse.2011.11.026](https://doi.org/https://doi.org/10.1016/j.rse.2011.11.026) * Dunbabin & Marques (2012) Dunbabin, M., & Marques, L. (2012). Robots for Environmental Monitoring: Significant Advancements and Applications. _IEEE Robot. Autom. Mag._, _19_(1), 24-39. * Elfes (1989) Elfes, A. (1989). Using Occupancy Grids for Mobile Robot Perception and Navigation. 
_Computer_, _22_(6), 46-57. * Elfes (2012)Ester, M., Kriegel, H.-P., Sander, J., & Xu, X. (1996). A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise. _Proceedings of the Second International Conference on Knowledge Discovery and Data Mining_, 226-231. * Ferguson & Stentz (2007) Ferguson, D., & Stentz, A. (2007). Field D*: An Interpolation-Based Path Planner and Replanner. _Robotics Research: Results of the 12th International Symposium ISRR_, 239-253. * Ferguson et al. (2004) Ferguson, D., Stentz, A., & Thrun, S. (2004, January). _Planning with Pinch Points_ (tech. rep.). Carnegie-Mellon Univ Pittsburgh PA Robotics Inst. * Ferri et al. (2015) Ferri, G., Manzi, A., Fornai, F., Ciuchi, F., & Laschi, C. (2015). The HydroNet ASV, a Small-Sized Autonomous Catamaran for Real-Time Monitoring of Water Quality: From Design to Missions at Sea. _IEEE J. Oceanic Eng._, _40_(3), 710-726. * Feyisa et al. (2014) Feyisa, G. L., Meilby, H., Fensholt, R., & Proud, S. R. (2014). Automated Water Extraction Index: A New Technique for Surface Water Mapping Using Landsat Imagery. _Remote Sens. Environ._, _140_, 23-35. * Fischler & Bolles (1981) Fischler, M. A., & Bolles, R. C. (1981). Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. _Commun. ACM_, _24_(6), 381-395. [https://doi.org/10.1145/358669.358692](https://doi.org/10.1145/358669.358692) * Flaspohler et al. (2018) Flaspohler, G., Roy, N., & Girdhar, Y. (2018). Near-optimal Irrevocable Sample Selection for Periodic Data Streams with Applications to Marine Robotics. _2018 IEEE International Conference on Robotics and Automation (ICRA)_, 5691-5698. * Fox et al. (1997) Fox, D., Burgard, W., & Thrun, S. (1997). The Dynamic Window Approach to Collision Avoidance. _IEEE Robotics & Automation Magazine_, _4_(1), 23-33. * Furgale & Barfoot (2010) Furgale, P., & Barfoot, T. D. (2010). Visual Teach and Repeat for Long-Range Rover Autonomy. _Journal of Field Robotics_, _27_(5), 534-560. [https://doi.org/https://doi.org/10.1002/rob.20342](https://doi.org/https://doi.org/10.1002/rob.20342) * Gammell et al. (2015) Gammell, J. D., Srinivasa, S. S., & Barfoot, T. D. (2015). Batch Informed Trees (BIT*): Sampling-based Optimal Planning via the Heuristically Guided Search of Implicit Random Geometric Graphs. _2015 IEEE International Conference on Robotics and Automation (ICRA)_. [https://doi.org/10.1109/icra.2015.7139620](https://doi.org/10.1109/icra.2015.7139620) * Garneau et al. (2013) Garneau, M.-E., Posch, T., Hitz, G., Pomerleau, F., Pradalier, C., Siegwart, R., & Pernthaler, J. (2013). Short-term Displacement of Planktonitrix Rubescens (Cyanobacteria) in a Pre-alpine Lake Observed using an Autonomous Sampling Platform. _Limnol. Oceanogr._, _58_(5), 1892-1906. * Geraerts & Overmars (2007) Geraerts, R., & Overmars, M. H. (2007). Creating High-quality Paths for Motion Planning. _Int. J. Rob. Res._, _26_(8), 845-863. * Girdhar et al. (2014) Girdhar, Y., Giguere, P., & Dudek, G. (2014). Autonomous Adaptive Exploration using Realtime Online Spatiotemporal Topic Modeling. _Int. J. Rob. Res._, _33_(4), 645-657. * Guo & Barfoot (2019) Guo, H., & Barfoot, T. D. (2019). The Robust Canadian Traveler Problem Applied to Robot Routing. _2019 International Conference on Robotics and Automation (ICRA)_, 5523-5529. * Hart et al. (1968) Hart, P. E., Nilsson, N. J., & Raphael, B. (1968). A Formal Basis for the Heuristic Determination of Minimum Cost Paths. 
_IEEE transactions on Systems Science and Cybernetics_, _4_(2), 100-107. * Heidarsson & Sukhatme (2011a) Heidarsson, H. K., & Sukhatme, G. S. (2011a). Obstacle Detection and Avoidance for an Autonomous Surface Vehicle using a Profiling Sonar. _2011 IEEE International Conference on Robotics and Automation_, 731-736. * Heidarsson & Sukhatme (2011b) Heidarsson, H. K., & Sukhatme, G. S. (2011b). Obstacle Detection from Overhead Imagery using Self-Supervised Learning for Autonomous Surface Vehicles. _2011 IEEE/RSJ International Conference on Intelligent Robots and Systems_, 3160-3165. * Hollinger & Sukhatme (2014) Hollinger, G. A., & Sukhatme, G. S. (2014). Sampling-based robotic information gathering algorithms. _Int. J. Rob. Res._, _33_(9), 1271-1287. * Huang et al. (2018) Huang, C., Chen, Y., Zhang, S., & Wu, J. (2018). Detecting, Extracting, and Monitoring Surface Water From Space Using Optical Sensors: A Review. _Rev. Geophys._, _56_(2), 333-360. * Huang et al. (2023) Huang, Y., Dugmag, H., Barfoot, T. D., & Shkurti, F. (2023). Stochastic Planning for ASV Navigation Using Satellite Images. _2023 IEEE International Conference on Robotics and Automation (ICRA)_. [https://doi.org/10.1109/icra48891.2023.10160894](https://doi.org/10.1109/icra48891.2023.10160894) * U.S. Gulf Coast_, 1-9. * Jeong et al. (2020)Ji, J., Khajepour, A., Melek, W. W., & Huang, Y. (2016). Path Planning and Tracking for Vehicle Collision Avoidance Based on Model Predictive Control with Multiconstraints. _IEEE Transactions on Vehicular Technology_, _66_(2), 952-964. * Ji et al. (2017) Ji, J., Khajepour, A., Melek, W. W., & Huang, Y. (2017). Path planning and tracking for vehicle collision avoidance based on model predictive control with multiconstraints. _IEEE Transactions on Vehicular Technology_, _66_(2), 952-964. [https://doi.org/10.1109/TVT.2016.2555853](https://doi.org/10.1109/TVT.2016.2555853) * Karaman & Frazzoli (2011) Karaman, S., & Frazzoli, E. (2011). Sampling-Based Algorithms for Optimal Motion Planning. _The international journal of robotics research_, _30_(7), 846-894. * Karapetyan et al. (2018) Karapetyan, N., Moulton, J., Lewis, J. S., Li, A. Q., O'Kane, J. M., & Rekleitis, I. (2018). Multi-robot Dubins Coverage with Autonomous Surface Vehicles. _2018 IEEE International Conference on Robotics and Automation (ICRA)_, 2373-2379. * Karoui et al. (2015) Karoui, I., Ojudu, I., & Legris, M. (2015). Automatic Sea-Surface Obstacle Detection and Tracking in Forward-Looking Sonar Image Sequences. _IEEE Trans. Geosci. Remote Sens._, _53_(8), 4661-4669. * Kathen et al. (2021) Kathen, M. J. T., Flores, I. J., & Reina, D. G. (2021). An Informative Path Planner for a Swarm of ASVs Based on an Enhanced PSO with Gaussian Surrogate Model Components Intended for Water Monitoring Applications. _Electronics_, _10_(13), 1605. * Kemna et al. (2017) Kemna, S., Rogers, J. G., Nieto-Granda, C., Young, S., & Sukhatme, G. S. (2017). Multi-Robot Coordination through Dynamic Voronoi Partitioning for Informative Adaptive Sampling in Communication-Constrained Environments. _2017 IEEE International Conference on Robotics and Automation (ICRA)_, 2124-2130. * Kimball et al. (2014) Kimball, P., Bailey, J., Das, S., Geyer, R., Harrison, T., Kunz, C., Manganini, K., Mankoff, K., Samuelson, K., Sayre-McCord, T., Straneo, F., Traykovski, P., & Singh, H. (2014). The WHOI Jetyak: An Autonomous Surface Vehicle for Oceanographic Research in Shallow or Dangerous Waters. _2014 IEEE/OES Autonomous Underwater Vehicles (AUV)_, 1-7. * Kirillov et al. 
(2023) Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A. C., Lo, W.-Y., Dollar, P., & Girshick, R. (2023). Segment Anything. _arXiv:2304.02643_. * Koenig & Likhachev (2002) Koenig, S., & Likhachev, M. (2002). D* Lite. _Eighteenth national conference on Artificial intelligence_, 476-483. * Laporte (1992) Laporte, G. (1992). The Traveling Salesman Problem: An Overview of Exact and Approximate Algorithms. _European Journal of Operational Research_, _59_, 231-247. * Lee et al. (2018) Lee, S.-J., Roh, M.-I., Lee, H.-W., Ha, J.-S., & Woo, I.-G. (2018). Image-Based Ship Detection and Classification for Unmanned Surface Vehicle using Real-Time Object Detection Neural Networks. _The 28th International Ocean and Polar Engineering Conference_. * Li & Sheng (2012) Li, J., & Sheng, Y. (2012). An Automated Scheme for Glacial Lake Dynamics Mapping using Landsat Imagery and Digital Elevation Models: a Case Study in the Himalayas. _Int. J. Remote Sens._, _33_(16), 5194-5213. * Liao & Huang (2014) Liao, C.-S., & Huang, Y. (2014). The Covering Canadian Traveller Problem. _Theoretical Computer Science_, _530_, 80-88. [https://doi.org/https://doi.org/10.1016/j.tcs.2014.02.026](https://doi.org/https://doi.org/10.1016/j.tcs.2014.02.026) * Long et al. (2015) Long, J., Shelhamer, E., & Darrell, T. (2015). Fully Convolutional Networks for Semantic Segmentation. _Proceedings of the IEEE conference on computer vision and pattern recognition_, 3431-3440. * Maalouf et al. (2023) Maalouf, A., Jadhav, N., Jatavallabhula, K. M., Chahine, M., Vogt, D. M., Wood, R. J., Torralba, A., & Rus, D. (2023). Follow Anything: Open-set detection, tracking, and following in real-time. _arXiv preprint arXiv:2308.05737_. * Madeo et al. (2020) Madeo, D., Pozzebon, A., Mocenni, C., & Bertoni, D. (2020). A Low-Cost Unmanned Surface Vehicle for Pervasive Water Quality Monitoring. _IEEE Trans. Instrum. Meas._, _69_(4), 1433-1444. * MahmoudZadeh et al. (2022) MahmoudZadeh, S., Abbasi, A., Yazdani, A., Wang, H., & Liu, Y. (2022). Uninterrupted Path Planning System for Multi-USV Sampling Mission in a Cluttered Ocean Environment. _Ocean Eng._, _254_, 111328. * Manjanna & Dudek (2017) Manjanna, S., & Dudek, G. (2017). Data-driven Selective Sampling for Marine Vehicles using Multi-Scale Paths. _2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_, 6111-6117. * Marchant & Ramos (2012) Marchant, R., & Ramos, F. (2012). Bayesian Optimisation for Intelligent Environmental Monitoring. _2012 IEEE/RSJ International Conference on Intelligent Robots and Systems_, 2242-2249. * Mardard et al. (2015)Marcant, R., & Ramos, F. (2014). Bayesian optimisation for informative continuous path planning. _2014 IEEE International Conference on Robotics and Automation (ICRA)_, 6136-6143. * Martelli & Montanari (1978) Martelli, A., & Montanari, U. (1978). Optimizing Decision Trees Through Heuristically Guided Search. _Commun. ACM_, _21_, 1025-1039. * McFeeters (1996) McFeeters, S. K. (1996). The Use of the Normalized Difference Water Index (NDWI) in the Delineation of Open Water Features. _International Journal of Remote Sensing_, _17_, 1425-1432. * Moulton et al. (2018) Moulton, J., Karapetyan, N., Bukhbaum, S., McKinney, C., Malebary, S., Sophocleous, G., Li, A. Q., & Rekleitis, I. (2018). An Autonomous Surface Vehicle for Long Term Operations. _OCEANS 2018 MTS/IEEE Charleston_, 1-10. * CanVec Series - Hydrographic Features. * Nicholson et al. (2018) Nicholson, D. P., Michel, A. P. 
M., Wankel, S. D., Manganini, K., Sugrue, R. A., Sandwith, Z. O., & Monk, S. A. (2018). Rapid Mapping of Dissolved Methane and Carbon Dioxide in Coastal Ecosystems Using the ChemYak Autonomous Surface Vehicle. _Environ. Sci. Technol._, _52_(22), 13314-13324. * Noon & Bean (1993) Noon, C. E., & Bean, J. C. (1993). An Efficient Transformation of the Generalized Traveling Salesman Problem. _INFOR Inf. Syst. Oper. Res._, _31_(1), 39-44. * Odetti et al. (2020) Odetti, A., Bruzzone, G., Altosole, M., Viviani, M., & Caccia, M. (2020). SWAMP, an Autonomous Surface Vehicle Expressly Designed for Extremely Shallow Waters. _Ocean Eng._, _216_, 108205. * Papadimitriou & Yannakakis (1991) Papadimitriou, C. H., & Yannakakis, M. (1991). Shortest Paths Without a Map. _Theoretical Computer Science_, _84_(1), 127-150. * Pekel et al. (2016) Pekel, J.-F., Cottam, A., Gorelick, N., & Belward, A. S. (2016). High-Resolution Mapping of Global Surface Water and its Long-Term Changes. _Nature_, _540_(7633), 418-422. * Peralta et al. (2023) Peralta, F., Reina, D. G., & Toral, S. (2023). Water Quality Online Modeling using Multi-objective and Multi-agent Bayesian Optimization with Region Partitioning. _Mechatronics_, _91_, 102953. * Perron & Furnon (2023) Perron, L., & Furnon, V. (2023, August 8). _Or-tools_ (Version v9.7). [https://developers.google.com/optimization/Polychronopoulos](https://developers.google.com/optimization/Polychronopoulos), G. H., et al. (n.d.). Stochastic Shortest Path Problems with Recourse. _Networks_. * Qiao et al. (2022) Qiao, D., Liu, G., Li, W., Lyu, T., & Zhang, J. (2022). Automated Full Scene Parsing for Marine ASVs using Monocular Vision. _J. Intell. Rob. Syst._, _104_(2). * Quigley et al. (2009) Quigley, M., Conley, K., Gerkey, B., Faust, J., Foote, T., Leibs, J., Wheeler, R., Ng, A. Y., et al. (2009). ROS: an Open-Source Robot Operating System. _ICRA workshop on open source software_, _3_(3.2), 5. * Roznere et al. (2021) Roznere, M., Jeong, M., Maechling, L., Ward, N. K., Brentrup, J. A., Steele, B., Bruesewitz, D. A., Ewing, H. A., Weathers, K. C., Cottingham, K. L., & Quattrini Li, A. (2021). Towards a Reliable Heterogeneous Robotic Water Quality Monitoring System: An Experimental Analysis. _Experimental Robotics_, 139-150. * Salzman et al. (2023) Salzman, O., Felner, A., Hernandez, C., Zhang, H., Chan, S.-H., & Koenig, S. (2023). Heuristic-search Approaches for the Multi-objective Shortest-path Problem: Progress and Research Opportunities. _Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence_, (Article 757), 6759-6768. * Sanchez-Ibanez et al. (2021) Sanchez-Ibanez, J. R., Perez-del-Pulgar, C. J., & Garcia-Cerezo, A. (2021). Path Planning for Autonomous Mobile Robots: A Review. _Sensors_, _21_(23), 7898. * A New Detailed Definition of Autonomy Levels. _Computational Logistics_, 219-233. * Sehn et al. (2023) Sehn, J., Collier, J., & Barfoot, T. D. (2023). Off the beaten track: Laterally weighted motion planning for local obstacle avoidance. * Shan et al. (2020) Shan, Y., Zheng, B., Chen, L., Chen, L., & Chen, D.-w. (2020). A Reinforcement Learning-Based Adaptive Path Tracking Approach for Autonomous Driving. _IEEE Transactions on Vehicular Technology_, _69_, 10581-10595. * Steccanella et al. (2020) Steccanella, L., Bloisi, D. D., Castellini, A., & Farinelli, A. (2020). Waterline and Obstacle Detection in Images from Low-Cost Autonomous Boats for Environmental Monitoring. _Rob. Auton. Syst._, _124_, 103346. * Steccanella et al. 
(2019) Steccanella, L., Bloisi, D., Blum, J., & Farinelli, A. (2019). Deep Learning Waterline Detection for Low-Cost Autonomous Boats. _Intelligent Autonomous Systems 15_, 613-625. * Steccanella et al. (2019)Tang, X., Pei, Z., Yin, S., Li, C., Wang, P., Wang, Y., & Wu, Z. (2020). Practical Design and Implementation of an Autonomous Surface Vessel Prototype: Navigation and Control. _International Journal of Advanced Robotic Systems_, _17_(3), 1729881420919949. * Ten Kathen et al. (2023) Ten Kathen, M. J., Samaniego, F. P., Flores, I. J., & Reina, D. G. (2023). AquaHet-PSO: An Informative Path Planner for a Fleet of Autonomous Surface Vehicles With Heterogeneous Sensing Capabilities Based on Multi-Objective PSO. _IEEE Access_, _11_, 110943-110966. * an embedded-compute-ready maritime obstacle detection network. * Toth & Vigo (2002) Toth, P., & Vigo, D. (2002). _The Vehicle Routing Problem_ (P. Toth & D. Vigo, Eds.). Society for Industrial; Applied Mathematics. [https://doi.org/10.1137/1.9780898718515](https://doi.org/10.1137/1.9780898718515) * Vasilj et al. (2017) Vasilj, J., Stancic, I., Grujic, T., & Music, J. (2017). Design, development and testing of the modular unmanned surface vehicle platform for marine waste detection. _Journal of Multimedia Information System_, _4_(4), 195-204. [https://doi.org/10.9717/JMIS.2017.4.4.195](https://doi.org/10.9717/JMIS.2017.4.4.195) * Wu et al. (2009) Wu, M., Zhang, W., Wang, X., & Luo, D. (2009). Application of MODIS Satellite Data in Monitoring Water Quality Parameters of Chaohu Lake in China. _Environ. Monit. Assess._, _148_(1-4), 255-264. * Xu (2006) Xu, H. (2006). Modification of Normalised Difference Water Index (NDWI) to Enhance Open Water Features in Remotely Sensed Imagery. _Int. J. Remote Sens._, _27_(14), 3025-3033. * Yang et al. (2019) Yang, J., Li, Y., Zhang, Q., & Ren, Y. (2019). Surface Vehicle Detection and Tracking with Deep Learning and Appearance Feature. _2019 5th International Conference on Control, Automation and Robotics (ICCAR)_, 276-280. * Yang et al. (2017) Yang, X., Zhao, S., Qin, X., Zhao, N., & Liang, L. (2017). Mapping of Urban Surface Water Bodies from Sentinel-2 MSI Imagery at 10 m Resolution via NDWI-Based Image Sharpening. _Remote Sensing_, _9_(6), 596. * Yin et al. (2022) Yin, Y., Guo, Y., Deng, L., & Chai, B. (2022). Improved PSPNet-based Water Shoreline Detection in Complex Inland River Scenarios. _Complex & Intelligent Systems_. * Yun et al. (2019) Yun, S., Han, D., Oh, S. J., Chun, S., Choe, J., & Yoo, Y. (2019). CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features. * Zhou et al. (2022) Zhou, R., Gao, Y., Wu, P., Zhao, X., Dou, W., Sun, C., Zhong, Y., & Wang, Y. (2022). Collision-Free Waterway Segmentation for Inland Unmanned Surface Vehicles. _IEEE Trans. Instrum. Meas._, _71_, 1-16.
We introduce a multi-sensor navigation system for autonomous surface vessels (ASV) intended for water-quality monitoring in freshwater lakes. Our mission planner uses satellite imagery as a prior map, formulating offline a mission-level policy for global navigation of the ASV and enabling autonomous online execution via local perception and local planning modules. A significant challenge is posed by the inconsistencies in traversability estimation between satellite images and real lakes, due to environmental effects such as wind, aquatic vegetation, shallow waters, and fluctuating water levels. Hence, we specifically modelled these traversability uncertainties as stochastic edges in a graph and optimized for a mission-level policy that minimizes the expected total travel distance. To execute the policy, we propose a modern local planner architecture that processes sensor inputs and plans paths to execute the high-level policy under uncertain traversability conditions. Our system was tested on three km-scale missions on a Northern Ontario lake, demonstrating that our GPS-, vision-, and sonar-enabled ASV system can effectively execute the mission-level policy and disambiguate the traversability of stochastic edges. Finally, we provide insights gained from practical field experience and offer several future directions to enhance the overall reliability of ASV navigation systems.
# Integrated Guidance and Gimbal Control for Coverage Planning with Visibility Constraints Savvas Papaioannou, Panayiotis Kolios, Theocharis Theocharides, Christos G. Panayiotou and Marios M. Polycarpou The authors are with the KIOS Research and Innovation Centre of Excellence (KIOS CoE) and the Department of Electrical and Computer Engineering, University of Cyprus, Nicosia, 1678, Cyprus. {papaioannou, savvas, pkolios, ttheocharides, christosp, mpolycar}@ucy.ac.cy ## I Introduction Over the last years we have witnessed an accelerated demand for Unmanned Aerial Vehicles (UAVs) and Unmanned Aerial Systems (UASs) in various application domains including search and rescue [1, 2, 3, 4, 5, 6], precision agriculture [7], package delivery [8], wildfire monitoring [9] and security [10, 11, 12]. This high demand is mainly fueled by the recent advancements in automation technology, avionics and intelligent systems, in combination with the proliferation and cost reduction of electronic components. Amongst the most crucial capabilities for a fully autonomous UAV system is that of path/trajectory planning [13], which plays a pivotal role in designing and executing automated flight plans, required by the majority of application scenarios. The path planning problem encapsulates algorithms that compute trajectories between the desired start and goal locations. Moreover, for many tasks such as structure inspection, target search, and surveillance, there is a great need for efficient and automated coverage path planning (CPP) [14] techniques. Coverage path planning consists of finding a path (or trajectory) which allows an autonomous agent (e.g., a UAV) to cover every point (i.e., the point must be included within the agent's sensor footprint) within a certain area of interest. Despite the overall technological progress in CPP techniques over the last decades, there is still work to be done to construct solutions with the level of realism that can support practical autonomous UAV operations. As discussed in more detail in Sec. II, the vast majority of CPP approaches mainly consider ground vehicles or robots with static and fixed sensors (i.e., the sensor mounted on the robot is not controllable and the size of the sensor's footprint does not change). These assumptions reduce the CPP problem to a path-planning problem, where a) the area of interest is first decomposed into a finite number of cells (where usually each cell has size equal to the sensor's footprint), and b) the path that passes through all cells is generated with a path-finding algorithm, thus achieving full coverage. In some approaches (e.g., [15]) the generated path is adapted to the robot's dynamic/kinematic model at a second stage. This two-stage approach, however, usually produces sub-optimal results in terms of coverage performance. In addition, UAVs are usually equipped with a gimballed sensor, which potentially exhibits a dynamic sensing range, e.g., a pan-tilt-zoom (PTZ) camera. Therefore, optimizing coverage necessitates CPP techniques with the ability to optimize not only the UAV's trajectory, but also the control inputs of the onboard gimballed sensor. Specifically, in this paper we investigate the coverage path planning problem for a known 2D region/object of interest, with a UAV agent, which exhibits a controllable gimbal sensor with dynamic sensing range.
In particular, we propose an integrated guidance and gimbal control coverage path planning approach, where we jointly control and optimize a) the UAV's mobility input governed by its kinematic model and b) the UAV's gimbal sensor, in order to produce optimal coverage trajectories. The CPP problem is posed as a constrained optimal control problem, where the goal is to optimize a set of mission-specific objectives subject to coverage constraints. As opposed to the majority of existing techniques, in this work we consider the CPP problem in the presence of state-dependent visibility-constraints. In particular, the proposed approach integrates ray-casting into the planning process in order to determine which parts of the region/object of interest are visible through the UAV's sensor at any point in time, thus enabling the generation of realistic UAV trajectories. Specifically, the contributions of this work are the following: * We propose an integrated guidance and gimbal control CPP approach for the problem of coverage planning in the presence of kinematic and sensing constraints including state-dependent visibility constraints i.e., we are simulating the physical behavior of sensor signals in order to account for the parts of the scene that are blocked by obstacles and thus cannot be observed by the UAV's sensor at a given pose. * We formulate the CPP problem as a constrained optimal control problem, in which the UAV mobility and gimbal inputs are jointly controlled and optimized according to a specified set of optimality criteria, and we solve it using mixed integer programming (MIP). The performance of the proposed approach is demonstrated through a series of numerical experiments. The rest of the paper is organized as follows. Section II summarizes the related work on coverage path planning with ground and aerial vehicles. Then, Section III develops the system model, Section IV defines the problem tackled and Section V discusses the details of the proposed coverage planning approach. Finally, Section VI evaluates the proposed approach and Section VII concludes the paper and discusses future work. ## II Related Work In coverage path planning (CPP) we are interested in determining the path/trajectory that enables an autonomous agent to observe with its sensor every point in a given environment. Early works e.g., [16, 17], treated the CPP problem as a path-planning problem, by decomposing the environment into several non-overlapping cells, and then employing a path-planning algorithm [18] to find the path that passes through every cell. Notably, in [16, 17] the authors propose cellular decomposition coverage algorithms, where the free-space of the environment is decomposed into non-intersecting regions or cells, which can be covered by the robot, one by one using simple back-and-forth motions, thus covering the whole area. Extensions [19, 20, 21] of this approach have investigated the coverage/sweeping pattern, the sensor's footprint and the region traversal order for optimal coverage. In [22], the authors propose a CPP approach based on spanning trees, for covering, in linear time, a continuous planar area with a mobile robot equipped with a square-shaped sensor footprint. In the subsequent work [23], the plannar area to be covered is incrementally sub-divided into disjoint cells on-line, allowing for real-time operation. The spanning tree approach presented in [22] and [23] is extended in [24] for multi-robot systems. 
In [25], the bbc [26] is used to compute a graph-based representation of the environment, and then a coverage clustering algorithm is proposed which divides the coverage path among multiple robots. The problem of coverage path planning has also been investigated with multiple ground robots in [27]. The authors of [27] propose an approach that minimizes the total coverage time by transforming the CPP problem into a network flow problem. In [28], a graph-based coverage algorithm is proposed for a CPP problem variation, which enables a team of robots to visit a set of predefined locations according to a specified frequency distribution. Another CPP variation is investigated in [29, 30]. The authors propose decentralized, adaptive control laws to drive a team of mobile robots to an optimal sensing configuration which achieves maximum coverage of the area of interest. The same problem is investigated in [31] with heterogeneous robots that exhibit different sensor footprints. Interestingly, the CPP problem has recently gained significant attention due to its importance in various UAV-based applications, including emergency response [5, 6, 32], critical infrastructure inspection [33] and surveillance [34]. A UAV-based coverage path planning approach utilizing exact cellular decomposition in polygonal convex areas is proposed in [35], whereas in [15] the CPP problem is adapted for a fixed-wing UAV. In [15] the ground plane is first decomposed into several non-overlapping cells which are then connected together forming a piecewise-linear coverage path. At a second stage a UAV-specific motion controller is used to convert the generated path into a smooth trajectory which the UAV can execute. An information theoretic coverage path planning approach for a single fixed-wing UAV is presented in [36]. The aircraft maintains a coverage map which uses to it make optimized control decisions on-line, in order to achieve global coverage of the environment. In [37], the authors propose a clustering-based CPP approach for searching multiple regions of interest with multiple heterogeneous UAVs, by classifying the various regions into clusters and assigning the clusters to the UAVs according to their capabilities. In [38, 39, 40] CPP techniques for energy-constraint multi-UAV systems are proposed. Moreover, in [41], a distributed coverage control approach is proposed for a multi-UAV system. Specifically, a coverage reference trajectory is computed using a Voronoi-based partitioning scheme, and then a distributed control law is proposed for guiding the UAVs to follow the reference trajectory, thus achieving full coverage. Finally, the CPP problem has also been investigated more recently with learning based techniques [42, 43]. In [42], an end-to-end deep reinforcement learning CPP approach is proposed for an autonomous UAV agent. The authors utilize a double deep Q-network to learn a coverage control policy that takes into account the UAV's battery constraints. On the other hand in [43], a coverage path planning system based on supervised imitation learning is proposed for a team of UAV agents. This approach plans coordinated coverage trajectories which allow unexplored cells in the environment to be visited by at least one UAV agent. The problem of coverage path planning (CPP) is also related to the view planning problem (VPP) and its variations, where the objective is to find the minimum number of viewpoints that completely cover the area of an object of interest. 
The relationship between these two problems is showcased in [44], in which the CPP problem is posed as a view planning problem. Specifically, the authors in [44] propose an algorithm based on the traveling salesman problem (TSP), which incorporates visibility constraints and finds the tour of minimum length which allows a UAV equipped with a fixed downward facing camera to inspect a finite set of points on the terrain. On the other hand, the authors in [45] propose a variation of the original VPP problem, termed traveling VPP, where the objective is the minimization of the combined view and traveling cost, i.e., the cost to minimize combines the view cost, which is proportional to the number of viewpoints planned, and the traveling cost, which accounts for the total distance that the robot needs to travel in order to cover all points of interest. In general, VPP approaches operate in a discrete state-space setting and are mostly concerned with the selection of an optimal sequence of sensing actions (taken from the finite set of all admissible actions) which achieve full coverage of the object of interest. In contrast, the proposed approach can be used to generate continuous trajectories which are governed by kinematic and sensing constraints. The interested reader is directed to [14, 46, 47] for a more detailed examination of the various coverage path planning techniques in the literature, including the view planning problem. To summarize, in comparison with the existing techniques, in this work we propose a coverage planning approach which integrates ray-casting into the planning process, in order to simulate the physical behavior of sensor signals and thus determine which parts of the scene are visible through the UAV's onboard camera. The proposed approach takes into account both the kinematic and the sensing constraints of the UAV agent, to achieve full coverage of an object of interest in the presence of obstacles. In particular, the coverage path planning problem is posed in this work as a constrained open-loop optimal control problem which incorporates the UAV's kinematic and sensing model to produce optimal coverage trajectories in accordance with the specified mission objectives. Finally, the proposed mathematical formulation can be solved optimally with off-the-shelf mixed integer programming (MIP) optimization tools [48]. ## III System Model ### _Agent Kinematic Model_ This work assumes that an autonomous agent (e.g., a UAV) is represented as a point-mass object which maneuvers inside a bounded surveillance area \\(\\mathcal{W}\\subset\\mathbb{R}^{2}\\). The agent kinematics are governed by the following discrete-time linear model [49]: \\[x_{t}=\\Phi x_{t-1}+\\Gamma u_{t-1}, \\tag{1}\\] where \\(x_{t}=[p_{t}(x),p_{t}(y),\\nu_{t}(x),\\nu_{t}(y)]^{\\top}\\) denotes the agent's state at time \\(t\\) in cartesian \\((x,y)\\) coordinates, which consists of the agent's position \\([p_{t}(x),p_{t}(y)]^{\\top}\\in\\mathbb{R}^{2}\\) and velocity \\([\\nu_{t}(x),\\nu_{t}(y)]^{\\top}\\in\\mathbb{R}^{2}\\) in the \\(x\\) and \\(y\\) directions. The term \\(u_{t}=[f_{t}(x),f_{t}(y)]^{\\top}\\in\\mathbb{R}^{2}\\) denotes the control input, i.e., the amount of force applied in each dimension in order to direct the agent according to the mission objectives.
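To make the kinematic model concrete, the following is a minimal Python sketch (an illustration only; the values of the sampling interval, drag coefficient and mass are placeholders) of how a candidate control sequence is mapped to a trajectory through Eqn. (1), using the \\(\\Phi\\) and \\(\\Gamma\\) matrices specified in Eqn. (2) below:

```python
import numpy as np

def make_phi_gamma(dt=1.0, eta=0.2, mass=1.0):
    """Build Phi and Gamma per Eqn. (2): double-integrator-like kinematics
    with a simple drag term (1 - eta) acting on the velocity."""
    I2, Z2 = np.eye(2), np.zeros((2, 2))
    phi = 1.0 - eta
    gamma = dt / mass
    Phi = np.block([[I2, dt * I2], [Z2, phi * I2]])
    Gamma = np.vstack([Z2, gamma * I2])
    return Phi, Gamma

def rollout(x0, controls, Phi, Gamma):
    """Apply Eqn. (1) recursively (equivalently Eqn. (3)) to obtain the
    state trajectory x_1, ..., x_T from x_0 and the input sequence."""
    traj = []
    x = np.asarray(x0, dtype=float)
    for u in controls:
        x = Phi @ x + Gamma @ np.asarray(u, dtype=float)
        traj.append(x.copy())
    return np.array(traj)

# Example: start at the origin at rest and push diagonally for 5 steps.
Phi, Gamma = make_phi_gamma(dt=1.0, eta=0.2, mass=2.0)
X = rollout(x0=[0, 0, 0, 0], controls=[[0.5, 0.2]] * 5, Phi=Phi, Gamma=Gamma)
print(X[:, :2])   # planned positions p_t(x), p_t(y)
```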
The matrices \\(\\Phi\\) and \\(\\Gamma\\) are given by: \\[\\Phi=\\begin{bmatrix}\\mathbf{I}_{2\\times 2}&\\delta t\\cdot\\mathbf{I}_{2\\times 2 }\\\\ \\mathbf{0}_{2\\times 2}&\\phi\\cdot\\mathbf{I}_{2\\times 2}\\end{bmatrix},\\ \\Gamma=\\begin{bmatrix} \\mathbf{0}_{2\\times 2}\\\\ \\gamma\\cdot\\mathbf{I}_{2\\times 2}\\end{bmatrix}, \\tag{2}\\] where \\(\\delta t\\) is the sampling interval, \\(\\mathbf{I}_{2\\times 2}\\) and \\(\\mathbf{0}_{2\\times 2}\\) are the 2 by 2 identity matrix and zero matrix respectively, with \\(\\phi\\) and \\(\\gamma\\) given by \\(\\phi=(1-\\eta)\\) and \\(\\gamma=m^{-1}\\delta t\\), where the parameter \\(\\eta\\) is used to model the (air) drag coefficient and \\(m\\) is the agent mass. Given a known initial agent state \\(x_{0}\\), and a set of control inputs \\(\\{u_{t}|t=0,..,T-1\\}\\), inside a finite planning horizon of length \\(T\\), the agent trajectory can be obtained for time-steps \\(t=[1,..,T]\\) by the recursive application of Eqn. (1) as: \\[x_{t}=\\Phi^{t}x_{0}+\\sum_{\\tau=0}^{t-1}\\Phi^{t-\\tau-1}\\Gamma u_{\\tau}. \\tag{3}\\] Therefore, the agent's trajectory can be designed and optimized to meet the desired mission objectives by appropriately selecting the control inputs \\(\\{u_{t}|t=0,..,T-1\\}\\), inside the given planning horizon of length \\(T\\). We should point out here that although the agent kinematic model in Eqn. (1) does not fully captures the low-level UAV aerodynamics (which are platform dependent), it allows us to design and construct high-level (i.e., mission-level) coverage trajectories which in turn can be used as desired reference trajectories to be tracked with low-level closed-loop guidance controllers found on-board the UAVs [50, 51, 52]. ### _Agent Sensing Model_ The agent is equipped with a gimbaled camera with optical zoom, which is used for sensing its surroundings and performing various tasks e.g., searching objects/regions of interest, detecting targets, etc. The camera field-of-view (FoV) or sensing footprint is modeled in this work as an isosceles triangle [53, 54] parameterized by its angle at the apex \\(\\varphi\\) and its height \\(h\\), which are used to model the FoV angle opening and sensing range respectively. We should point out here that any convex 2D shape can be used to model the camera FoV. Using the parameters \\(\\varphi\\) and \\(h\\) the camera FoV side length \\((\\ell_{s})\\) and base length \\((\\ell_{b})\\) are computed as: \\[\\ell_{s}=h\\times\\text{cos}(\\varphi/2)^{-1},\\ \\text{and},\\ \\ell_{b}=2\\ell_{s} \\times\\text{sin}(\\varphi/2). 
\\tag{4}\\] Therefore, the set of vertices \\((\\mathcal{V}_{o})\\) of the triangular FoV camera projection, for an agent centered at the origin and facing downwards, are given by \\(\\mathcal{V}_{o}=[v_{1},v_{2},v_{3}]\\), where \\(v_{1}=[0,0]^{\\top}\\), \\(v_{2}=[-\\ell_{b}/2,-h]^{\\top}\\) and \\(v_{3}=[\\ell_{b}/2,-h]^{\\top}\\), so that: \\[\\mathcal{V}_{o}=\\begin{bmatrix}0&-\\ell_{b}/2&\\ell_{b}/2\\\\ 0&-h&-h\\end{bmatrix}. \\tag{5}\\] The camera FoV can be rotated (on the \\(xy\\)-plane) around the agent's position \\(x^{\\text{pos}}=[p(x),p(y)]^{\\top}\\) by an angle \\(\\theta\\in\\bar{\\Theta}\\) (with respect to the \\(x\\)-axis), by performing a geometric transformation consisting of a rotation operation followed by a translation: \\[\\mathcal{V}=R(\\theta)\\mathcal{V}_{o}+x^{\\text{pos}}, \\tag{6}\\] where \\(\\mathcal{V}\\) is the rotated camera FoV in terms of its vertices, \\(\\theta\\) is the control signal and \\(R(\\theta)\\) is a 2D rotation matrix given by: \\[R(\\theta)=\\begin{bmatrix}\\text{cos}(\\theta)&\\text{sin}(\\theta)\\\\ -\\text{sin}(\\theta)&\\text{cos}(\\theta)\\end{bmatrix}. \\tag{7}\\] We should mention here that in this work the rotation angle \\(\\theta\\) takes its values from a finite set of all admissible rotation angles \\(\\bar{\\Theta}=\\{\\theta_{1},..,\\theta_{|\\bar{\\Theta}|}\\}\\), where \\(|\\bar{\\Theta}|\\) denotes the set cardinality. The agent can be placed in any desired position and orientation (i.e., pose) at some time-step \\(t\\), by adjusting the control signals \\(u_{t}\\) and \\(\\theta_{t}\\in\\bar{\\Theta}\\), i.e., \\(\\mathcal{V}_{t}=R(\\theta_{t})\\mathcal{V}_{o}+x_{t}^{\\text{pos}}\\). The agent's onboard camera is also equipped with an optical zoom functionality, which can alter the FoV characteristics in order to better suit the mission objectives and constraints. Fig. 1: The figure illustrates all the possible sensor FoV configurations for two different zoom-levels \\(\\xi_{1}\\) and \\(\\xi_{2}\\), and for 5 different rotation angles \\(\\theta_{1},..,\\theta_{5}\\), when the agent's position is equal to \\(x_{t}^{\\text{pos}}\\). In this example, the total number of FoV configurations is equal to \\(|\\bar{\\Theta}|\\times|\\bar{\\Xi}|=10\\). In particular, it is assumed that a zoom-in operation narrows down the FoV (i.e., reduces the FoV angle opening \\(\\varphi\\)); however, it increases the sensing range \\(h\\), as shown in Fig. 1. Specifically, we assume that the camera exhibits a finite set of predefined zoom levels denoted as \\(\\tilde{\\Xi}=\\{\\xi_{1},..,\\xi_{|\\tilde{\\Xi}|}|\\xi_{i}\\in\\mathbb{R},\\xi_{i}\\geq 1\\}\\), which are used to scale the FoV parameters. The zoom-in functionality is thus defined for a particular zoom level \\(\\xi\\in\\tilde{\\Xi}\\) as: \\[h^{\\prime}=h\\times\\xi,\\ \\text{and},\\ \\varphi^{\\prime}=\\varphi\\times\\xi^{-1}, \\tag{8}\\] where \\((h^{\\prime},\\varphi^{\\prime})\\) are the new parameters for the FoV range and angle, after applying the optical zoom level \\(\\xi\\). A visual representation of the camera model is illustrated in Fig. 1. We can now denote the agent's sensing state at time-step \\(t\\) as \\(\\mathcal{S}_{t}(\\theta_{t},x_{t}^{\\text{pos}},\\xi_{t})\\), which jointly accounts for the agent's position and orientation. The notation \\(\\mathcal{S}_{t}(\\theta_{t},x_{t}^{\\text{pos}},\\xi_{t})\\) is used here to denote the area inside the agent's sensing range determined by the FoV vertices (i.e., the convex hull) as computed by Eqn. (6).
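The FoV geometry of Eqns. (4)-(8) can be summarized in a few lines of code; the sketch below is illustrative only, and the opening angle, sensing range, rotation set and zoom levels are placeholder values rather than the parameters used in the experiments:

```python
import numpy as np

def fov_vertices(pos, theta, phi_deg=60.0, h=10.0, zoom=1.0):
    """Return the 2x3 matrix of FoV vertices V = R(theta) V_o + x^pos,
    following Eqns. (4)-(8)."""
    # Zoom-in scales the range and narrows the opening angle, Eqn. (8).
    h_eff = h * zoom
    phi = np.deg2rad(phi_deg) / zoom
    # Side and base lengths of the isosceles triangle, Eqn. (4).
    l_s = h_eff / np.cos(phi / 2.0)
    l_b = 2.0 * l_s * np.sin(phi / 2.0)
    # Vertices for an agent at the origin facing downwards, Eqn. (5).
    V_o = np.array([[0.0, -l_b / 2.0, l_b / 2.0],
                    [0.0, -h_eff,    -h_eff]])
    # Rotation followed by translation, Eqns. (6)-(7).
    R = np.array([[np.cos(theta),  np.sin(theta)],
                  [-np.sin(theta), np.cos(theta)]])
    return R @ V_o + np.asarray(pos, dtype=float).reshape(2, 1)

# Example: enumerate all admissible FoV configurations at one position.
angles = np.deg2rad([0, 72, 144, 216, 288])   # finite set of rotation angles
zooms = [1.0, 2.0]                            # finite set of zoom levels
configs = [fov_vertices([5.0, 3.0], th, zoom=z) for th in angles for z in zooms]
print(len(configs), "FoV configurations;", configs[0].round(2))
```

Enumerating the rotation and zoom sets in this way yields the set of candidate FoV configurations whose cardinality is the product of the two set sizes, as illustrated in Fig. 1.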
Therefore, the total area covered by the agent's FoV within a planning horizon of length \\(T\\) can thus be obtained by: \\[\\mathcal{S}_{1:T}=\\bigcup_{t=1}^{T}\\ \\mathcal{S}_{t}(\\theta_{t},x_{t}^{\\text{pos} },\\xi_{t}), \\tag{9}\\] where \\(\\theta_{t}\\in\\bar{\\Theta}\\), \\(\\xi_{t}\\in\\tilde{\\Xi}\\) and the agent position \\(x_{t}^{\\text{pos}}\\) has been computed from the application of a set of mobility controls inputs \\(\\{u_{\\tau}|\\tau=0,..,t-1\\}\\) according to Eqn. (3). We should point out here that the agent kinematic and sensing models described above can easily be extended to 3 dimensions (i.e., the agent kinematics in Eqn. (1) can be extended in 3D by accounting for the \\(z\\) dimension, the triangular 2D FoV translates to a regular pyramid in 3D, the 2D rotation matrix \\(R\\) becomes a 3D rotation matrix in 3D, etc). Consequently, the proposed approach discussed in detail in Sec.V can also be extended in 3D environments with some modifications. However, in order to make the analysis of the proposed approach easier to follow and more intuitive, the problem in this paper has been formulated in a two dimensional space, which already has some key challenges. ## IV Problem Statement Let an arbitrary bounded convex planar region \\(\\mathcal{C}\\subset\\mathcal{W}\\) to denote a single object or region of interest, that we wish to cover with our autonomous agent, with boundary \\(\\partial\\mathcal{C}\\), as shown in Fig. 2. The proposed approach can be used to generate the coverage plan for the area enclosed in \\(\\mathcal{C}\\) in the case where the region \\(\\mathcal{C}\\) is traversable. On the other hand when \\(\\mathcal{C}\\) is not traversable (i.e., \\(\\mathcal{C}\\) represents an inaccessible, to the agent, object or region of interest), the proposed technique is used to generate the coverage plan for the region's boundary \\(\\partial\\mathcal{C}\\). For brevity, we will formulate the problem assuming the latter scenario (i.e., generating coverage plans for covering the boundary of a region/object of interest), however the proposed formulation can be applied for both scenarios. In a high level form, the problem tackled in this work can be formulated as follows: In (P1) we are interested in finding the agent's mobility (i.e., \\(\\mathbf{U}_{T}=\\{u_{0},..,u_{T-1}\\}\\)) and sensor (i.e., \\(\\mathbf{\\Theta}_{T}=\\{\\theta_{1},..,\\theta_{T}\\}\\), \\(\\mathbf{\\Xi}_{T}=\\{\\xi_{1},..,\\xi_{T}\\}\\)) control inputs, over the planning horizon of length \\(T\\), i.e., \\(\\mathcal{T}=\\{1,..,T\\}\\), which optimize a certain set of optimality criteria encoded in the state-dependent multi-objective cost function \\(\\mathcal{J}_{\\text{coverage}}(\\mathbf{X}_{T},\\mathbf{U}_{T},\\mathbf{\\Theta}_{T },\\mathbf{\\Xi}_{T})\\), where \\(\\mathbf{X}_{T}=\\{x_{1},..,x_{T}\\}\\), subject to the set of constrains shown in Eqn. (10b)-(10e). In particular, Eqn. (10b) represents the agent's kinematic constraints as described in Sec. III-A. Then, Eqn. (10c) represents obstacle avoidance constraints with a specified set of obstacles \\(\\Psi\\), and the constraint in (10d) (i.e., coverage constraint) is used to guarantee that during the planning horizon the whole boundary of the region/object of interest will be covered by the agent's sensor FoV. 
We should mention here that the notation \\(\\mathcal{S}^{\\prime}_{1:T}\\subseteq\\mathcal{S}_{1:T}\\) refers to the reduced FoV coverage obtained when obstacles block the sensor signals (i.e., camera-rays) from passing through, thus creating occlusions. In this work we use a set of visibility constraints to distinguish between parts of the scene \\(p\\subset\\partial\\mathcal{C}\\) that belong to the visible field-of-view i.e., \\(p\\in\\mathcal{S}^{\\prime}_{1:T}\\) and parts \\(p\\) that are occluded. In order to model the visible field-of-view \\(\\mathcal{S}^{\\prime}_{1:T}\\) we use ray-casting to simulate the physical behavior of the camera-rays and account for the occluded regions. Therefore, the constraint in Eqn. (10d), enforces the generation of coverage trajectories, which take into account which parts of the scene are visible through the agent's camera at any point in time. Finally, the constraints in Eqn. (10e) restrict the agent's state and control inputs within the desired bounds. In the next section, we discuss in more detail how we have tackled the problem discussed above. We should mention here that in this work the following assumptions are made: a) the agent has self-localization capability (e.g., via accurate GPS positioning), b) the environment (i.e., object of interest, obstacles, etc.) is known a-priori, and c) the visual data acquisition process is noise-free. However, in certain scenarios in which the assumptions above no longer apply, the agent's visual localization accuracy at the planning time is of essence in generating optimal coverage trajectories. In such scenarios the proposed approach can be combined with active visual localization techniques [55], in order to improve the agent's visual localizability, and generate accurate coverage trajectories. ## V Integrated Guidance and Gimbal Control Coverage Planning In this section we design a mixed integer quadratic program (MIQP) in order to tackle the optimal control problem of integrated guidance and gimbal control coverage planning, as described in problem (P1). ### _Preliminaries_ The proposed approach first proceeds by sampling points \\(p\\in\\partial\\mathcal{C}\\) on the region's boundary, generating the set of points \\(\\mathcal{P}=\\{p_{1},..,p_{|\\mathcal{P}|}\\}\\subset\\partial\\mathcal{C}\\), where \\(|\\mathcal{P}|\\) is the set cardinality, thus creating a discrete representation of the region's boundary that needs to be covered, as depicted in Fig. 2. Equivalently, a set of points \\(\\mathcal{P}\\subset\\mathcal{C}\\) are sampled from \\(\\mathcal{C}\\) in the scenario where the region of interest is traversable. Essentially, the object of interest is represented in this work as a point-cloud, which in practice can be obtained with a number of scene/object reconstruction techniques [56, 57]. The coverage constraint shown in Eqn. (10d) can now be written as: \\[p\\in\\mathcal{S}^{\\prime}_{1:T},\\ \\forall p\\in\\mathcal{P}, \\tag{11}\\] and thus we are looking to find the optimal agent mobility and sensor control inputs over the planning horizon, which satisfy the constraint in Eqn. (11), i.e., the set of points \\(\\mathcal{P}\\) must be covered by the agent's sensor FoV. We should mention here that the methodology used to generate \\(\\mathcal{P}\\) (e.g., systematic selection or sampling procedure) is up to the designer and in accordance to the problem requirements. 
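One possible such sampling procedure is sketched below (Python, with a hypothetical convex region of interest that is not a construct from the paper); it simply places \\(|\\mathcal{P}|\\) points approximately evenly along the region's boundary:

```python
import numpy as np

def sample_boundary_points(vertices, n_points):
    """Sample n_points approximately evenly along the boundary of a convex
    polygon given by its ordered vertices.  The result plays the role of the
    set P, a discrete representation of the boundary to be covered."""
    verts = np.asarray(vertices, dtype=float)
    closed = np.vstack([verts, verts[:1]])                 # close the loop
    seg = np.diff(closed, axis=0)
    seg_len = np.linalg.norm(seg, axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg_len)])
    targets = np.linspace(0.0, cum[-1], n_points, endpoint=False)
    points = []
    for s in targets:
        i = np.searchsorted(cum, s, side="right") - 1      # segment index
        r = (s - cum[i]) / seg_len[i]                      # position on it
        points.append(closed[i] + r * seg[i])
    return np.array(points)

# Hypothetical non-traversable region of interest C (a convex quadrilateral).
C = [[2, 2], [8, 2], [9, 6], [3, 7]]
P = sample_boundary_points(C, n_points=12)
print(P.round(2))
```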
In the special scenario examined in this work, in which the region (or object) of interest \\(\\mathcal{C}\\) is not traversable, and thus acts as an obstacle to the agent's trajectory, a set of obstacle avoidance constraints, i.e., Eqn. (10c), is implemented to prevent collisions between the agent and the obstacle. Intuitively, in such scenarios the agent must avoid crossing the region's boundary \\(\\partial\\mathcal{C}\\). We denote as \\(\\Delta\\mathcal{C}\\) the piece-wise linear approximation of the boundary \\(\\partial\\mathcal{C}\\), that contains the line segments \\(L_{p_{i}}=L_{i,j}=\\{p_{i}+r(p_{j}-p_{i})|r\\in[0,1],j\\neq i\\}\\), which are formed when connecting together the pair of points \\((p_{i},p_{j})_{i\\neq j}\\in\\mathcal{P}\\), such that the resulting line segments \\(L_{i,j}\\) belong to the boundary of the convex hull of \\(\\mathcal{P}\\), as shown in Fig. 2. Let us assume that \\(|\\mathcal{P}|\\) points \\(\\{p_{1},..,p_{|\\mathcal{P}|}\\}\\) have been sampled from \\(\\partial\\mathcal{C}\\), and that \\(\\Delta\\mathcal{C}\\) contains \\(|\\mathcal{P}|\\) line segments \\(\\{L_{p_{1}},..,L_{p_{|\\mathcal{P}|}}\\}\\), where each line segment \\(L_{p_{i}},i=1,..,|\\mathcal{P}|\\) lies on the line \\(\\tilde{L}_{p_{i}}=\\{x\\in\\mathbb{R}^{2}|\\alpha_{i}^{\\top}x=\\beta_{i}\\}\\), where the line coefficients in the vector \\(\\alpha_{i}\\) determine the outward normal to the \\(i_{\\text{th}}\\) line segment and \\(\\beta_{i}\\) is a constant. The area inside the region of interest \\(\\mathcal{C}\\) is thus modeled as a convex polygonal obstacle with boundary \\(\\Delta\\mathcal{C}\\) defined by \\(|\\mathcal{P}|\\) linear equations \\(\\tilde{L}_{p_{i}},i=1,..,|\\mathcal{P}|\\). A collision occurs when at some point in time \\(t\\in\\mathcal{T}\\), the agent's position \\(x_{t}^{\\text{pos}}\\) resides within the area defined by the region's boundary \\(\\Delta\\mathcal{C}\\), or equivalently, the following system of linear inequalities is satisfied: \\[\\alpha_{i}^{\\top}x_{t}^{\\text{pos}}<\\beta_{i},\\ \\forall i\\in[1,..,|\\mathcal{P}|]\\,. \\tag{12}\\] Hence, a collision with \\(\\mathcal{C}\\) can be avoided at time \\(t\\), i.e., \\(x_{t}^{\\text{pos}}\\notin\\mathcal{C}\\), when \\(\\exists\\ i\\in[1,..,|\\mathcal{P}|]:\\alpha_{i}^{\\top}x_{t}^{\\text{pos}}\\geq\\beta_{i}\\). This is implemented with a set of mixed integer linear constraints as shown in Eqn. (13a)-(13c): \\[-\\alpha_{i}^{\\top}x_{t}^{\\text{pos}}-Mb_{t,i}^{\\text{collision}}\\leq-\\beta_{i},\\ \\forall t,i, \\tag{13a}\\] \\[\\sum_{i=1}^{|\\mathcal{P}|}b_{t,i}^{\\text{collision}}\\leq(|\\mathcal{P}|-1),\\ \\forall t, \\tag{13b}\\] \\[b_{t,i}^{\\text{collision}}\\in\\{0,1\\},\\ \\forall t,i. \\tag{13c}\\] Specifically, the constraint in Eqn. (13a) uses the binary variable \\(b_{t,i}^{\\text{collision}}\\) to determine whether the \\(i_{\\text{th}}\\) inequality, i.e., \\(\\alpha_{i}^{\\top}x_{t}^{\\text{pos}}<\\beta_{i}\\) of Eqn. (12), is satisfied at some time \\(t\\in\\mathcal{T}\\) by setting \\(b_{t,i}^{\\text{collision}}=1\\), where \\(M\\) is a large positive constant. Then, the constraint in Eqn. (13b) makes sure that for any time-step \\(t\\), the binary variable \\(b_{t,i}^{\\text{collision}}\\) is activated at most \\(|\\mathcal{P}|-1\\) times, which ensures that the agent's position \\(x_{t}^{\\text{pos}}\\) does not reside inside the obstacle.
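The logic behind Eqns. (12)-(13c) can be checked numerically; the sketch below (pure NumPy, with an illustrative quadrilateral, and not the MIQP encoding itself) builds the outward normals \\(\\alpha_{i}\\) and offsets \\(\\beta_{i}\\) and tests whether a candidate position satisfies every inequality of Eqn. (12), which is exactly the situation that the indicator constraints (13a)-(13b) exclude:

```python
import numpy as np

def halfplanes(points):
    """Outward normals (alpha_i) and offsets (beta_i) of the convex-hull
    edges defined by consecutive boundary points, as used in Eqn. (12)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    alphas, betas = [], []
    for i in range(len(pts)):
        p, q = pts[i], pts[(i + 1) % len(pts)]
        d = q - p
        n = np.array([d[1], -d[0]])           # a normal to the edge
        if n @ (centroid - p) > 0:            # flip so it points outward
            n = -n
        alphas.append(n)
        betas.append(n @ p)
    return np.array(alphas), np.array(betas)

def collides(x_pos, alphas, betas):
    """Eqn. (12): collision iff alpha_i^T x < beta_i for every edge i.
    If this holds for all i, every indicator b_{t,i}^collision would have
    to equal 1, which Eqn. (13b) forbids; hence a feasible MIP solution
    keeps the agent outside the obstacle."""
    inside = alphas @ np.asarray(x_pos, dtype=float) < betas
    return bool(np.all(inside))

# Hypothetical convex region of interest acting as an obstacle.
P_pts = [[2, 2], [8, 2], [9, 6], [3, 7]]
A, b = halfplanes(P_pts)
print(collides([5.0, 4.0], A, b))   # True: this position lies inside C
print(collides([0.0, 0.0], A, b))   # False: at least one inequality fails
```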
The constraints in Eqn. (13a)-(13c) can be applied for any number of convex polygonal obstacles \\(\\psi\\in\\Psi\\) that need to be avoided, by augmenting the variable \\(b_{t,i}^{\\text{collision}}\\) with an additional index to indicate the obstacle number. ### _Visibility Constraints_ In the previous section we have described how the region's boundary \\(\\partial\\mathcal{C}\\) that needs to be covered is defined as a piece-wise linear approximation \\(\\Delta\\mathcal{C}\\), and we have also shown how the non-traversable area inside the region of interest \\(\\mathcal{C}\\) is modeled with a set of obstacle avoidance constraints, which are used to prevent the agent from passing through. In this section, we devise a set of visibility constraints, which allow us to determine which parts of the region's boundary are visible (i.e., not blocked by an obstacle) through the agent's camera, given a certain agent pose. In this work, we use the term camera-rays to denote the light rays that are captured by the camera's optical sensor. Without loss of generality, let us assume that at each time-step \\(t\\), a finite set of camera-rays enters the optical axis, denoted as \\(\\mathcal{K}_{\\theta_{t},x_{t}^{\\text{pos}},\\xi_{t}}=\\{K_{1},..,K_{|\\mathcal{K}|}\\}\\), where \\(\\theta_{t},x_{t}^{\\text{pos}},\\xi_{t}\\) determine the agent's pose and subsequently the FoV configuration, as illustrated in Fig. 3(a)-(b). The individual ray \\(K_{i}\\) is given by the line-segment \\(K_{i}=\\{x_{t}^{\\text{pos}}+s(\\kappa_{i}-x_{t}^{\\text{pos}})|s\\in[0,1]\\}\\), where \\(x_{t}^{\\text{pos}}\\) is the ray's origin given by the agent's position at time \\(t\\) and \\(\\kappa_{i}\\in\\mathbb{R}^{2}\\) is a fixed point on the base of the triangle which defines the camera FoV and determines the ray's end point. We can now define the notion of visibility as follows: The point \\(p_{i}\\in\\mathcal{P},i\\in[1,..,|\\mathcal{P}|]\\) on the region's boundary \\(\\partial\\mathcal{C}\\), which exists on the line-segment \\(L_{p_{i}}\\in\\Delta\\mathcal{C}\\), is said to be visible at time-step \\(t\\), i.e., \\(p_{i}\\in\\mathcal{S}^{\\prime}_{t}(\\theta_{t},x_{t}^{\\text{pos}},\\xi_{t})\\), when: \\[p_{i}\\in\\mathcal{S}_{t}(\\theta_{t},x_{t}^{\\text{pos}},\\xi_{t})\\ \\wedge\\ \\exists K\\in\\mathcal{K}_{\\theta_{t},x_{t}^{\\text{pos}},\\xi_{t}}:(K\\otimes\\Delta\\mathcal{C})=L_{p_{i}}, \\tag{14}\\] where the operation \\(K\\otimes\\Delta\\mathcal{C}\\) is defined as the intersection of the camera-ray \\(K\\) with the set of line segments in \\(\\Delta\\mathcal{C}\\), and returns the nearest (i.e., closest distance with respect to the ray's origin) line-segment \\(L\\in\\Delta\\mathcal{C}\\) which the ray \\(K\\) intersects with. In the case where a ray \\(K\\) exhibits no intersections with any line-segment, \\(0\\) is returned instead. In essence, the point \\(p_{i}\\) is visible at time \\(t\\) when both constraints in Eqn. (14) are satisfied, i.e., a) \\(p_{i}\\) is included inside the agent's camera FoV \\(\\mathcal{S}_{t}(\\theta_{t},x_{t}^{\\text{pos}},\\xi_{t})\\), and b) there exists a camera-ray \\(K\\) which first intersects with the line-segment \\(L_{p_{i}}\\) which contains the point \\(p_{i}\\). In other words, there is a camera-ray \\(K\\) which does not intersect with any parts of the boundary (i.e., line segments) \\(L\\in\\Delta\\mathcal{C}\\) prior to \\(L_{p_{i}}\\), as illustrated in Fig. 3(b).
Let the camera-ray \\(K\\) (i.e., \\(K=\\{x_{t}^{\\text{pos}}+s(\\kappa-x_{t}^{\\text{pos}})|s\\in[0,1]\\}\\)) have \\(x\\) and \\(y\\) cartesian coordinates given by \\(x_{t}^{\\text{pos}}(x)+s[\\kappa(x)-x_{t}^{\\text{pos}}(x)]\\) and \\(x_{t}^{\\text{pos}}(y)+s[\\kappa(y)-x_{t}^{\\text{pos}}(y)]\\) respectively. Also, let the \\(x\\) and \\(y\\) cartesian coordinates of a line-segment \\(L_{p_{i}}=L_{i,j}\\in\\Delta\\mathcal{C}\\) on the boundary (i.e., \\(\\{p_{i}+r(p_{j}-p_{i})|r\\in[0,1],j\\neq i\\}\\)) be \\(p_{i}(x)+r[p_{j}(x)-p_{i}(x)]\\) and \\(p_{i}(y)+r[p_{j}(y)-p_{i}(y)]\\) respectively. The intersection \\(K\\otimes L_{i,j}\\) of the camera-ray with the line segment can be computed by solving the following system of linear equations for the two unknowns \\((s,r)\\): \\[\\begin{bmatrix}\\kappa(x)-x_{t}^{\\text{pos}}(x)&p_{i}(x)-p_{j}(x)\\\\ \\kappa(y)-x_{t}^{\\text{pos}}(y)&p_{i}(y)-p_{j}(y)\\end{bmatrix}\\begin{bmatrix}s\\\\ r\\end{bmatrix}=\\begin{bmatrix}p_{i}(x)-x_{t}^{\\text{pos}}(x)\\\\ p_{i}(y)-x_{t}^{\\text{pos}}(y)\\end{bmatrix}. \\tag{15}\\] An intersection exists if the pair \\((s,r)\\in[0,1]^{2}\\), and the intersection point can be recovered by substituting either \\(s\\) or \\(r\\) into the respective line-segment equations. Subsequently, the ray-casting process described above must be performed for each camera-ray \\(K\\), amongst all sets of possible camera-ray configurations \\(\\mathcal{K}_{\\theta_{t},x_{t}^{\\text{pos}},\\xi_{t}}\\) and line-segments of the boundary \\(\\Delta\\mathcal{C}\\), to determine which parts of the scene (i.e., points on the region's boundary) are visible through the agent's camera at each time-step \\(t\\in\\mathcal{T}\\). This makes the ray-casting computation very computationally expensive. Observe here that the agent's position \\(x_{t}^{\\text{pos}}\\) is a continuous variable, which depends on the unknown mobility control inputs, i.e., Eqn. (3). More importantly, the direct implementation of Eqn. (14)-(15) requires the inclusion of non-linear and non-convex constraints in the control problem tackled, which are very hard to handle efficiently. For this reason, in this work an alternative approximate procedure is employed, which allows the ray-casting functionality described above to be integrated into a mixed integer quadratic program (MIQP), which in turn can be solved to optimality with readily available optimization tools. In essence, the surveillance area \\(\\mathcal{W}\\) is first decomposed into a finite number of cells, and then the agent's visible FoV is computed within each cell, for all possible camera-ray configurations. This enables the proposed approach to learn a set of state-dependent visibility constraints which simulate the physical behavior of camera-rays originating within each cell, and which can be embedded into a mixed integer quadratic program. Let the rectangular grid \\(\\mathcal{G}=\\{c_{1},..,c_{|\\mathcal{G}|}\\}\\) denote the discretized representation of the surveillance area \\(\\mathcal{W}\\), which is composed of cells \\(c_{i},i=1,..,|\\mathcal{G}|\\) such that \\(\\bigcup_{i=1}^{|\\mathcal{G}|}c_{i}=\\mathcal{G}\\). To implement the logical conjunction of the visibility constraint in Eqn.
(14) we introduce three binary variables, namely \(b^{\mathcal{S}_{t}}\), \(b^{x_{t}^{\text{pos}}}\) and \(b^{\mathcal{K}}\), which are defined as follows: \[b^{\mathcal{S}_{t}}_{p,m,t}=1,\text{ \emph{iff} }\exists t\in\mathcal{T},m\in\mathcal{M}:p\in\mathcal{S}_{t}(m,x_{t}^{\text{pos}}), \tag{16}\] \[b^{x_{t}^{\text{pos}}}_{c,t}=1,\text{ \emph{iff} }\exists t\in\mathcal{T}:x_{t}^{\text{pos}}\in c, \tag{17}\] \[b^{\mathcal{K}}_{c,p}=1,\text{ \emph{iff} }\exists K\in\mathcal{K}_{\forall\theta,\xi}^{c}:(K\otimes\Delta\mathcal{C})=L_{p}, \tag{18}\] where \(\mathcal{M}\) denotes the set of all pairwise combinations \(m=(\theta,\xi)\) of \(\theta\in\bar{\Theta}\) and \(\xi\in\bar{\Xi}\). Hence, the total number of FoV configurations is equal to \(|\mathcal{M}|=|\bar{\Theta}|\times|\bar{\Xi}|\), and thus \(\mathcal{S}_{t}(\theta_{t},x_{t}^{\text{pos}},\xi_{t})\) is abbreviated as \(\mathcal{S}_{t}(m,x_{t}^{\text{pos}}),m\in\mathcal{M}\). Above, \(c\in\mathcal{G}\) is a rectangular cell, part of the surveillance area \(\mathcal{W}\), and \(p\in\mathcal{P}\) is a point on the region's boundary \(\partial\mathcal{C}\) which is also connected to some line-segment \(L_{p}\in\Delta\mathcal{C}\). The binary variable \(b^{\mathcal{S}_{t}}_{p,m,t}\) in Eqn. (16) is activated when the point \(p\) is included inside the \(m_{\text{th}}\) FoV configuration, \(m\in\mathcal{M}\), when the agent's position is \(x_{t}^{\text{pos}}\) at time-step \(t\). Then, the binary variable \(b^{x_{t}^{\text{pos}}}_{c,t}\) in Eqn. (17) shows in which cell \(c\) the agent with position \(x_{t}^{\text{pos}}\) resides at any point in time \(t\). Finally, the constraint in Eqn. (18) indicates with the binary variable \(b^{\mathcal{K}}_{c,p}\) if the point \(p\) is visible when the agent is inside cell \(c\), where the notation \(\mathcal{K}_{\forall\theta,\xi}^{c}\) indicates the sets of camera-ray configurations for all possible combinations of sensor inputs \(\theta\) and \(\xi\). To be more specific, \(b^{\mathcal{K}}_{c,p}\) is learned offline, by pre-computing for each cell \(c\in\mathcal{G}\) and for each camera-ray \(K\in\mathcal{K}_{\forall\theta,\xi}^{c}\) the visible part of the boundary \(\Delta\mathcal{C}\) via the ray-casting process discussed in the previous paragraph. This is illustrated in Fig. 3(c). We should note here that the agent position is sampled uniformly \(n_{s}\) times from within each cell \(c\), generating a set of camera-ray configurations \(\{{}^{i}\mathcal{K}_{\forall\theta,\xi}^{c}~|~i=1,..,n_{s}\}\). Therefore, for each candidate agent position \(x_{i}^{\text{pos}},~i=1,..,n_{s}\), we seek to find the visible points on the boundary. Thus, a point \(p\) is visible at some time-step \(t\in\mathcal{T}\), for some combination of sensor input controls \(m\in\mathcal{M}\) and agent position \(x_{t}^{\text{pos}}\), i.e., \(p\in\mathcal{S}^{\prime}_{t}(m,x_{t}^{\text{pos}})\), when: \[\exists\ m,t:\ b^{\mathcal{S}_{t}}_{p,m,t}\wedge\left[\bigvee_{c=1}^{|\mathcal{G}|}\left(b^{x_{t}^{\text{pos}}}_{c,t}\wedge b^{\mathcal{K}}_{c,p}\right)\right]=1. \tag{19}\] Specifically, a point \(p\) is visible when both parts of the conjunction in Eqn.
(19) are true, i.e., a) \(p\) is included inside the agent's sensor FoV, which is determined by the agent position \(x_{t}^{\text{pos}}\) and sensor control inputs \(\theta\) and \(\xi\), and encoded by the binary variable \(b^{\mathcal{S}_{t}}_{p,m,t}\), and b) the agent with position \(x_{t}^{\text{pos}}\) resides inside the cell \(c\in\mathcal{G}\) (encoded by \(b^{x_{t}^{\text{pos}}}_{c,t}\)) at time \(t\), from which originates a camera-ray \(K\in\mathcal{K}_{\forall\theta,\xi}^{c}\) which first intercepts the line-segment \(L_{p}\) which contains the point \(p\) (i.e., encoded in the learned variable \(b^{\mathcal{K}}_{c,p}\)). We should note here that the individual terms of the logical disjunction inside the square brackets are mutually exclusive, as the agent cannot occupy two distinct cells at the same time. The ray-casting information encoded in \(b^{\mathcal{K}}_{c,p}\) has been learned offline for each cell \(c\), and thus during the optimization phase we need to find the agent's pose which results in a FoV configuration which observes point \(p\) (i.e., \(b^{\mathcal{S}_{t}}_{p,m,t}\)) and also determine whether the agent resides inside a cell \(c\in\mathcal{G}\) (indicated by \(b^{x_{t}^{\text{pos}}}_{c,t}\)) from which point \(p\) is visible (as indicated by \(b^{\mathcal{K}}_{c,p}\)). In the next section, we will show how the above constraints are embedded in the proposed coverage controller. Fig. 2: In the scenario above we are interested in finding the optimal mobility and sensor control inputs for a single UAV agent, which enable full coverage of the boundary \(\partial\mathcal{C}\) of a given region of interest \(\mathcal{C}\). \(\Delta\mathcal{C}\) denotes the piece-wise linear approximation of the boundary which is composed of a finite number of line segments. The points on the boundary \(p_{i}\) and \(p_{j}\) are connected with the line segment \(L_{i,j}\in\Delta\mathcal{C}\), where \(\alpha\) denotes its outward normal vector. Full coverage is achieved when every point \(p\in\mathcal{P}\) is included inside the agent's sensor FoV. ### _Coverage Controller_ The complete formulation of the proposed integrated guidance and sensor control coverage planning approach is shown in problem (P2). Specifically, in this section we will show how the high-level problem shown in (P1) is converted into a mixed integer quadratic program (MIQP), which can be solved exactly using readily available optimization tools [48]. To summarize, our goal in (P2) is to jointly find the mobility and sensor control inputs inside a planning horizon which optimize a mission-specific objective function, i.e., \(\mathcal{J}_{\text{coverage}}\), subject to visibility and coverage constraints. #### V-C1 Constraints Guidance control is achieved by appropriately selecting the agent's mobility control inputs \(\mathbf{U}_{T}=\{u_{t}:t\in[0,..,T-1]\}\), governed by its kinematic constraints, i.e., Eqn. (20b). On the other hand, sensor control is achieved via the constraints in Eqn. (20c)-(20d). More specifically, in Eqn. (20c) we construct the FoV configurations for all possible pairwise combinations \(\{m=(\theta,\xi)\}\in\mathcal{M}\) of the sensor inputs, i.e., rotational angle (\(\theta\)) and zoom-level (\(\xi\)). Specifically, the continuous variable \(\mathcal{V}^{\prime}_{m}\) represents a 2 by 3 matrix containing the sensor's FoV vertices for the \(m_{\text{th}}\) FoV configuration.
In essence, for each zoom-level \(\xi\in\bar{\Xi}\) (which determines the FoV parameters \(\phi\) and \(h\)), the FoV is rotated at the origin for each admissible angle \(\theta\in\bar{\Theta}\), thus creating a total of \(|\mathcal{M}|=|\bar{\Theta}|\times|\bar{\Xi}|\) FoV configurations indexed by \(m\) as shown. Subsequently, all the FoV configurations are translated to the agent's position \(x_{t}^{\text{pos}}\) at time \(t\), as shown in Eqn. (20d). Therefore, the UAV's pose at each time-step inside the planning horizon is completely specified by the constraints in Eqn. (20b)-(20d).

**Problem (P2):** Coverage Controller \[\operatorname*{arg\,min}_{\mathbf{U}_{T},\boldsymbol{\Theta}_{T},\boldsymbol{\Xi}_{T}}\ \mathcal{J}_{\text{coverage}}(\mathbf{X}_{T},\mathbf{U}_{T},\boldsymbol{\Theta}_{T},\boldsymbol{\Xi}_{T}) \tag{20a}\] **subject to:** \[t=[1,..,T] \tag{20b}\] \[\mathcal{V}^{\prime}_{m}=R(\theta)\mathcal{V}_{o}(\xi)\qquad\forall\{(\theta,\xi)\}\in\mathcal{M} \tag{20c}\] \[\mathcal{V}_{m,t}=\mathcal{V}^{\prime}_{m}+x_{t}^{\text{pos}}\qquad\forall m \tag{20d}\] \[A^{\mathcal{V}}_{n,m,t},B^{\mathcal{V}}_{n,m,t}=\mathcal{L}(\mathcal{V}_{m,t})\qquad\forall m,\ n=[1,..,3] \tag{20e}\] \[A^{\mathcal{V}}_{n,m,t}\times p+b_{n,p,m,t}(M-B^{\mathcal{V}}_{n,m,t})\leq M\qquad\forall n,p,m,t \tag{20f}\] \[3b^{\mathcal{S}_{t}}_{p,m,t}-\sum_{n=1}^{3}b_{n,p,m,t}\leq 0\qquad\forall p,m,t \tag{20g}\] \[A^{\mathcal{G}}_{k,c},B^{\mathcal{G}}_{k,c}=\mathcal{L}(\mathcal{G}_{c})\qquad\forall c,\ k=[1,..,4] \tag{20h}\] \[A^{\mathcal{G}}_{k,c}\times x_{t}^{\text{pos}}+\tilde{b}_{k,c,t}(M-B^{\mathcal{G}}_{k,c})\leq M\qquad\forall k,c,t \tag{20i}\] \[4b^{x_{t}^{\text{pos}}}_{c,t}-\sum_{k=1}^{4}\tilde{b}_{k,c,t}\leq 0\qquad\forall c,t \tag{20j}\] \[\sum_{m=1}^{|\mathcal{M}|}\mathcal{F}^{\text{sel}}_{m,t}=1\qquad\forall t \tag{20k}\] \[b^{\mathcal{S}^{\prime}_{t}}_{p,m,t}=\mathcal{F}^{\text{sel}}_{m,t}\wedge b^{\mathcal{S}_{t}}_{p,m,t}\wedge\left[\bigvee_{c=1}^{|\mathcal{G}|}\left(b^{x_{t}^{\text{pos}}}_{c,t}\wedge b^{\mathcal{K}}_{c,p}\right)\right]\qquad\forall p,m,t \tag{20l}\] \[\sum_{t=1}^{T}\sum_{m=1}^{|\mathcal{M}|}b^{\mathcal{S}^{\prime}_{t}}_{p,m,t}\geq 1\qquad\forall p \tag{20m}\] \[x_{t}^{\text{pos}}\notin\psi\qquad\forall\psi\in\Psi \tag{20n}\] \[x_{0},x_{t}\in\mathcal{X},\ u_{t}\in\mathcal{U},\ \theta\in\bar{\Theta},\ \xi\in\bar{\Xi}\] \[m\in[1,..,|\mathcal{M}|],\ p\in[1,..,|\mathcal{P}|],\ c\in[1,..,|\mathcal{G}|]\] \[\mathcal{F}^{\text{sel}}_{m,t},\ b_{n,p,m,t}\in\{0,1\}\] \[b^{\mathcal{S}_{t}}_{p,m,t},\ \tilde{b}_{k,c,t},\ b^{x_{t}^{\text{pos}}}_{c,t},\ b^{\mathcal{S}^{\prime}_{t}}_{p,m,t},\ b^{\mathcal{K}}_{c,p}\in\{0,1\}\]

Fig. 3: The figure illustrates how the proposed approach tackles the visibility problem by simulating the physical behavior of camera rays. (a) In the coverage problem investigated in this work, every point \(p\in\mathcal{P}\) on the boundary approximation \(\Delta\mathcal{C}\) of the region of interest must be covered by the agent's sensor FoV, shown with the pink triangle. (b) The sensor's FoV is composed of a finite number of camera-rays \(K_{1},..,K_{|\mathcal{K}|}\in\mathcal{K}_{\theta_{t},x_{t}^{\text{pos}},\xi_{t}}\), parameterized by the agent's pose as shown above. A point \(p\) is visible through the agent's camera when it resides inside the camera FoV and there exists some ray \(K\) which first intersects the line segment \(L_{p}\) which contains the point \(p\). (c) We learn a set of visibility constraints by decomposing the surveillance area into a number of cells \(c_{1},..,c_{|\mathcal{G}|}\), thus creating a rectangular grid \(\mathcal{G}\). For each cell we learn the visible parts of the scene by checking the intersection of the boundary's line segments with the camera-rays for all possible combinations of camera-ray configurations \(\mathcal{K}_{\theta_{t},x_{t}^{\text{pos}},\xi_{t}}\).

The constraint in Eqn. (20e) uses the function \(\mathcal{L}(.)\), which takes as input the vertices of the \(m_{\text{th}}\) FoV configuration at some time \(t\), i.e., \(\mathcal{V}_{m,t}\), and returns a set of linear constraints of the form: \[A^{\mathcal{V}}_{n,m,t}\times x\leq B^{\mathcal{V}}_{n,m,t}, \tag{21}\] where \(n=[1,..,3]\), and thus \(A^{\mathcal{V}}_{m,t}\) is a 3 by 2 matrix, \(B^{\mathcal{V}}_{m,t}\) is a 3 by 1 column vector, and \(x\) is a column vector representing an arbitrary point in \(\mathbb{R}^{2}\). Given two vertices of the triangular FoV, the function \(\mathcal{L}(.)\) works by finding the equation of the line which passes through these two vertices. In total, 3 line equations are constructed which fully specify the convex hull of the triangular FoV, i.e., a point \(x\in\mathbb{R}^{2}\) belongs to the convex hull of the triangular FoV iff \(A^{\mathcal{V}}_{n,m,t}\times x\leq B^{\mathcal{V}}_{n,m,t},\forall n=[1,..,3]\). More specifically, \(A^{\mathcal{V}}_{m,t}\) and \(B^{\mathcal{V}}_{m,t}\) contain the coefficients of the lines which form the triangular FoV, where in particular the matrix \(A^{\mathcal{V}}_{n,m,t}\) contains the outward normal to the \(n_{\text{th}}\) line. We can write the line equation which passes from two FoV vertices, i.e., \((v^{1}_{x},v^{1}_{y})\) and \((v^{2}_{x},v^{2}_{y})\), as \(ax+by=c\), where the coefficients \(a=v^{1}_{y}-v^{2}_{y}\) and \(b=v^{2}_{x}-v^{1}_{x}\) define the normal on the line, i.e., \(\vec{n}=(a,b)\), and \(c=v^{2}_{x}v^{1}_{y}-v^{1}_{x}v^{2}_{y}\); thus \(A^{\mathcal{V}}_{1,m,t}=[a,b]\) and \(B^{\mathcal{V}}_{1,m,t}=c\). To summarize, we can determine whether an arbitrary point \(x\in\mathbb{R}^{2}\) is included inside the sensor's FoV by checking if the system of linear inequalities in Eqn. (21) is satisfied. The constraint in Eqn. (21) is implemented as shown in (P2) with the constraints shown in Eqn. (20f)-(20g), i.e.,: \[A^{\mathcal{V}}_{n,m,t}\times p+b_{n,p,m,t}(M-B^{\mathcal{V}}_{n,m,t})\leq M,\ \forall n,p,m,t,\] \[3b^{\mathcal{S}_{t}}_{p,m,t}-\sum_{n=1}^{3}b_{n,p,m,t}\leq 0,\ \forall p,m,t.\] In essence, these constraints allow us to check whether some point \(p\in\mathcal{P}\) is covered by the agent's sensor, i.e., \(p\in\mathcal{S}_{t}(m,x^{\text{pos}}_{t})\), when the agent is at position \(x^{\text{pos}}_{t}\) at time-step \(t\) and the sensor's FoV is at the \(m_{\text{th}}\) configuration. As we discussed earlier, the matrices \(A^{\mathcal{V}}_{m,t}\) and \(B^{\mathcal{V}}_{m,t}\) contain the coefficients of the sensor's FoV for every possible configuration \(m\in\mathcal{M}\) and time-step \(t=[1,..,T]\). With this in mind, we use the binary variable \(b_{n,p,m,t}\) to decide whether some point \(p\in\mathcal{P}\) resides inside the negative half-plane which is created by the \(n_{\text{th}}\) line of the \(m_{\text{th}}\) FoV configuration at time \(t\) (a small numeric sketch of the \(\mathcal{L}(.)\) construction is given below).
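The construction performed by \(\mathcal{L}(.)\) can be sketched as follows (an illustrative Python/NumPy snippet, not taken from the paper): given the 2 by 3 vertex matrix \(\mathcal{V}_{m,t}\), it forms one half-plane per edge using the coefficients \(a,b,c\) defined above; the sign normalization with the opposite vertex is an added assumption, used here so that the interior of the triangle always satisfies \(A^{\mathcal{V}}_{n,m,t}\times x\leq B^{\mathcal{V}}_{n,m,t}\) regardless of the vertex ordering.

```python
import numpy as np

def halfplanes_from_triangle(V):
    """Sketch of the L(.) mapping in Eqn. (20e)/(21): V is the 2x3 matrix of
    FoV vertices (one vertex per column); returns A (3x2) and B (3,) such that
    a point x lies inside the triangular FoV iff A @ x <= B component-wise."""
    A, B = np.zeros((3, 2)), np.zeros(3)
    for n in range(3):
        v1, v2 = V[:, n], V[:, (n + 1) % 3]      # edge (v1, v2)
        v3 = V[:, (n + 2) % 3]                   # opposite vertex (interior side)
        a, b = v1[1] - v2[1], v2[0] - v1[0]      # normal coefficients, as in the text
        c = v2[0] * v1[1] - v1[0] * v2[1]
        if a * v3[0] + b * v3[1] > c:            # orient the normal outwards so the
            a, b, c = -a, -b, -c                 # interior satisfies a*x + b*y <= c
        A[n], B[n] = (a, b), c
    return A, B

# usage sketch: A, B = halfplanes_from_triangle(V_mt); inside = np.all(A @ p <= B)
```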
When the point \(p\) resides inside the negative half-plane defined by the \(n_{\text{th}}\) line, \(b_{n,p,m,t}\) is activated and the inequality in Eqn. (20f) is satisfied, i.e., \(A^{\mathcal{V}}_{n,m,t}\times p\leq B^{\mathcal{V}}_{n,m,t}\). On the other hand, when \(A^{\mathcal{V}}_{n,m,t}\times p>B^{\mathcal{V}}_{n,m,t}\), the constraint in Eqn. (20f) is satisfied by setting \(b_{n,p,m,t}=0\) and using the large positive constant \(M\). Subsequently, the constraint in Eqn. (20g) uses the binary variable \(b^{\mathcal{S}_{t}}_{p,m,t}\) to determine whether the point \(p\) resides at time \(t\) inside the \(m_{\text{th}}\) configuration of the FoV. Thus \(b^{\mathcal{S}_{t}}_{p,m,t}\) is activated only when \(\sum_{n=1}^{3}b_{n,p,m,t}=3\), which signifies that the point \(p\) is covered by the sensor's FoV. Similarly, the next three constraints shown in Eqn. (20h)-(20j) (also shown below) use the same principles discussed above to determine whether the agent with position \(x^{\text{pos}}_{t}\) resides inside cell \(c\in\mathcal{G}\) at time \(t\): \[A^{\mathcal{G}}_{k,c},B^{\mathcal{G}}_{k,c}=\mathcal{L}(\mathcal{G}_{c}),\ \forall k=[1,..,4],c,\] \[A^{\mathcal{G}}_{k,c}\times x^{\text{pos}}_{t}+\tilde{b}_{k,c,t}(M-B^{\mathcal{G}}_{k,c})\leq M,\ \forall k,c,t,\] \[4b^{x^{\text{pos}}_{t}}_{c,t}-\sum_{k=1}^{4}\tilde{b}_{k,c,t}\leq 0,\ \forall c,t.\] Specifically, the constraint in Eqn. (20h) uses the function \(\mathcal{L}(\mathcal{G}_{c})\) on the grid cells and returns in the matrices \(A^{\mathcal{G}}_{k,c}\) and \(B^{\mathcal{G}}_{k,c}\) the coefficients of the linear inequalities which define the convex hull of every cell \(c\in\mathcal{G}\) in the grid. A point \(x\in\mathbb{R}^{2}\) resides inside a rectangular cell \(c\) iff \(A^{\mathcal{G}}_{c}\times x\leq B^{\mathcal{G}}_{c}\), where \(A^{\mathcal{G}}_{c}\) is a 4 by 2 matrix and \(B^{\mathcal{G}}_{c}\) is a 4 by 1 column vector. Therefore, the constraint in Eqn. (20i) uses the binary variable \(\tilde{b}_{k,c,t}\) to determine whether the agent's position satisfies the \(k_{\text{th}}\) inequality, i.e., \(A^{\mathcal{G}}_{k,c}\times x^{\text{pos}}_{t}\leq B^{\mathcal{G}}_{k,c},\ \forall k=[1,..,4]\). Subsequently, the binary variable \(b^{x^{\text{pos}}_{t}}_{c,t}\) in Eqn. (20j) is activated when \(x^{\text{pos}}_{t}\) resides inside cell \(c\) at time \(t\). Next, we make use of the constraint in Eqn. (20k), i.e., \[\sum_{m=1}^{|\mathcal{M}|}\mathcal{F}^{\text{sel}}_{m,t}=1,\ \forall t,\] to account for the fact that at any point in time \(t\), only one FoV configuration is active. In other words, we would like to prevent the scenario where more than one set of sensor input controls is applied and executed at some particular time-step \(t\). To do this we define the binary variable \(\mathcal{F}^{\text{sel}}\), consisting of \(|\mathcal{M}|\) rows and \(T\) columns, such that \(\mathcal{F}^{\text{sel}}_{m,t}\in\{0,1\},\forall m\in\mathcal{M},t\in\mathcal{T}\), and we require that at each time-step \(t\) only one FoV configuration is active by enforcing the sum of each column to be equal to one, as shown in Eqn. (20k).
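For illustration, the big-M pattern of Eqn. (20f)-(20g) together with the selector constraint of Eqn. (20k) could be written with gurobipy (the solver family reported in Sec. VI) roughly as follows. This is a simplified sketch rather than the paper's implementation: the half-plane coefficients `A_V`, `B_V` are treated here as precomputed numbers, whereas in (P2) the offsets \(B^{\mathcal{V}}_{n,m,t}\) depend on the decision variable \(x^{\text{pos}}_{t}\) through Eqn. (20d)-(20e); all variable names are illustrative.

```python
import gurobipy as gp
from gurobipy import GRB

def add_fov_constraints(model, A_V, B_V, points, T, M_big=1e4):
    """Big-M encoding of Eqn. (20f)-(20g) and the selector of Eqn. (20k).

    A_V[m][n] = (a, b) and B_V[m][n] = c give the n-th half-plane of the
    m-th FoV configuration (fixed numbers here, for readability)."""
    n_m, n_p = len(A_V), len(points)
    b = model.addVars(3, n_p, n_m, T, vtype=GRB.BINARY, name="b")      # b_{n,p,m,t}
    b_S = model.addVars(n_p, n_m, T, vtype=GRB.BINARY, name="b_S")     # b^{S_t}_{p,m,t}
    F_sel = model.addVars(n_m, T, vtype=GRB.BINARY, name="F_sel")      # F^{sel}_{m,t}
    for p_idx, (px, py) in enumerate(points):
        for m in range(n_m):
            for t in range(T):
                for n in range(3):
                    a, bb = A_V[m][n]
                    # Eqn. (20f): an active b_{n,p,m,t} forces a*px + bb*py <= c
                    model.addConstr(a * px + bb * py
                                    + b[n, p_idx, m, t] * (M_big - B_V[m][n]) <= M_big)
                # Eqn. (20g): b_S can be 1 only if all three half-planes hold
                model.addConstr(3 * b_S[p_idx, m, t]
                                <= gp.quicksum(b[n, p_idx, m, t] for n in range(3)))
    # Eqn. (20k): exactly one FoV configuration is active at each time-step
    model.addConstrs(gp.quicksum(F_sel[m, t] for m in range(n_m)) == 1 for t in range(T))
    return b_S, F_sel
```

The cell-membership constraints of Eqn. (20h)-(20j) follow the same pattern with four half-planes per rectangular cell instead of three.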
We can now determine whether some point \(p\) belongs to the visible FoV as: \[b^{\mathcal{S}^{\prime}_{t}}_{p,m,t}=\mathcal{F}^{\text{sel}}_{m,t}\wedge b^{\mathcal{S}_{t}}_{p,m,t}\wedge\left[\bigvee_{c=1}^{|\mathcal{G}|}\left(b^{x^{\text{pos}}_{t}}_{c,t}\wedge b^{\mathcal{K}}_{c,p}\right)\right],\ \forall p,m,t,\] where the binary variable \(b^{\mathcal{S}^{\prime}_{t}}_{p,m,t}\) is activated when the point \(p\in\mathcal{P}\) is visible at time \(t\) and, specifically, resides inside the \(m_{\text{th}}\) FoV configuration, i.e., \(p\in\mathcal{S}^{\prime}_{t}(m,x^{\text{pos}}_{t})\). As shown in the conjunction above, the binary variable \(b^{\mathcal{S}_{t}}_{p,m,t}\) captures the sensor's pose, which is determined by the agent's position \(x^{\text{pos}}_{t}\) at time \(t\) and the sensor orientation given by the \(m_{\text{th}}\) FoV configuration, and determines whether the point \(p\) resides inside the sensor's FoV, i.e., the constraints in Eqn. (20f)-(20g). Then the binary variable \(b^{x^{\text{pos}}_{t}}_{c,t}\) checks if the agent at time \(t\) resides inside a particular cell \(c\), i.e., Eqn. (20h)-(20j), and together with the learned variable \(b^{\mathcal{K}}_{c,p}\) (as discussed in Sec. V-B) determines whether the point \(p\) is visible given that the agent is at cell \(c\). To summarize, a point \(p\) belongs to the visible FoV when there exists some cell \(c\) from which the point \(p\) is visible, at time-step \(t\) the agent's position \(x^{\text{pos}}_{t}\) resides inside that cell \(c\), and there exists a FoV configuration \(m\) such that point \(p\) is covered by the agent's sensor at time \(t\). Lastly, in order to make sure that only one FoV configuration is active at each time-step (i.e., one set of sensor controls is applied), we use the FoV selector \(\mathcal{F}^{\text{sel}}_{m,t}\) as shown in Eqn. (20k). Finally, we can ensure that during the planning horizon every point \(p\in\mathcal{P}\) will be covered at least once by the agent's sensor via the constraint in Eqn. (20m): \[\sum_{t=1}^{T}\sum_{m=1}^{|\mathcal{M}|}b^{\mathcal{S}^{\prime}_{t}}_{p,m,t}\geq 1,\ \forall p,\] where we require that there exists at least one FoV configuration (i.e., one set of sensor input controls) \(m\in\mathcal{M}\) at some time-step \(t\in\mathcal{T}\) such that the binary variable \(b^{\mathcal{S}^{\prime}_{t}}_{p,m,t}\) is activated, i.e., \(b^{\mathcal{S}^{\prime}_{t}}_{p,m,t}=1\), for every point \(p\in\mathcal{P}\). Therefore, determining the agent's mobility and sensor control inputs such that all points are included inside the visible FoV at least once within the planning horizon results in full coverage of the region of interest. The last constraint, shown in Eqn. (20n), implements the obstacle avoidance constraints as discussed in Sec. V-A. To summarize, we have derived a set of constraints, i.e., Eqn. (20b)-(20n), which jointly account for the agent's kinematic and sensing model, integrate visibility constraints into the coverage planning problem, and guarantee full coverage inside the planning horizon, given that a feasible solution exists. We should note that the formulation of the coverage controller shown in problem (P2) can be easily extended to tackle the problem in 3D environments; however, this has been left for future work. Next, we discuss in detail the design of the objective function, which can be used in order to optimize a set of mission-related performance criteria; before that, a short sketch of how the logical constraints in Eqn. (20l)-(20m) can be linearized in practice is given below.
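The sketch below (illustrative gurobipy code, not the exact implementation used in this work) encodes the conjunction in Eqn. (20l) with the usual binary AND inequalities, and exploits the fact that the agent occupies exactly one cell per time-step, so that the disjunction over cells collapses to a linear sum once the visibility table \(b^{\mathcal{K}}_{c,p}\) is available as a constant 0/1 array learned offline.

```python
import gurobipy as gp
from gurobipy import GRB

def add_visibility_and_coverage(model, b_S, F_sel, b_x, b_K, n_p, n_m, n_c, T):
    """Linearization sketch of Eqn. (20l)-(20m).

    b_S, F_sel, b_x are binary Gurobi variables from the previous constraints;
    b_K[c][p] is the constant 0/1 visibility table learned offline."""
    b_vis = model.addVars(n_p, n_m, T, vtype=GRB.BINARY, name="b_vis")  # b^{S'_t}_{p,m,t}
    for p in range(n_p):
        for m in range(n_m):
            for t in range(T):
                # Since sum_c b_x[c,t] = 1, the disjunction over cells reduces
                # to a {0,1}-valued linear expression.
                seen = gp.quicksum(b_K[c][p] * b_x[c, t] for c in range(n_c))
                # AND of three binary terms: b_vis <= each term, b_vis >= sum - 2.
                model.addConstr(b_vis[p, m, t] <= F_sel[m, t])
                model.addConstr(b_vis[p, m, t] <= b_S[p, m, t])
                model.addConstr(b_vis[p, m, t] <= seen)
                model.addConstr(b_vis[p, m, t]
                                >= F_sel[m, t] + b_S[p, m, t] + seen - 2)
        # Eqn. (20m): every boundary point is covered at least once.
        model.addConstr(gp.quicksum(b_vis[p, m, t]
                                    for m in range(n_m) for t in range(T)) >= 1)
    return b_vis
```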
#### V-C2 Objectives The problem of integrated UAV guidance and sensor control which is studied in this work is a core component of many applications and tasks, including surveillance, emergency response, and search-and-rescue missions. Take for instance a UAV-based search-and-rescue mission where the objective is to search an area of interest and locate, as quickly as possible, people in need. In such scenarios, the mission's completion time is of the highest importance for saving lives. In other cases the UAV's efficient battery utilization is imperative for the success of the mission. Motivated by the objectives and requirements discussed above, we design a multi-objective cost function \(\mathcal{J}_{\text{coverage}}\) to allow for the characterization of several mission-related optimality criteria, and trade-offs amongst these. In this work, \(\mathcal{J}_{\text{coverage}}\) is composed of a set of sub-objectives, which sometimes might be competing. More specifically, we define the overall coverage objective \(\mathcal{J}_{\text{coverage}}\) as: \[\mathcal{J}_{\text{coverage}}=\left(w_{1}J_{1}+w_{2}J_{2}+\ldots+w_{n}J_{n}\right), \tag{22}\] where \(J_{i}\) represents the \(i_{\text{th}}\) sub-objective and \(w_{i}\) is the tuning weight associated with the \(i_{\text{th}}\) sub-objective. Therefore, the weights are used to emphasize or deemphasize the importance of each sub-objective according to the mission goals. Next we design several possible sub-objectives which can be used to drive an efficient coverage planning mission. **Mission completion time (\(J_{1}\)):** As discussed earlier, one of the most important objectives in a coverage planning scenario is the minimization of the mission's completion time. In other words, we are interested in finding the optimal UAV control inputs (i.e., mobility and sensor controls) which, when executed, allow the agent to cover all points of interest \(p\in\mathcal{P}\) as quickly as possible, thus minimizing the time needed to conduct a full coverage of the region of interest. This can be defined as follows: \[J_{1}=\sum_{p=1}^{|\mathcal{P}|}\sum_{m=1}^{|\mathcal{M}|}\sum_{t=1}^{T}\left(b_{p,m,t}^{\mathcal{S}_{t}^{\prime}}\times\frac{t}{T}\right). \tag{23}\] In essence, by minimizing \(J_{1}\) we are minimizing the product of the binary variable \(b_{p,m,t}^{\mathcal{S}_{t}^{\prime}}\) with the factor \((t/T)\), over the planning horizon of length \(T\), for all points \(p\in\mathcal{P}\) and FoV configurations \(m\in\mathcal{M}\). Effectively, \((t/T)\) in Eqn. (23) acts as a weight to \(b_{p,m,t}^{\mathcal{S}_{t}^{\prime}}\) which increases over time. This drives the optimizer to find the optimal mobility and sensor control inputs which allow the agent to cover all points \(p\in\mathcal{P}\) as quickly as possible, or equivalently, \(b_{p,m,t}^{\mathcal{S}_{t}^{\prime}}\) is activated for each point at the earliest possible time-step. Finally, we should note here that the agent's control inputs are directly linked with \(b_{p,m,t}^{\mathcal{S}_{t}^{\prime}}\), since the agent's pose is jointly determined by its mobility and sensor controls, i.e., Eqn. (20c)-(20d), and also \(b_{p,m,t}^{\mathcal{S}_{t}^{\prime}}\) is only activated when point \(p\) is visible. **Energy Efficiency (\(J_{2}\)):** Energy-aware operation is another essential objective for various applications.
In essence, we are interested in prolonging the UAV's operation time (i.e., minimizing the battery drain) by optimizing the UAV's mobility control inputs (i.e., the amount of force applied), thus generating energy-efficient coverage trajectories. Although the proposed coverage planning formulation does not directly use a battery model for the UAV, it is assumed that the UAV mobility control inputs are directly linked with the battery usage. Therefore, energy-efficient coverage planning can be achieved by appropriately selecting the UAV's mobility control inputs. Specifically, it is assumed that the generation of smooth UAV trajectories with reduced abrupt changes in direction and speed can lead to improved battery usage; thus we define the energy-aware coverage planning sub-objective as: \[J_{2}=\sum_{t=1}^{T-1}||u_{t}-u_{t-1}||_{2}^{2}+\sum_{t=0}^{T-1}|u_{t}|, \tag{24}\] where we minimize a) the sum of deviations between consecutive control inputs and b) the cumulative magnitude of the absolute value of individual controls, thus leading to energy-optimized coverage planning. **Sensor Control Effort (\(J_{3}\)):** The last objective aims to minimize the sensor deterioration due to excessive and/or improper usage, i.e., by reducing the utilization of the gimbaled sensor during the mission. This allows us to maintain the sensor's healthy status and prolong its lifespan. For this reason, we define as sensor control effort the deviation between successive FoV configurations, and thus \(J_{3}\) is defined by: \[J_{3}=\sum_{t=1}^{T-1}\sum_{m=1}^{|\mathcal{M}|}||\mathcal{F}_{m,t+1}^{\text{sel}}-\mathcal{F}_{m,t}^{\text{sel}}||_{2}^{2}, \tag{25}\] which favors the generation of coverage trajectories which achieve full coverage with minimum gimbal utilization. To summarize, in this section we have described a set (not exhaustive) of sub-objectives which can be used to compose the overall multi-objective cost function \(\mathcal{J}_{\text{coverage}}\) for the coverage path planning problem we examine in this work. These sub-objectives can be prioritized depending on the problem requirements, while new ones can be added according to the mission specifications. We should mention here that the objective \(J_{3}\) described above can also be incorporated into the objective \(J_{2}\), i.e., energy efficiency, to account for the overall energy expenditure (i.e., energy expenditure from motion control and from gimbal control) of the system. ## VI Evaluation ### _Simulation Setup_ In order to evaluate the proposed integrated guidance and gimbal control coverage approach we have conducted a thorough simulation analysis. More specifically, the evaluation is divided into three parts. In the first part we investigate the effect of the visibility constraints on the coverage planning behavior. In the second part of the evaluation we showcase the proposed approach for various mission-related optimality criteria, and finally in the third part we analyze the generated coverage trajectories for various configurations of the input parameters. The simulation setup used for the evaluation of the proposed approach is as follows: The agent kinematics are expressed by Eqn. (1) with \(\delta t=1\)s, agent mass \(m=3.35\)kg and drag coefficient \(\eta=0.2\). For demonstration purposes, the control input (i.e., input force) \(u_{t}=[f_{t}(x),f_{t}(y)]\) is bounded in each dimension according to \(|f_{t}(x|y)|\leq 3\)N, and the agent velocity is bounded according to \(|\nu_{t}(x|y)|\leq 2\)m/s.
The agent's FoV angle is set to \(\phi=30\)deg, and the sensing range to \(h=7\)m. The camera zoom-levels are set to \(\bar{\Xi}=\{1,2\}\); thus the camera characteristics for zoom-level \(\xi=1\) and \(\xi=2\) are given by \((\phi=30,h=7)\) and \((\phi=15,h=14)\) respectively. In total we consider 4 camera rotation angles, i.e., \(\bar{\Theta}=\{-85,-28,28,85\}\), which are used to rotate the camera FoV according to Eqn. (7), leading to a total of 8 possible FoV configurations, i.e., \(|\mathcal{M}|=8\). We have used the ray-tracing procedure with \(|\mathcal{K}|=5\) camera-rays in a surveillance region \(\mathcal{W}\) that has a total area of \(60\times 20\)m\({}^{2}\). The region/object of interest \(\mathcal{C}\) is represented by a bell-shaped curve (as illustrated in Fig. 2), given by \(f(x)=a\times\text{exp}\left(\frac{-(x-b)^{2}}{2c^{2}}\right)\) with \(a,b\) and \(c\) set to 10, 40 and 2 respectively. The region of interest is assumed to be non-traversable, and thus we are interested in generating coverage trajectories to cover a total of 11 points \(\mathcal{P}=\{p_{1},..,p_{11}\}\) sampled from the region's boundary \(\partial\mathcal{C}\). Finally, we should mention that the visibility constraints have been learned on a discretized representation \(\mathcal{G}\) of the surveillance area, where \(\mathcal{G}\) contains 16 rectangular cells (as illustrated in Fig. 3) of size \(10\)m \(\times 5\)m. To summarize, the visibility constraints have been learned according to Sec. V-B, with \(|\mathcal{M}|=8,|\mathcal{P}|=11,|\mathcal{G}|=16,n_{s}=15\) and \(|\Delta\mathcal{C}|=11\). The results have been obtained with the Gurobi v9 solver, running on a 2.5GHz laptop computer. ### _Results_ #### VI-B1 **Visibility Constraints** With the first experiment, shown in Fig. 4, we aim to investigate the impact of the visibility constraints on the trajectory generation process, and to gain insights into the coverage planning behavior of the proposed approach. Specifically, Fig. 4(a) shows the coverage trajectory, agent velocity, and mobility control inputs within a planning horizon of \(T=10\) time-steps when the visibility constraints are enabled, whereas Fig. 4(b) shows the exact same scenario with the visibility constraints disabled. The region of interest is shaded in pink (i.e., the bell-shaped curve), the agent's trajectory is marked with \(-\zeta-\), the agent's start and stop positions are marked with \(\star\) and \(\times\) respectively, and the points on the boundary to be covered are marked with \(\bullet\). The figure also shows the FoV configuration at each time-step, indicated by the isosceles triangles, where the black solid lines and the gray dashed lines correspond to the first (\(\xi_{1}\)) and second (\(\xi_{2}\)) zoom-levels respectively. Finally, the agent's trajectory and the points to be covered are color-coded according to the time-step at which they are observed, as shown in the figure legend. Therefore, according to Fig. 4(a), point \(p_{1}\) (colored dark blue) is covered at time-step \(t=2\), point \(p_{2}\) (colored light blue) is covered at \(t=3\), point \(p_{3}\) (colored light green) is included inside the agent's FoV at \(t=4\), point \(p_{4}\) (colored black) is the first point to be viewed by the agent at time-step \(t=1\), and so on and so forth.
We should mention that for this experiment we have set \(\mathcal{J}_{\text{coverage}}=1\) (i.e., we are minimizing a constant), and thus the sole goal of the optimization in this experiment is to satisfy the coverage constraints (i.e., cover all points). As we can observe from Fig. 4(a), the agent starts from the left side of the bell-shaped curve, and appropriately selects its mobility and sensor control inputs which achieve full coverage. More importantly, we can observe that although the FoV can extend all the way through the object of interest (e.g., at \(t=1\) points \(p_{4}\) and \(p_{8}\) are inside the FoV), the use of the visibility constraints, which simulate the physical behavior of camera-rays, allows the identification of occlusions (e.g., at \(t=1\) point \(p_{8}\) is occluded, and becomes visible at \(t=7\) as shown). Therefore, the agent can identify at each time-step which points are visible through its camera and plan its coverage trajectory as needed. For this reason, in this test the agent goes over the bell-shaped curve, towards the other side of the curve, in order to cover the occluded points, i.e., at \(t=7\) points \(p_{7}\) and \(p_{8}\) are covered, at \(t=8\) point \(p_{6}\) is covered, at \(t=9\) point \(p_{9}\) is covered, and the remaining points (\(p_{10}\) and \(p_{11}\)) are covered at time-step \(t=10\). In addition, it is shown that the obstacle avoidance constraints restrict the agent from passing through the object of interest. On the other hand, observe from Fig. 4(b) that when the visibility constraints are disabled, the agent cannot distinguish between visible and occluded parts of the scene, i.e., at \(t=1\) the points \(p_{8}\) and \(p_{9}\) (colored black) are occluded but observed; similarly, point \(p_{7}\) (colored green) is occluded at \(t=5\) but it is observed as shown in the figure. This is because the sensor's visible FoV is not modeled adequately without the use of the visibility constraints, and as a result the generated trajectory does not resemble a realistic coverage path. Fig. 4: The figure illustrates the impact of the visibility constraints on the generated coverage trajectories. (a) Visibility constraints enabled, (b) Visibility constraints disabled. The visibility constraints simulate the physical behavior of camera-rays, therefore allowing the agent to distinguish between visible and occluded points. #### VI-B2 **Coverage Objectives** The purpose of the next experiment is to investigate in more detail different coverage planning strategies by optimizing the sub-objectives discussed in Sec. V-C. More specifically, we will show how the coverage plan changes when optimizing the mission completion time (\(J_{1}\)), the energy efficiency (\(J_{2}\)), the sensor control effort (\(J_{3}\)), and a weighted combination of those. Figure 5 shows the coverage planning trajectories along with the agent position, velocity, and input force over time for the same scenario, when optimizing the individual sub-objectives \(J_{1},J_{2},\) and \(J_{3}\), within a planning horizon of length 20 time-steps. As can be seen from Fig. 5(a), when optimizing the mission completion time (\(J_{1}\)), the set of 11 points \(\mathcal{P}\) is fully covered at time-step 7. Note that the agent's trajectory and the points to be covered are color-coded according to the time-step at which the coverage occurs. In this sense the last point in Fig.
5(a), which is color-coded light green, is covered at time-step 7. The time-step at which all points are covered is also shown in the agent position plot, and marked with a black circle. We should point out here that, for visual clarity, the graphs show the FoV configurations only for the time-instances for which a point is included inside the sensor's FoV. Next, Fig. 5(b) shows the coverage trajectory for the sub-objective which minimizes the agent's energy expenditure. As is shown, in this scenario the applied input force which is used for guidance is driven to zero over time. In addition, we can observe that the agent moves in small increments (i.e., consecutive positions are close to each other) as opposed to the previous scenario, as also indicated by the velocity plot. Also, observe how the agent utilizes its sensor to achieve full coverage (which occurs at time-step 20), while optimizing for energy efficiency. In this scenario, 6 out of the 8 camera FoV configurations are used over the planning horizon, until full coverage is achieved. Finally, Fig. 5(c) shows that the minimization of the sensor control effort (\(J_{3}\)) forces the agent to complete the mission by utilizing just 1 out of the 8 possible FoV configurations. Essentially, in this scenario the camera remains fixed as shown in the figure. In this scenario full coverage was achieved at time-step 14 as shown in the graph. Also observe that in this scenario the mobility controls fluctuate significantly as opposed to the previous scenario. These three different sub-objectives can be incorporated into a multi-objective cost function, as discussed in Sec. V-C, and by adjusting the emphasis given to each sub-objective the desired coverage behavior can be obtained, as shown in the next experiment, i.e., Fig. 6. More specifically, we have run the coverage planning with two different multi-objective cost functions, i.e., \(\mathcal{J}_{a}=10J_{1}+0.5J_{2}+0.1J_{3}\) and \(\mathcal{J}_{b}=0.1J_{1}+10J_{2}+0.5J_{3}\), shown in Fig. 6(a) and Fig. 6(b) respectively. In both scenarios we aim to optimize for energy efficiency by including, and appropriately weighting, sub-objective \(J_{2}\) in the multi-objective cost function. As illustrated, the applied force is minimized and driven to zero for both scenarios. However, due to the higher emphasis given to \(J_{2}\) in the second scenario, \(\mathcal{J}_{b}\) optimizes energy savings more aggressively, as also indicated by the generated coverage trajectory. Observe that in objective \(\mathcal{J}_{a}\) greater emphasis is given to mission completion time (i.e., \(J_{1}\)), which results in faster coverage (at time-step 7). On the other hand, in \(\mathcal{J}_{b}\) full coverage is achieved at time-step 17. Fig. 5: The figure illustrates the generated coverage planning trajectories over a planning horizon of length 20 time-steps, for 3 different sub-objectives. (a) Mission completion time (\(J_{1}\)), (b) Energy efficiency (\(J_{2}\)), and (c) Sensor control effort (\(J_{3}\)). Fig. 6: The figure illustrates the coverage planning trajectories over a planning horizon of 20 time-steps, for two different multi-objective cost functions. (a) \(\mathcal{J}_{a}=10J_{1}+0.5J_{2}+0.1J_{3}\), and (b) \(\mathcal{J}_{b}=0.1J_{1}+10J_{2}+0.5J_{3}\).
Finally, we observe that placing more weight on the sub-objective \(J_{3}\) in \(\mathcal{J}_{b}\) allows the agent to minimize the gimbal rotations by maintaining the camera fixed, and at the same time optimize for energy efficiency, as shown in Fig. 6(b). Our next experiment shows the behavior of the proposed coverage planning approach for a traversable region of interest, indicated by the pink shaded rectangle shown in Fig. 7. In this scenario, without loss of generality, we assume an obstacle-free region of interest. More specifically, in Fig. 7(a) the region of interest is approximated with a total of \(|\mathcal{P}|=15\) equally spaced points (shown as \(\bullet\)), which need to be covered by the agent, initially located at \((x,y)=(5,5)\) (shown as \(\star\)). Figure 7(b) shows the agent's coverage trajectory when optimizing the coverage objective \(\mathcal{J}=0.1J_{1}+J_{2}\) inside a planning horizon of length \(T=12\) time-steps. In this scenario, the agent's FoV angle \(\phi\) and sensing range \(h\) are set to \(35\)deg and \(5\)m respectively, and the camera can be rotated in 5 ways, i.e., \(\bar{\Theta}=\{-85,-42.5,0,42.5,85\}\). In order to make the illustrations easier to read, in this scenario we do not make use of the zoom functionality, i.e., \(\bar{\Xi}=\{1\}\), which leads to a total of 5 possible FoV configurations, i.e., \(|\mathcal{M}|=5\). As shown in Fig. 7(b), the agent's mobility control inputs and camera rotations are appropriately selected and optimized according to the coverage objective \(\mathcal{J}\) to achieve full coverage of the region of interest, i.e., as shown in the figure, the points are color-coded according to the time-step at which they are observed by the agent. Next, Fig. 7(c)(d) shows the same setup for 15 points which are sampled uniformly from the region of interest. As shown in Fig. 7(d), the generated coverage plan enables the agent to cover all 15 points over the planning horizon. Our last experiment investigates how various configurations of the FoV parameters \(\phi\) and \(h\) (i.e., opening angle and sensing range) affect the coverage performance, and more specifically the mission completion time. For this experiment, we use the simulation setup discussed in the beginning of this section, with \(\bar{\Xi}=\{1\}\), and optimize \(J_{1}\). We perform 50 Monte Carlo trials, where we sample the agent position randomly within a disk of radius 5m centered at \((x,y)=(30,6)\) for each combination of the parameters \((\phi,h)\in\Phi\times H\), where \(\Phi=\{20,40,60,80,100\}\)deg and \(H=\{5,8,11,14\}\)m. In particular, Fig. 8 shows the average coverage completion time for all configurations of the parameters. As we can observe, the time needed for full coverage drops from 20sec to below 15sec when we increase the sensing range from \(h=5\)m to \(h=14\)m for the scenario where the angle opening is set to \(\phi=20\)deg. Similarly, for a fixed sensing range set at \(h=5\)m, the coverage time reduces approximately by 50\(\%\) as the angle opening increases from 20deg to 100deg. Overall, as we can observe from Fig. 8, the mission completion time improves as the FoV increases both in terms of \(\phi\) and \(h\).
Finally, observe that the extremities of the bell-shaped object of interest are 10m tall (at the very top) and 10m wide (at the base), which means that without the use of the proposed visibility constraints, and with a camera configuration of \(\phi=100\)deg and \(h=14\)m, the UAV agent could have observed all points and finished the mission in a couple of seconds, as the entire object of interest would have been included inside its sensor's footprint. However, this erroneous behavior is prevented in this work by the integration of ray-casting into the proposed coverage controller. Fig. 7: The figure illustrates the coverage plan for two different scenarios which exhibit a traversable area of interest, indicated by the pink shaded rectangle. (a)(b) The agent's coverage plan when the area of interest is approximated by \(|\mathcal{P}|=15\) equally spaced points shown as \(\bullet\), (c)(d) The agent's trajectory used to cover \(|\mathcal{P}|=15\) points uniformly sampled from the area of interest. The numbers on the trajectory indicate time-steps. Fig. 8: The figure shows the mission completion time for various parameter configurations of the FoV, i.e., \(\phi\in\{20,40,60,80,100\}\) and \(h\in\{5,8,11,14\}\). ## VII Conclusion In this work we have proposed an integrated guidance and gimbal control approach for coverage path planning. In the proposed approach the UAV's mobility and sensor control inputs are jointly optimized to achieve full coverage of a given region of interest, according to a specified set of optimality criteria including mission completion time, energy efficiency and sensor control effort. We have devised a set of visibility constraints in order to integrate ray-casting into the proposed coverage controller, thus allowing the generation of optimized coverage trajectories according to the sensor's visible field-of-view. Finally, we have demonstrated how the constrained optimal control problem tackled in this work can be formulated as a mixed integer quadratic program (MIQP) and solved using off-the-shelf tools. Extensive numerical experiments have demonstrated the effectiveness of the proposed approach. Future directions include the extension of the proposed approach to 3D environments, the evaluation of the proposed approach in real-world settings, and extensions to multiple agents. ## Acknowledgments This work is supported by the European Union's Horizon 2020 research and innovation programme under grant agreement No 739551 (KIOS CoE), from the Republic of Cyprus through the Directorate General for European Programmes, Coordination and Development, and from the Cyprus Research and Innovation Foundation under grant agreement EXCELENCE/0421/0586 (GLIMPSE). ## References * [1] J. Aanchiy, D. Batsikhan, B. Kim, W. G. Lee, and S.-G. Lee, "Time-efficient and complete coverage path planning based on flow networks for multi-robots," _International Journal of Control, Automation and Systems_, vol. 11, no. 2, pp. 369-376, 2013. * [2] J. Chen, C. Du, Y. Zhang, P. Han, and W. Wei, "A clustering-based coverage path planning method for autonomous heterogeneous UAVs," _IEEE Transactions on Intelligent Transportation Systems_, 2021.
* [42] M. Theile, H. Bayerlein, R. Nai, D. Gesbert, and M. Caccamo, "UAV coverage path planning under varying power constraints using deep reinforcement learning," in _2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. IEEE, 2020, pp. 1444-1449. * [43] G. Sanna, S. Godio, and G. Guglieri, "Neural Network Based Algorithm for Multi-UAV Coverage Path Planning," in _2021 International Conference on Unmanned Aircraft Systems (ICUAS)_. IEEE, 2021, pp. 1210-1217. * [44] P. Maini, P. Tokekar, and P. Sujit, "Visual monitoring of points of interest on a 2.5 d terrain using a uav with limited field-of-view constraint," _IEEE Transactions on Aerospace and Electronic Systems_, vol. 57, no. 6, pp. 3661-3672, 2021. * [45] P. Wang, R. Krishnamurti, and K. Gupta, "View planning problem with combined view and traveling cost," in _Proceedings 2007 IEEE International Conference on Robotics and Automation_. IEEE, 2007, pp. 711-716. * [46] E. Galceran and M. Carreras, "A survey on coverage path planning for robotics," _Robotics and Autonomous Systems_, vol. 61, no. 12, pp. 1258-1276, 2013. * [47] W. R. Scott, G. Roth, and J.-F. Rivest, "View planning for automated three-dimensional object reconstruction and inspection," _ACM Computing Surveys (CSUR)_, vol. 35, no. 1, pp. 64-96, 2003. * [48] R. Anand, D. Aggarwal, and V. Kumar, "A comparative analysis of optimization solvers," _Journal of Statistics and Management Systems_, vol. 20, no. 4, pp. 623-635, 2017. * [49] C. E. Luis, M. Vukosarljev, and A. P. Schoellig, "Online trajectory generation with distributed model predictive control for multi-robot motion planning," _IEEE Robotics and Automation Letters_, vol. 5, no. 2, pp. 604-611, 2020. * [50] G. Garcia and S. Keshmiri, "Nonlinear model predictive controller for navigation, guidance and control of a fixed-wing uav," in _AIAA guidance, navigation, and control conference_, 2011, p. 6310. * [51] F. Gavilan, R. Varquez, and E. F. Camacho, "An iterative model predictive control algorithm for uav guidance," _IEEE Transactions on Aerospace and Electronic Systems_, vol. 51, no. 3, pp. 2406-2419, 2015. * [52] I. D. Cowling, O. A. Yakimenko, J. F. Whlhorne, and A. K. Cokae, "A prototype of an autonomous controller for a quadrotor uav," in _2007 European Control Conference (ECC)_. IEEE, 2007, pp. 001-4008. * [53] Q. Wang, Y. Gao, J. Ji, C. Xu, and F. Gao, "Visibility-aware trajectory optimization with application to aerial tracking," in _2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. IEEE, 2021, pp. 5249-5256. * [54] B. Penin, P. R. Giordano, and F. Chaumette, "Vision-based reactive planning for aggressive target tracking while avoiding collisions and occlusions," _IEEE Robotics and Automation Letters_, vol. 3, no. 4, pp. 3725-3732, 2018. * [55] Z. Zhang and D.
**Savvas Papaioannou** received the B.S. degree in Electronic and Computer Engineering from the Technical University of Crete, Chania, Greece, in 2011, the M.S. degree in Electrical Engineering from Yale University, New Haven, CT, USA, in 2013, and the Ph.D. degree in Computer Science from the University of Oxford, Oxford, U.K., in 2017. He is currently a Research Associate with the KIOS Research and Innovation Center of Excellence, University of Cyprus, Nicosia, Cyprus. His research interests include multi-agent and autonomous systems, state estimation and control, multi-target tracking, trajectory planning, and intelligent unmanned aerial vehicle (UAV) systems and applications. Dr. Papaioannou is a reviewer for various conferences and journals of the IEEE and ACM, and he has served on the organizing committees of various international conferences.

**Panayiotis Kolios** is currently a Research Assistant Professor at the KIOS Research and Innovation Centre of Excellence of the University of Cyprus. He received his B.Eng. and Ph.D. degrees in Telecommunications Engineering from King's College London in 2008 and 2011, respectively. Before joining the KIOS CoE, he worked at the Department of Communications and Internet Studies at the Cyprus University of Technology and the Department of Computer Science of the University of Cyprus (UCY). His work focuses on both basic and applied research on networked intelligent systems. Some examples of systems that fall into the latter category include intelligent transportation systems, autonomous unmanned aerial systems, and the plethora of cyber-physical systems that arise within the Internet of Things. Particular emphasis is given to emergency management, in which natural disasters, technological faults and man-made attacks could cause disruptions that need to be effectively handled. Tools used include graph theoretic approaches, algorithmic development, mathematical and dynamic programming, as well as combinatorial optimization.

**Theocharis Theocharides** is an Associate Professor in the Department of Electrical and Computer Engineering at the University of Cyprus and a faculty member of the KIOS Research and Innovation Center of Excellence, where he serves as the Research Director. Theocharis received his Ph.D. in Computer Engineering from Penn State University, working in the areas of low-power computer architectures and reliable system design, where he was honored with the Robert M. Owens Memorial Scholarship in May 2005. He has been with the Electrical and Computer Engineering department at the University of Cyprus since 2006, where he directs the Embedded and Application-Specific Systems-on-Chip Laboratory.
His research focuses on the design, development, implementation, and deployment of low-power and reliable on-chip application-specific architectures, low-power VLSI design, real-time embedded systems design, and the exploration of energy-reliability trade-offs for Systems on Chip and Embedded Systems. His focus lies on the acceleration of computer vision and artificial intelligence algorithms in hardware, geared towards edge computing, and on utilizing reconfigurable hardware towards self-aware, evolvable edge computing systems. His research has been funded by several national and European agencies and the industry, and he is currently involved in over ten funded ongoing research projects. He serves on several organizing and technical program committees of various conferences (currently serving as the Application Track Chair for the DATE Conference), is a Senior Member of the IEEE and a member of the ACM. He is currently an Associate Editor for the ACM Transactions on Emerging Technologies in Computer Systems, IEEE Consumer Electronics Magazine, IET Computers and Digital Techniques, the ETRI Journal, and Springer Nature Computer Science. He also serves on the Editorial Board of IEEE Design \\& Test magazine.

**Christos Panayiotou** is a Professor with the Electrical and Computer Engineering (ECE) Department at the University of Cyprus (UCY). He is also the Deputy Director of the KIOS Research and Innovation Center of Excellence, for which he is also a founding member. Christos received a B.Sc. and a Ph.D. degree in Electrical and Computer Engineering from the University of Massachusetts at Amherst, in 1994 and 1999, respectively. He also received an MBA from the Isenberg School of Management at the aforementioned university in 1999. Before joining the University of Cyprus in 2002, he was a Research Associate at the Center for Information and System Engineering (CISE) and the Manufacturing Engineering Department at Boston University (1999 - 2002). His research interests include modeling, control, optimization and performance evaluation of discrete event and hybrid systems, intelligent transportation networks, cyber-physical systems, event detection and localization, fault diagnosis, wireless, ad hoc and sensor networks, smart camera networks, resource allocation, and intelligent buildings. Christos has published more than 270 papers in international refereed journals and conferences and is the recipient of the 2014 Best Paper Award for the journal Building and Environment (Elsevier). He is an Associate Editor for the IEEE Transactions on Intelligent Transportation Systems, the Conference Editorial Board of the IEEE Control Systems Society, the Journal of Discrete Event Dynamical Systems, and the European Journal of Control. During 2016-2020 he was an Associate Editor of the IEEE Transactions on Control Systems Technology. He has held several positions in organizing committees and technical program committees of numerous international conferences, including General Chair of the 23rd European Working Group on Transportation (EWGT2020), and General Co-Chair of the 2018 European Control Conference (ECC2018). He has also served as Chair of various subcommittees of the Education Committee of the IEEE Computational Intelligence Society.

**Marios M. Polycarpou** is a Professor of Electrical and Computer Engineering and the Director of the KIOS Research and Innovation Center of Excellence at the University of Cyprus.
He is also a Member of the Cyprus Academy of Sciences, Letters, and Arts, and an Honorary Professor of Imperial College London. He received the B.A. degree in Computer Science and the B.Sc. degree in Electrical Engineering, both from Rice University, USA, in 1987, and the M.S. and Ph.D. degrees in Electrical Engineering from the University of Southern California, in 1989 and 1992, respectively. His teaching and research interests are in intelligent systems and networks, adaptive and learning control systems, fault diagnosis, machine learning, and critical infrastructure systems. Dr. Polycarpou has published more than 350 articles in refereed journals, edited books and refereed conference proceedings, and co-authored 7 books. He is also the holder of 6 patents. Prof. Polycarpou received the 2016 IEEE Neural Networks Pioneer Award. He is a Fellow of IEEE and IFAC and the recipient of the 2014 Best Paper Award for the journal Building and Environment (Elsevier). He served as the President of the IEEE Computational Intelligence Society (2012-2013), as the President of the European Control Association (2017-2019), and as the Editor-in-Chief of the IEEE Transactions on Neural Networks and Learning Systems (2004-2010). Prof. Polycarpou serves on the Editorial Boards of the Proceedings of the IEEE, the Annual Reviews in Control, and the Foundations and Trends in Systems and Control. His research work has been funded by several agencies and industry in Europe and the United States, including the prestigious European Research Council (ERC) Advanced Grant, the ERC Synergy Grant, and the EU Teaming project.
Coverage path planning with unmanned aerial vehicles (UAVs) is a core task for many services and applications including search and rescue, precision agriculture, infrastructure inspection and surveillance. This work proposes an integrated guidance and gimbal control coverage path planning (CPP) approach, in which the mobility and gimbal inputs of an autonomous UAV agent are jointly controlled and optimized to achieve full coverage of a given object of interest, according to a specified set of optimality criteria. The proposed approach uses a set of visibility constraints to integrate the physical behavior of sensor signals (i.e., camera-rays) into the coverage planning process, thus generating optimized coverage trajectories that take into account which parts of the scene are visible through the agent's camera at any point in time. The integrated guidance and gimbal control CPP problem is posed in this work as a constrained optimal control problem which is then solved using mixed integer programming (MIP) optimization. Extensive numerical experiments demonstrate the effectiveness of the proposed approach.

Index Terms: Guidance and control, coverage planning, trajectory planning, autonomous agents, unmanned aerial vehicles (UAVs).
# Preface: The 2022 edition of the XXIV\\({}^{th}\\) ISPRS Congress

Loic Landrieu \\({}^{1}\\) Ewelina Rupnik\\({}^{1}\\) Sander Oude Elberink\\({}^{2}\\) Clement Mallet\\({}^{1}\\) Nicolas Paparoditis\\({}^{3}\\) \\({}^{1}\\) Univ. Gustave Eiffel, IGN-ENSG, LASTIG, France \\({}^{2}\\) University of Twente, the Netherlands \\({}^{3}\\) Univ. Gustave Eiffel, IGN-ENSG, France [http://www.isprs2022-nice.com/](http://www.isprs2022-nice.com/) - [email protected] - [email protected]

## 1 Introduction

We report key elements and figures related to the proceedings of the 2022 edition of the XXIV\\({}^{th}\\) ISPRS Congress. Despite the uncertainty and turmoil caused by the COVID-19 pandemic, the 2022 edition of the Congress is going to take place in person in Nice (France, 6-11 June 2022) and online, with a significant expected turnout: 1,600 participants have registered, including 300 online participants, as of April 25. The dynamic and unpredictable global health situation makes it difficult to predict participation. This year, 959 papers were submitted to the congress, an increase compared to the 2021 edition (667 papers), but below the 2020 edition (1776 papers). Combining all three editions, the XXIV ISPRS Congress processed 3402 valid submissions, leading to the publication of 2263 papers (2020: 1054, 2021: 466, 2022: 743).

## 2 Key Elements

The International Program Committee (IPC) established in 2020 was kept identical to the 2020 and 2021 editions (Mallet et al., 2021), except for the replacement of the Program Chairs. The IPC includes the Congress Director, the ISAC Chair, the Program Chairs, the Chair of the ISPRS Student Consortium, Technical Commission Presidents, and Vice Presidents (TCP). To efficiently handle the large expected number of papers related to the Thematic Sessions (see below), Clement Mallet was nominated as _Thematic Session Chair_ and joined the IPC. The templates for paper submission (both abstracts and full papers) were identical to the 2021 edition and are available on the ISPRS website.

### Tracks & Submission Process

Authors had the possibility to submit their work through different tracks:

* **Technical Commission tracks (5):** one track for each Technical Commission, managed by the TCP and with topics corresponding to the TC Working Groups (WG);
* **Youth Forum:** managed by the ISPRS Student Consortium;
* **Thematic Sessions (13):** managed by the organizers of these sessions, either by invitation or open to everyone (more details in Section 2.3).

The deadline was identical for both abstracts and full papers; see Section 2.2. The main difference remains the format (2 pages with authors' names _vs._ 6-8 pages and anonymous, respectively). The submission and review processes of each TC were monitored by the TC presidents with the help of the WG officers.

### Important dates

Due to logistical constraints, the conference had to take place more than a month earlier than the previous editions, in June instead of July. To keep the paper deadline after the Christmas break, we had to shorten the review period even more than in the previous editions. Our priority remained to minimize the time between submission and publication and to give sufficient time to the authors of accepted abstracts to extend their paper.

* **January 17**: Deadline for abstracts & full papers;
* **February 17**: Notification of authors for abstracts;
* **March 4**: Notification of authors for full papers;
* **April 7**: Deadline for camera-ready papers.
684 reviewers from 55 countries (Africa: 14 - Asia: 187 - Europe: 346 - North America: 64 - Oceania: 22 - South America: 23 - Middle East: 28; Figure 2) provided 2107 reviews. Thanks to an extensive recruitment campaign, the number of reviewers increased by 34% compared to the 2021 edition.

### Thematic sessions

Thematic Sessions (TS) were created for the 2020 edition to promote emerging and cross-discipline topics not covered by the ISPRS Working Groups (Mallet et al., 2020). For the 2022 edition of the Congress, 10 topics were selected from the TS of 2021 as well as 3 new ones, all listed in Table 1. The same deadlines and formats applied as for the main track. Several TS welcomed only invited papers. Each TS was linked to a specific TC. The final papers are published in Volumes corresponding to their TC.

Figure 1: Papers were submitted by 1475 authors from 77 countries.

## 3 The Review Process

### Organisation

The overall workflow is described in (Mallet et al., 2018). Depending on the number of papers, TCPs either directly handled the papers of their commission themselves (TC I, II, V and Youth Forum), or they decided to involve Area Chairs for reviewer assignment and decision taking (TC III and IV). The Area Chairs were selected from the Working Group officers. Thematic Session organizers directly acted as Area Chairs under the supervision of the Thematic Session Chair. In order to preserve the double-blind peer review process for full papers and to guarantee objectivity in decision making, papers co-authored by TCPs, Area Chairs, or TS organisers were directly handled by the Program Chairs.

### Change in Reviewing

This year and for the first time, the Area Chairs were tasked with writing meta-reviews to summarize the reviewers' remarks and provide a definitive list of changes expected for the camera-ready papers. Meta-reviews are standard practice in other fields such as computer vision or machine learning and fulfill multiple goals: (i) help the authors make sense of diverging reviews with possibly contradicting recommendations; (ii) clarify what is expected to change in the camera-ready version of the paper; (iii) explicitly acknowledge the reviewers' work by the Area Chairs.

The IPC also decided to extend the status of "_Conditionally Accepted_" to both abstracts and full papers that showed significant, but fixable, flaws in their scientific quality. This year, reviewers had the opportunity to signal papers that need significant changes before they can be accepted by ticking a button. This prompted them to explicitly list the expected changes. Authors of conditionally accepted papers mostly followed the recommendations, demonstrating the efficiency of this status in improving borderline submissions. Among the papers that were accepted but not published, 5 authors withdrew their articles, 71 did not upload a camera-ready version in time, and 6 did not manage to produce a final version compliant with the recommendations.

### Plagiarism Detection

All accepted papers went through the iThenticate software in order to detect cases of plagiarism. The software provides a full report for each paper. In particular, it calculates a _similarity score_ by comparing the contribution with the iThenticate proprietary database, databases of other content providers, and documents retrieved through standard Internet search. A global similarity score is retrieved by aggregating individual matching scores.
The high scores corresponded to either a strong overlap with preprints (which does not violate the ISPRS policy on preprints) or with journal papers. In the latter cases, the authors accepted to withdraw their contribution from the proceedings and to present their work as a poster (4 papers in total).

### Statistics

We received 959 submissions: 605 abstracts and 354 full papers. 743 articles were accepted (acceptance rate: 77.5%), including 412 abstracts (acceptance rate: 68.0%) and 331 full papers (93.5%). 524 papers are published in 5 volumes of the ISPRS Archives while 219 are published in the ISPRS Annals (22.8% of the submitted papers). The number of submitted and published papers increased compared to 2021, but remained below the first edition of the XXIVth Congress. The papers were submitted by 1475 authors from 77 countries (Africa: 32 - Asia: 478 - Europe: 752 - North America: 87 - Oceania: 17 - South America: 43 - Middle East: 66), see Figure 1. Technical Commission III had the most submissions (35.1%, Figure 4). The ratio between continents and commissions remains stable with respect to the 2020 and 2021 editions.

Figure 3: Review scores for abstracts (top) and full papers (bottom) according to their acceptance status. Rejected papers also include withdrawn contributions.

Figure 2: Papers were reviewed by 684 reviewers from 55 countries.

The most popular Working Groups per commission were: Mobile Mapping Technology (I.7, 19 papers), Point Cloud Processing (II.3, 32 papers), Agriculture and Natural Ecosystems Modeling and Monitoring (III.10, 52 papers), Spatial Data Analysis, Statistics, and Uncertainty Modeling (IV.3, 31 papers), and Curriculum Development and Methodology (V.1, 3 papers). The most popular thematic session was Cultural Heritage Documentation with 16 papers. We collected 2.1 reviews per submission on average (2.0 reviews for abstracts and 2.4 reviews for full papers). Again, the evaluation criteria, which led to a score between 0 and 100, captured the main strengths and weaknesses of the submitted contributions and helped to smoothly discriminate the papers that should be rejected, accepted to the Archives, or accepted to the Annals (Figure 3).

## 4 Awards

### Young Author's Award

Based on the review process, each Technical Commission selected one article for this award. The awardees are as follows.

**TC I**: Sensor systems
* **Kyriaki Mouzakidou**, Davide Antonio Cucci, Jan Skaloud (CH) for "_On the Benefit of Concurrent Adjustment of Active and Passive Optical Sensors with GNSS & Raw Inertial Data_".

**TC II**: Photogrammetry
* **Corinne Stucker**, Bingxin Ke, Yuanwen Yue, Shengyu Huang, Iro Armeni, Konrad Schindler (CH) for "_ImpliCity: City Modeling from Satellite Images with Deep Implicit Occupancy Fields_".

**TC III**: Remote Sensing
* **Mirjana Voelsen**, Maryam Teimouri, Franz Rottensteiner, Christian Heipke (DE, IR) for "_Investigating 2D and 3D Convolutions for Multitemporal Land Cover Classification Using Remote Sensing Images_".

**TC IV**: Spatial Information Science
* **Xiaodan Shi**, Haoran Zhang, Wei Yuan, Dou Huang, Zhiling Guo, Ryosuke Shibasaki (JP) for "_Learning Social Compliant Multi-modal Distributions of Human Path in Crowds_".

**TC V**: Education and Outreach
* **Juan Fernando Toro Herrera**, Daniela Carrion, Lorenzo Rossi, Mirko Reguzzoni (IT) for "_The Open Database of Regional Models of the International Service for the Geoid_".

### Outstanding Reviewers

The Technical Commissions appreciate the work done by all reviewers.
They also want to distinguish the following reviewers as "Outstanding Reviewers" for their thorough reviews and deep involvement in the process:

* **TC I**: Petra Helmholz (AU), Michael Cramer (DE), Stephan Nebiker (CH), Dorota Iwaszczuk (DE), Naser El-Sheimy (CA).
* **TC II**: Filiberto Chiabrando (IT), Michael Yang (NL), Eleonora Maset (IT), Stuart Robson (UK), Max Hoedel (DE), Loic Landrieu (FR).
* **TC III**: Michael Schmitt (DE), Timo Balz (CN), Ali Ozgun Ok (TR), Andrea Masiero (IT), Daniele Cerra (DE), Kiichiro Kumagai (JP), Megumi Yamashita (JP), Yixiang Tian (CN), Masataka Takagi (JP), Nicolas Audebert (FR), Hiroyuki Wakabayashi (JP), Jean-Baptiste Feret (FR).
* **TC IV**: Reza Mahmoud (IR), Costa Cidalia (PT), Serena Coetzee (ZA), Stephan Winter (AU), Youness Dehbi (DE), Lucia Diaz-Vilarino (ES), Wei Tu (CN).
* **TC V**: Alfred Stein (NL), Andrea Masiero (IT), Rupert Mueller (DE), Cristiana Achille (IT), Jen-Jer Jaw (TW).

## References

* [1] Mallet, C., Dowman, I., Vosselman, G., Stilla, U., Halounova, L., Paparoditis, N., 2018. The Review Process for ISPRS Events. _ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences_, IV-5, 53-8. [https://www.ispss-ann-photogram-remote-sens-spatial-inf-sci.net/IV-5/53/2018/](https://www.ispss-ann-photogram-remote-sens-spatial-inf-sci.net/IV-5/53/2018/)
* [2] Mallet, C., Lafarge, F., Poreba, M., Rupnik, E., Bahl, G., Girard, N., Garioud, A., Dowman, I., Paparoditis, N., 2020. Preface: The 2020 edition of the XXIVth ISPRS Congress. _ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, V-1-2020, 1-6. [https://www.ispss-ann-photogram-remote-sens-spatial-inf-sci.net/V-1-2020/1/2020/](https://www.ispss-ann-photogram-remote-sens-spatial-inf-sci.net/V-1-2020/1/2020/)
* [3] Mallet, C., Lafarge, F., Poreba, M., Wu, T., Bahl, G., Yu, M., Garioud, A., Chen, Y., Jiang, S., Yang, M. et al., 2021. Preface: The 2021 edition of the XXIVth ISPRS Congress. _XXIVth ISPRS Congress, 2021 edition_, 1-5.

| TC | Abstracts submitted | Abstracts in Archives | Full papers submitted | Full papers in Archives | Full papers in Annals | % Archives | % Annals |
|---|---|---|---|---|---|---|---|
| **Total** | **605** | **412** | **354** | **112** | **219** | **54.5%** | **22.8%** |
| I | 71 | 47 | 40 | 13 | 25 | 54.0% | 22.5% |
| II | 138 | 103 | 92 | 38 | 50 | 61.3% | 21.7% |
| III | 209 | 133 | 128 | 41 | 84 | 51.6% | 24.9% |

Figure 4: Paper submission statistics.
# InTEn-LOAM: Intensity and Temporal Enhanced LiDAR Odometry and Mapping

Shuaixin Li, Bin Tian, Zhu Xiaozhou, Gui Jianjun, Yao Wen, Guangyun Li (Corresponding author: [email protected], Guangyun Li)

the National Innovation Institute of Defence Technology, PLA Academy of Military Sciences, Beijing 100079, China; the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Science, Beijing 100081, China; the Department of Geospatial Information, PLA Information Engineering University, Zhengzhou 450001, China

## 1 Introduction

Autonomous robots and self-driving vehicles must have the ability to localize themselves and intelligently perceive the external surroundings. Simultaneous localization and mapping (SLAM) focuses on the issue of vehicle localization and navigation in unknown environments, which plays a major role in many autonomous driving and robotics-related applications, such as mobile mapping Li et al. (2020), space exploration Ebadi et al. (2020), robot localization Filipenko and Afanasyev (2018), and high-definition map production Yang et al. (2018). In accordance with the on-board perceptual sensors, it can be roughly classified into two categories, i.e., camera-based and LiDAR-based SLAM. Compared with images, LiDAR (light detection and ranging) point clouds are invariant to changing illumination and sufficiently dense for 3D reconstruction tasks. Accordingly, LiDAR SLAM solutions have become a more popular choice for self-driving car manufacturers than vision-based solutions Milz et al. (2018); Campos et al. (2020); Qin et al. (2018). Note that methods with loop closure are often called 'SLAM solutions', while those without this module are called 'odometry solutions'. However, both of them have the ability to self-localize in unknown scenes and map the traversed environments. For instance, though LOAM Zhang and Singh (2017) and LeGO-LOAM Shan and Englot (2018) both achieve low-drift and real-time pose estimation and mapping, only LeGO-LOAM can be referred to as a complete SLAM solution since it is a loop-closure-enabled system. LiDAR-based SLAM has witnessed remarkable progress over the past decade Zhang and Singh (2017); Behley and Stachniss (2018); Shan and Englot (2018); Jiao et al. (2020); Zhou et al. (2021); Koide et al. (2019). The state-of-the-art solutions have shown remarkable performances, especially in structured urban and indoor scenes. Recent years have seen solutions for more intractable problems, e.g., fusion with multiple sensors Zhao et al. (2019); Palieri et al. (2020); Shan et al. (2020); Qin et al. (2020); Lin et al. (2021), adaptation to cutting-edge solid-state LiDAR Li et al. (2021), global localization Dube et al. (2020), improving the efficiency of the optimization back-end Droeschel et al. (2017); Ding et al. (2020), etc., yet many issues remain unsolved. Specifically, most conventional LO solutions currently ignore intensity information from the reflectance channel, though it reveals the reflectivities of different objects in the real world.

Figure 1: Overview of the proposed InTEn-LOAM system. (a) The color image from the on-board camera. (b) The projected scan-context segment image. (c) The raw point cloud from the Velodyne HDL-64 LiDAR colorized by intensity. (d) The projected cylindrical range image colorized by depth. (e) The segmented label image. (f) The estimated normal image (x, y, z). (g) The intensity image. Only reflector features are colorized, while the ground is removed.
(h) Various types of features (ground, facade, edge, reflector) extracted from the laser scan. (i) The current point features aligned with the so-far local feature map (dynamic object).

An efficient approach for incorporating point intensity information is still an open problem since the intensity value is not as straightforward as the range value. Its value depends on many factors, including the material of the target surface, the scanning distance, the laser incidence angle, as well as the transmitted energy. Besides, the laser sweep represents a snapshot of the surroundings, and thus moving objects, such as pedestrians, vehicles, etc., may be scanned. These dynamic objects result in 'ghosting points' in the accumulated point map and may increase the probability of incorrect matching, which deteriorates the localization accuracy of LO. Moreover, improving the robustness of point registration in geometrically degraded environments, e.g., long straight tunnels, is also a topic worthy of in-depth discussion.

In this paper, we present InTEn-LOAM (as shown in Fig. 1) to cope with the aforementioned challenges. The main contributions of our work are four-fold:

* We propose an efficient range-image-based feature extraction method that is able to adaptively extract features from the raw laser scan and categorize them into four different types in real time.
* We propose a coarse-to-fine, model-free method for online dynamic object removal, enabling the LO system to build a purely static map by removing all dynamic outliers in raw scans. Besides, we improve the voxel-based downsample filter, making use of the implicit temporal information of consecutive laser sweeps to ensure the similarity between the current scan and the local map.
* We propose a novel intensity-based point registration algorithm that directly leverages reflectance measurements to align point clouds, and we introduce it into the LO framework to achieve joint pose estimation utilizing both geometric and intensity information.
* Extensive experiments are conducted to evaluate the proposed system. Results show that InTEn-LOAM achieves similar or better accuracy in comparison with state-of-the-art LO systems and outperforms them in unstructured scenes with sparse geometric features.

## 2 Related Work

### Point cloud registration and LiDAR odometry

Point cloud registration is the most critical problem in LiDAR-based autonomous driving, which is centered on finding the best relative transformation of point clouds. Existing registration techniques can be categorized either into feature-based and scan-based methods Furukawa et al. (2015) in terms of the type of data, or into local and global methods Zong et al. (2018) in terms of the registration reference. Though local registration requires a good initial transformation, it has been widely used in LO solutions since sequential LiDAR sweeps commonly share a large overlap, and a coarse initial guess can be readily predicted. For feature-based approaches, different types of encoded features, e.g., FPFH (fast point feature histogram) Rusu et al. (2009), CGF (compact geometric feature) Khoury et al. (2017), and arbitrary shapes, are extracted to establish valid data associations. LOAM Zhang and Singh (2017) is one of the pioneering works of feature-based LO, which extracts plane and edge features based on the sorted smoothness of each point. Many follow-up works follow the proposed feature extraction scheme Shan et al. (2020); Li et al. (2021); Qin et al. (2020); Lin et al.
(2021). For example, LeGO-LOAM Shan and Englot (2018) additionally segments the ground to bound the drift in the ground normal direction. MULLS (multi-metric linear least square) Pan et al. (2021) explicitly classifies features into six types (facade, ground, roof, beam, pillar, and encoded points) using the principal component analysis (PCA) algorithm and employs the least square algorithm to estimate the ego-motion, which remarkably improves the LO performance, especially in unstructured environments. Yin et al. (2020) proposes a convolutional auto-encoder (CAE) to encode feature points for conducting a more robust point association.

Scan-based local registration methods iteratively assign correspondences based on the closest-distance criterion. The iterative closest point (ICP) algorithm, introduced by Besl and McKay (1992), is the most popular scan registration method. Many variants of ICP have been derived over the past three decades, such as Generalized ICP (GICP) Segal et al. (2009) and improved GICP Yokozuka et al. (2021). Many LO solutions apply variants of ICP to align scans for their simplicity and low computational complexity. For example, Moosmann and Stiller (2011) employs standard ICP, while Palieri et al. (2020) and Behley and Stachniss (2018) employ GICP and normal ICP, respectively. The normal distributions transform (NDT) method, first introduced by Biber and Strasser (2003), is another popular scan-based approach, in which surface likelihoods of the reference scan are used for scan matching. Because of that, there is no need for computationally expensive nearest-neighbor searching in NDT, making it more suitable for LO with large-scale map points Zhou et al. (2021); Zhao et al. (2019); Koide et al. (2019).

### Fusion with point intensity

Some works have attempted to introduce the intensity channel into scan registration. Inspired by GICP, Servos and Waslander (2017) propose the multi-channel GICP (MCGICP), which integrates color and intensity information into the GICP framework by incorporating additional channel measurements into the covariances of points. In Khan et al. (2016), a data-driven intensity calibration approach is presented to acquire a pose-invariant measure of surface reflectivity. Based on that, Wang et al. (2021) establishes voxel-based intensity constraints to complement the geometric-only constraints in the mapping thread of LOAM. Pan et al. (2021) assigns higher weights to associations with similar intensities to adaptively suppress the effect of outliers. Besides, the end-to-end learning-based registration framework named DeepVCP (virtual corresponding points) Lu et al. (2019) has been proposed, achieving accuracy comparable to prior state-of-the-art methods. The intensity channel is used to find stable and robust feature associations, which help avoid the interference of incorrect matchings.

### Dynamic object removal

A good number of learning-based works related to dynamic object removal have been reported in Guo et al. (2020). In general, a trained model is used to predict the probability score that a point originated from a dynamic object. Model-based approaches are able to filter out dynamic objects independently, but they also require laborious training, and the segmentation performance is highly dependent on the training dataset. Traditional model-free approaches rely on differences between the current laser scan and previous scans Yoon et al. (2019); Dewan et al. (2016); Kim and Kim (2020).
Though convenient and straightforward, only points that have completely moved out of their original positions can be detected and removed.

## 3 Methodology

The proposed framework of InTEn-LOAM consists of five submodules, i.e., the feature extraction filter (FEF), scan-to-scan registration (S2S), scan-to-map registration (S2M), temporal-based voxel filter (TVF), and dynamic object removal (DOR) (see Fig. 2). Following LOAM, the LiDAR odometry and mapping are executed on two parallel threads to improve the running efficiency.

### Feature extraction filter

The workflow of the FEF is summarized in Fig. 3, which corresponds to the gray block in Fig. 2. The FEF receives a raw scan frame and outputs four types of features, i.e., ground, facade, edge, and reflector, and two types of cylindrical images, i.e., the range and label images.

#### 3.1.1 **Motion compensation**

Given the point-wise timestamps of a scan \\(\\mathcal{P}\\), the reference pose for a point \\(\\mathbf{p}_{i}\\in\\mathcal{P}\\) at timestamp \\(\\tau_{i}\\) can be interpolated from the relative transformation \\(\\mathbf{T}_{e,s}=[\\mathbf{R}_{e,s},\\mathbf{t}_{e,s}]\\) under the assumption of uniform motion:

\\[\\mathbf{T}_{s,i}=[\\text{slerp}(\\mathbf{R}_{e,s},s_{i})^{\\top},-s_{i}\\cdot \\mathbf{T}_{e,s}^{-1}\\cdot\\mathbf{t}_{e,s}], \\tag{1}\\]

where slerp(\\(\\cdot\\)) represents the spherical linear interpolation. The time ratio \\(s_{i}\\) is \\(s_{i}=\\frac{\\tau_{i}-\\tau_{s}}{\\tau_{e}-\\tau_{s}}\\), where \\(\\tau_{s}\\), \\(\\tau_{e}\\) stand for the start and end timestamps of the laser sweep, respectively. Then, the distorted points can be deskewed by transforming them to the start timestamp, \\(\\mathbf{T}_{s,i}\\cdot\\mathbf{p}_{i}\\in\\mathcal{P}^{\\prime}\\).

Figure 2: Overall workflow of InTEn-LOAM.

#### 3.1.2 **Scan preprocess**

The undistorted points \\(\\mathcal{P}^{\\prime}\\) are first preprocessed. The main steps are as below:

_I. Scan projection._ \\(\\mathcal{P}^{\\prime}\\) is projected onto a cylindrical plane to generate range and intensity images, i.e., \\(\\mathcal{D}\\) and \\(\\mathcal{I}\\) (see Fig. 1(d) and (e)). A point with 3D coordinates \\(\\mathbf{p}_{i}=[x,y,z]^{\\top}\\) can be projected as a cylindrical image pixel \\([u,v]^{\\top}\\) by:

\\[\\begin{pmatrix}u\\\\ v\\end{pmatrix}=\\begin{pmatrix}[1-\\arctan(y,x)\\cdot\\pi^{-1}]\\cdot\\frac{w}{2}\\\\ (\\arcsin(\\frac{z}{\\sqrt{x^{2}+y^{2}+z^{2}}})+\\theta_{d})\\cdot\\frac{h}{\\theta} \\end{pmatrix}, \\tag{2}\\]

where \\(\\theta=\\theta_{d}+\\theta_{t}\\) is the vertical field-of-view of the LiDAR, and \\(w,h\\) are the width and height of the resulting image. In \\(\\mathcal{D}\\) and \\(\\mathcal{I}\\), each pixel contains the smallest range and the largest reflectance of the scanning points falling into the pixel, respectively. In addition, \\(\\mathcal{P}^{\\prime}\\) is also preprocessed into a segment image \\(\\mathcal{S}\\) (see Fig. 1(b)) according to the azimuthal and radial directions of the 3D points, and each pixel contains the lowest \\(z\\). The former conversion is the same as \\(u\\) in Eq. (2), while the latter is equally spaced with the distance interval \\(\\Delta\\rho\\):

\\[\\rho=\\lfloor\\sqrt{(x^{2}+y^{2}+z^{2})}/\\Delta\\rho\\rfloor, \\tag{3}\\]

where \\(\\lfloor\\cdot\\rfloor\\) indicates the rounding-down operator. Note that the size of \\(\\mathcal{S}\\) is not the same as that of \\(\\mathcal{D}\\).

_II. Ground segmentation_. The method from Himmelsbach et al. (2010) is applied in this paper with the segment image \\(\\mathcal{S}\\) as input.
Each column of \\(\\mathcal{S}\\) is fitted as a ground line \\(\\mathbf{l}_{i}=a_{i}\\cdot\\rho+b_{i}\\). Then, residuals can be calculated, which represent the differences between the predicted and the observed \\(z\\):

\\[r(u,v)=\\mathbf{l}_{i}(\\mathcal{D}(u,v))-\\mathcal{D}(u,v)\\cdot\\sin(\\theta_{v}), \\tag{4}\\]

where \\(\\theta_{v}\\) indicates the vertical angle of the \\(v\\)th row in \\(\\mathcal{D}\\). Pixels with residuals smaller than the threshold \\(\\text{Th}_{g}\\) will be marked as ground pixels with label identity 1.

Figure 3: The workflow of the FEF.

_III. Object clustering_. After the ground segmentation, the angle-based object clustering approach from Bogoslavskyi and Stachniss (2017) is conducted to group pixels into different clusters with identified labels and generate a label image \\(\\mathcal{L}\\) (see the label image in Fig. 2).

_IV. Create feature images_. We partition the intensity image \\(\\mathcal{I}\\) into \\(M\\times N\\) blocks and establish intensity histograms for each block. The extraction threshold \\(\\text{Th}_{I,n}\\) of each intensity block is adaptively determined by taking the median of the histogram. Besides, the intensity difference image \\(\\mathcal{I}_{\\Delta}\\), normal image \\(\\mathcal{N}\\), and curvature image \\(\\mathcal{C}\\) are created by:

\\[\\begin{split}\\mathcal{I}_{\\Delta}(u,v)&=\\mathcal{I }(u,v)-\\mathcal{I}(u,v+1),\\\\ \\mathcal{N}(u,v)&=(\\Pi[\\mathcal{D}(u+1,v)]-\\Pi[ \\mathcal{D}(u,v)])\\\\ &\\quad\\times(\\Pi[\\mathcal{D}(u,v+1)]-\\Pi[\\mathcal{D}(u,v)]),\\\\ \\mathcal{C}(u,v)&=\\frac{1}{N\\cdot\\mathcal{D}(u,v)}\\cdot\\sum_{i,j\\in N}\\left( \\mathcal{D}(u,v)-\\mathcal{D}(u+i,v+j)\\right),\\end{split} \\tag{5}\\]

where \\(\\Pi[\\cdot]:\\mathcal{D}\\mapsto\\mathcal{P}\\) denotes the mapping function from a range image pixel to a 3D point, and \\(N\\) is the number of neighboring pixels. Furthermore, pixels in clusters with fewer than 15 points are marked as noise and blocked. All the valid-or-not flags are stored in a binary mask image \\(\\mathcal{B}\\).

#### 3.1.3 **Feature extraction**

According to the above feature images, pixels of four categories of features can be extracted. Then the 3D feature points, i.e., ground \\(\\mathcal{P}_{\\mathcal{G}}\\), facade \\(\\mathcal{P}_{\\mathcal{F}}\\), edge \\(\\mathcal{P}_{\\mathcal{E}}\\), and reflector \\(\\mathcal{P}_{\\mathcal{R}}\\), can be obtained via the pixel-to-point mapping relationship. Specifically,

* Points corresponding to pixels that satisfy \\(\\mathcal{L}(u,v)=1\\) and \\(\\mathcal{B}(u,v)\\neq 0\\) are categorized as \\(\\mathcal{P}_{\\mathcal{G}}\\).
* Points corresponding to pixels that satisfy \\(\\mathcal{C}(u,v)>Th_{E}\\) and \\(\\mathcal{B}(u,v)\\neq 0\\) are categorized as \\(\\mathcal{P}_{\\mathcal{E}}\\).
* Points corresponding to pixels that satisfy \\(\\mathcal{C}(u,v)<Th_{F}\\) and \\(\\mathcal{B}(u,v)\\neq 0\\) are categorized as \\(\\mathcal{P}_{\\mathcal{F}}\\).
* Points corresponding to pixels that satisfy \\(\\mathcal{I}_{\\Delta}(u,v)>Th_{\\Delta I}\\) and \\(\\mathcal{B}(u,v)\\neq 0\\) are categorized as \\(\\mathcal{P}_{\\mathcal{R}}\\).

Besides, to keep the gradient of local intensities, points in pixels that satisfy \\(\\mathcal{I}(u,v)>Th_{I,n}\\), as well as their neighbors, are all included in \\(\\mathcal{P}_{\\mathcal{R}}\\).
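To make the scan projection and feature thresholding of this subsection concrete, the following minimal NumPy sketch (our illustration, not the authors' implementation) builds the cylindrical range and intensity images of Eq. (2); the image size and field-of-view values are hypothetical placeholders for a 64-line sensor, and the per-pixel min/max rules follow the description above.

```python
import numpy as np

def project_to_images(points, intensities, w=1800, h=64,
                      theta_up=np.deg2rad(2.0), theta_down=np.deg2rad(24.8)):
    """Project a deskewed scan onto cylindrical range/intensity images (Eq. 2).

    points      : (N, 3) array of x, y, z coordinates (assumed non-zero range).
    intensities : (N,) array of reflectance values.
    w, h, theta_up, theta_down are illustrative values, not the paper's settings.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    theta = theta_up + theta_down                                   # vertical field of view
    u = (((1.0 - np.arctan2(y, x) / np.pi) * w / 2.0).astype(int)) % w
    v = np.clip(((np.arcsin(z / r) + theta_down) * h / theta).astype(int), 0, h - 1)

    rng_img = np.zeros((h, w))    # each pixel keeps the smallest range
    int_img = np.zeros((h, w))    # each pixel keeps the largest reflectance
    for ui, vi, ri, ii in zip(u, v, r, intensities):
        if rng_img[vi, ui] == 0.0 or ri < rng_img[vi, ui]:
            rng_img[vi, ui] = ri
        int_img[vi, ui] = max(int_img[vi, ui], ii)
    return rng_img, int_img
```

Feature pixels can then be selected by simple masks on the derived images, e.g. curvature above \\(Th_{E}\\) for edge candidates and curvature below \\(Th_{F}\\) for facade candidates, exactly as in the four rules above.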
To improve the efficiency of scan registration, a random downsample filter (RDF) is applied on \\(\\mathcal{P}_{\\mathcal{G}}\\) and \\(\\mathcal{P}_{\\mathcal{R}}\\) to obtain \\(N_{\\mathcal{G}}\\) downsampled ground features \\(\\mathcal{P}_{\\mathcal{G}}{}^{\\prime}\\) and \\(N_{\\mathcal{R}}\\) downsampled reflector features \\(\\mathcal{P}_{\\mathcal{R}}{}^{\\prime}\\). To obtain \\(N_{\\mathcal{E}}\\) refined edge features \\(\\mathcal{P}_{\\mathcal{E}}{}^{\\prime}\\) and \\(N_{\\mathcal{F}}\\) refined facade features \\(\\mathcal{P}_{\\mathcal{F}}{}^{\\prime}\\), a non-maximum suppression (NMS) filter based on point curvatures is applied on \\(\\mathcal{P}_{\\mathcal{E}}\\) and \\(\\mathcal{P}_{\\mathcal{F}}\\).

### Intensity-based scan registration

Similar to geometric-based scan registration, given the initial guess of the transformation \\(\\tilde{\\mathbf{T}}_{t,s}\\) from source points \\(\\mathcal{P}_{s}\\) to target points \\(\\mathcal{P}_{t}\\), we try to estimate the LiDAR motion \\(\\mathbf{T}_{t,s}\\) by matching the local intensities of the source and target. In the case of geometric feature registration, the motion estimation is solved through nonlinear iterations by minimizing the sum of Euclidean distances from each source feature to its correspondence in the target scan. In the case of reflector feature registration, however, we minimize the sum of intensity differences instead. The fundamental idea of the intensity-based point cloud alignment method proposed in this paper is to make use of the similarity of intensity gradients within a local region of the laser scans to achieve scan matching. Because of the discreteness of the laser scan, the sparse 3D points in a local area are not continuous, making the intensity values of the laser sweep non-differentiable. To solve this issue, we introduce a continuous intensity surface model using the local support characteristic of the B-spline basis function. A simple intensity surface example is shown in Fig. 4.

#### 3.2.1 **B-spline intensity surface model**

The intensity surface model presented in this paper uses uniformly distributed knots of the B-spline; thus, the B-spline is defined fully by its degree Sommer et al. (2020). Specifically, the intensity surface is a space spanned by three \\(d\\)-degree B-spline functions on the orthogonal axes, and each B-spline is controlled by \\(d+1\\) knots on its axis. Mathematically, the B-spline intensity surface in a local space is a scalar-valued function \\(\\mu(\\mathbf{p}):\\mathbb{R}^{3}\\rightarrow\\mathbb{R}\\), which builds the mapping relationship between a 3D point \\(\\mathbf{p}=[x,y,z]^{\\top}\\) and its intensity value. The mapping function is defined by the tensor product of three B-spline functions and the control points \\(c_{i,j,k}\\in C\\) in the local space:

\\[\\begin{split}\\mu(\\mathbf{p})&=\\sum_{i=0}^{d}\\sum_{j=0}^{d}\\sum_{k=0}^{d}c_{i,j,k}\\,b_{i}^{d}(x)\\,b_{j}^{d}(y)\\,b_{k}^{d}(z)\\\\ &=\\mathrm{vec}(\\mathbf{b}_{x}^{d}\\otimes\\mathbf{b}_{y}^{d}\\otimes\\mathbf{b}_{z}^{d})^{\\top}\\cdot\\mathrm{vec}(C)\\\\ &=\\boldsymbol{\\phi}(\\mathbf{p})^{\\top}\\cdot\\mathbf{c},\\end{split} \\tag{6}\\]

where \\(\\mathbf{b}^{d}\\) is the \\(d\\)-degree B-spline basis function. We use the vectorization operator \\(\\mathrm{vec}(\\cdot)\\) and the Kronecker product operator \\(\\otimes\\) to transform the above equation into the form of a matrix multiplication. In this paper, the cubic (\\(d=3\\)) B-spline function is employed.
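As a small numerical sketch of Eq. (6) (our illustration, not the authors' code), the cubic case uses the four standard uniform B-spline basis weights per axis and a local \\(4\\times 4\\times 4\\) grid of control knots; the surface value is then a tensor-product contraction of the three 1-D bases with the control grid.

```python
import numpy as np

def cubic_bspline_basis(t):
    """Four uniform cubic B-spline basis weights at normalized offset t in [0, 1)."""
    t2, t3 = t * t, t * t * t
    return np.array([(1 - t) ** 3,
                     3 * t3 - 6 * t2 + 4,
                     -3 * t3 + 3 * t2 + 3 * t + 1,
                     t3]) / 6.0

def intensity_surface(p, control, origin, kappa):
    """Evaluate mu(p) of Eq. (6) on a local 4x4x4 grid of control knots.

    p       : (3,) query point.
    control : (4, 4, 4) control-knot intensities (mean intensity of map points per voxel).
    origin  : (3,) position of the corner control knot (an assumed parameterization).
    kappa   : knot spacing, i.e. the voxel resolution.
    """
    t = (np.asarray(p, dtype=float) - np.asarray(origin, dtype=float)) / kappa
    bx, by, bz = (cubic_bspline_basis(ti % 1.0) for ti in t)
    # Tensor product of the three 1-D bases contracted with the control knots,
    # i.e. sum over i, j, k of c_ijk * b_i(x) * b_j(y) * b_k(z).
    return np.einsum('i,j,k,ijk->', bx, by, bz, control)
```

Because each basis has local support, only the \\(4\\times 4\\times 4\\) knots around the query point contribute, which keeps the per-point evaluation cheap enough to sit inside the registration loop.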
#### 3.2.2 **Observation constraint**

The intensity observation constraint is defined as the residual between the intensity of a source point and its predicted intensity on the local intensity surface model. Fig. 4 demonstrates how to predict the intensity on the surface patch for a reflector feature point. The selected point \\({}_{s}\\mathbf{p}\\in\\mathcal{P}_{s}\\) with intensity measurement \\(\\eta\\) is transformed to the model frame by \\({}_{t}\\mathbf{\\tilde{p}}=\\mathbf{T}_{t,s}\\cdot{}_{s}\\mathbf{p}\\). Then the nearest point \\({}_{t}\\mathbf{q}\\in\\mathcal{P}_{t}\\) and its R-neighbor points \\({}_{t}\\mathbf{q}_{n}\\in\\mathcal{P}_{t},n=1\\cdots N\\) can be searched. Given the uniform spacing of the B-spline function \\(\\kappa\\), the neighborhood points \\({}_{t}\\mathbf{q}_{n}\\) can be voxelized with the center \\({}_{t}\\mathbf{q}\\) and the resolution \\(\\kappa\\times\\kappa\\times\\kappa\\) to generate the control knots \\(\\mathbf{c}_{\\mathbf{\\tilde{q}}}\\) for the local intensity surface. Each control knot takes the value of the average intensity of all points in its voxel. To sum up, the residual is defined as:

\\[r_{\\mathcal{I}}(\\mathbf{\\tilde{T}}_{t,s})=[\\boldsymbol{\\phi}(\\mathbf{\\tilde{T}}_{t,s}\\cdot{}_{s}\\mathbf{p})^{\\top}\\cdot\\mathbf{c}_{\\mathbf{\\tilde{q}}}-\\eta]. \\tag{7}\\]

Stacking the normalized residuals yields the residual vector \\(\\mathbf{r}_{\\mathcal{I}}(\\mathbf{\\tilde{T}}_{t,s})\\), and the Jacobian matrix of \\(\\mathbf{r}_{\\mathcal{I}}\\) w.r.t. \\(\\mathbf{T}_{t,s}\\) is denoted as \\(\\mathbf{J}_{\\mathcal{I}}=\\partial\\mathbf{r}_{\\mathcal{I}}/\\partial\\mathbf{T}_{t,s}\\). The constructed nonlinear optimization problem can be solved by minimizing \\(\\mathbf{r}_{\\mathcal{I}}\\) toward zero using the L-M algorithm. Note that the _Lie group_ and _Lie algebra_ representations are used for the 6-DoF transformation in this paper.

Figure 4: A simple example of the B-spline intensity surface model. The grid surface depicts the modeled continuous intensity surface with colors representing intensities, and the spheres in the centers of the surface grids represent the control points of the B-spline surface model. \\({}_{s}\\mathbf{p}\\) denotes the selected point, and \\({}_{t}\\mathbf{q}_{n}\\) denotes the query points. \\({}_{s}\\mathbf{p}\\) is transformed to the reference frame of \\({}_{t}\\mathbf{q}_{n}\\) and denoted as \\({}_{t}\\mathbf{\\tilde{p}}\\). \\({}_{t}\\mathbf{q}\\) denotes the nearest neighboring query point of \\({}_{t}\\mathbf{\\tilde{p}}\\).

### Dynamic object removal

The workflow of the proposed DOR is shown in Fig. 5, which corresponds to the pink block in Fig. 2. The inputs of the DOR filter include the current laser points \\(\\mathcal{P}_{k}\\), the previous static laser points \\(\\mathcal{P}_{s,k-1}\\), the local map points \\(\\mathcal{M}_{k}\\), the current range image \\(\\mathcal{D}_{\\mathcal{P}_{k}}\\), the current label image \\(\\mathcal{L}_{\\mathcal{P}_{k}}\\), and the estimated LiDAR pose in the world frame \\(\\mathbf{\\tilde{T}}_{w,k}\\). The filter divides \\(\\mathcal{P}_{k}\\) into two categories, i.e., the dynamic \\(\\mathcal{P}_{d,k}\\) and the static \\(\\mathcal{P}_{s,k}\\). Only static points will be appended to the local map for the map update. The DOR filter introduced in this paper exploits the similarity of point clouds in the adjacent time domain for dynamic point filtering and verifies dynamic objects based on the segmented label image.
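Before detailing the individual DOR steps below, the following short sketch (ours) illustrates how the intensity residuals of Eq. (7) from Sec. 3.2.2 can be stacked for a set of reflector features; it reuses the hypothetical `intensity_surface` helper sketched earlier, and `surface_lookup` stands in for the nearest-neighbour search and voxelization described above (an assumed helper, not part of the paper).

```python
import numpy as np

def stack_intensity_residuals(T, reflector_pts, reflector_eta, surface_lookup):
    """Stack the per-point intensity residuals of Eq. (7).

    T              : (4, 4) homogeneous transform from the source to the target frame.
    reflector_pts  : (N, 3) reflector feature points in the source frame.
    reflector_eta  : (N,) measured intensities of those points.
    surface_lookup : assumed callable returning (control, origin, kappa) for the
                     local B-spline patch around a transformed point.
    """
    residuals = []
    for p, eta in zip(reflector_pts, reflector_eta):
        q = T[:3, :3] @ p + T[:3, 3]                      # transform into the target frame
        control, origin, kappa = surface_lookup(q)
        residuals.append(intensity_surface(q, control, origin, kappa) - eta)
    return np.asarray(residuals)                          # r_I, driven toward zero by L-M
```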
#### 3.3.1 **Rendering range image for the local map**

Both downsampling with a coarse resolution and the uneven distribution of map points may result in pixel holes in the rendered range image. Considering the great similarity of successive laser sweeps in the time domain, we use both the local map points \\(\\mathcal{M}_{k}\\) and the previous static laser points \\(\\mathcal{P}_{s,k-1}\\) to generate the to-be-rendered map points \\(\\mathcal{E}_{k}\\):

\\[\\mathcal{E}_{k}=\\mathbf{T}_{w,k}^{-1}\\cdot\\mathcal{M}_{k}\\cup\\mathbf{T}_{k,k-1}\\cdot\\mathcal{P}_{s,k-1}. \\tag{8}\\]

Figure 5: The workflow of the DOR.

The rendered image \\(\\mathcal{D}_{\\mathcal{M}_{k}}\\) and the current scan image \\(\\mathcal{D}_{\\mathcal{P}_{k}}\\) are shown in the second and third rows of Fig. 5. A pedestrian can be clearly distinguished in \\(\\mathcal{D}_{\\mathcal{P}_{k}}\\) but not in \\(\\mathcal{D}_{\\mathcal{M}_{k}}\\).

#### 3.3.2 **Temporal-based dynamic points searching**

Dynamic pixels in \\(\\mathcal{D}_{\\mathcal{P}_{k}}\\) can be coarsely screened out in accordance with the depth differences between \\(\\mathcal{D}_{\\mathcal{P}_{k}}\\) and \\(\\mathcal{D}_{\\mathcal{M}_{k}}\\). In particular, if the depth difference at \\([u,v]^{\\top}\\) is larger than the threshold \\(Th_{\\Delta d}\\), the pixel will be marked as dynamic. Consequently, we can also generate a binary image \\(\\mathcal{D}_{\\mathcal{B}_{k}}\\) indicating whether each pixel is dynamic or not:

\\[\\begin{split}\\mathcal{D}_{\\Delta_{k}}(u,v)&=|\\mathcal{D}_{\\mathcal{M}_{k}}(u,v)-\\mathcal{D}_{\\mathcal{P}_{k}}(u,v)|>Th_{\\Delta d}?\\\\ &\\mathcal{D}_{\\mathcal{B}_{k}}(u,v)=1:0,\\end{split} \\tag{9}\\]

where \\(\\mathcal{D}_{\\mathcal{M}_{k}}(u,v)\\neq 0\\) and \\(\\mathcal{D}_{\\mathcal{P}_{k}}(u,v)\\neq 0\\). An example of \\(\\mathcal{D}_{\\mathcal{B}_{k}}\\) is shown in the fourth row of Fig. 5, in which red pixels represent the static and purple pixels represent the dynamic. To improve the robustness of the DOR filter to different point depths, we use the adaptive threshold \\(Th_{\\Delta d}=s_{d}\\cdot\\mathcal{D}_{\\mathcal{P}_{k}}(u,v)\\), where \\(s_{d}\\) is a constant coefficient.

#### 3.3.3 **Dynamic object validation**

It can be seen from \\(\\mathcal{D}_{\\mathcal{B}_{k}}\\) that the pixel-by-pixel depth comparison generates a large number of false positive (FP) dynamic pixels. To handle this issue, we utilize the label image to validate dynamic pixels according to the fact that points originating from the same object should have the same status label. We denote the pixel number of a segmented object and its dynamic pixel number as \\(N_{i}\\) and \\(N_{d,i}\\), which can be counted from \\(\\mathcal{L}_{\\mathcal{P}_{k}}\\) and \\(\\mathcal{D}_{\\mathcal{B}_{k}}\\), respectively. Two basic assumptions generally hold in terms of dynamic points: (a) ground points cannot be dynamic; (b) the percentage of FP dynamic pixels in a given object will not be significant. According to the above assumptions, we can validate dynamic pixels at the object level:

\\[\\begin{split}\\frac{N_{d,i}}{N_{i}}\\geq Th_{N}\\ \\&\\ \\mathcal{L}_{\\mathcal{P}_{k}}(u,v)\\neq 1?\\\\ \\mathcal{D}_{\\Delta_{k}}(u,v)=\\mathcal{D}_{\\Delta_{k}}(u,v):0.\\end{split} \\tag{10}\\]

Eq. (10) indicates that only objects that are marked as non-ground and have a dynamic pixel ratio larger than the threshold will be recognized as dynamic.
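A compact sketch of the coarse screening and object-level validation of Eqs. (9)-(10) is given below; it is our illustration, the numeric defaults for \\(s_{d}\\) and \\(Th_{N}\\) are placeholders rather than the paper's settings, and label 1 is treated as ground following the convention of Sec. 3.1.2 (label 0 is assumed here to mean an unlabeled pixel).

```python
import numpy as np

def detect_dynamic_pixels(depth_scan, depth_map, label_img, s_d=0.1, th_n=0.3):
    """Screen dynamic pixels (Eq. 9) and validate them at the object level (Eq. 10).

    depth_scan : (H, W) range image of the current scan, 0 where empty.
    depth_map  : (H, W) range image rendered from the local map and previous static scan.
    label_img  : (H, W) segment labels; 1 is assumed to be the ground label, 0 unlabeled.
    s_d, th_n  : adaptive-threshold coefficient and dynamic-pixel ratio threshold
                 (illustrative values).
    """
    valid = (depth_scan > 0) & (depth_map > 0)
    diff = np.abs(depth_map - depth_scan)
    dynamic = valid & (diff > s_d * depth_scan)      # coarse screening, Th = s_d * depth

    validated = np.zeros_like(dynamic)
    for lbl in np.unique(label_img):
        if lbl in (0, 1):                            # skip unlabeled pixels and ground clusters
            continue
        obj = label_img == lbl
        if dynamic[obj].sum() / obj.sum() >= th_n:   # object-level ratio test of Eq. (10)
            validated |= obj & dynamic
    return validated
```

In the sketch, ground clusters are skipped outright, which matches assumption (a) above.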
In \\(\\mathcal{D}_{\\Delta_{k}}\\), pixels belonging to dynamic objects will retain the depth differences, while the others will be reset as 0. As the depth difference image shown in the sixth row of Fig.5, though many FP dynamic pixels are filtered out after the validation, the true positive (TP) dynamic pixels from the moving pedestrian on the right side are still remarkable. Then, the binary image \\(\\mathcal{D}_{\\mathcal{B}_{k}}\\) is updated by substituting the refined \\(\\mathcal{D}_{\\Delta_{k}}\\) into E.q.(9). #### 3.3.4 **Points classification** According to \\(\\mathcal{D}_{\\mathcal{B}_{k}}\\), dynamic 3D points in extracted features can be marked using the mapping function \\(\\Pi[\\cdot]:\\mathcal{D}\\mapsto\\mathcal{P}\\). Since the static feature set is the complement of the dynamic feature set w.r.t. the full set of extracted features, the static features can be filtered by \\(\\mathcal{P}_{s,k}=\\mathcal{P}_{k}-\\mathcal{P}_{d,k}\\). ### LiDAR odometry Given the initial guess \\(\\bar{\\mathbf{T}}_{k,k-1}\\), extracted features, i.e., downsampled ground and reflector features \\(\\mathcal{P}_{\\mathcal{G}^{{}^{\\prime}}}\\) and \\(\\mathcal{P}_{\\mathcal{R}^{{}^{\\prime}}}\\), as well as refined edge and facade features \\(\\mathcal{P}_{\\mathcal{E}^{{}^{\\prime}}}\\) and \\(\\mathcal{P}_{\\mathcal{F}^{{}^{\\prime}}}\\), are utilized to estimate the optimal estimation of \\(\\mathbf{T}_{k,k-1}\\), and then the LiDAR pose \\(\\mathbf{T}_{w,k}\\) in the global frame is reckoned. The odometry thread corresponds to the green S2S block in Fig.2, and the pseudo code is shown in Algorithm 1. To improve the performance of geometric-only scan registration, the proposed LO incorporates reflector features and estimate relative motion by jointly solving the multi-metric nonlinear optimization(NLO). #### 3.4.1 **Constraint model** As shown in Fig.6, constraints are modeled as the point-to-model intensity difference (for reflector feature) and the point-to-line (for edge feature)/point-to-plane (for ground and facade feature) distance, respectively. _I.Point-to-line constraint._ Let \\(\\mathbf{p}_{i}\\in\\mathcal{P}^{\\prime}_{\\mathcal{E},k},i=1\\cdots N_{E}\\) be a edge feature point. The association of \\(\\mathbf{p}_{i}\\) is the line connected by \\(\\mathbf{q}_{j},\\mathbf{q}_{m}\\in\\mathcal{P}^{\\prime}_{\\mathcal{E},k-1}\\), which represent the closest point of \\(\\mathbf{\\bar{T}}_{k-1,k}\\cdot\\mathbf{p}_{i}\\) in \\(\\mathcal{P}^{\\prime}_{\\mathcal{E},k-1}\\) and the closest neighbor in the preceding and following scan lines to the \\(\\mathbf{q}_{j}\\) respectively. The constraint equation is formulated as the point-to-line distance: \\[\\begin{split} r_{\\mathcal{E},i}=\\|\\mathbf{v}_{j}\\times(\\mathbf{T }_{k-1,k}\\cdot\\mathbf{p}_{i})\\|,\\\\ \\mathbf{v}_{j}=\\frac{\\mathbf{q}_{j}-\\mathbf{q}_{m}}{\\|\\mathbf{q} _{j}-\\mathbf{q}_{m}\\|}.\\end{split} \\tag{11}\\] The \\(N_{E}\\times 1\\) edge feature error vector \\(\\mathbf{r}_{\\mathcal{E}}\\) is constructed by stacking all the normalized edge residuals(Line 10). _II.Point-to-plane constraint._ Let \\(\\mathbf{p}_{i}\\in\\mathcal{P}^{\\prime}_{\\mathcal{F},k},(\\mathcal{P}^{\\prime}_{ \\mathcal{G},k}),i=1\\cdots N_{F}(N_{G})\\) be a facade or ground feature point. 
The association of \\(\\mathbf{p}_{i}\\) is the plane constructed by \\(\\mathbf{q}_{j},\\mathbf{q}_{m},\\mathbf{q}_{n}\\) in the last ground and facade feature points, which represent the closest point of \\(\\mathbf{\\bar{T}}_{k-1,k}\\cdot\\mathbf{p}_{i}\\), the closest neighbor in the preceding and following scan lines to \\(\\mathbf{q}_{j}\\) and the closest neighbor in the same scan line to \\(\\mathbf{q}_{j}\\) respectively. The constraint equation is formulated as the point-to-plane distance: \\[\\begin{split} r_{\\mathcal{G},i}&=r_{\\mathcal{F},i} =\\mathbf{n}_{j}\\cdot(\\mathbf{T}_{k-1,k}\\cdot\\mathbf{p}_{i})\\,,\\\\ \\mathbf{n}_{j}&=\\frac{(\\mathbf{q}_{j}-\\mathbf{q}_{m} )\\times(\\mathbf{q}_{j}-\\mathbf{q}_{n})}{\\|(\\mathbf{q}_{j}-\\mathbf{q}_{m}) \\times(\\mathbf{q}_{j}-\\mathbf{q}_{n})\\|}.\\end{split} \\tag{12}\\] The \\(N_{F}\\times 1\\) facade feature error vector \\(\\mathbf{r}_{\\mathcal{F}}\\) and the \\(N_{G}\\times 1\\) ground feautre error vector \\(\\mathbf{r}_{\\mathcal{G}}\\) are constructed by stacking all normalized facade and ground residuals(Line 8-9). _III.Point-to-model intensity difference constraint._ The constraint equation is formulated as E.q.(7). The \\(N_{R}\\times 1\\) intensity feature error vector \\(\\mathbf{r}_{\\mathcal{R}}\\) is constructed by stacking all reflector features (Line 11). Figure 6: Overview of four different types of feature associations. (a) Reflector ; (b) Facade; (c) Edge; (d) Ground feature association. #### 3.4.2 Transformation estimation According to constraint models introduced above, the nonlinear least square (LS) function can be established for the transformation estimation (Line 12): \\[\\mathbf{\\tilde{T}}_{k-1,k}=\\operatorname*{argmin}_{\\mathbf{\\tilde{T}}_{k-1,k}} \\left(\\mathbf{r}_{\\mathcal{G}}^{\\top}\\mathbf{r}_{\\mathcal{G}}+\\mathbf{r}_{ \\mathcal{F}}^{\\top}\\mathbf{r}_{\\mathcal{F}}+\\mathbf{r}_{\\mathcal{E}}^{\\top} \\mathbf{r}_{\\mathcal{E}}+\\mathbf{r}_{\\mathcal{R}}^{\\top}\\mathbf{r}_{\\mathcal{ R}}\\right). \\tag{13}\\] The _special euclidean group_\\(\\exp\\left(\\boldsymbol{\\xi}_{k-1,k}^{\\wedge}\\right)=\\mathbf{T}_{k-1,k}\\) is implemented during the nonlinear optimization iteration. Then \\(\\mathbf{T}_{k-1,k}\\) can be incrementally updated by: \\[\\boldsymbol{\\xi}_{k-1,k}\\leftarrow\\boldsymbol{\\xi}_{k-1,k}+\\delta\\boldsymbol{ \\xi}_{k-1,k}. \\tag{14}\\] where \\[\\delta\\boldsymbol{\\xi}_{k-1,k}=\\left(\\mathbf{J}^{\\top}\\mathbf{J}\\right)^{-1} \\mathbf{J}^{\\top}\\mathbf{r},\\] \\[\\mathbf{J}=\\left[\\mathbf{J}_{\\mathcal{G},i}^{\\top}\\;\\cdots\\;\\mathbf{J}_{ \\mathcal{F},i}^{\\top}\\;\\cdots\\;\\mathbf{J}_{\\mathcal{E},i}^{\\top}\\;\\cdots\\; \\mathbf{J}_{\\mathcal{R},i}^{\\top}\\right]^{\\top}, \\tag{15}\\] \\[\\mathbf{r}=\\left[\\mathbf{r}_{\\mathcal{G},i}^{\\top}\\;\\cdots\\;\\mathbf{r}_{ \\mathcal{F},i}^{\\top}\\;\\cdots\\;\\mathbf{r}_{\\mathcal{E},i}^{\\top}\\;\\cdots\\; \\mathbf{r}_{\\mathcal{R},i}^{\\top}\\right]^{\\top}.\\]The Jacobian matrix of constraint equation w.r.t. \\(\\mathbf{\\xi}_{k-1,k}\\) is denoted as \\(\\mathbf{J}\\). Matrix components are listed as follow. 
The Jacobian matrix of the constraint equations w.r.t. \\(\\boldsymbol{\\xi}_{k-1,k}\\) is denoted as \\(\\mathbf{J}\\); its components are listed as follows.

\\[\\begin{split}\\mathbf{J}_{\\mathcal{G},i}&=\\frac{\\partial\\mathbf{r}_{\\mathcal{G},i}}{\\partial\\delta\\boldsymbol{\\xi}_{k-1,k}}=\\mathbf{n}_{j,m,n}^{\\top}\\cdot\\frac{\\partial(\\mathbf{T}_{k-1,k}\\mathbf{p}_{i})}{\\partial\\delta\\boldsymbol{\\xi}_{k-1,k}},\\\\ \\mathbf{J}_{\\mathcal{F},i}&=\\frac{\\partial\\mathbf{r}_{\\mathcal{F},i}}{\\partial\\delta\\boldsymbol{\\xi}_{k-1,k}}=\\mathbf{n}_{j,m,n}^{\\top}\\cdot\\frac{\\partial(\\mathbf{T}_{k-1,k}\\mathbf{p}_{i})}{\\partial\\delta\\boldsymbol{\\xi}_{k-1,k}},\\\\ \\mathbf{J}_{\\mathcal{E},i}&=\\frac{\\partial\\mathbf{r}_{\\mathcal{E},i}}{\\partial\\delta\\boldsymbol{\\xi}_{k-1,k}}=\\frac{(\\mathbf{v}_{j,m}^{\\wedge}(\\mathbf{T}_{k-1,k}\\mathbf{p}_{i}))^{\\top}}{\\|\\mathbf{v}_{j,m}^{\\wedge}(\\mathbf{T}_{k-1,k}\\mathbf{p}_{i})\\|}\\cdot\\mathbf{v}_{j,m}^{\\wedge}\\cdot\\frac{\\partial(\\mathbf{T}_{k-1,k}\\mathbf{p}_{i})}{\\partial\\delta\\boldsymbol{\\xi}_{k-1,k}},\\\\ \\mathbf{J}_{\\mathcal{R},i}&=\\frac{\\partial\\mathbf{r}_{\\mathcal{R},i}}{\\partial\\delta\\boldsymbol{\\xi}_{k-1,k}}=\\frac{\\partial\\mathbf{\\phi}(\\mathbf{T}_{k-1,k}\\mathbf{p}_{i})^{\\top}}{\\partial(\\mathbf{T}_{k-1,k}\\mathbf{p}_{i})}\\cdot\\frac{\\partial(\\mathbf{T}_{k-1,k}\\mathbf{p}_{i})}{\\partial\\delta\\boldsymbol{\\xi}_{k-1,k}}\\cdot\\mathbf{c}_{\\tilde{\\mathbf{q}}_{i}}.\\end{split} \\tag{16}\\]

### LiDAR mapping

There is always an inevitable error accumulation in the LiDAR odometry, resulting in a discrepancy \\(\\Delta\\mathbf{T}_{k}\\) between the estimated and actual pose. In other words, the estimated transform from the LiDAR odometry thread is not the exact transform from the LiDAR frame \\(\\{L\\}\\) to the world frame \\(\\{W\\}\\) but from \\(\\{L\\}\\) to the drifted world frame \\(\\{W^{\\prime}\\}\\):

\\[\\mathbf{T}_{w,k}=\\Delta\\mathbf{T}_{k}\\mathbf{T}_{w^{\\prime},k}. \\tag{17}\\]

One of the main tasks of the LiDAR mapping thread is optimizing the estimated pose from the LO thread by the scan-to-map registration (green S2M block in Fig.2). The other is managing the local static map (brown TVF and pink DOR blocks in Fig.2). The pseudo code is shown in Algorithm 2.

#### 3.5.1 **Local feature map construction**

In this paper, the pose-based local feature map construction scheme is applied. In particular, the pose prediction \\(\\mathbf{\\tilde{T}}_{w,k}\\) is calculated by Eq.(17) under the assumption that the drift between \\(\\Delta\\mathbf{T}_{k}\\) and \\(\\Delta\\mathbf{T}_{k-1}\\) is tiny (Line 2). Feature points scanned in the vicinity of \\(\\mathbf{\\tilde{T}}_{w,k}\\) are merged (Line 4) and filtered (Line 5) to construct the local map \\(\\mathcal{M}_{k}\\). Let \\(\\Gamma(\\cdot)\\) denote the filter, and \\(n\\in N\\) denote the timestamps of the surrounding scans. The local map is built by:

\\[\\mathcal{M}_{k}=\\Gamma\\left(\\sum_{n\\in N}\\mathbf{T}_{w,n}\\cdot\\mathcal{P}_{s,n}\\right). \\tag{18}\\]

The conventional voxel-based downsample filter voxelizes the point cloud and retains one point for each voxel. The coordinate of the retained point is the average over all points in the same voxel. However, for the point intensity, averaging may cause the loss of similarity between consecutive scans. To maintain the local characteristic of the point intensity, we utilize temporal information to improve the voxel-based downsample filter. In the TVF, a temporal window is set for the intensity average. Specifically, the coordinate of the downsampled point is still the mean of all points in the voxel, but the intensity is the mean of the points within the temporal window, i.e., \\(|t_{k}-t_{n}|<Th_{t}\\), where \\(t_{k}\\) and \\(t_{n}\\) represent the timestamps of the current scan and the selected point, respectively.
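A minimal sketch of the temporal voxel filter (TVF) described above is given below, assuming each point carries xyz coordinates, an intensity, and a timestamp. The voxel size and window threshold used as defaults are illustrative, not values from the paper, and falling back to all voxel points when the temporal window is empty is an assumption made here.

```python
import numpy as np
from collections import defaultdict

def temporal_voxel_filter(xyz, intensity, t_stamp, t_curr, voxel=0.4, th_t=1.0):
    """TVF sketch: keep one point per voxel; its coordinate is the mean of all
    points in the voxel, while its intensity is averaged only over points whose
    timestamps satisfy |t_curr - t_n| < th_t."""
    buckets = defaultdict(list)
    for i, key in enumerate(map(tuple, np.floor(xyz / voxel).astype(int))):
        buckets[key].append(i)
    out_xyz, out_int = [], []
    for idx in buckets.values():
        idx = np.asarray(idx)
        out_xyz.append(xyz[idx].mean(axis=0))
        recent = idx[np.abs(t_curr - t_stamp[idx]) < th_t]
        src = recent if recent.size else idx       # fall back if the window is empty
        out_int.append(intensity[src].mean())
    return np.vstack(out_xyz), np.asarray(out_int)
```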
#### 3.5.2 **Mapping update**

The categorized features are jointly registered with the feature maps in the same way as in the LiDAR odometry module. The low-drift pose transform \\(\\mathbf{T}_{w,k}\\) can be estimated by scan-to-map alignment (Lines 7-11). Since the distribution of feature points in the local map is disordered, point neighbors cannot be directly indexed through the scan line number. Accordingly, a K-D tree is utilized for nearest point searching, and the PCA algorithm calculates the normals and primary directions of neighbouring points. Finally, the obtained \\(\\mathbf{T}_{w,k}\\) is fed to the DOR filter to filter out dynamic points in the current scan. Only static points \\(\\mathcal{P}_{s,k}\\) are retained in the local feature map list (Lines 20-21). Moreover, the odometry reference drift is also updated by Eq.(17), i.e., \\(\\Delta\\mathbf{T}_{k}=\\mathbf{T}_{w,k}\\mathbf{T}_{w^{\\prime},k}^{-1}\\) (Line 18).

## 4 Experiments

In this section, the proposed InTEn-LOAM is evaluated qualitatively and quantitatively on both simulated and real-world datasets, covering various outdoor scenes. We first test the feasibility of each functional module, including the feature extraction module, the intensity-based scan registration, and the dynamic points removal. Then we conduct a comprehensive evaluation of InTEn-LOAM in terms of positioning accuracy and constructed map quality. During the experiments, the system processing LiDAR scans runs on a laptop computer with a 1.8 GHz quad-core CPU and 4 GiB of memory, on top of the robot operating system (ROS) in Linux. The simulated test environment was built based on the challenging scene provided by the DARPA Subterranean (SubT) Challenge1. We simulated a \\(1000m\\) long straight mine tunnel (see Fig.7(b)) with smooth walls and reflective signs that are alternately posted on both sides of the tunnel at \\(30m\\) intervals. Physical parameters of the simulated car, such as ground friction, sensor temperature, and humidity, are consistent with reality to the greatest extent. A 16-scanline LiDAR is mounted on top of the car. Transform groundtruths were exported at \\(100Hz\\). The real-world dataset was collected by an autonomous driving car with a 32-scanline LiDAR (see Fig.7(a)) in the autonomous driving test field, which contains a \\(150m\\) long straight tunnel. Moreover, the KITTI odometry benchmark2 was also utilized to compare with other state-of-the-art LO solutions.

Footnote 1: [https://github.com/osrf/subt](https://github.com/osrf/subt)

Footnote 2: [http://www.cvlibs.net/datasets/kitti/eval_odometry.php](http://www.cvlibs.net/datasets/kitti/eval_odometry.php)

### Functional module test

#### 4.1.1 **Feature extraction module**

We validated the feature extraction module on the real-world dataset. In the test, we set the edge feature extraction threshold as \\(Th_{E}=0.3\\), the facade feature extraction threshold as \\(Th_{F}=0.1\\), and the intensity difference threshold as \\(Th_{\\Delta I}=80\\), and partitioned the intensity image into \\(16\\times 4\\) blocks.

Figure 7: Dataset sampling platform. (a) Autonomous driving car; (b) Simulated mine car and scan example. Magenta laser points are reflected from brown signs in the simulated environments.
Fig.8 shows the feature extraction results. It can be seen that edges, planes, and reflectors can be correctly extracted in various road conditions. With the effect of ground segmentation, breakpoints on the ground (see the orange box region in Fig.8(a)) are correctly marked as plane features, avoiding the issue that breakpoints are wrongly marked as edge features due to their large roughness values. In the urban city scene, conspicuous intensity features can be easily found, such as landmarks and traffic lights (see Fig.8(b)). Though there are many plane features in the tunnel, few valid edge features can be extracted (see Fig.8(c)). In addition, sparse and scattered plant points with large roughness values (see the orange box region in Fig.8(d)) are filtered out as outliers with the help of the object clustering. According to the above results, some conclusions can be drawn: (1) The number of plane features is always much greater than that of edge features, especially in open areas, which may cause a constraint-imbalance issue during the multi-metric nonlinear optimization. (2) Static reflector features widely exist in real-world environments, are useful for feature-based scan alignment, and should not be ignored. (3) The adaptive intensity feature extraction approach makes it possible to manually add reflective targets in feature-degraded environments.

#### 4.1.2 **Intensity-based scan registration**

We validated the intensity-based scan registration method on the simulated dataset. To highlight the role of intensity-based scan registration, we quantitatively evaluated the relative accuracy of the proposed method and compared the result with prevalent geometry-based scan registration methods, i.e., the edge and surface feature registration of LOAM (Zhang and Singh, 2017), the multi-metric registration of MULLS (Pan et al., 2021), and the NDT of HDL-Graph-SLAM (Koide et al., 2019). The evaluation used the simulated tunnel dataset, which is a typical geometrically degraded environment. The measure used to evaluate the accuracy of scan registration is the relative transformation error. In particular, differences between the groundtruth \\(\\mathbf{T}_{k+1,k}^{GT}\\) and the estimated relative transformation \\(\\mathbf{T}_{k+1,k}\\) are calculated and represented as an error vector, i.e., \\(\\mathbf{r}_{k}=\\text{vec}(\\mathbf{T}_{k+1,k}^{GT}\\cdot\\mathbf{T}_{k+1,k}^{-1})\\). The norms of the translational and rotational parts of \\(\\mathbf{r}_{k}\\) are illustrated in Fig.9. Note that the intensity-based registration result only utilizes measurements from the intensity channel of the laser scan, instead of all information, including the range, bearing, and intensity of the laser scan. The figures show that all four rotation errors of the different approaches are less than \\(0.01^{\\circ}\\), while the errors of InTEn-LOAM and MULLS are less than \\(0.001^{\\circ}\\). This demonstrates that laser points from the tunnel wall and ground provide sufficient geometric constraints for accurate relative attitude estimation. However, there are significant differences in the relative translation errors (RTE). The intensity-based scan registration achieves the best RTE (less than \\(0.02m\\)), which is much better than the feature-based registration of LOAM and the NDT of HDL-Graph-SLAM (\\(0.4m\\) and \\(0.1m\\)), and better than the intensity-based weighting of MULLS (\\(0.05m\\)). The result proves the correctness and feasibility of the proposed intensity-based approach under the premise of sufficient intensity features. It also reflects the necessity of fusing the reflectance information of points in poorly structured environments.
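The following sketch shows how the per-frame relative transformation error used for Fig.9 can be computed, assuming both transforms are given as 4x4 homogeneous matrices; taking the rotation error as the angle of the residual rotation is an assumption made here for illustration.

```python
import numpy as np

def relative_transform_error(T_gt, T_est):
    """Error of one estimated relative transform: E = T_gt . T_est^{-1}.
    Returns (translation error norm [m], rotation error angle [deg])."""
    E = T_gt @ np.linalg.inv(T_est)
    t_err = np.linalg.norm(E[:3, 3])
    cos_a = np.clip((np.trace(E[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    r_err = np.degrees(np.arccos(cos_a))   # angle of the residual rotation
    return t_err, r_err
```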
#### 4.1.3 **Dynamic object removal**

We validated the DOR module on Seqs. 07 and 10 of the KITTI odometry dataset. The test result was evaluated qualitatively, i.e., by marking the dynamic points in each scan frame and judging the accuracy of the dynamic object segmentation against the actual real-world targets that the dynamic points correspond to. Fig.10 exhibits DOR examples for a single frame of laser scan in typical urban driving scenes. It can be seen that dynamic objects, such as vehicles crossing the intersection and vehicles and pedestrians traveling in front of/behind the data collection car, can be correctly segmented by the proposed DOR approach regardless of whether the sampling vehicle is stationary or in motion. Fig.11 shows the constructed maps at two representative areas, i.e., an intersection and a busy road. The maps were incrementally built by the LOAM (without DOR) and InTEn-LOAM (with DOR) methods. The figure shows that the map built by InTEn-LOAM is better, since the DOR module effectively filters out dynamic points and helps to accumulate a purely static point map. In contrast, the map constructed by LOAM contains a large number of 'ghost points', increasing the possibility of erroneous point matching. In general, the above results prove that the DOR method proposed in this paper is able to correctly segment dynamic objects in a scan frame. However, it also has some shortcomings. For instance, (1) the proposed comparison-based DOR filter is sensitive to the quality of the laser scan and the density of the local points map, causing the omission or mis-marking of some dynamic points (see the green circle box in the top of Fig.10(a) and the red rectangle box in the bottom of Fig.10(b)); (2) dynamic points in the first scan frame cannot be marked using the proposed approach (see the red rectangle box in the top of Fig.10(b)).

### Pose transform estimation accuracy

#### 4.2.1 **KITTI dataset**

The quantitative evaluations were conducted on the KITTI odometry dataset, which is composed of 11 sequences of laser scans captured by a Velodyne HDL-64E LiDAR with GPS/INS groundtruth poses. We followed the odometry evaluation criterion from Geiger et al. (2012) and used the average relative translation and rotation errors (RTE and RRE) within a certain distance range for the accuracy evaluation. The performance of the proposed InTEn-LOAM and six other state-of-the-art LiDAR odometry solutions, whose results are taken from their original papers, is reported in Table 1. Plots of the average RTE/RRE over fixed lengths are exhibited in Fig.12. Note that none of the compared methods incorporates a loop closure module, for a more objective accuracy comparison. Moreover, an intrinsic angle correction of \\(0.2^{\\circ}\\) is applied to the KITTI raw scan data for better performance (Pan et al., 2021). Fig.12 demonstrates that the accuracies in different length ranges are stable, and the maximums of the average RTE and RRE are less than \\(0.32\\%\\) and \\(0.22^{\\circ}/100m\\). It can also be seen from the table that the average RTE and RRE of InTEn-LOAM are \\(0.54\\%\\) and \\(0.26^{\\circ}/100m\\), which outperforms the LOAM accuracy of \\(0.84\\%\\).
The comprehensive comparison shows that InTEn-LOAM is superior or equal to the current state-of-the-art LO methods. Although the result of MULLS is slightly better than that of InTEn-LOAM, the contribution of InTEn-LOAM is significant considering its excellent performance in the long straight tunnel with reflective markers. InTEn-LOAM costs around \\(90ms\\) per frame of scan, with about 3k and 30k feature points in the current scan and the local map, respectively.

\\begin{table} \\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \\hline Method & \\#00U & \\#01H & \\#02C & \\#03C & \\#04C & \\#05C & \\#06U & \\#07U & \\#08U & \\#09C & \\#10C & Avg. & time [s] \\\\ \\hline \\hline LOAM & 0.78/- & 1.43/- & 0.92/- & 0.86/- & 0.71/- & 0.52/- & 0.65/- & 0.63/- & 1.12/- & 0.77/- & 0.79/- & 0.84/- & 0.10 \\\\ \\hline MILS-SLAM & **0.50/-** & 0.82/- & **0.53**- & 0.68/- & **-0.33**- & 0.32/- & 0.33/- & **0.33/-** & **0.83/-** & 0.53/- & 0.57/- & – & 1.25 \\\\ \\hline MC2SLAM & **0.51**- & 0.79/- & **0.54**- & 0.65/- & 0.44/- & **0.27**- & 0.31/- & 0.34/- & 0.84/- & 0.46/- & **0.52**- & 0.56/- & 0.10 \\\\ \\hline SuMo & 0.700/- & 0.30 & 1.70/- & 0.50/- & 1.00/- & 0.70/- & 0.50/- & 0.40/- & **0.20**- & 0.50/- & 0.70/- & 0.60/- & 0.07 \\\\ \\hline \\end{tabular} \\end{table} Table 1: Quantitative evaluation and comparison on the KITTI odometry dataset: average RTE [\\%] / RRE [\\(^{\\circ}\\)/100m] per sequence, overall average, and average runtime per scan [s].

Accordingly, the proposed LO method is able to operate faster than \\(10Hz\\) on average for all KITTI odometry sequences and achieve real-time performance. For an in-depth analysis, three representative sequences, i.e., Seqs. 00, 01, and 05, were selected. Seq.00 is an urban road dataset with the longest traveling distance; it contains both large and small loop closures, while its geometric features are extremely rich. Consequently, the sequence is suitable for visualizing the trajectory drift of InTEn-LOAM. Seq.01 is a highway dataset with the fastest driving speed. Due to the lack of geometric features in the highway neighborhood, it is the most challenging sequence in the KITTI odometry dataset. Seq.05 is a country road sequence with great variation in elevation and rich structured features. For Seq.01, it can be seen from Fig.13(c) that areas with landmarks are circled by blue bounding boxes, while magenta boxes highlight road signs on the roadside. The drift of the Seq.01 trajectory estimated by InTEn-LOAM is quite small (see Fig.13(d)), which reflects that the roadside guideposts can be utilized as reflector features owing to their highly reflective surfaces and are conducive to improving the LO performance in such geometrically sparse highway environments. The result also proves that the proposed InTEn-LOAM is capable of adaptively mining and fully exploiting the geometric and intensity features in the surrounding environments, which ensures the LO system can accurately and robustly estimate the vehicle pose even in challenging scenarios. For Seqs. 00 and 05, both point cloud maps show excellent consistency in the small loop closure areas (see the blue bounded regions in Fig.13(a) and (e)), which indicates that InTEn-LOAM has good local consistency. However, in large-scale loop closure areas, such as the endpoint, the global trajectory drifts incur a stratification issue in the point cloud maps (see the red bounded regions in Fig.13(a) and (e)), which is especially significant in the vertical direction (see the plane trajectory plots in Fig.13(b) and (f)).
This phenomenon occurs because the constraints in the z-direction are insufficient in comparison with the other directions of the state space, since only ground features provide constraints for the z-direction during the point cloud alignment.

#### 4.2.2 **Autonomous driving dataset**

The other quantitative evaluation test was conducted on the autonomous driving field dataset, whose groundtruth is the trajectory output of the onboard positioning and orientation system (POS). There is a \\(150m\\) long tunnel in the data acquisition environment, which is extremely challenging for most LO systems. The root mean square errors (RMSE) of the horizontal position and yaw angle were used as indicators of the absolute state accuracy. LOAM, MULLS, and HDL-Graph-SLAM were utilized as control groups, whose results are listed in Table 2. Both LOAM and HDL-Graph-SLAM failed to localize the vehicle, with \\(34.654m\\) and \\(141.498m\\) positional errors, respectively. MULLS and the proposed InTEn-LOAM are still able to function properly, with \\(7.043m\\) and \\(2.664m\\) of positioning error and \\(1.403^{\\circ}\\) and \\(0.476^{\\circ}\\) of heading error, respectively, within the path range of \\(1.5km\\). To further investigate the causes of this result, we plotted the cumulative distributions of the absolute errors and the horizontal trajectories of the LO systems, as shown in Fig.14.

\\begin{table} \\begin{tabular}{|c|c c c|c|} \\hline \\multirow{2}{*}{method} & \\multicolumn{3}{c|}{Positioning error [m]} & \\multicolumn{1}{c|}{Heading error [\\(^{\\circ}\\)]} \\\\ \\cline{2-5} & \\(x\\) & \\(y\\) & horizontal & yaw \\\\ \\hline LOAM & 29.478 & 18.220 & 34.654 & 1.586 \\\\ \\hline HDL-Graph-SLAM & 119.756 & 75.368 & 141.498 & 2.408 \\\\ \\hline MULLS-LO & 4.133 & 5.705 & 7.043 & 1.403 \\\\ \\hline **InTEn-LOAM** & **1.851** & **1.917** & **2.664** & **0.476** \\\\ \\hline \\end{tabular} \\end{table} Table 2: Quantitative evaluation and comparison on the autonomous driving field dataset.

From the trajectory plot, we can see that the overall trajectory drifts of InTEn-LOAM and MULLS are relatively small, indicating that these two approaches can accurately localize the vehicle in this challenging scene by incorporating intensity features into the point cloud registration and using intensity information for the feature weighting. The estimated position of an LO inevitably suffers from error accumulation, which is the culprit behind trajectory drift. It can be seen from the cumulative distribution of absolute errors that the absolute positioning error of InTEn-LOAM is no more than \\(10m\\), and the attitude error is no more than \\(1.5^{\\circ}\\). The overall trends of the rotational errors of the other three systems are consistent with that of InTEn-LOAM. The results in Table 2 also verify that their rotation errors are similar. The cumulative distribution curves of the absolute positioning errors of LOAM and HDL-Graph-SLAM do not exhibit smooth growth but a steep increase in some intervals. This phenomenon reflects the existence of anomalous registrations in these regions, which is consistent with the fact that the scan registration-based motion estimation in the tunnel is degraded. MULLS, which incorporates intensity measures via feature constraint weighting, presents a smooth curve similar to that of InTEn-LOAM. However, its absolute errors of positioning (no more than \\(19m\\)) and heading (no more than \\(2.5^{\\circ}\\)) are both larger than those of our proposed LO system.
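A small sketch of how these absolute-accuracy indicators can be computed from time-aligned estimated and groundtruth trajectories is given below; the function and variable names are illustrative. Defining the horizontal RMSE over the combined x/y offsets makes it the root of the sum of the squared x and y RMSEs, which is consistent with the values reported in Table 2.

```python
import numpy as np

def absolute_state_rmse(xy_est, xy_gt, yaw_est, yaw_gt):
    """Horizontal-position and yaw RMSE indicators.
    xy_*: (N, 2) arrays of x/y positions [m]; yaw_*: (N,) arrays in radians."""
    d_xy = xy_est - xy_gt
    rmse_x = np.sqrt(np.mean(d_xy[:, 0] ** 2))
    rmse_y = np.sqrt(np.mean(d_xy[:, 1] ** 2))
    rmse_h = np.sqrt(np.mean(np.sum(d_xy ** 2, axis=1)))            # horizontal
    d_yaw = np.arctan2(np.sin(yaw_est - yaw_gt), np.cos(yaw_est - yaw_gt))
    rmse_yaw = np.degrees(np.sqrt(np.mean(d_yaw ** 2)))             # yaw [deg]
    return rmse_x, rmse_y, rmse_h, rmse_yaw
```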
We also plotted the RTE and RRE of all four approaches (see Fig.15). It can be seen that the differences in RRE between the four systems are small, indicating that the heading estimation of these LO systems does not deteriorate in the geometrically degraded long straight tunnel. In contrast, the relative translation errors are quite different. Both LOAM and HDL-Graph-SLAM suffer from serious scan registration drifts, while MULLS and InTEn-LOAM are able to localize normally and achieve very close relative accuracy.

### Point cloud map quality

#### 4.3.1 **Large-scale urban scenario**

The qualitative evaluations were conducted by visually comparing the map constructed by InTEn-LOAM with the reference map. The reference map is built by merging each frame of laser scan using its groundtruth pose. The maps of Seqs. 06 and 10 are displayed in Fig.16 and Fig.17, which are the urban scenario with trajectory loops and the country road scenario without loops, respectively. Although the groundtruth is the post-processing result of the POS and its absolute accuracy reaches centimeter level, the directly merged point map is blurred in the local view. By contrast, the maps built by InTEn-LOAM show better local consistency, and various small targets, such as trees, vehicles, and fences, can be clearly distinguished in the point map. The above results prove that the relative accuracy of InTEn-LOAM outperforms that of the GPS/INS post-processing solution, which is very critical for mapping tasks. In addition, we constructed the complete point cloud map of the test field using InTEn-LOAM and compared the result with the local remote sensing image, as shown in Fig.19. It can be seen that the consistency between the constructed point cloud map and the regional remote sensing image is good, qualitatively reflecting that the proposed InTEn-LOAM has excellent localization and mapping capability without error accumulation over the roughly \\(2km\\) long exploration journey.

## 5 Conclusions

In this work, we present a LiDAR-only odometry and mapping solution named InTEn-LOAM to cope with challenging issues such as dynamic environments and intensity channel incorporation. A temporal-based dynamic object removal method and a novel intensity-based scan registration approach are proposed, and both of them are utilized to improve the performance of LOAM. The proposed system is evaluated on both simulated and real-world datasets. The results show that InTEn-LOAM achieves similar or better accuracy in comparison with the state-of-the-art LO solutions in normal environments, and outperforms them in challenging scenarios such as long straight tunnels. Since the LiDAR-only method cannot adapt to aggressive motion, our future work involves developing a tightly coupled IMU/LiDAR method to improve the robustness of motion estimation.

## References

* Behley and Stachniss (2018) Behley, J., Stachniss, C., 2018. Efficient surfel-based slam using 3d laser range data in urban environments, in: Robotics: Science and Systems.
* Besl and McKay (1992) Besl, P.J., McKay, N.D., 1992. Method for registration of 3-d shapes, in: Sensor fusion IV: control paradigms and data structures, International Society for Optics and Photonics. pp. 586-606.
* Biber and Strasser (2003) Biber, P., Strasser, W., 2003. The normal distributions transform: A new approach to laser scan matching, in: Proceedings 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003)(Cat. No. 03CH37453), IEEE. pp. 2743-2748.
* Bogoslavskyi and Stachniss (2017) Bogoslavskyi, I., Stachniss, C., 2017. Efficient online segmentation for sparse 3d laser scans. PFG-Journal of Photogrammetry, Remote Sensing and Geoinformation Science 85, 41-52. * Campos et al. (2020) Campos, C., Elvira, R., Rodriguez, J.J.G., Montiel, J.M., Tardos, J.D., 2020. Orb-slam3: An accurate open-source library for visual, visual-inertial and multi-map slam. arXiv preprint arXiv:2007.11898. * Dewan et al. (2016) Dewan, A., Caselitz, T., Tipaldi, G.D., Burgard, W., 2016. Motion-based detection and tracking in 3d lidar scans, in: 2016 IEEE international conference on robotics and automation (ICRA), IEEE. pp. 4508-4513. * Ding et al. (2020) Ding, W., Hou, S., Gao, H., Wan, G., Song, S., 2020. Lidar inertial odometry aided robust lidar localization system in changing city scenes, in: 2020 IEEE International Conference on Robotics and Automation (ICRA), IEEE. pp. 4322-4328. * Droeschel et al. (2017) Droeschel, D., Schwarz, M., Behnke, S., 2017. Continuous mapping and localization for autonomous navigation in rough terrain using a 3d laser scanner. Robotics and Autonomous Systems 88, 104-115. * Dube et al. (2020) Dube, R., Cramariuc, A., Dugas, D., Sommer, H., Dymczyk, M., Nieto, J., Siegwart, R., Cadena, C., 2020. Segmap: Segment-based mapping and localization using data-driven descriptors. The International Journal of Robotics Research 39, 339-355. * Ebadi et al. (2020) Ebadi, K., Chang, Y., Palieri, M., Stephens, A., Hatteland, A., Heiden, E., Thakur, A., Funabiki, N., Morrell, B., Wood, S., et al., 2020. Lamp: Large-scale autonomous mapping and positioning for exploration of perceptually-degraded subterranean environments, in: 2020 IEEE International Conference on Robotics and Automation (ICRA), IEEE. pp. 80-86. * Filipenko and Afanasyev (2018) Filipenko, M., Afanasyev, I., 2018. Comparison of various slam systems for mobile robot in an indoor environment, in: 2018 International Conference on Intelligent Systems (IS), IEEE. pp. 400-407. * Favaro et al. (2017)Furukawa, T., Dantanarayana, L., Ziglar, J., Ranasinghe, R., Dissanayake, G., 2015. Fast global scan matching for high-speed vehicle navigation, in: 2015 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), IEEE. pp. 37-42. * Geiger et al. (2012) Geiger, A., Lenz, P., Urtasun, R., 2012. Are we ready for autonomous driving? the kitti vision benchmark suite, in: 2012 IEEE conference on computer vision and pattern recognition, IEEE. pp. 3354-3361. * Guo et al. (2020) Guo, Y., Wang, H., Hu, Q., Liu, H., Liu, L., Bennamoun, M., 2020. Deep learning for 3d point clouds: A survey. IEEE transactions on pattern analysis and machine intelligence * Himmelsbach et al. (2010) Himmelsbach, M., Hundelshausen, F.V., Wuensche, H.J., 2010. Fast segmentation of 3d point clouds for ground vehicles, in: 2010 IEEE Intelligent Vehicles Symposium, IEEE. pp. 560-565. * Jiao et al. (2020) Jiao, J., Ye, H., Zhu, Y., Liu, M., 2020. Robust odometry and mapping for multi-lidar systems with online extrinsic calibration. arXiv preprint arXiv:2010.14294. * Khan et al. (2016) Khan, S., Wollherr, D., Buss, M., 2016. Modeling laser intensities for simultaneous localization and mapping. IEEE Robotics and Automation Letters 1, 692-699. * Khoury et al. (2017) Khoury, M., Zhou, Q.Y., Koltun, V., 2017. Learning compact geometric features, in: Proceedings of the IEEE international conference on computer vision, pp. 153-161. * Kim and Kim (2020) Kim, G., Kim, A., 2020. 
Remove, then revert: Static point cloud map construction using multiresolution range images, in: IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE/RSJ. * Koide et al. (2019) Koide, K., Miura, J., Menegatti, E., 2019. A portable three-dimensional lidar-based system for long-term and wide-area people behavior measurement. International Journal of Advanced Robotic Systems 16, 1729881419841532. * Li et al. (2021) Li, K., Li, M., Hanebeck, U.D., 2021. Towards high-performance solid-state-lidar-inertial odometry and mapping. IEEE Robotics and Automation Letters * Li et al. (2020) Li, S., Li, G., Wang, L., Qin, Y., 2020. Slam integrated mobile mapping system in complex urban environments. ISPRS Journal of Photogrammetry and Remote Sensing 166, 316-332. * Lin et al. (2021) Lin, J., Zheng, C., Xu, W., Zhang, F., 2021. R2live: A robust, real-time, lidar-inertial-visual tightly-coupled state estimator and mapping. arXiv preprint arXiv:2102.12400. * Lu et al. (2019) Lu, W., Wan, G., Zhou, Y., Fu, X., Yuan, P., Song, S., 2019. Deepvcp: An end-to-end deep neural network for point cloud registration, in: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 12-21. * Milz et al. (2018) Milz, S., Arbeiter, G., Witt, C., Abdallah, B., Yogamani, S., 2018. Visual slam for automated driving: Exploring the applications of deep learning, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 247-257. * Moosmann and Stiller (2011) Moosmann, F., Stiller, C., 2011. Velodyne slam, in: 2011 ieee intelligent vehicles symposium (iv), IEEE. pp. 393-398. * Palieri et al. (2020) Palieri, M., Morrell, B., Thakur, A., Ebadi, K., Nash, J., Chatterjee, A., Kanellakis, C., Carlone, L., Guaragnella, C., Agha-mohammadi, A.a., 2020. Locus: A multi-sensor lidar-centric solution for high-precision odometry and 3d mapping in real-time. IEEE Robotics and Automation Letters 6, 421-428. * Pan et al. (2021) Pan, Y., Xiao, P., He, Y., Shao, Z., Li, Z., 2021. Mulls: Versatile lidar slam via multi-metric linear least square. arXiv preprint arXiv:2102.03771. * Qin et al. (2020) Qin, C., Ye, H., Pranata, C.E., Han, J., Zhang, S., Liu, M., 2020. Lins: A lidar-inertial state estimator for robust and efficient navigation, in: 2020 IEEE International Conference on Robotics and Automation (ICRA), IEEE. pp. 8899-8906. * Qin et al. (2018) Qin, T., Li, P., Shen, S., 2018. Vins-mono: A robust and versatile monocular visual-inertial state estimator. IEEE Transactions on Robotics 34, 1004-1020. * Rusu et al. (2009) Rusu, R.B., Blodow, N., Beetz, M., 2009. Fast point feature histograms (fpfh) for 3d registration, in: 2009 IEEE international conference on robotics and automation, IEEE. pp. 3212-3217. * Segal et al. (2009) Segal, A., Haehnel, D., Thrun, S., 2009. Generalized-icp., in: Robotics: science and systems, Seattle, WA. p. 435. * Servos and Waslander (2017) Servos, J., Waslander, S.L., 2017. Multi-channel generalized-icp: A robust framework for multi-channel scan registration. Robotics and Autonomous systems 87, 247-257. * Shan and Englot (2018) Shan, T., Englot, B., 2018. Lego-loam: Lightweight and ground-optimized lidar odometry and mapping on variable terrain, in: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE. pp. 4758-4765. * Shan et al. (2020) Shan, T., Englot, B., Meyers, D., Wang, W., Ratti, C., Rus, D., 2020. Lio-sam: Tightly-coupled lidar inertial odometry via smoothing and mapping. 
arXiv preprint arXiv:2007.00258. * Sommer et al. (2020) Sommer, C., Usenko, V., Schubert, D., Demmel, N., Cremers, D., 2020. Efficient derivative computation for cumulative b-splines on lie groups, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11148-11156. * Wang and Wang (2021) Wang, H., Wang, C., Xie, L., 2021. Intensity-slam: Intensity assisted localization and mapping for large scale environment. IEEE Robotics and Automation Letters 6, 1715-1721. * Yang et al. (2018) Yang, S., Zhu, X., Nian, X., Feng, L., Qu, X., Ma, T., 2018. A robust pose graph approach for city scale lidar mapping, in: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE. pp. 1175-1182. * Yin et al. (2020) Yin, D., Zhang, Q., Liu, J., Liang, X., Wang, Y., Maanpaa, J., Ma, H., Hyyppai, J., Chen, R., 2020. Cae-lo: Lidar odometry leveraging fully unsupervised convolutional auto-encoder for interest point detection and feature description. arXiv preprint arXiv:2001.01354. * Yokozuka et al. (2021) Yokozuka, M., Koide, K., Oishi, S., Banno, A., 2021. Litamin2: Ultra light lidar-based slam using geometric approximation applied with kl-divergence. arXiv preprint arXiv:2103.00784. * Yoon et al. (2019) Yoon, D., Tang, T., Barfoot, T., 2019. Mapless online detection of dynamic objects in 3d lidar, in: 2019 16th Conference on Computer and Robot Vision (CRV), IEEE. pp. 113-120. * Zhang and Singh (2017) Zhang, J., Singh, S., 2017. Low-drift and real-time lidar odometry and mapping. Autonomous Robots 41, 401-416. * Zhao et al. (2019) Zhao, S., Fang, Z., Li, H., Scherer, S., 2019. A robust laser-inertial odometry and mapping method for large-scale highway environments, in: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE. pp. 1285-1292. * Zhou et al. (2021) Zhou, B., He, Y., Qian, K., Ma, X., Li, X., 2021. S4-slam: A real-time 3d lidar slam system for ground/watersurface multi-scene outdoor applications. Autonomous Robots 45, 77-98. * Zong et al. (2018) Zong, W., Li, G., Li, M., Wang, L., Li, S., 2018. A survey of laser scan matching methods. Chinese Optics 11, 914-930. Figure 8: Feature extraction results in different scenes. (a) Open road; (b) City avenue; (c) Long straight tunnel; (d) Roadside green belt. (plane, reflector, edge and raw scan points). Objects in the real-world scenes and their counterparts in laser scans are circled by boxes (reflector features, edge 28), sutures, some special areas). Figure 9: Relative error plots. (a) Relative translation error curves; (b) Relative rotation error curves. (NDT of HDL-Graph-SLAM, feature-based registration approach of LOAM, the proposed intensity-based registration approach). Figure 10: DOR examples for a single frame of laser scan. (a) Seq.07. Vehicles crossing the intersection when the data collection vehicle stops and waits for the traffic light (top); The cyclist traveling in the opposite direction when the data collection vehicle driving along the road (bottom). (b) Seq.10. Followers behind the data collection vehicle as it travels down the highway at high speed (top); Vehicles driving in the opposite direction and in front of the data collection vehicle when it slows down (bottom). (ficade, ground, edge and dynamics for points true positive, false positive and true negative for dynamic segmentation boxes.) Figure 11: The comparison between local maps of LOAM and InTEn-LOAM. (a) Map at the intersection (b) Map at the busy road. 
In each subfigure, the top represents the map of LOAM w/o DOR, while the bottom represents the map of InTEn-LOAM w/ DOR. Figure 12: The average RTE and RRE of InTEn-LOAM over fixed lengths. (a) RTE; (b) RRE. Figure 13: Constructed point maps with details and estimated trajectories. (a), (c), (e) maps of Seq.00, 01, and 05; (b), (d), (f) trajectories of Seq.00, 01, and 05 (groundtruths and InTEn-LOAM). Figure 14: Cumulative distributions of absolute state errors and estimated trajectories. (a) Cumulative distributions of the absolute positioning errors; (b) Cumulative distributions of the absolute rotational errors; (c) Estimated trajectories. (InTEn-LOAM, LOAM, HDL-Graph-SLAM, groundtruth) Figure 15: The average RTE and RRE of the LO systems over fixed lengths. (a) RTE; (b) RRE. Figure 16: InTEn-LOAM’s map result on the urban scenario (KITTI Seq.06): (a) overview, (b) detailed map of the circled areas, (c) reference map comparison. Figure 17: InTEn-LOAM’s map result on the country scenario (KITTI Seq.10): (a) overview, (b) detailed map of the circled areas, (c) reference map comparison. ## Appendix A Figure 18: LO systems’ map results on the autonomous driving field dataset in the tunnel region. (a) InTEn-LOAM, (b) LOAM, (c) HDL-Graph-SLAM, (d) MULLS. Figure 19: InTEn-LOAM’s map result on the autonomous driving field dataset. (a) the constructed point cloud map, (b) local remote sensing image and estimated trajectory.
Traditional LiDAR odometry (LO) systems mainly leverage geometric information obtained from the traversed surroundings to register laser scans and estimate LiDAR ego-motion, but this may be unreliable in dynamic or unstructured environments. This paper proposes InTEn-LOAM, a low-drift and robust LiDAR odometry and mapping method that fully exploits implicit information of laser sweeps (i.e., geometric, intensity, and temporal characteristics). Scanned points are projected to cylindrical images, which facilitate the efficient and adaptive extraction of various types of features, i.e., ground, beam, facade, and reflector. We propose a novel intensity-based point registration algorithm and incorporate it into the LiDAR odometry, enabling the LO system to jointly estimate the LiDAR ego-motion using both geometric and intensity feature points. To eliminate the interference of dynamic objects, we propose a temporal-based dynamic object removal approach to filter them out before the map update. Moreover, the local map is organized and downsampled using a temporal-related voxel grid filter to maintain the similarity between the current scan and the static local map. Extensive experiments are conducted on both simulated and real-world datasets. The results show that the proposed method achieves similar or better accuracy w.r.t. the state of the art in normal driving scenarios and outperforms geometric-based LO in unstructured environments. keywords: SLAM; LiDAR odometry; dynamic removal; point intensity; scan registration
# McCatch: Scalable Microcluster Detection in Dimensional and Nondimensional Datasets Braulio V. Sanchez Vinces _ICMC, University of Sao Paulo, Brazil_ [email protected] Robson L. F. Cordeiro _SCS, Carnegie Mellon University, USA_ [email protected] Christos Faloutsos _SCS, Carnegie Mellon University, USA_ [email protected] ## I Introduction How could we have a method that detects microclusters of outliers even in a nondimensional dataset? How to rank together both singleton ('one-off' outliers) and nonsingleton microclusters according to their anomaly scores? Can we define the scores in a principled way? Also, how to do that in a scalable and 'hands-off' manner? Outlier detection has many applications and extensive literature [1, 2, 3]. The discovery of microclusters of outliers is among its most challenging tasks. It happens because these outliers have close neighbors that make most algorithms fail [4, 5, 6]. For example, see the red elements in the plots of Fig. 1(i) and Fig. 2. Microclusters are critical for settings such as fraud detection and prevention of coordinated terrorist attacks, to name a few, because they indicate coalition or repetition as compared to 'one-off' outliers. For example, a microcluster (or simply'mc', for short) can be formed from: (i) frauds exploiting the same vulnerability in cybersecurity; (ii) reviews made by bots to illegitimately defame a product in e-commerce, or; (iii) unusual purchases of a hazardous chemical product made by ill-intended people. Its discovery and comprehension is, therefore, highly desirable. We present a new microcluster detector named \\(\\mathbf{McCatch}\\) - from \\(\\mathbf{Microcluster}\\)\\(\\mathbf{Catch}\\). The main idea is to leverage our proposed 'Oracle' plot, which is a plot of \\(1\\)NN Distance versus Group \\(1\\)NN Distance with the latter being the distance from a cluster of data elements to its nearest neighbor. Our goals are: 1. **General Input:** to work with any metric dataset, including nondimensional ones, such as sets of graphs, short texts, fingerprints, sequences of DNA, documents, etc. 2. **General Output:** to rank singleton ('one-off' outliers) and nonsingleton mcs _together_, by their anomalousness. 3. **Principled:** to obey axioms. 4. **Scalable:** to be subquadratic on the number of elements. 5. **'Hands-Off':** to be automatic with no manual tuning. We studied \\(31\\) datasets with up to \\(1\\)M elements to show that McCatch achieves all five goals, while \\(11\\) of the closest state-of-the-art competitors fail. Fig. 1 showcases McCatch's ability to process dimensional and _nondimensional_ data: (i) On vector, \\(3\\)d data from a satellite image of Shanghai, it spots two \\(2\\)-elements mcs of buildings with unusually colored roofs, and; a few other outliers. On nondimensional data of last names (ii) and skeletons (iii), it gives high anomaly scores to the few nonenglish names and wild-animal skeletons. The details of this experiment are given later; see Sec. V. 
## II Problem & Related Work ### _Problem Statement_ **Problem 1** (**Main problem**): _It is as follows:_ * _a metric dataset_ \\(\\mathbf{P}\\ =\\ \\{\\mathbf{p}_{1},\\ldots\\mathbf{p}_{n}\\}\\)_, where_ \\(\\mathbf{p}_{i}\\) _is a data element ('point', in a dimensional case);_ * _a distance/dis-similarity function_ \\(\\mathbf{f}(\\mathbf{p}_{i},\\mathbf{p}_{i^{\\prime}})\\) _(e.g., Euclidean /_ \\(L_{p}\\) _for dimensional data; provided by domain expert for non-dimensional data)._ * _Find:_ **(i)** _a set of disjoint microclusters_ \\(\\mathcal{M}=\\{\\mathbf{M}_{1},\\ldots\\mathbf{M}_{m}\\}\\)_, ranked most-strange-first, and_ **(ii)** _the set of corresponding anomaly scores_ \\(\\mathcal{S}=\\{s_{1},\\ldots s_{m}\\}\\)__ * _to match human intuition (see the axioms in Fig._ 2_)._ For ease of explanation, from now on, we shall describe our algorithm using the term 'point' for each data element. However, notice that the algorithm only needs a distance function between two elements - NOT coordinates. ### _Related Work_ There is a huge literature on outlier and microcluster detection. However, as we show in Tab. I, only our McCatch meets all the specifications. Next, we go into more detail. _Related work vs. goals:_ Outlier detection has many applications, including finance [7, 8], manufacturing [9, 10], environmental monitoring [11, 12], to name a few. It is thus covered by extensive literature [1, 2, 3]. The existing methodscan be categorized in various ways, for example, based on the measures they employ, including density-, depth-, angle- and distance-based methods, or by their modeling strategy, such as statistical-modeling, clustering, ensemble, etc., among others. This section presents the related work in a nontraditional way. We describe it considering the goals of our introductory section as we understand an ideal method should work with a General Input (G1) and give a General Output (G2) that is Principled (G3) in a Scalable (G4) and 'Hands-Off' (G5) way. _General Input- goal G1:_ Many methods fail w.r.t. G1. It includes famous isolation-based detectors, e.g., iForest [18], Gen2Out [4], and SCiForest [6], other tree-based methods, such as XTreK [25] and DIAD [16], hash-based approaches, e.g., Sparkx [24], angle-based ones, like ABOD/FastABOD [13], some clustering-based methods, e.g., KMeans- [30] and PLDOF [23], and even acclaimed deep-learning-based detectors, such as RDA [28], DOIForest [27], and Deep SVDD [26]. They all require access to explicit feature values. Note that **embedding** may allow these methods to work on nondimensional data. Turning elements of a metric or non-metric space into low-dimensionality vectors that preserve the given distances is exactly the problem of multi-dimensional scaling [32] and more recently t-SNE and UMAP [33]. However, these strategies have two disadvantages: (i) they are _quadratic_ on the count of elements [34], and; (ii) they require as input the embedding dimensionality. Distinctly, density- and distance-based detectors - as well as some clustering methods that detect outliers as a byproduct of the process, like DBSCAN [29] and OPTICS [31] - may handle nondimensional data if adapted to work with a suitable distance function, and, ideally, also with a metric tree, like a Slim-tree [35] or an M-tree [36]. Examples are LOCI [14], LOF [21], GLOSH [17], kNN-Out [19], DB-Out [15], ODIN [22], LDOF [20], and D.MCA [5]. However, faster hypercube-based versions of some of these methods, e.g., ALOCI [14], require the features. 
_General Output- goal G2:_ Most methods fail in G2. They miss every mc whose points have close neighbors, like ABOD, iForest, LOCI, Deep SVDD, RDA, GLOSH, kNN-Out, LOF, DB-Out, ODIN, DIAD, Sparkx, XTreK, and DOIForest; or, fail to group these points into an entity with a score, e.g., D.MCA, SCiForest, LDOF, PLDOF, DBSCAN, OPTICS, and KMeans-. Gen2Out is the only exception. _Principled- goal G3:_ Goal G3 regards the generation of scores in a principled manner for both singleton and nonsingleton mcs. Gen2Out is the only method that provides scores for microclusters; thus, all other methods fail to achieve G3. Unfortunately, Gen2Out also fails w.r.t. G3 as it does not identify nor obey any axiom for generating microcluster scores. It does not obey axioms that we propose either. Fig. 1: **McCatch is unsupervised, and it ALSO works on nondimensional data:** (i) on vector, 3d data from a satellite image of Shanghai – it spots two 2-elements microclusters of unusually colored roofs, and a few other outliers; on nondimensional data of last names (ii) and skeletons (iii) – it gives high anomaly scores to the few nonenglish names and skeletons of wild animals. (best viewed in color)_Scalable- goal G4:_ Some methods are scalable, like ALOCI, iForest, Gen2Out, SciForest, PLDOF, KMeans--, Sparkx, XTreK, DOIForest, RDA, and Deep SVDD. They achieve G4. Distinctly, methods like DIAD, D.MCA, ABOD, FastABOD, GLOSH, LOCI, kNN-Out, LOF, DB-Out, ODIN, LDOF, DBSCAN, and OPTICS fail in G4 as they are quadratic (or worse) on the count of points. _'Hands-Off'- goal G5_: Goal G5 regards the ability of a method to process an unlabeled dataset without manual tuning. Methods that achieve G5 are either hyperparameter free, like ABOD; or, they have a default hyperparameter configuration to be used in all datasets, as it happens with FastABOD, Gen2Out, D.MCA, iForest, SciForest, GLOSH, LOCI, and XTreK. On the other hand, many methods fail w.r.t. G5. Examples are ALOCI, DB-Out, kNN-Out, LOF, LDOF, PLDOF, DBSCAN, OPTICS, KMeans--, RDA, Deep SVDD, ODIN, DIAD, Sparkx, and DOIForest as they all require user-defined hyperparameter values. _Conclusion: Only_McCatch _meets all the specifications:_ As mentioned earlier, and as shown in Tab. I, _only_McCatch fulfills all the specs. Additionally, in contrast to several competitors, our method is deterministic and ranks outliers by their anomalousness. McCatch also returns explainable results thanks to the plateaus of our 'Oracle' plot (See Sec. IV-A), which roughly correspond to the distance to the nearest neighbor. Distinctly, black-box methods suffer on explainability. ## III Proposed Axioms How could we verify if a method reports scores in a principled way? To this end, we propose reasonable axioms that match human intuition and, thus, should be obeyed by any method when ranking microclusters w.r.t. their anomalousness. Importantly, our axioms apply to singleton and also to nonsingleton microclusters, so that both 'one-off' outliers and 'clustered' outliers are included seamlessly into a _single_ ranking. The axioms state that the score \\(s_{j}\\) of a microcluster \\(\\mathbf{M}_{j}\\) depends on: (i) the smallest distance between any point \\(\\mathbf{p}_{i}\\in\\mathbf{M}_{j}\\) and this point's nearest inlier - let this distance be known as the **'Bridge's Length'**\\(\\frac{1}{g}(j)\\) of \\(\\mathbf{M}_{j}\\), and; (ii) the cardinality \\(|\\mathbf{M}_{j}|\\) of \\(\\mathbf{M}_{j}\\). 
Hence, given any two microclusters that differ in one of these properties with all else being equal, we must have: * **Isolation Axiom:** if they differ in the 'Bridges' Lengths', the furthest away microcluster has the largest score. * **Cardinality Axiom:** if they differ in the cardinalities, the less populous microcluster has the largest anomaly score. Fig. 2 depicts our axioms in scenarios with inliers forming Gaussian-, cross- or arc-shaped clusters; the green mc (bottom) is always weird, i.e., larger score, than the one in red (left). ## IV Proposed Method How could we have an outlier detector that works with a General Input (G1) and gives a General Output (G2) that is Principled (G3) in a Scalable (G4) and 'Hands-Off' (G5) way? Here we answer the question with McCatch. To our knowledge, it is the first method that achieves all of these five goals. We begin with the main intuition, and then detail our proposal. ### _Intuition & the 'Oracle' Plot_ The high-level idea is to spot (i) points that are far from everything else ('one-off' outliers), and (ii) groups of points that form a microcluster, that is, they are close to each other, but far from the rest. The tough parts are how to quantify these intuitive concepts. We propose to use the new 'Oracle' plot. _'Oracle' plot:_ It focuses on plateaus formed in the count of neighbors of each point as the neighborhood radius varies. This idea is shown in Fig. 3. We present a toy dataset in 3(i), and its 'Oracle' plot in 3(ii). For easy of understanding, let us consider five points of interest: inlier 'A' in black; 'halo-point' 'B' in orange;'mc-point' 'C' in green; 'halo-mc-point' 'D' in violet, and; 'isolate-point' 'E' in red. Our 'Oracle' plot groups inliers like 'A' at its bottom-left part. The other parts of the plot distinguish the outliers by type; see 'B', 'C', 'D', and 'E'. Note that the outliers 'C' and 'D' that belong to the mc of green/violet points are isolated at the top part of the plot. The details are in Fig.3(iii), where we plot the count of neighbors versus the radius for the points of interest. Each point is counted together with its neighbors, so the minimum count is \\(1\\). The large, blue curves give the average count for the dataset. A plateau exists if the count of neighbors of a point remains (quasi) unaltered for two or more radii; let this count be the _height_ of the plateau. The _length_ of the plateau is the difference between its largest radius, and the smallest one. Note that the counts for each point of interest form at least two plateaus: the first plateau, and the last one, referring to small, and large radii respectively. Middle plateaus may also exist. _'Plateaus' correspond to clusters:_ In fact, the plateaus follow a hierarchical clustering structure. Each plateau in the count of neighbors of a point describes this point's cluster in one level of the hierarchy. This is why the first, and the last plateaus always exist: the first one regards a low level, where the point is a cluster of itself (or nearly so); the last plateau refers to a higher level where the point, and (nearly) all other points cluster together. Middle plateaus may be multiple, as the point may belong to clusters in many intermediary levels of the hierarchy. Note that a plateau shows: (i) the cluster's cardinality, and; (ii) the cluster's distance to other points. The plateau's height is the cardinality; its length is the distance. _Examples of 'plateaus':_ The first plateaus in Fig. 
3(iii) reveal that 'A' (in black) and 'C' (green) are close to their nearest neighbors, while 'E' (red) is isolated - see the length of each first plateau considering the log scale, and note the small lengths for 'A' and 'C', and the large one for 'E'. The middle plateaus show that: (i) 'A' belongs to a populous, isolate cluster whose cardinality is \\(77\\%\\) of the dataset cardinality \\(n\\) - see its middle plateau whose length is large, as well as the height that is also large, and; (ii) 'C' is part of an isolate mc whose cardinality is \\(0.08\\%\\) of \\(n\\) - note that its middle plateau has a large length, but a small height. 'E' does not belong to any nonsingleton cluster due to the absence of a middle plateau. _Gory details: 'excused' plateaus and the 'Oracle' plot:_ Provided that we look for microclusters, we propose to _excuse_ plateaus of large height; i.e., to ignore clusters of large cardinality, say, larger than a Maximum Microcluster Cardinality \\(c=\\left[n\\cdot 0.1\\right]\\). See the 'Excused' regions in the plots of Fig. 3(iii). Also, if any point happens to have two or more middle plateaus that are not excused, we only consider the one with the largest length, because the larger is the length of a plateau, the most isolated is the cluster that it describes. From now on, if we refer to a plateau, we mean a _nonexcused plateau_; and, if we refer to the middle plateau of a point, we mean this point's nonexcused, middle plateau of _largest length_. Our 'Oracle' plot is then built from the plateaus' lengths. We plot for each point \\(\\mathbf{p}_{i}\\) the length \\(x_{i}\\) of its first plateau versus the length \\(y_{i}\\) of its middle plateau, using \\(y_{i}=0\\) when \\(\\mathbf{p}_{i}\\) has no middle plateau. Importantly, \\(x_{i}\\) is approximately1 the distance between \\(\\mathbf{p}_{i}\\) and its nearest neighbor. Let us refer to \\(x_{i}\\) as the **1NN Distance** of \\(\\mathbf{p}_{i}\\). On the other hand, \\(y_{i}\\) is approximately2 the largest distance between any potential, nonsingleton mc that contains \\(\\mathbf{p}_{i}\\), and the nearest neighbor of this cluster. Thus, we refer to \\(y_{i}\\) as the **Group 1NN Distance** of \\(\\mathbf{p}_{i}\\). Footnote 1: The exact distance could be found iff using an infinite set of radii, which is unfeasible; plateaus would also have to have strictly unaltered neighbor counts. Footnote 2: The exact distance could only be found for a point \\(\\mathbf{p}_{i}\\) at the center of the potential mc, and still being subject to the previous footnote’s requirements. Hence, the 'X' axis of our 'Oracle' plot represents the possibility of each point to form a cluster of itself, that is, to be a singleton microcluster. If a point \\(\\mathbf{p}_{i}\\) is far from any other point, then \\(\\mathbf{p}_{i}\\) has a larger \\(x_{i}\\) than it would have if it were close to another point. Distinctly, the 'Y' axis regards the possibility of each point to be in a nonsingleton microcluster. If \\(\\mathbf{p}_{i}\\) has a few close neighbors, i.e., fewer than \\(c\\) neighbors, but it is far from other points, then \\(\\mathbf{p}_{i}\\) has a larger \\(y_{i}\\) than it would have if its close neighbors were many, or none. 
Provided that \\(x_{i}\\) and \\(y_{i}\\) have both the same meaning - in the sense that each one is the distance between a potential microcluster \\(\\mathbf{M}_{j}:\\mathbf{p}_{i}\\in\\mathbf{M}_{j}\\) and the cluster's nearest neighbor - they can be compared to a threshold \\(d\\) to verify if \\(\\mathbf{p}_{i}\\) is an outlier; i.e., if either \\(x_{i}\\) or \\(y_{i}\\) is larger than or equal to \\(d\\). For instance, note in Fig. 3(ii) that a threshold \\(d\\) distinguishes outliers and inliers. From now on, we refer to threshold \\(d\\) as the **Cutoff**. McCatch obtains \\(d\\) automatically, in a data-driven way, as we show later.

Fig. 2: **Proposed Axioms:** the green microcluster is always more weird, i.e., larger anomaly score. All else being equal, (i) Isolation Axiom – furthest away microcluster wins; (ii) Cardinality Axiom – less populous microcluster wins. (best viewed in color)

### McCatch _in a Nutshell_

McCatch is shown in Alg. 1. Following the problem statement from Probl. 1, it receives a dataset \\(\\mathbf{P}=\\{\\mathbf{p}_{1},\\ldots\\mathbf{p}_{n}\\}\\) as input, and returns a set of microclusters \\(\\mathcal{M}=\\{\\mathbf{M}_{1},\\ldots\\mathbf{M}_{m}\\}\\) together with their anomaly scores \\(\\mathcal{S}=\\{s_{1},\\ldots s_{m}\\}\\). For applications that require a full ranking of the points (as well as for backward compatibility with previous methods), McCatch also returns a set of scores per point \\(\\mathcal{W}=\\{w_{1},\\ldots w_{n}\\}\\), where \\(w_{i}\\in\\mathbb{R}_{>0}\\) is the score of point \\(\\mathbf{p}_{i}\\). Our method has hyperparameters, for which we provide reasonable default values: Number of Radii \\(a\\in\\mathbb{N}_{>1}\\); Maximum Plateau Slope \\(b\\in\\mathbb{R}_{\\geq 0}\\), and; Maximum Microcluster Cardinality \\(c\\in\\mathbb{N}_{\\geq 1}\\). The default values \\(a=15\\), \\(b=0.1\\), and \\(c=\\lceil n\\cdot 0.1\\rceil\\) were used in _every_ experiment reported in our paper3. It confirms McCatch's ability to be fully automatic.

Footnote 3: Except for the experiments reported in Sec. V-E, where we explicitly test the sensitivity to distinct hyperparametrization.

In a nutshell, McCatch has four steps: (I) define neighborhood radii - see Lines \\(1\\)-\\(3\\) in Alg. 1; (II) build 'Oracle' plot - Line \\(4\\); (III) spot microclusters - Line \\(5\\), and; (IV) compute anomaly scores - Line \\(6\\). Step I is straightforward. We build a tree \\(\\mathsf{T}\\) for \\(\\mathbf{P}\\), like an R-tree, M-tree, or Slim-tree4. Then, we estimate the diameter \\(l\\) of \\(\\mathbf{P}\\) as the maximum distance between any child node (direct successor) of the root node of \\(\\mathsf{T}\\). Finally, we define the set of radii \\(\\mathcal{R}=\\{r_{1},\\ldots r_{a}\\}=\\left\\{\\frac{l}{2^{a-1}},\\ \\frac{l}{2^{a-2}},\\ \\ldots\\ \\frac{l}{2^{0}}\\right\\}\\) to be used when counting neighbors. The next subsections detail Steps II, III, and IV.

Footnote 4: M-trees and Slim-trees are for non-vector data; R-trees for disk-based vector data, and kd-trees for main-memory-based vector data.

### _Build the 'Oracle' Plot_

Alg. 2 builds the 'Oracle' plot. It starts by counting neighbors for each point w.r.t. a few radii of neighborhood - see Lines \\(1\\)-\\(3\\).
Specifically, for each radius \\(r_{e}\\in\\mathcal{R}\\), we run a spatial self join algorithm SelfJoin( ) to obtain a set \\(\\left\\{q_{e}^{(1)},\\ldots q_{e}^{(n)}\\right\\}\\), where \\(q_{e}^{(i)}\\) is the count of neighbors (+ self) of \\(\\mathbf{p}_{i}\\) considering radius \\(r_{e}\\). Any off-the-shelf spatial join algorithm can be used here. However: (i) the algorithm must be adapted to return only counts of neighbors, not pairs of neighboring points, and; (ii) we consider the use of any join algorithm that can leverage a tree T to speed up the process.

Fig. 3: **Intuition & the ‘Oracle’ plot: McCatch spots outliers in a dataset (i) using our ‘Oracle’ plot (ii). The plot groups inliers like point ‘A’ (in black) at its bottom-left, and distinguishes outliers by type; see ‘B’ (orange), ‘C’ (green), ‘D’ (violet), and ‘E’ (red). Outliers ‘C’ and ‘D’ from the microcluster in green/violet are isolated at the top. It is made possible by capitalizing on plateaus formed in the count of neighbors of each point as the neighborhood radius varies; see examples in (iii). (best viewed in color)**

Later, we use \\(b\\), \\(c\\), and the counts \\(\\left\\{q_{1}^{(i)},\\ldots q_{a}^{(i)}\\right\\}\\) of each point \\(\\mathbf{p}_{i}\\) to compute both the length \\(x_{i}\\) of its first plateau, i.e., the \\(1\\)NN Distance, and the length \\(y_{i}\\) of its middle plateau, i.e., the Group \\(1\\)NN Distance - see Lines \\(4\\)-\\(7\\). The details are in Def.ns 1-3. Note that we use \\(x_{i}=0\\) for every point \\(\\mathbf{p}_{i}\\) such that \\(q_{1}^{(i)}>1\\), because, in these cases, the number of radii \\(a\\) is not large enough to uncover the first plateau of that particular point. Similarly, we use \\(y_{i}=0\\) for every point \\(\\mathbf{p}_{i}\\) that does not have a middle plateau. The plateau lengths are then employed in Lines \\(8\\)-\\(10\\) to mount the 'Oracle' plot \\(\\mathsf{0}=\\left(\\left\\{x_{1},\\ldots x_{n}\\right\\},\\left\\{y_{1},\\ldots y_{n} \\right\\}\\right)\\), which is returned in Line \\(11\\).

**Definition 1** (**Plateau**): _A plateau \\(\\mathtt{U}^{(i)}=[r_{e},r_{e^{\\prime}}]\\) of a point \\(\\mathbf{p}_{i}\\) is a maximal range of radii where the count of neighbors of \\(\\mathbf{p}_{i}\\) remains unaltered, or quasi-unaltered according to a Maximum Plateau Slope \\(b\\). Formally, \\([r_{e},r_{e^{\\prime}}]\\) is a plateau of \\(\\mathbf{p}_{i}\\) if and only if \\(r_{e},r_{e^{\\prime}}\\in\\mathcal{R}\\) with \\(r_{e}<r_{e^{\\prime}}\\), and the slope_ \\[\\textsc{Slope}\\left(e^{\\prime\\prime},\\ i\\right)=\\frac{\\log\\left(q_{e^{\\prime \\prime}+1}^{(i)}\\right)-\\log\\left(q_{e^{\\prime\\prime}}^{(i)}\\right)}{\\log \\left(r_{e^{\\prime\\prime}+1}\\right)-\\log\\left(r_{e^{\\prime\\prime}}\\right)}\\] _is smaller than or equal to \\(b\\) for every value \\(e^{\\prime\\prime}\\), such that \\(r_{e^{\\prime\\prime}}\\in\\mathcal{R}\\) and \\(e\\leq e^{\\prime\\prime}<e^{\\prime}\\). Also, it must be true that: \\(\\textsc{Slope}\\left(e-1,\\ i\\right)>b\\), if \\(e>1\\), and; \\(\\textsc{Slope}\\left(e^{\\prime},\\ i\\right)>b\\), if \\(e^{\\prime}<a\\). The **length** of a plateau \\(\\mathtt{U}^{(i)}=[r_{e},r_{e^{\\prime}}]\\) is given by \\(r_{e^{\\prime}}-r_{e}\\); its **height** is \\(q_{e}^{(i)}\\), which must be smaller than, or equal to a Maximum Microcluster Cardinality \\(c\\). 
**Definition 2** (**First Plateau**): _Among every plateau \\(\\mathtt{U}^{(i)}=[r_{e},r_{e^{\\prime}}]\\) of a point \\(\\mathbf{p}_{i}\\), the first plateau is the only one that has **height one**, that is, \\(q_{e}^{(i)}==1\\). Let us use \\(\\mathtt{X}^{(i)}\\) to refer specifically to the first plateau of \\(\\mathbf{p}_{i}\\). Similarly, we use \\(x_{i}\\) to denote the length of \\(\\mathtt{X}^{(i)}\\), which is the **1NN Distance** of \\(\\mathbf{p}_{i}\\)._ **Definition 3** (**Middle Plateau**): _Among every plateau \\(\\mathtt{U}^{(i)}=[r_{e},r_{e^{\\prime}}]\\) of \\(\\mathbf{p}_{i}\\) such that \\(r_{e^{\\prime}}\ eq l\\) and \\(q_{e}^{(i)}>1\\), the middle plateau is the one with the **largest length**\\(r_{e^{\\prime}}-r_{e}\\). We use \\(\\mathtt{Y}^{(i)}\\) to refer to the middle plateau of \\(\\mathtt{p}_{i}\\). Similarly, we use \\(y_{i}\\) to denote the length of \\(\\mathtt{Y}^{(i)}\\), which is the **Group 1NN Distance** of \\(\\mathbf{p}_{i}\\)._ ### _Spot the Microclusters_ Once the 'Oracle' plot is built, how to (i) spot the outliers, and then (ii) group the ones that are nearby each other? Alg. 3 gives the answers. It has two steps: to compute the Cutoff \\(d\\) so to distinguish outliers from inliers as shown in Fig. 3(ii); and then, to gel the outliers into mcs, that is, to assign each outlying point to the correct cluster. The details are as follows. ``` 0: Dataset \\(\\mathbf{P}=\\left\\{\\mathbf{p}_{1},\\ldots\\mathbf{p}_{n}\\right\\}\\); Tree T; Radii \\(\\mathcal{R}=\\left\\{r_{1},\\ldots r_{a}\\right\\}\\); Maximum Plateau Slope \\(b\\); Maximum Microcluster Cardinality \\(c\\); 0: Oracle' plot \\(\\mathsf{0}=\\left(\\left\\{x_{1},\\ldots x_{n}\\right\\},\\left\\{y_{1},\\ldots y_{n} \\right\\}\\right)\\); \\(\\triangleright\\) Count the neighbors \\(\\triangleright\\)\\(q_{e}^{(i)}=\\#\\) neighbors (+ self) of \\(\\mathbf{p}_{i}\\) regarding radius \\(r_{e}\\); 1:for\\(e=1,\\ldots a\\)do\\(\\triangleright\\) Run a join per radius 2:\\(\\left\\{q_{e}^{(1)},\\ldots q_{e}^{(n)}\\right\\}=\\textsc{SelfJoinC}\\left( \\mathtt{T},r_{e}\\right)\\); 3:endfor\\(\\triangleright\\) Find the plateaus 4:for\\(\\mathbf{p}_{i}\\in\\mathbf{P}\\)do \\(\\triangleright\\) Compute the \\(1\\)NN Distance \\(x_{i}=\\) use \\(b\\), \\(c\\), and \\(\\left\\{q_{1}^{(i)},\\ldots q_{a}^{(i)}\\right\\}\\) to compute the length of the first plateau of \\(\\mathbf{p}_{i}\\); \\(\\triangleright\\) Def. 2 \\(\\triangleright\\) Compute the Group \\(1\\)NN Distance \\(y_{i}=\\) use \\(b\\), \\(c\\), and \\(\\left\\{q_{1}^{(i)},\\ldots q_{a}^{(i)}\\right\\}\\) to compute the length of the middle plateau of \\(\\mathbf{p}_{i}\\); \\(\\triangleright\\) Def. 3 5:endfor \\(\\triangleright\\) Mount the 'Oracle' plot 6:\\(\\mathcal{X}=\\left\\{x_{1},\\ldots x_{n}\\right\\}\\); \\(\\triangleright\\) 'X' axis 7:\\(\\mathcal{Y}=\\left\\{y_{1},\\ldots y_{n}\\right\\}\\); \\(\\triangleright\\) 'Y' axis 8:\\(\\mathsf{0}=\\left(\\ \\mathcal{X},\\ \\mathcal{Y}\\ \\right)\\); \\(\\triangleright\\) 'Oracle' plot 9:return 0; ``` **Algorithm 2**BuildOPlot ( ) #### Iv-D1 Compute the Cutoff How to let the data dictate the correct Cutoff? Ideally, we want a method which is hands-off without requiring any parameters. The first solution that comes to mind is \\(k\\) standard deviations with \\(k\\) equals \\(3\\). Can we get rid of the \\(k\\) parameter too? _Cutoff comes from compression: insight:_ Our insight is to use Occam's razor [37] and formally the Minimum Description Length (MDL) [38]. 
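Before turning to the Cutoff, the plateau machinery of Def.ns 1-3 fits in a few lines of Python. This is a minimal sketch that assumes the neighbor counts \\(q_{1}^{(i)},\\ldots q_{a}^{(i)}\\) of one point are already available; the function names and the toy counts are ours, and excused plateaus are simply dropped.

```python
import numpy as np

def plateaus(q, R, b=0.1, c=None):
    """Def. 1 sketch: maximal ranges [r_e, r_e'] where the log-log slope of
    the neighbor count stays <= b; ranges whose height q[e] exceeds c are
    'excused' and dropped."""
    a, out, e = len(R), [], 0
    while e < a - 1:
        e2 = e
        while e2 < a - 1 and (np.log(q[e2 + 1]) - np.log(q[e2])) / \
                             (np.log(R[e2 + 1]) - np.log(R[e2])) <= b:
            e2 += 1
        if e2 > e and (c is None or q[e] <= c):
            out.append((e, e2))
        e = max(e2, e + 1)
    return out

def oracle_point(q, R, b=0.1, c=None):
    """Def.ns 2-3 sketch: 1NN Distance x_i (first plateau, height one) and
    Group 1NN Distance y_i (longest plateau with height > 1 and r_e' != r_a)."""
    x = y = 0.0
    for e, e2 in plateaus(q, R, b, c):
        length = R[e2] - R[e]
        if q[e] == 1:
            x = length
        elif R[e2] != R[-1]:
            y = max(y, length)
    return x, y

# toy usage: counts of one point over six radii
R = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6])
q = np.array([1, 1, 4, 4, 4, 120])
print(oracle_point(q, R, b=0.1, c=12))   # -> (0.05, 0.6)
```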
MDL is a powerful way of regularizing and eliminating the need for parameters. The idea is to choose those parameter values that result in the best compression of the given dataset. It is a well respected principle made popular by Jorma Rissanen [38], Peter Grunwald [37] and others. We compute the Cutoff \\(d\\) by capitalizing on the set \\(\\left\\{x_{1},\\ldots x_{n}\\right\\}\\) of \\(1\\)NN Distances; that is, by leveraging the 'X' axis of our 'Oracle' plot5. Importantly, it is expected that many points have small values in this axis, while only a few points have larger values. The small values come mostly from inliers, such as the black point 'A' in Fig 3(i), but a few ones may come from outliers in the core of a non-singleton microcluster, like point 'C' in green, because these points also have close neighbors. Distinctly, larger values derive exclusively from outliers, e.g., 'B' (in orange), 'D' (violet) and 'E' (red). It allows us to compute \\(d\\) in a data-driven way, by partitioning a histogram of \\(1\\)NN Distances so to best separate the tall bins that refer to small distances from the short bins that regard large distances. Intuitively, Cutoff \\(d\\) is the minimum distance required between one microcluster and its nearest inlier. Footnote 5: Intuitively, it would be equivalent to get \\(d\\) by using the plot’s ‘Y’ axis, i.e., \\(\\left\\{y_{1},\\ldots y_{n}\\right\\}\\). The ‘X’ axis is chosen simply because we must pick an option. **Histogram of \\(1\\)NN Distances**. As expected, the majority of the points is counted in bins referring to small distances in the histogram - see the tall bins on its left side. The Cutoff \\(d\\) is the distance that best separates the tall bins from the short ones, where the former refer to small distances and the latter regard large distances. We obtain it automatically from the data, by partitioning the Histogram of \\(1\\)NN Distances so to minimize the cost of compressing the partitions - see the top part of Fig. 4. Besides being parameter-free, our solution is grounded in the same concept of compression used later to generate scores, and, thus, it increases the coherence of our method. _Cutoff comes from compression: details:_ Here we provide the details of computing \\(d\\). As shown in Lines \\(1\\)-\\(5\\) of Alg. 3, we follow Def. 4 to build the Histogram of \\(1\\)NN Distances \\(\\mathcal{H}=\\{h_{1},\\ldots h_{a}\\}\\). Then, in Line 6, we use Def.ns 5-6 to obtain \\(d\\). Specifically, we find the peak bin \\(h_{e^{\\prime}}\\). It regards the distance/radius \\(r_{e^{\\prime}}\\) most commonly seen between a point and its nearest neighbor. Obviously, \\(d\\) must be larger than \\(r_{e^{\\prime}}\\). Hence, we compute \\(d\\) considering only the bins in \\(\\{h_{e^{\\prime}},\\ldots h_{a}\\}\\). They are analyzed to find the best cut position \\(e\\) that maximizes the homogeneity of values in subsets \\(\\{h_{e^{\\prime}},\\ldots h_{e-1}\\}\\) and \\(\\{h_{e},\\ldots h_{a}\\}\\), so to best separate the tall from the short bins. 
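Def.ns 4-6 below formalize these steps. Anticipating them, the Python sketch here builds the Histogram of \\(1\\)NN Distances, compresses candidate partitions with the universal code length for integers (footnote 6 below), and returns the Cutoff. It assumes the \\(1\\)NN Distances take values among the radii in \\(\\mathcal{R}\\), as the histogram definition does, and the function names are ours.

```python
import numpy as np

def code_len(z):
    """Universal code length <z> for an integer z >= 1: log2(z) +
    log2(log2(z)) + ..., keeping only the positive terms."""
    bits, v = 0.0, float(z)
    while True:
        v = np.log2(v)
        if v <= 0:
            return bits
        bits += v

def cost(V):
    """Cost of compressing a set of counts: its cardinality, its average,
    and the differences of each value to the average (Def. 5 below)."""
    V = np.asarray(V, dtype=float)
    m = V.mean()
    return (code_len(len(V)) + code_len(1 + np.ceil(m))
            + sum(code_len(1 + np.ceil(abs(v - m))) for v in V))

def cutoff(x, R):
    """Histogram the 1NN Distances, then cut after the peak bin so that the
    two partitions compress best (Def.ns 4 and 6 below)."""
    h = np.array([np.sum(x == r) for r in R])      # Histogram of 1NN Distances
    peak = int(np.argmax(h))                       # most common 1NN Distance
    best_e, best_cost = len(R) - 1, np.inf
    for e in range(peak + 1, len(R)):              # keep both partitions nonempty
        c = cost(h[peak:e]) + cost(h[e:])
        if c < best_cost:
            best_e, best_cost = e, c
    return R[best_e]
```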
**Definition 4** (**Histogram of \\(1\\)NN Distances**): _The Histogram of \\(1\\)NN Distances is defined as a set \\(\\mathcal{H}=\\{h_{1},\\ldots h_{a}\\}\\), in which each bin \\(h_{e}\\) is computed as follows: \\(h_{e}~{}=~{}|~{}\\{\\mathbf{p}_{i}\\in\\mathbf{P}~{}:~{}x_{i}==r_{e}\\}~{}|\\)._ **Definition 5** (**Cost of Compression**): _The cost of compression of a nonempty set \\(\\mathcal{V}\\subset\\mathbb{N}_{\\geq 0}\\) with average \\(\\overset{\\rightarrow}{v}\\) is_ \\[\\textsc{Cost}\\left(\\mathcal{V}\\right)= \\big{\\langle}|~{}\\mathcal{V}~{}|\\big{\\rangle}+\\bigg{\\langle}1+ \\bigg{\\lceil}\\overset{\\rightarrow}{v}\\bigg{\\rceil}\\bigg{\\rangle}+\\sum_{v\\in \\mathcal{V}}\\bigg{\\langle}1+\\bigg{\\lceil}|~{}v-\\overset{\\rightarrow}{v}~{}| \\bigg{\\rceil}\\bigg{\\rangle}\\] _, where \\(\\big{\\langle}~{}\\big{\\rangle}\\) is the universal code length for integers6._ Footnote 6: It can be shown that \\(\\big{\\langle}z\\big{\\rangle}\\approx\\log_{2}\\left(z\\right)+\\log_{2}\\left(\\log_{2 }\\left(z\\right)\\right)+\\ldots\\), where \\(z\\in\\mathbb{N}_{\\geq 1}\\) and only the positive terms of the equation are retained [39]. This is the optimal length, if we do not know the range of values for \\(z\\) beforehand. **Definition 6** (**Cutoff**): _The Cutoff \\(d\\) is defined as_ \\[d~{}=~{}r_{e}\\in\\mathcal{R}~{}:~{}e~{}\\text{minimizes Cost}\\left(\\{h_{e^{ \\prime}},\\ldots h_{e-1}\\}\\right)+\\] \\[\\textsc{Cost}\\left(\\{h_{e},\\ldots h_{a}\\}\\right)\\] _, where \\(\\{h_{e^{\\prime}},\\ldots h_{e-1}\\}\\) and \\(\\{h_{e},\\ldots h_{a}\\}\\) are subsets of \\(\\mathcal{H}\\), and \\(e^{\\prime}\\) is chosen so that radius \\(r_{e^{\\prime}}\\in\\mathcal{R}\\) is the mode of \\(\\{x_{1},\\ldots x_{n}\\}\\)._ To obtain \\(e\\), based on the principle of MDL, we check partitions \\(\\{h_{e^{\\prime}},\\ldots h_{e^{\\prime\\prime}-1}\\}\\) and \\(\\{h_{e^{\\prime\\prime}},\\ldots h_{a}\\}\\) for all possible cut positions \\(e^{\\prime\\prime}\\). The idea is to compress each possible partition, representing it by its cardinality, its average, and the differences of each of its values to the average. A partition with high homogeneity of values allows good compression, as its differences to the average are small, and small numbers need less bits to be represented than large ones do. The best cut position \\(e\\) is, therefore, the one that creates the partitions that compress best. Note in Def. 5 that we add ones to some values whose code lengths \\(\\big{\\langle}~{}\\big{\\rangle}\\) are required, so to account for zeros. Cutoff \\(d\\) is then obtained as \\(d=r_{e}\\), without depending on any input from the user. It allows us to identify the set \\(\\mathbf{A}=\\{\\mathbf{p}_{i}\\in\\mathbf{P}~{}:~{}x_{i}\\geq d~{}\\vee~{}y_{i}\\geq d\\}\\) of all outliers. #### Iii-B2 Gel the outliers into microclusters Given the outliers, how to cluster them? Alg. 3 provides the answer. We use the 'Y' axis of the 'Oracle' plot to isolate outliers of nonsingleton mcs into a set \\(\\mathbf{M}=\\{\\mathbf{p}_{i}\\in\\mathbf{A}:y_{i}\\geq d\\}\\); see Line 8. Then, Fig. 4: **McCatch obtains the Cutoff of automatically, by partitioning a histogram of \\(1\\)NN Distances so to best separate tall and short bins. It is done by minimizing the cost of compressing the partitions. (best viewed in color)**in Lines \\(9\\)-\\(15\\), we group these points using the plot's 'X' axis. 
Every outlier in \\(\\mathbf{M}\\) must be grouped together with its nearest neighbor; thus, we identify the largest \\(1\\)NN Distance \\(\\hat{\\vec{x}}~{}=~{}\\text{max}_{i}~{}x_{i}:\\mathbf{p}_{i}\\in\\mathbf{M}\\), and use it to specify a threshold that rules if each possible pair of points from \\(\\mathbf{M}\\) is close enough to be grouped together. Provided that the \\(1\\)NN Distances are approximations, the threshold itself is the smallest radius larger than \\(\\hat{\\vec{x}}\\), that is, radius \\(r_{e+1}\\) from Line \\(12\\). It avoids having a point and its nearest neighbor in distinct clusters. The nonsingleton mcs are then identified by spotting connected components in a graph \\(\\texttt{G}~{}=~{}(~{}\\mathbf{M},~{}\\mathcal{E}~{})\\), where \\(\\mathbf{M}\\) is the set of nodes, and \\(\\mathcal{E}\\subseteq\\mathbf{M}\\times\\mathbf{M}\\) is the set of edges. The edges are obtained from any off-the-shelf spatial self join algorithm SelfJoin\\((~{})\\) that returns pairs of nearby points from \\(\\mathbf{M}\\) - see Line \\(12\\). Lastly, in Lines \\(16\\)-\\(19\\), we recognize each outlier in \\(\\mathbf{A}\\setminus\\mathbf{M}\\) as a cluster of itself, and return the final set of mcs \\(\\mathcal{M}\\). ### _Compute the Anomaly Scores_ Given the mcs, how to get scores that obey our axioms? _Scores come from compression: insight:_ We quantify the anomalousness of each mc according to how much it can be compressed when described in terms of the nearest inlier. Fig. 5 depicts this idea. To describe a microcluster \\(\\mathbf{M}_{j}\\) we would store its cardinality \\(|~{}\\mathbf{M}_{j}~{}|\\) and the identifier \\(i\\in\\{1,2,\\ldots n\\}\\) of the nearest inlier \\(\\mathbf{p}_{i}\\). See Items 1 and 2 in Fig. 5. Then, we would use \\(\\mathbf{p}_{i}\\) as a reference to describe the point \\(\\mathbf{p}_{i^{\\prime}}\\in\\mathbf{M}_{j}\\) that is the closest to it. To this end, we would store the differences (e.g., in each feature if we have vector data) between \\(\\mathbf{p}_{i}\\) and \\(\\mathbf{p}_{i^{\\prime}}\\); see 3. Point \\(\\mathbf{p}_{i^{\\prime}}\\) would in turn be the reference to describe its nearest neighbor \\(\\mathbf{p}_{i^{\\prime\\prime}}\\in\\mathbf{M}_{j}\\), which would later serve as a reference to describe one other close neighbor from \\(\\mathbf{M}_{j}\\), thus following a recursive process that would lead us to describe every point of \\(\\mathbf{M}_{j}\\); see 4. Importantly, in this representation, the cost per point - that is, the total number of bits used to describe \\(\\mathbf{M}_{j}\\) divided by \\(|\\mathbf{M}_{j}|\\) - reflects the axioms of Fig. 2. A large 'Bridge's Length' increases the cost per point due to 3. Also, the larger the cardinality, the smaller the cost per point. It is because the costs of 1, 2 and 3 are diluted with points. Hence, the cost per point appropriately quantifies the anomalousness of \\(\\mathbf{M}_{j}\\). _Scores come from compression: details:_ Alg. 4 computes the scores. We begin by finding the 'Bridge's Length' \\(\\hat{\\vec{y}}^{(j)}\\) of each \\(\\mathbf{M}_{j}\\). To this end, we compute the distance \\(g_{i}\\) between each outlier \\(\\mathbf{p}_{i}\\in\\mathbf{A}\\) and its nearest inlier; see Lines \\(1\\)-\\(12\\). Specifically, we run a join between \\(\\mathbf{A}\\) and \\(\\mathbf{P}\\setminus\\mathbf{A}\\) for each \\(r_{e}\\in\\mathcal{R}\\). Any spatial join algorithm can be used here, but it must be adapted to return counts of neighbors, not pairs of points. 
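A Python sketch of the steps just described follows, assuming Euclidean vector data and brute-force distance checks in place of the tree-accelerated joins; the union-find bookkeeping and the function names are ours. The first function gels the outliers as in Alg. 3; the second previews the join between \\(\\mathbf{A}\\) and \\(\\mathbf{P}\\setminus\\mathbf{A}\\) used by Alg. 4, returning each cluster's 'Bridge's Length'.

```python
import numpy as np

def gel_microclusters(P, x, y, R, d):
    """Gel-step sketch: flag outliers, group those with y_i >= d by connected
    components under the smallest radius larger than the largest 1NN Distance
    among them, and keep the remaining outliers as singleton microclusters."""
    A = np.where((x >= d) | (y >= d))[0]           # all outliers
    M = A[y[A] >= d]                               # members of nonsingleton mcs
    singletons = [[int(i)] for i in A if y[i] < d]
    if len(M) == 0:
        return singletons
    thr = min((r for r in R if r > x[M].max()), default=R[-1])
    parent = {int(i): int(i) for i in M}           # union-find over nearby pairs
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in M:
        for j in M:
            if i < j and np.linalg.norm(P[i] - P[j]) <= thr:
                parent[find(int(j))] = find(int(i))
    groups = {}
    for i in M:
        groups.setdefault(find(int(i)), []).append(int(i))
    return list(groups.values()) + singletons

def bridge_lengths(P, inlier_idx, clusters, R):
    """Bridge sketch: g_i is the largest radius at which outlier p_i still has
    zero inlier neighbors; a cluster's 'Bridge's Length' is its smallest g_i."""
    inliers = P[inlier_idx]
    def g(i):
        dists = np.linalg.norm(inliers - P[i], axis=1)
        empty = [r for r in R if not np.any(dists <= r)]
        return max(empty) if empty else 0.0
    return [min(g(i) for i in members) for members in clusters]
```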
Each \\(g_{i}\\) is then the largest radius for which \\(\\mathbf{p}_{i}\\) has zero neighbors. The 'Bridge's Length' \\(\\hat{\\vec{y}}^{(j)}\\) is finally found for each \\(\\mathbf{M}_{j}\\) in Line \\(17\\); it is the smallest distance \\(g_{i}:\\mathbf{p}_{i}\\in\\mathbf{M}_{j}\\).

**Definition 7** (**Score**): _The score \\(s_{j}\\) of \\(\\mathbf{M}_{j}\\) is defined as_ \\[s_{j}~{}=~{}\\frac{\\text{1}~{}~{}+~{}~{}\\text{2}~{}~{}+~{}~{}\\text{3}~{}~{}+~{}~{}\\text{4}}{\\left|\\,\\mathbf{M}_{j}\\,\\right|}\\] _, where 1, 2, 3 and 4 are the description costs depicted in Fig. 5; item 4 covers each one of the remaining \\(\\left|\\,\\mathbf{M}_{j}\\,\\right|-1\\) points to be described and its point of reference._ The cost is once more approximated using distances to work with any metric data. For applications requiring a full ranking of the points (and for backward compatibility with other methods), we also have a score \\(w_{i}\\) for each \\(\\mathbf{p}_{i}\\). To this end, we follow previous ideas and propose a score representing the cost of describing \\(\\mathbf{p}_{i}\\) in terms of the nearest inlier; see Lines \\(13\\)-\\(15\\) and \\(21\\)-\\(24\\) in Alg. 4.

### _Time and Space Complexity_

**Lemma 1** (**Time Complexity**): _The time complexity of McCatch is \\(O\\left(n\\ \\cdot\\ n^{1-\\frac{1}{u}}\\right)\\), where \\(u\\) is the intrinsic (correlation fractal) dimension7 of \\(\\mathbf{P}\\)._ Footnote 7: We only need distances to compute the fractal dimension \\(u\\), which is how quickly the number of neighbors grows with the distance [40]. It can be computed even for nondimensional data, for example, using string-editing distance for strings of last names or tree-editing distance for skeleton-graphs. Moreover, Traina Jr. [35] show how to quickly estimate the fractal dimension of a nondimensional dataset requiring subquadratic time.

**Proof 1**: McCatch is presented in Alg. 1. It builds a tree \\(\\mathtt{T}\\) for \\(\\mathbf{P}\\) in \\(O\\left(n\\ \\cdot\\ \\log\\left(n\\right)\\right)\\) time, and then computes \\(l\\) and \\(\\mathcal{R}\\) in a negligible time. Later, it spots microclusters by calling functions BuildOPlot\\(\\left(\\ \\ \\right)\\) from Alg. 2, SpotMCs\\(\\left(\\ \\ \\right)\\) from Alg. 3 and ScoreMCs\\(\\left(\\ \\ \\right)\\) from Alg. 4, sequentially. Therefore, the overall cost of McCatch is the larger between \\(O\\left(n\\ \\cdot\\ \\log\\left(n\\right)\\right)\\) and the costs of Algs. 2, 3 and 4. Alg. 2 counts neighbors by running \\(a\\) self-joins for \\(\\mathbf{P}\\). Because \\(a\\) is small, the complexity of this step is the same as that of one single self-join. The self-join finds neighbors for each of \\(n\\) points. A rough, upper-bound estimation for its runtime is thus \\(O\\left(n\\cdot\\text{NN}\\left(n\\right)\\right)\\), where \\(\\text{NN}\\left(n\\right)\\) is the time taken by each nearest neighbor search. 
According to Pagel, Korn and Faloutsos [41], \\(\\text{NN}\\left(n\\right)=O\\left(n^{1-\\frac{1}{u}}\\right)\\), in which \\(u\\) is the intrinsic (correlation fractal) dimension of a vector dataset \\(\\mathbf{P}\\). For nondimensional data, Traina Jr. et al. [35] demonstrate that the query cost also depends on \\(u\\). It leads us to estimate the cost of counting neighbors as \\(O\\left(n\\ \\cdot\\ n^{1-\\frac{1}{u}}\\right)\\). Later, Alg. 2 finds plateaus in \\(O\\left(n\\right)\\) time, and then it mounts \\(\\mathtt{0}\\) in a negligible time. Hence, the cost of Alg. 2 is \\(O\\left(n\\ \\cdot\\ n^{1-\\frac{1}{u}}\\right)\\). Algs. 3 and 4 scan \\(\\mathbf{P}\\) in \\(O\\left(n\\right)\\) time. They also process the set of outliers \\(\\mathbf{A}\\), which takes a negligible time because \\(\\left|\\ \\mathbf{A}\\ \\right|\\) is small. Thus, the time required by each algorithm is \\(O\\left(n\\right)\\). As the total cost is the larger between \\(O\\left(n\\ \\cdot\\ \\log\\left(n\\right)\\right)\\) and the costs of Algs. 2, 3 and 4, it comes to \\(O\\left(n\\ \\cdot\\ n^{1-\\frac{1}{u}}\\right)\\). We consider \\(u>1\\) because it is the most expected scenario. NoteIn our experience, real data often have fractal dimension \\(u\\) smaller than \\(20\\)[41, 42]. As shown in [35], several nondimensional data have small \\(u\\). Thus, McCatch should be subquadratic on the count of points for most real applications. **Lemma 2** (**Space Complexity**): _The space complexity of McCatch is given by \\(O\\left(n\\right)\\)._ **Proof 2**: McCatch receives a set \\(\\mathbf{P}\\) as the input, and returns sets \\(\\mathcal{M}\\), \\(\\mathcal{S}\\) and \\(\\mathcal{W}\\) as the output. The largest data structures employed to this end are the 'Oracle' plot \\(\\mathtt{0}\\) and a tree \\(\\mathtt{T}\\) created for \\(\\mathbf{P}\\) in Alg. 1. \\(\\mathbf{P}\\), \\(\\mathcal{W}\\), \\(\\mathtt{0}\\) and \\(\\mathtt{T}\\) require \\(O\\left(n\\right)\\) space each. \\(\\mathcal{M}\\) and \\(\\mathcal{S}\\) have negligible space requirements, as they require \\(O\\left(\\left|\\ \\mathbf{A}\\ \\right|\\right)\\) space with a small \\(\\left|\\ \\mathbf{A}\\ \\right|\\). Consequently, the space complexity of McCatch is \\(O\\left(n\\right)\\). ### _Implementation_ The main cost of McCatch regards counting neighbors to build the 'Oracle' plot. As shown in Lines \\(1\\)-\\(3\\) of Alg. 2, we can count neighbors using a didactic strategy that is easy to follow. Nevertheless, when using this strategy, we count the neighbors of every point w.r.t. all radii, which is unnecessary because we only need a count \\(q_{e}^{(i)}\\) if it is smaller than or equal to \\(c\\); see the 'Excused' regions in Fig. 3(iii). Our actual implementation follows a sparse-focused, speed-up principle: * **Sparse-focused principle**: run a self-join for \\(\\mathbf{P}\\) with \\(r_{1}\\). Then, for each \\(r_{e}\\in\\left\\{r_{2},\\ldots r_{a}\\right\\}\\), from smallest to largest, run a join (not self) between \\(\\left\\{\\mathbf{p}_{i}\\in\\mathbf{P}\\ :\\ q_{e-1}^{(i)}\\leq c\\right\\}\\) and \\(\\mathbf{P}\\), thus computing only the counts required. Other speed-up principles we employ for joins are: * **Count-only principle**: do not materialize pairs of neighbors (e.g., by leveraging 'compact similarity joins' [43]) because we only need counts of neighbors, not the actual pairs of points. It applies to the joins in Algs. 2 and 4. * **Using-index principle**: use a tree, like an R-tree, M-tree, or Slim-tree4. 
It applies to the joins in Algs. 2, 3 and 4. * **Small-radii-only principle**: don't run a join for radius \\(r_{a}\\). Since \\(r_{a}=l\\), we already know all points are neighbors of each other. It applies to the joins in Algs. 2, 3 and 4. ## V Experiments We designed experiments to answer five questions: * **Accurate**: How accurate is McCatch? * **Principel**: Does McCatch obey axioms? * **Scalable**: How scalable is McCatch? * **Practical**: How well McCatch works on real data? * **PACS**: **'Hands-Off'**: Does McCatch need manual tuning? _Setup, code, competitors, and datasets:_McCatch was coded in Java and C\\(++\\). The joins in Algs. 2-4 employ the approach of 'compact similarity joins' [43]. We compared McCatch with \\(11\\) state-of-the-art competitors: ABOD, FastABOD, LOCI, ALOCI, DB-Out, LOF, iForest and ODIN, which are coded in Java under the framework ELKI (elki-project.github.io), besides; Gen2Out, D.MCA and RDA whose original source codes in Python were used. Tab. II has the hyperparameter values employed. McCatch was always tested with its default configuration3. The competitors were carefully tuned following hyperparameter-setting heuristics widely adopted in prior works, such as in [44, 45, 2, 18, 2, 19, 21, 46, 44, 45]. Non-deterministic competitors were run \\(10\\) times per dataset; we report the average results. Tab. III summarizes our data, which we describe as follows: * **Last Names**: \\(5k\\) names of people frequent in the US (inliers), and \\(50\\) names frequent elsewhere (outliers). * **Fingerprints**: ridges from \\(398\\) full (inliers) and \\(10\\) partial (outliers) fingerprints. * **Skeletons**: skeleton graphs from \\(200\\) human (inliers) and \\(3\\) wild-animal (outliers) silhouettes. * **Axioms:** synthetic data with Gaussian-, cross- and arc-shaped inliers following each axiom as shown in Fig. 2. * **Popular benchmark datasets:** benchmark data from many real domains. Importantly, HTTP and Anthyroid are known to have nonsingleton microclusters [6]. * **Shanghai and Volcanoes:** average RGB values extracted from satellite image tiles. Outliers are unknown. * **Uniform and Diagonal: 2-**, 4-, \\(20\\)-, and \\(50\\)-dim. data that follow a uniform distribution, or form a diagonal line. In all cases we have that (i) for vector data, we use the Euclidean distance (but any other \\(L_{p}\\) metric would work), and (ii) for nondimensional datasets the distance function is given by a domain expert. For example, string-editing or soundex encoding distance [46] for strings, and mathematical morphology [47] or tree-editing distance [48] for shapes or skeleton graphs. ### _Q1. McCatch is Accurate_ Fig. 6 reports results regarding the accuracy of McCatch. For every dataset where outliers are known, we compare the Area Under the Receiver Operator Characteristic curve (AUROC) obtained by McCatch with that of each competitor. All methods were evaluated according to the anomaly scores they reported per point. Note that it was unfeasible to run D.MCA, ABOD, FastABOD, DB-Out and LOCI in some datasets; they either required an excessive runtime (i.e., \\(>10\\) hours) or an excessive RAM memory usage (i.e., \\(>30\\) GB). These two cases are respectively denoted by symbols \\(\\copyright\\) and \\(\\copyright\\). Fig. 6: **Q1. McCatch is Accurate _and_ typically beats competition (red): Accuracy comparison. Top: McCatch wins in vector data with known nonsingleton microclusters, and _also_ in nondimensional data. 
Bottom: our method ties with the competitors in other cases. (best viewed in color)**Tab. IV reports the harmonic mean of the ranking positions of each method over all datasets. Besides AUROC, we consider additional metrics: Average Precision (AP) and Max-F1. McCatch outperforms _every_ competitor in _all_ three metrics. Overall, McCatch is the best option. It wins in the vector datasets with known nonsingleton mcs; see the many red squares in the 'Microclusters' section of Fig. 6. And, it ties with the competitors in the other vector datasets. McCatch is also the only method directly applicable to metric data. Every competitor is either nonapplicable or needs modifications when the data has no dimensions; see the red and orange rectangles at the top Fig. 6. Choosing McCatch over the others is thus advantageous in _both_ vector and metric data. ### _Q2._ McCatch _is Principiled_ Tab. V reports the results of an experiment performed to verify if the methods obey the axioms of Sec. III. Except for McCatch and Gen2Out, no method provides a score per microcluster; thus, they all fail to obey the axioms, by design. We compared McCatch and Gen2Out statistically, by conducting two-sample t-tests, testing for \\(50\\) datasets per axiom and shape of the cluster of inliers - thus, summing up to \\(300\\) datasets - if the score obtained for the green microcluster (see Fig. 2) is larger than that of the red microcluster, against the null hypothesis that they are indifferent. Note that Gen2Out misses both axioms by failing to find the microclusters in _every_ one of the \\(200\\) datasets with a cross- or arc-shaped cluster of inliers. Distinctly, McCatch does _not_ miss any microcluster nor axiom. Hence, our method obeys all the axioms a microcluster detector should follow; every single competitor fails. ### _Q3._ McCatch _is Scalable_ Fig. 7 has results on the scalability of McCatch. We plot runtime vs. data size for random samples of Uniform and Diagonal, considering their \\(2\\)- to \\(50\\)-dimensional versions. The lines reflect the slopes expected from Lemma 1. Estimations and measurements agree. As expected, McCatch scales sub-quadratically in every single case, regardless of the embedding dimension of the data. Particularly, note in 7(iii) and (iv) that Uniform have respectively _fractal dimension 20 and 50!_ Tab. VI reports runtime for McCatch and the other microcluster detectors in data of large cardinality or dimensionality. Note that McCatch is the fastest method in nearly all cases, e.g., \\(>\\)_50 times faster_ than D.MCA in large data. We also emphasize our method is the only one that reports principled results because efficiency is worthless without effectiveness. ### _Q4._ McCatch _is Practical_ _Attention routing:_ We studied the images on the left sides of Figs. 1(i) and 8(i). Each image was split into rectangular tiles from which average RGB values were extracted, thus leading to Shanghai and Volcanoes. Ground truth labels are unknown and, thus, AUROC cannot be computed. Regarding Shanghai, McCatch found two \\(2\\)-points clusters formed from unusually colored roofs of buildings (red and blue tiles in Fig. 1(i) - center), and other outlying tiles (in yellow). The red tiles are unusual and alike, and the same happens with the blue ones, while the yellow tiles are unusual but very distinct from one another. The plot in the right side of Fig. 1(i) corroborates our findings; see the red and blue mcs, and the scattered, yellow outliers. 
Similar results were seen in Volcanoes, with a \\(3\\)-points cluster of snow found on the summit of the volcano; see Fig. 8(i). Thus, McCatch can successfully and unsupervisedly route people's attention to notable image regions.

_Unusual names:_ Fig. 1(ii) reports results on nondimensional data. We studied Last Names using the L-Edit distance. McCatch earned a \\(0.75\\) AUROC by finding the outliers on the left side of the figure. We investigated these names and discovered that they have a large variety of geographic origins; see the country flags. Distinctly, the low-scored names mostly come from the UK; see the five ones with the lowest scores on the illustration's right side. We conclude that McCatch distinguished English and NonEnglish names in the data.

_Unusual skeletons:_ Fig. 1(iii) also regards nondimensional data. We studied the \\(203\\) graphs in Skeletons using the Graph edit distance. McCatch earned a _perfect_ AUROC of \\(1\\) by finding all \\(3\\) wild-animal skeletons on the figure's left side. Thus, it successfully found the unusual, non-human skeletons.

Fig. 7: **Q3. McCatch is Scalable: Runtime vs. data size. Lines show expected slopes (Lemma 1). Estimations and measurements agree very well: McCatch is subquadratic in all cases, despite embedding dimension. Note: Uniform in (iii) and (iv) have respectively _fractal dimension 20 and 50!_**

_Network attacks:_ Fig. 8(ii) reports results from HTTP. The raw data is in the left-side plot; there are \\(222\\)k connections described by numbers of bytes sent and received, and durations. The inliers and outliers found by McCatch are at the center- and the right-side plots, respectively. AUROC is \\(0.96\\). Note that \\(99.99\\%\\) of the inliers are not attacks; the outliers are either confirmed attacks or connections with a clear rarity, as they have oddly large durations, or numbers of bytes sent or received. The most notable result is the detection of a \\(30\\)-points mc of _confirmed_ 'DoS back' attacks, which are characterized by sending too many bytes to a server aimed at overloading it. Hence, our McCatch unsupervisedly found a cluster of frauds exploiting the same vulnerability in cybersecurity.

### _Q5._ McCatch _is 'Hands-Off'_

McCatch needs only a few hyperparameters, namely, \\(a\\), \\(b\\), \\(c\\). It turns out that the values we have used (\\(15\\), \\(1/10\\), \\(1/10\\), respectively), are at a smooth plateau (see Fig. 9): that is, the accuracy is insensitive to the exact choice of hyperparameter values. Specifically, Fig. 9 shows accuracy vs. \\(a\\), \\(b\\), \\(c\\), respectively, and every line corresponds to one of the datasets. Notice that all lines are near flat, highlighting the fact that McCatch needs no hyperparameter fine-tuning. To avoid clutter, we only show the largest real dataset (HTTP) with line-point format.

## VI Conclusions

We presented McCatch to address the microcluster-detection problem. The main idea is to leverage our proposed 'Oracle' plot (1NN Distance vs. Group 1NN Distance). McCatch achieves five goals: 2. **General Input:** McCatch works with any metric dataset, including nondimensional ones, as shown in Fig. 1. It is achieved by depending solely on distances. 3. **General Output:** McCatch ranks singleton ('one-off' outliers) and nonsingleton mcs _together_, by anomalousness. See Probl. 1 and Def. 7. It is achieved thanks to a new compression-based idea (Fig. 5) to compute scores. 4. **Principled:** McCatch obeys axioms; see Tab. V. 
It is achieved thanks to the new group axioms of Fig. 2 and our score-computation strategy that match human intuition. 5. **Scalable:** McCatch is subquadratic on the number of points, as shown in Lemma 1 and Fig. 7. It is made possible by carefully building on spatial joins and metric trees. 6. **'Hands-Off':** McCatch needs no manual tuning. It is achieved due to our MDL-based idea to get the Cutoff \\(d\\) from the given data; see Def. 6 and Figs. 4 and 9. We also set hyperparameters to the reasonable defaults of Alg. 1. No competitor fulfills all of these goals; see Tab. I. Also, McCatch is deterministic, ranks the points, and gives explainable results. We studied \\(31\\) real and synthetic datasets and showed McCatch outperforms \\(11\\) competitors, especially when the data has nonsingleton microclusters or is nondimensional. We also showcased McCatch's ability to find meaningful mcs in graphs, fingerprints, logs of network connections, text data, and satellite images. For example, it found a \\(30\\)-points mc of _confirmed_ attacks in the network logs, taking \\(\\sim\\!3\\)_minutes_ for \\(222\\)K points on a stock desktop; see Fig. 8(ii). ReproducibilityFor reproducibility, our data and code are available at [https://figshare.com/s/08869576e75b74c6cc5](https://figshare.com/s/08869576e75b74c6cc5). Fig. 8: **Q4. McCatch is Practical:** finding valid mcs unsupervisedly. (i) on a satellite image – it spots a \\(3\\)-points mc of snow in a volcano, and other outlying tiles with snow and rocks; (ii) on network security data – it spots a \\(30\\)-points mc of confirmed attacks, and other confirmed attacks. (best viewed in color) Fig. 9: **Q5. McCatch is ‘Hands-Off’** and insensitive to the (\\(a,b,c\\)) hyperparameter values: Accuracy has a smooth plateau. (best viewed in color) ## References * [1] C. C. Aggarwal, _Outlier Analysis_. Springer, 2013. * [2] G. O. Campos, A. Zimek, J. Sander, R. J. G. B. Campello, B. Micenkova, E. Schubert, I. Assent, and M. E. Houle, \"On the evaluation of unsupervised outlier detection: measures, datasets, and an empirical study,\" _DAMI_, vol. 30, no. 4, pp. 891-927, Jul. 2016. * [3] G. H. Orair, C. H. C. Teixeira, W. Meira, Y. Wang, and S. Parthasarathy, \"Distance-based outlier detection: Consolidation and renewed bearing,\" _Proc. VLDB Endow._, vol. 3, no. 1-2, p. 1469-1480, sep 2010. * [4] M.-C. Lee, S. Shekhar, C. Faloutsos, T. N. Hutson, and L. Iasemidis, \"Gen2out:Detecting and ranking generalized anomalies,\" in _2021 IEEE International Conference on Big Data (Big Data)_, 2021, pp. 801-811. * [5] S. Jiang, R. L. F. Cordeiro, and L. Akoglu, \"D.MCA: Outlier detection with explicit micro-cluster assignments,\" in _Proceedings of the 22nd IEEE International Conference on Data Mining (ICDM)_, 2022, p. 987-992. * [6] F. T. Liu, K. M. Ting, and Z.-H. Zhou, \"On detecting clustered anomalies using SEforrest,\" in _Machine Learning and Knowledge Discovery in Databases_, 2010, pp. 274-290. * [7] G. Shabat, D. Segev, and A. Averbuch, \"Uncovering unknown unknowns in financial services big data by unsupervised methodologies: Present and future trends,\" in _Proceedings of the KDD 2017 Workshop on Anomaly Detection in Finance, ADF@KDD 2017, Halifax, Nova Scotia, Canada, August 14, 2017_, ser. Proceedings of Machine Learning Research, vol. 71. PMLR, 2017, pp. 8-19. * [8] Y. Ki and J. W. 
Yoon, \"PD-FDS: purchase density based online credit card fraud detection system,\" in _Proceedings of the KDD 2017 Workshop on Anomaly Detection in Finance, ADF@KDD 2017, Halifax, Nova Scotia, Canada, August 14, 2017_, ser. Proceedings of Machine Learning Research, vol. 71. PMLR, 2017, pp. 76-84. * November 1, 2015_. IEEE Computer Society, 2015, pp. 2771-2774. * [10] X. Wang, J. Lin, N. Patel, and M. W. Braun, \"A self-learning and online algorithm for time series anomaly detection, with application in CPU manufacturing,\" in _Proceedings of the 25th ACM International Conference on Information and Knowledge Management, CIKM 2016, Indianapolis, IN, USA, October 24-28, 2016_. ACM, 2016, pp. 1823-1832. * [11] S. Kao, A. R. Ganguly, and K. Steinhaeuser, \"Motivating complex dependence structures in data mining: A case study with anomaly detection in climate,\" in _ICDM Workshops 2009, IEEE International Conference on Data Mining Workshops, Miami, Florida, USA, 6 December 2009_. IEEE Computer Society, 2009, pp. 223-230. * [12] M. Das and S. Parthasarathy, \"Anomaly detection and spatio-temporal analysis of global climate system,\" in _Proceedings of the Third International Workshop on Knowledge Discovery from Sensor Data, Paris, France, June 28, 2009_. ACM, 2009, pp. 142-150. * [13] H.-P. Kriegel, M. Schubert, and A. Zimek, \"Angle-based outlier detection in high-dimensional data,\" in _Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_, 2008, p. 444-452. * [14] S. Papadimitriou, H. Kitagawa, P. Gibbons, and C. Faloutsos, \"Loci: fast outlier detection using the local correlation integral,\" in _Proceedings 19th International Conference on Data Engineering_, 2003, pp. 315-326. * [15] E. M. Knorr and R. T. Ng, \"Algorithms for mining distance-based outliers in large datasets,\" in _Proceedings of the 24rd International Conference on Very Large Data Bases_, 1998, p. 392-403. * [16] C.-H. Chang, J. Yoon, S. O. Arik, M. Udell, and T. Pfister, \"Data-efficient and interpretable tabular anomaly detection,\" in _Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining_, 2023, p. 190-201. * [17] R. J. G. B. Campello, D. Moulavi, A. Zimek, and J. Sander, \"Hierarchical density estimates for data clustering, visualization, and outlier detection,\" _ACM Trans. Knowl. Discov. Data_, vol. 10, no. 1, 2015. * [18] F. T. Liu, K. M. Ting, and Z.-H. Zhou, \"Isolation-based anomaly detection,\" _ACM Trans. Knowl. Discov. Data_, vol. 6, no. 1, mar 2012. * [19] S. Ramaswamy, R. Rastogi, and K. Shim, \"Efficient algorithms for mining outliers from large data sets,\" _SIGMOD Rec._, vol. 29, no. 2, p. 427-438, 2000. * [20] K. Zhang, M. Hutter, and H. Jin, \"A new local distance-based outlier detection approach for scattered real-world data,\" in _Advances in Knowledge Discovery and Data Mining_, T. Theeramunkong, B. Kijsirikul, N. Cercone, and T.-B. Ho, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009, pp. 813-822. * [21] M. M. Breunig, H.-P. Kriegel, R. T. Ng, and J. Sander, \"Lof: Identifying density-based local outliers,\" in _Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data_, 2000, p. 93-104. * [22] V. Hautamaki, I. Karkkainen, and P. Franti, \"Outlier detection using k-nearest neighbour graph,\" in _Proceedings of the 17th International Conference on Pattern Recognition_, 2004, pp. 430-433. * [23] R. Pamula, J. K. Deka, and S. 
Nandi, \"An outlier detection method based on clustering,\" in _2011 Second International Conference on Emerging Applications of Information Technology_, 2011, pp. 253-256. * [24] S. Zhang, V. Ursekar, and L. Akoglu, \"Sparx: Distributed outlier detection at scale,\" in _Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining_, 2022, p. 4530-4540. * [25] L. Kong, A. Huet, D. Rossi, and M. Sozio, \"Tree-based kendall's \\(\\tau\\) maximization for explainable unsupervised anomaly detection,\" in _2023 IEEE International Conference on Data Mining (ICDM)_, 2023, pp. 1073-1078. * [26] L. Ruff, N. Gornitz, L. Deecke, S. A. Siddiqui, R. A. Vandermeulen, A. Binder, E. Muller, and M. Kloft, \"Deep one-class classification,\" in _Proceedings of the 35th International Conference on Machine Learning_, 2018, pp. 4390-4399. * [27] H. Xiang, X. Zhang, M. Das, A. Beheshti, W. Dou, and X. Xu, \"Deep optimal isolation forest with genetic algorithm for anomaly detection,\" in _2023 IEEE International Conference on Data Mining (ICDM)_, 2023, pp. 678-687. * [28] C. Zhou and R. C. Paffenroth, \"Anomaly detection with robust deep autoencoders,\" in _Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_, 2017, p. 665-674. * [29] M. Ester, H.-P. Kriegel, J. Sander, and X. Xu, \"A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise,\" in _International Conference on Knowledge Discovery and Data Mining_, USA, Oregon, Portland, 1996, pp. 226-231. * [30] S. Chawla and A. Gionis, \"k-means--: A unified approach to clustering and outlier detection,\" in _Proceedings of the 13th SIAM International Conference on Data Mining (SDM)_, 2013, pp. 189-197. * [31] M. Ankerst, M. M. Breunig, H.-P. Kriegel, and J. Sander, \"Optics: Ordering points to identify the clustering structure,\" _ACM Sigmod record_, vol. 28, no. 2, pp. 49-60, 1999. * [32] I. Borg and P. Groenen, _Modern Multidimensional Scaling: theory and applications_, 2nd ed. Springer-Verlag, 2005. * [33] L. McInnes, J. Healy, and J. Melville, \"Umap: Uniform manifold approximation and projection for dimension reduction,\" 2020. * [34] L. van der Maaten and G. Hinton, \"Visualizing data using t-sne,\" _Journal of Machine Learning Research_, vol. 9, no. 86, pp. 2579-2605, 2008. [Online]. Available: [http://jmlr.org/papers/v9/vandermaaten08a.html](http://jmlr.org/papers/v9/vandermaaten08a.html) * [35] C. T. Ir, A. J. M. Traina, C. Faloutsos, and B. Seeger, \"Fast indexing and visualization of metric data sets using slim-trees,\" _IEEE Trans. Knowl. Data Eng._, vol. 14, no. 2, pp. 244-260, 2002. * [36] P. Ciaccia, M. Patella, and P. Zezula, \"M-tree: An efficient access method for similarity search in metric spaces,\" in _Proceedings of the 23rd International Conference on Very Large Data Bases_, ser. VLDB '97. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc., 1997, p. 426-435. * [37] P. Grunwald, \"A tutorial introduction to the minimum description length principle,\" 2004. * [38] J. Rissanen, \"A universal prior for integers and estimation by minimum description length,\" _The Annals of Statistics_, vol. 11, no. 2, pp. 416-431, 6 1983. * [39] D. Chakrabarti, S. Papadimitriou, D. S. Modha, and C. Faloutsos, \"Fully automatic cross-associations,\" in _KDD_. ACM, 2004, pp. 79-88. * [40] C. Faloutsos and I. Kamel, \"Beyond uniformity and independence: Analysis of r-trees using the concept of fractal dimension,\" in _PODS_. ACM Press, 1994, pp. 4-13. * [41] B. 
Pagel, F. Korn, and C. Faloutsos, \"Deflating the dimensionality curse using multiple fractal dimensions,\" in _ICDE_. IEEE Computer Society, 2000, pp. 589-598. * [42] C. T. Jr, A. J. M. Traina, L. Wu, and C. Faloutsos, \"Fast feature selection using fractal dimension,\" in _SBBD_. CEPET-PB, 2000, pp. 158-171. * [43] B. Bryan, F. Eberhardt, and C. Faloutsos, \"Compact similarity joins,\" in _ICDE_. IEEE Computer Society, 2008, pp. 346-355. * [44] T. R. Bandaragoda, K. M. Ting, D. Albrecht, F. T. Liu, Y. Zhu, and J. R. Wells, \"Isolation-based anomaly detection using nearest-neighbor ensembles,\" _Computational Intelligence_, vol. 34, no. 4, pp. 968-998, 2018. [Online]. Available: [https://onlinelibrary.wiley.com/doi/abs/10.1111/coin.12156](https://onlinelibrary.wiley.com/doi/abs/10.1111/coin.12156) * [45] K. M. Ting, B. Xu, T. Washio, and Z. Zhou, \"Isolation distributional kernel: A new tool for kernel based anomaly detection,\" in _KDD '20: The 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Virtual Event, CA, USA, August 23-27, 2020_, R. Gupta, Y. Liu, J. Tang, and B. A. Prakash, Eds. ACM, 2020, pp. 198-206. [Online]. Available: [https://doi.org/10.1145/3394486.3403062](https://doi.org/10.1145/3394486.3403062) * determine string similarities and distance,\" [https://www.postgresql.org/docs/current/fuzzystrmatch.html](https://www.postgresql.org/docs/current/fuzzystrmatch.html), Accessed: 2024-02-21. * [47] L. Vincent, \"Graphs and mathematical morphology,\" _Signal Processing_, vol. 16, no. 4, pp. 365-388, 1989. [Online]. Available: [https://doi.org/10.1016/0165-1684](https://doi.org/10.1016/0165-1684)(89)90031-5 * [48] M. Pawlik and N. Augsten, \"Efficient computation of the tree edit distance,\" _ACM Trans. Database Syst._, vol. 40, no. 1, mar 2015. [Online]. Available: [https://doi.org/10.1145/2699485](https://doi.org/10.1145/2699485)
How could we have an outlier detector that works even with _nondimensional_ data, and ranks _together_ both singleton microclusters (\"one-off\" outliers) and nonsingleton microclusters by their anomaly scores? How to obtain scores that are _principled_ in _one scalable_ and _'hands-off'_ manner? Microclusters of outliers indicate coalition or repetition in fraud activities, etc.; their identification is thus highly desirable. This paper presents McCatch: a new algorithm that detects microclusters by leveraging our proposed 'Oracle' plot (\\(1\\)NN Distance versus Group \\(1\\)NN Distance). We study \\(31\\) real and synthetic datasets with up to \\(1\\)M data elements to show that McCatch is the only method that answers both of the questions above; and, it outperforms \\(11\\) other methods, especially when the data has nonsingleton microclusters or is nondimensional. We also showcase McCatch's ability to detect meaningful microclusters in graphs, fingerprints, logs of network connections, text data, and satellite imagery. For example, it found a \\(30\\)-elements microcluster of confirmed 'Denial of Service' attacks in the network logs, taking only \\(\\sim\\!3\\)_minutes_ for \\(222\\)K data elements on a stock desktop. microcluster detection, metric data, scalability
# Parameter Estimation in SAR Imagery using Stochastic Distances and Asymmetric Kernels Juliana Gambini, Julia Cassetti, Maria Magdalena Lucini, and Alejandro C. Frery, This work was supported by Conicet, CNPq, and Frapeal.Juliana Gambini is with the Instituto Tecnologico de Buenos Aires, Av. Madero 399, C1106ACD Buenos Aires, Argentina and with Depto. de Ingenieria en Computacion, Universidad Nacional de Tres de Febrero, Peia.de Buenos Aires, Argentina, [email protected] Cassetti is with the Instituto de Desarrollo Humano, Universidad Nacional de Gral, Sarmento, Peia, de Buenos Aires, Argentina.Magdalena Lucini is with the Facultad de Ciencias Exactas, Naturales y Agrimensura, Universidad Nacional del Nordeste, Av. Libertad 5460, 3400 Corrientes, Argentina.Alejandro C. Frery is with the LaCCAN, Universidade Federal de Alagoas, Av. Lourival Melo Mota, s/n, 57072-900 Maceio - AL, Brazil, [email protected] ## I Introduction The statistical modeling of the data is essential in order to interpret SAR images. Speckled data have been described under the multiplicative model using the \\(\\mathcal{G}\\) family of distributions which is able to describe rough and extremely rough areas better than the \\(\\mathcal{K}\\) distribution [1, 2]. The survey article [3] discusses in detail several statistical models for this kind of data. Under the \\(\\mathcal{G}\\) model different degrees of roughness are associated to different parameter values, therefore it is of paramount importance to have high quality estimators. Several works have been devoted to the subject of improving estimation with two main venues of research, namely, analytic and resampling procedures. The analytic approach was taken by Vasconcellos et al. [4] who quantified the bias in the estimation of the roughness parameter of the \\(\\mathcal{G}_{A}^{0}\\) distribution by maximum likelihood (ML). They proposed an analytic change for improved performance with respect to bias and mean squared error. Also, Silva et al. [5] computed analytic improvements for that estimator. Such approaches reduce both the bias and the mean squared error of the estimation, at the expense of computing somewhat cumbersome correction terms, and yielding estimators whose robustness is largely unknown. Cribari-Neto et al. [6] compared several numerical improvements for that estimator using bootstrap. Again, the improvement comes at the expense of intensive computing and with no known properties under contamination. Allende et al. [7] and Bustos et al. [8] also sought for improved estimation, but seeking the robustness of the procedure. The new estimators are resistant to contamination, and in some cases they also improve the mean squared error, but they require dealing with influence functions and asymptotic properties not always immediately available to remote sensing practitioners. A common issue in all the aforementioned estimation procedures, including ML, and to those based on fractional moments [1] and log-cumulants [9, 10, 11] is the need of iterative algorithms for which there is no granted convergence to global solutions. Such lack of convergence usually arises with small samples, precluding the use of such techniques in, e.g., statistical filters. Frery et al. [12] and Pianto and Cribari-Neto [13] proposed techniques which aim at alleviating such issue, at the cost of additional computational load. 
The main idea of this work is to develop an estimation method for the \\(\\mathcal{G}_{I}^{0}\\) model with good properties (as measured by its bias, the mean squared error and its ability to resist contamination) even with samples of small and moderate size, and low computational cost. In order to achieve this task, we propose minimizing a stochastic distance between the fixed empirical evidence and the estimated model. Shannon proposed a divergence between two density functions as a measure of the relative information between the two distributions. Divergences were studied by Kullback and Leibler and by Renyi [14], among others. These divergences have multiple applications in signal and image processing [15], medical image analysis diagnosis [16], and automatic region detection in SAR imagery [17, 18, 19, 20]. Liese and Vajda [21] provide a detailed theoretical analysis of divergence measures. Cassetti et al. [22] compared estimators based on the Hellinger, Bhattacharyya, Renyi and Triangular distance with the ML estimator. They presented evidence that the Triangular distance is the best choice for this application, but noticed that histograms led to many numerical instabilities. This work presents improvements with respect to those results in the following regards: we assess the impact of contamination in the estimation, we employ kernels rather than histograms andwe compare estimators based on the Triangular distance with the ML, Fractional Moments and Log-Cumulants estimators. Among the possibilities for such estimate, we opted for employing asymmetric kernels. Kernels have been extensively applied with success to image processing problems as, for instance, object tracking [23]. Kernels with positive support, which are of particular interest for our work, were employed in [24, 25]. The general problem of using asymmetric kernels was studied in [26]. In [27], the authors demonstrate, through extensive modeling of real data, that SAR images are best described by families of heavy-tailed distributions. We provide new results regarding the heavytailedness of the \\(\\mathcal{G}_{I}^{0}\\) distribution; in particular, we show that it has heavy tails with tail index \\(1-\\alpha\\). The paper unfolds as follows. Section II recalls the main properties of the \\(\\mathcal{G}_{I}^{0}\\) model and the Maximum Likelihood, the \\(\\frac{1}{2}\\)-moment and the log-cumulants methods for parameter estimation. Estimation with stochastic distances is presented in Section III, including different ways of calculating the estimate of the underlying density function and the contamination models employed to assess the robustness of the procedure. Section IV presents the main results. Section V discusses the conclusions. ## II The \\(\\mathcal{G}_{I}^{0}\\) Model The return in monopolized SAR images can be modeled as the product of two independent random variables, one corresponding to the backscatter \\(X\\) and the other to the speckle noise \\(Y\\). In this manner \\(Z=XY\\) represents the return in each pixel under the multiplicative model. For monopolarized data, speckle is modeled as a \\(\\Gamma\\) distributed random variable, with unitary mean and shape parameter \\(L\\geq 1\\), the number of looks, while the backscatter is considered to obey a reciprocal of Gamma law. This gives rise to the \\(\\mathcal{G}_{I}^{0}\\) distribution for the return. 
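As a quick illustration of this construction, the Python sketch below draws \\(\\mathcal{G}_{I}^{0}\\) returns by multiplying unit-mean Gamma speckle with backscatter obtained as \\(\\gamma\\) divided by a unit-scale Gamma variate of shape \\(-\\alpha\\) (a reciprocal-of-Gamma draw). The parameter values and the function name are ours.

```python
import numpy as np

def sample_gi0(alpha, gamma, L, n, rng=None):
    """Draw n returns Z = X * Y: Y is unit-mean Gamma speckle with shape L,
    and X follows a reciprocal-of-Gamma law with parameters (alpha, gamma)."""
    rng = np.random.default_rng() if rng is None else rng
    X = gamma / rng.gamma(shape=-alpha, scale=1.0, size=n)   # backscatter
    Y = rng.gamma(shape=L, scale=1.0 / L, size=n)            # speckle, mean 1
    return X * Y

# With gamma* = -alpha - 1 the mean return is 1:
z = sample_gi0(alpha=-4.0, gamma=3.0, L=3, n=100_000, rng=np.random.default_rng(42))
print(z.mean())   # close to 1
```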
Given the mathematical tractability and descriptive power of the \\(\\mathcal{G}_{I}^{0}\\) distribution for intensity data [2, 28] it represents an attractive choice for SAR data modeling. The density function for intensity data is given by \\[f_{\\mathcal{G}_{I}^{0}}(z)=\\frac{L^{L}\\Gamma(L-\\alpha)}{\\gamma^{\\alpha}\\Gamma (-\\alpha)\\Gamma(L)}\\frac{z^{L-1}}{(\\gamma+zL)^{L-\\alpha}}, \\tag{1}\\] where \\(-\\alpha,\\gamma,z>0\\) and \\(L\\geq 1\\). The \\(r\\)-order moments are \\[E(Z^{r})=\\Big{(}\\frac{\\gamma}{L}\\Big{)}^{r}\\frac{\\Gamma(-\\alpha-r)}{\\Gamma(- \\alpha)}\\frac{\\Gamma(L+r)}{\\Gamma(L)}, \\tag{2}\\] provided \\(\\alpha<-r\\), and infinite otherwise. With the double purpose of simplifying the calculations and making the results comparable, in the following we choose the scale parameter such that \\(E(Z)=1\\), which is given by \\(\\gamma^{*}=-\\alpha-1\\). One of the most important features of the \\(\\mathcal{G}_{I}^{0}\\) distribution is the interpretation of the \\(\\alpha\\) parameter, which is related to the roughness of the target. Values close to zero (typically above \\(-3\\)) suggest extreme textured targets, as urban zones. As the value decreases, it indicates regions with moderate texture (usually \\(\\alpha\\in[-6,-3]\\)), as forest zones. Textureless targets, e.g. pasture, usually produce \\(\\alpha\\in(-\\infty,-6)\\). This is the reason why the accuracy in the estimation of \\(\\alpha\\) is so important. Let \\(\\mathbf{z}=(z_{1},\\ldots,z_{n})\\) be a random sample of \\(n\\) independent draws from the \\(\\mathcal{G}_{I}^{0}\\) model. Assuming \\(\\gamma^{*}=-\\alpha-1\\), the ML estimator for the parameter \\(\\alpha\\), namely \\(\\widehat{\\alpha}_{\\text{ML}}\\) is the solution of the following nonlinear equation: \\[\\Psi^{0}(\\widehat{\\alpha}_{\\text{ML}})-\\Psi^{0}(L-\\widehat{\\alpha }_{\\text{ML}})-\\log(1-\\widehat{\\alpha}_{\\text{ML}})+\\] \\[\\frac{\\widehat{\\alpha}_{\\text{ML}}}{1-\\widehat{\\alpha}_{\\text{ML }}}+\\frac{1}{n}\\sum_{i=1}^{n}\\log(1-\\widehat{\\alpha}_{\\text{ML}}+Lz_{i})-\\] \\[\\frac{\\widehat{\\alpha}_{\\text{ML}}-L}{n}\\sum_{i=1}^{n}\\frac{1}{1 -\\widehat{\\alpha}_{\\text{ML}}+Lz_{i}}=0,\\] where \\(\\Psi^{0}(\\cdot)\\) is the digamma function. Some of the issues posed by the solution of this equation have been discussed, and partly solved, in [12]. Fractional moments estimators have been widely used with success [1, 29]. Using \\(r=1/2\\) in (2) one has to solve \\[\\frac{1}{n}\\sum_{i=1}^{n}\\sqrt{z_{i}}-\\sqrt{\\frac{-\\widehat{\\alpha}_{\\text{ Mom12}}-1}{L}}\\frac{\\Gamma(-\\widehat{\\alpha}_{\\text{Mom12}}-\\frac{1}{2})}{ \\Gamma(-\\widehat{\\alpha}_{\\text{Mom12}})}\\frac{\\Gamma(L+\\frac{1}{2})}{\\Gamma( L)}=0. \\tag{3}\\] Estimation based on log-cumulants is gaining space in the literature due to its nice properties and good performance [9, 10, 11]. Following Tison et al. 
[30], the main second kind statistics can be defined as:
* First second kind characteristic function: \\(\\phi_{x}(s)=\\int_{0}^{\\infty}u^{s-1}p_{x}(u)du\\)
* Second second kind characteristic function: \\(\\psi_{x}(s)=\\log\\phi_{x}(s)\\)
* \\(r\\)-th order second kind characteristic moment: \\[\\tilde{m}_{r}=\\frac{d^{r}\\phi_{x}(s)}{ds^{r}}\\Big{|}_{s=1}. \\tag{4}\\]
* \\(r\\)-th order second kind characteristic cumulant (log-cumulant): \\[\\tilde{k}_{r}=\\frac{d^{r}\\psi_{x}(s)}{ds^{r}}\\Big{|}_{s=1}. \\tag{5}\\]
If \\(p_{x}(u)=f_{\\mathcal{G}_{I}^{0}}(u)\\) then we have:
* \\(\\phi_{x}(s)=\\frac{\\big{(}\\frac{L}{\\gamma}\\big{)}^{1-s}\\Gamma(-1+L+s)\\Gamma(1-s-\\alpha)}{\\Gamma(L)\\Gamma(-\\alpha)}\\)
* \\(\\widetilde{m}_{1}=\\frac{d\\phi_{x}(s)}{ds}\\big{|}_{s=1}=-\\log(\\frac{L}{\\gamma})+\\Psi^{0}(L)-\\Psi^{0}(-\\alpha)\\).
Using the developments presented in [30], we have that \\(\\widetilde{k}_{1}=\\widetilde{m}_{1}\\), and the empirical expression for the first log-cumulant estimator for \\(n\\) samples \\(z_{i}\\) is \\(\\widehat{\\tilde{k}}_{1}=n^{-1}\\sum_{i=1}^{n}\\log z_{i}\\). Therefore \\[\\widetilde{k}_{1}=-\\log\\frac{L}{\\gamma}+\\Psi^{0}(L)-\\Psi^{0}(-\\alpha). \\tag{6}\\] Assuming \\(\\gamma^{*}=-\\alpha-1\\), the Log-Cumulant estimator of \\(\\alpha\\), denoted by \\(\\widehat{\\alpha}_{\\text{LCum}}\\), is then the solution of \\(\\widehat{\\tilde{k}}_{1}=-\\log\\frac{L}{-\\widehat{\\alpha}_{\\text{LCum}}-1}+\\Psi^{0}(L)-\\Psi^{0}(-\\widehat{\\alpha}_{\\text{LCum}})\\), that is, the solution of \\[\\frac{1}{n}\\sum_{i=1}^{n}\\log z_{i}=-\\log\\frac{L}{-\\widehat{\\alpha}_{\\text{LCum}}-1}+\\Psi^{0}(L)-\\Psi^{0}(-\\widehat{\\alpha}_{\\text{LCum}}). \\tag{7}\\] The ability of these estimators to resist outliers has not yet been assessed. In the following we provide new results regarding the heavy-tailedness of the \\(\\mathcal{G}_{I}^{0}\\) distribution. This partly explains the numerical issues faced when seeking estimators for its parameters: the distribution is prone to producing extreme values. The main concepts are from [31, 32, 33]. **Definition 1**: _The function \\(\\ell\\colon\\mathbb{R}\\to\\mathbb{R}\\) is slowly varying at infinity if for every \\(t>0\\) it holds that_ \\[\\lim_{x\\to+\\infty}\\frac{\\ell(tx)}{\\ell(x)}=1.\\] **Definition 2**: _A probability density function \\(f(x)\\) has heavy tails with tail index \\(\\eta>0\\) if it can be written as_ \\[f(x)=\\ell(x)x^{-\\eta},\\] _where \\(\\ell\\) is a slowly varying function at infinity._ The smaller the tail index is, the more prone to producing extreme observations the distribution is. **Proposition 1**: _The \\(\\mathcal{G}_{I}^{0}\\) distribution has heavy tails with tail index \\(1-\\alpha\\)._ Defining \\(\\ell(x)=f_{\\mathcal{G}_{I}^{0}}(x)x^{1-\\alpha}\\) we have that \\[\\lim_{x\\to+\\infty}\\frac{\\ell(tx)}{\\ell(x)} =\\lim_{x\\to+\\infty}\\frac{(tx)^{L-1}\\left(Ltx+\\gamma\\right)^{\\alpha-L}\\left(tx\\right)^{1-\\alpha}}{x^{L-1}\\left(Lx+\\gamma\\right)^{\\alpha-L}x^{1-\\alpha}}\\] \\[=\\lim_{x\\to+\\infty}t^{L-\\alpha}\\left(\\frac{Lx+\\gamma}{Ltx+\\gamma}\\right)^{L-\\alpha}=1.\\] This holds for every \\(t>0\\), so \\(\\ell\\) is as in Def. 1. Since the tail index \\(1-\\alpha\\) is a decreasing function of \\(\\alpha\\), the \\(\\mathcal{G}_{I}^{0}\\) distribution is more prone to producing extreme observations as the roughness parameter increases. The remainder of this section is devoted to proving that the \\(\\mathcal{G}_{I}^{0}\\) model is outlier-prone.
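Before doing so, we note that both moment-based estimators above reduce to one-dimensional root-finding problems. The following minimal R sketch is our own illustration of how (3) and (7) can be solved with uniroot (the routine the Appendix reports using for the log-cumulant equation), over the same search range \\([-20,-1]\\) employed for all estimators; the slight shift of the right endpoint is an assumption made to avoid the singularity at \\(\\alpha=-1\\).

```r
# Minimal sketch: fractional-moment (r = 1/2) and log-cumulant estimators of alpha,
# assuming gamma* = -alpha - 1 and a known number of looks L.
est_mom12 <- function(z, L) {
  g <- function(a)                      # left-hand side of Eq. (3)
    mean(sqrt(z)) -
      sqrt((-a - 1) / L) * gamma(-a - 0.5) / gamma(-a) * gamma(L + 0.5) / gamma(L)
  # uniroot() fails when (3) has no root in the interval, which corresponds to the
  # non-convergence situations discussed in Section IV.
  uniroot(g, interval = c(-20, -1.001))$root
}

est_lcum <- function(z, L) {
  g <- function(a)                      # Eq. (7): empirical minus theoretical first log-cumulant
    mean(log(z)) - (-log(L / (-a - 1)) + digamma(L) - digamma(-a))
  uniroot(g, interval = c(-20, -1.001))$root
}

# Example with the sampler sketched above:
# z <- rGI0(121, alpha = -5, gamma = 4, L = 3)
# est_mom12(z, L = 3); est_lcum(z, L = 3)
```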
Consider the random variables \\(z_{1},\\ldots,z_{n}\\) and the corresponding order statistics \\(Z_{1:n}\\leq\\cdots\\leq Z_{n:n}\\). **Definition 3**: _The distribution is absolutely outlier-prone if there are positive constants \\(\\varepsilon,\\delta\\) and an integer \\(n_{0}\\) such that_ \\[\\Pr(Z_{n:n}-Z_{n-1:n}>\\varepsilon)\\geq\\delta\\] _holds for every integer \\(n\\geq n_{0}\\)._ A sufficient condition for being absolutely outlier-prone is that there are positive constants \\(\\varepsilon,\\delta,x_{0}\\) such that the density of the distribution satisfies, for every \\(x\\geq x_{0}\\), \\[\\frac{f(x+\\varepsilon)}{f(x)}\\geq\\delta. \\tag{8}\\] Since \\[\\lim_{x\\to+\\infty}\\frac{f_{\\mathcal{G}_{I}^{0}}(x+\\varepsilon)}{f_{\\mathcal{G}_{I}^{0}}(x)} =\\lim_{x\\to+\\infty}\\Bigl{(}\\frac{x+\\varepsilon}{x}\\Bigr{)}^{L-1}\\Bigl{(}\\frac{Lx+\\gamma}{L(x+\\varepsilon)+\\gamma}\\Bigr{)}^{L-\\alpha}\\] \\[=1,\\] condition (8) holds for any \\(\\delta<1\\) and \\(x_{0}\\) large enough, and we are in the presence of an absolutely outlier-prone model.
## III Estimation by the Minimization of Stochastic Distances
Information Theory provides divergences for comparing two distributions; in particular, we are interested in distances between densities. Our proposal consists in computing, given the sample \\(\\mathbf{z}\\), the estimator \\(\\widehat{\\alpha}\\) of \\(\\alpha\\) as the point which minimizes the distance between the density \\(f_{\\mathcal{G}_{I}^{0}}\\) and an estimate of the underlying density function. Cassetti et al. [22] assessed the Hellinger, Bhattacharyya, Renyi and Triangular distances, and they concluded that the latter outperforms the other ones in a variety of situations. The Triangular distance between the densities \\(f_{V}\\) and \\(f_{W}\\) with common support \\(S\\) is given by \\[d_{T}(f_{V},f_{W})=\\int_{S}\\frac{(f_{V}-f_{W})^{2}}{f_{V}+f_{W}}. \\tag{9}\\] Let \\(\\mathbf{z}=(z_{1},\\ldots,z_{n})\\) be a random sample of \\(n\\) independent \\(\\mathcal{G}_{I}^{0}(\\alpha_{0},\\gamma_{0}^{*},L_{0})\\)-distributed observations. An estimate of the underlying density function of \\(\\mathbf{z}\\), denoted \\(\\widehat{f}\\), is used to define the objective function to be minimized as a function of \\(\\alpha\\). The estimator for the \\(\\alpha\\) parameter based on the minimization of the Triangular distance between an estimate of the underlying density \\(\\widehat{f}\\) and the model \\(f_{\\mathcal{G}_{I}^{0}}\\), denoted by \\(\\widehat{\\alpha}_{T}\\), is given by \\[\\widehat{\\alpha}_{T}=\\arg\\min_{-20\\leq\\alpha\\leq-1}d_{T}\\bigl{(}f_{\\mathcal{G}_{I}^{0}}(\\alpha,\\gamma_{0}^{*},L_{0}),\\widehat{f}(\\mathbf{z})\\bigr{)}, \\tag{10}\\] where \\(\\gamma_{0}^{*}\\) and \\(L_{0}\\) are known and \\(d_{T}\\) is given in (9). Solving (10) requires two steps: computing the integral in (9) between the \\(\\mathcal{G}_{I}^{0}\\) density and the density estimate \\(\\widehat{f}\\), and optimizing it with respect to \\(\\alpha\\). To the best of the authors' knowledge, there are no explicit analytic results for either problem, so we rely on numerical procedures. The range of the search is established to avoid numerical instabilities. We also compute numerically the ML, \\(\\frac{1}{2}\\)-moment and Log-Cumulant estimators over the same search range, and compare all the methods through their bias and mean squared error by simulation.
### _Estimate of the Underlying Distribution_
The way in which the underlying density function is estimated is very important in our proposal.
In this section we describe several possibilities for computing it, and we justify our choice. Histograms are the simplest empirical densities, but they lack unicity and they are not smooth functions. Given that the model we are interested in is asymmetric, another possibility is to use asymmetric kernels [24, 26]. Let \\(\\mathbf{z}=(z_{1},\\ldots,z_{n})\\) be a random sample of size \\(n\\) with unknown probability density function \\(f\\); an estimate of this density function using kernels is given by \\[\\widehat{f}_{b}(t;\\mathbf{z})=\\frac{1}{n}\\sum_{i=1}^{n}K(t;z_{i},b),\\] where \\(b\\) is the bandwidth of the kernel \\(K\\). Among the many available asymmetric kernels, we worked with two: the densities of the Gamma and Inverse Gaussian distributions. These kernels are given by \\[K_{\\Gamma}(t;z_{i},b) =\\frac{t^{z_{i}/b}\\exp\\{-t/b\\}}{b^{z_{i}/b+1}\\Gamma(z_{i}/b+1)},\\text{and}\\] \\[K_{\\text{IG}}(t;z_{i},b) =\\frac{1}{\\sqrt{2\\pi bt^{3}}}\\exp\\Big{\\{}-\\frac{1}{2bz_{i}}\\Big{(}\\frac{t}{z_{i}}+\\frac{z_{i}}{t}-2\\Big{)}\\Big{\\}},\\] respectively, for every \\(t>0\\). Empirical studies led us to employ \\(b=n^{-1/2}/5\\). As an example, Figure 1 shows the \\(\\mathcal{G}_{I}^{0}(-3,2,1)\\) density and three estimates of the underlying density function obtained with \\(n=30\\) samples: those produced by the Gamma and the Inverse Gaussian kernels, and the histogram computed with the Freedman-Diaconis method. As can be seen, fitting the underlying density function using kernels is better than using a histogram. After extensive numerical studies, we opted for the \\(K_{\\mathrm{IG}}\\) kernel due to its low computational cost, its ability to describe observations with large variance, and its good numerical stability. In agreement with what Bouezmarni and Scaillet [26] reported, the Gamma kernel is also susceptible to numerical instabilities.
### _Contamination_
Estimators in signal and image processing are often used in a wide variety of situations, so their robustness is of the highest importance. Robustness is the ability to perform well when the data obey the assumed model, and to not provide completely useless results when the observations do not follow it exactly. Robustness, in this sense, is essential when designing image processing and analysis applications. Filters, for instance, employ estimators based, typically, on small samples which, more often than not, receive observations from more than one class. Supervised classification relies on samples, and the effect of contamination (often referred to as \"training errors\") on the results has been attested, among other works, in [34]. In order to assess the robustness of the estimators, we propose three contamination models able to describe realistic departures from the hypothetical \"independent identically distributed sample\" assumption. One of the sources of contamination in SAR imagery is the phenomenon of double bounce, which results in some pixels having a high return value. The presence of such outliers may provoke large errors in the estimation. In order to assess the robustness of the proposal, we generate contaminated random samples using three types (cases) of contamination, with \\(0<\\epsilon\\ll 1\\) the proportion of contamination. Let the Bernoulli random variable \\(B\\) with probability of success \\(\\epsilon\\) model the occurrence of contamination. Let \\(C\\in\\mathbb{R}_{+}\\) be a large value.
* Case 1: Let \\(W\\) and \\(U\\) be such that \\(W\\sim\\mathcal{G}_{I}^{0}(\\alpha_{1},\\gamma_{1}^{*},L)\\) and \\(U\\sim\\mathcal{G}_{I}^{0}(\\alpha_{2},\\gamma_{2}^{*},L)\\). Define \\(Z=BU+(1-B)W\\); then we generate \\(\\{z_{1},\\ldots,z_{n}\\}\\) as identically distributed random variables with cumulative distribution function \\[(1-\\epsilon)\\mathcal{F}_{\\mathcal{G}_{I}^{0}(\\alpha_{1},\\gamma_{1}^{*},L)}(z)+\\epsilon\\mathcal{F}_{\\mathcal{G}_{I}^{0}(\\alpha_{2},\\gamma_{2}^{*},L)}(z),\\] where \\(\\mathcal{F}_{\\mathcal{G}_{I}^{0}(\\alpha,\\gamma,L)}\\) is the cumulative distribution function of a \\(\\mathcal{G}_{I}^{0}(\\alpha,\\gamma,L)\\) random variable.
* Case 2: Consider \\(W\\sim\\mathcal{G}_{I}^{0}(\\alpha_{1},\\gamma_{1}^{*},L)\\); return \\(Z=BC+(1-B)W\\).
* Case 3: Consider \\(W\\sim\\mathcal{G}_{I}^{0}(\\alpha,\\gamma^{*},L)\\) and \\(U\\sim\\mathcal{G}_{I}^{0}(\\alpha,10^{k}\\gamma^{*},L)\\) with \\(k\\in\\mathbb{N}\\). Return \\(Z=BU+(1-B)W\\); then \\(\\{z_{1},\\ldots,z_{n}\\}\\) are identically distributed random variables with cumulative distribution function \\[(1-\\epsilon)\\mathcal{F}_{\\mathcal{G}_{I}^{0}(\\alpha,\\gamma^{*},L)}(z)+\\epsilon\\mathcal{F}_{\\mathcal{G}_{I}^{0}(\\alpha,10^{k}\\gamma^{*},L)}(z).\\]
All these models consider departures from the hypothesized distribution \\(\\mathcal{G}_{I}^{0}(\\alpha,\\gamma^{*},L)\\). The first type of contamination assumes that, with probability \\(\\epsilon\\), instead of observing outcomes from the \"right\" model, a sample from a different one will be observed; notice that the outlier may be close to the other observations. The second type returns a fixed and typically large value, \\(C\\), with probability \\(\\epsilon\\). The third type is a particular case of the first, where the contamination assumes the form of a distribution whose scale is \\(k\\) orders of magnitude larger than the hypothesized model. We use the three cases of contamination models in our assessment.
## IV Results
A Monte Carlo experiment was set up to assess the performance of each estimation procedure. The parameter space consists of the grid formed by (i) three values of roughness: \\(\\alpha=\\{-1.5,-3,-5\\}\\), which are representative of areas with extreme and moderate texture; (ii) three usual signal-to-noise levels, through the number of looks \\(L=\\{1,3,8\\}\\); (iii) sample sizes \\(n=\\{9,25,49,81,121,1000\\}\\), related to square windows of side \\(3\\), \\(5\\), \\(7\\), \\(9\\) and \\(11\\), and to a large sample; (iv) each of the three cases of contamination, with \\(\\epsilon=\\{0.001,0.005,0.01\\}\\), \\(\\alpha_{2}=\\{-4,-15\\}\\), \\(C=100\\) and \\(k=2\\). One thousand samples were drawn for each point of the parameter space, producing \\(\\{\\widehat{\\alpha}_{1},\\ldots,\\widehat{\\alpha}_{1000}\\}\\) estimates of each kind. Estimates of the mean \\(\\overline{\\widehat{\\alpha}}=1000^{-1}\\sum_{i=1}^{1000}\\widehat{\\alpha}_{i}\\), bias \\(\\widehat{B}(\\widehat{\\alpha})=\\overline{\\widehat{\\alpha}}-\\alpha\\) and mean squared error \\(\\operatorname{mse}=1000^{-1}\\sum_{i=1}^{1000}\\left(\\widehat{\\alpha}_{i}-\\alpha\\right)^{2}\\) were then computed and compared. In the following figures, \"ML\", \"T\", \"Mom12\" and \"Lcum\" denote the estimators based on the Maximum Likelihood, Triangular distance, \\(\\frac{1}{2}\\)-moment and Log-Cumulant methods, respectively.
Fig. 1: Fitting the \\(\\mathcal{G}_{I}^{0}\\) density of thirty \\(\\mathcal{G}_{I}^{0}(-3,2,1)\\) observations in different ways, along with the histogram.
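For concreteness, the following minimal R sketch (an illustration written by us, not the authors' code) shows how a single replication of this experiment can be carried out: a Case 2 contaminated sample is generated, the Inverse Gaussian kernel estimate of Section III-A is built with \\(b=n^{-1/2}/5\\), and \\(\\widehat{\\alpha}_{T}\\) in (10) is obtained by numerically minimizing the Triangular distance (9). The paper performs the integration with adaptIntegrate from the cubature package; the truncated stats::integrate call and the small guard against 0/0 used here are assumptions made to keep the sketch self-contained.

```r
dGI0 <- function(z, alpha, gam, L)            # density (1); gam plays the role of gamma
  L^L * gamma(L - alpha) / (gam^alpha * gamma(-alpha) * gamma(L)) *
    z^(L - 1) / (gam + z * L)^(L - alpha)

kde_ig <- function(t, z, b)                   # Inverse Gaussian kernel estimate, Section III-A
  sapply(t, function(tt)
    mean(exp(-(tt / z + z / tt - 2) / (2 * b * z)) / sqrt(2 * pi * b * tt^3)))

tri_dist <- function(alpha, z, gam0, L0, b) { # Triangular distance (9), computed numerically
  integrand <- function(t) {
    f <- dGI0(t, alpha, gam0, L0)
    g <- kde_ig(t, z, b)
    (f - g)^2 / pmax(f + g, .Machine$double.xmin)   # guard against 0/0
  }
  integrate(integrand, lower = 1e-6, upper = 50 * max(z))$value
}

est_tri <- function(z, gam0, L0) {            # estimator (10), search range [-20, -1]
  b <- length(z)^(-1/2) / 5
  optimize(function(a) tri_dist(a, z, gam0, L0, b), interval = c(-20, -1.001))$minimum
}

# One Case 2 replication: alpha = -3, gamma* = 2, L = 1, n = 81, eps = 0.001, C = 100.
set.seed(2)
n <- 81; eps <- 0.001; C <- 100
w <- rGI0(n, alpha = -3, gamma = 2, L = 1)    # sampler sketched in Section II
contaminated <- rbinom(n, 1, eps)
z <- contaminated * C + (1 - contaminated) * w
est_tri(z, gam0 = 2, L0 = 1)
```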
Sample sizes are in the abscissas, which are presented in logarithmic scale. The estimates of the mean are presented with error bars which show the Gaussian confidence interval at the \\(95\\%\\) level of confidence. Only a few points of this parameter space are presented for brevity, but the remaining results are consistent with what is discussed here. Figure 2 shows the mean of the estimators \\(\\widehat{\\alpha}\\) in uncontaminated data (\\(\\epsilon=0\\)) with different values of \\(n\\) and \\(L\\). Only two of the four estimators lie, in mean, very close to the true value: \\(\\widehat{\\alpha}_{\\text{ML}}\\) and \\(\\widehat{\\alpha}_{\\text{T}}\\); it is noticeable how far from the true value lie \\(\\widehat{\\alpha}_{\\text{LCum}}\\) and \\(\\widehat{\\alpha}_{\\text{Mom12}}\\) when \\(\\alpha=-5\\). It is also noticeable that \\(\\widehat{\\alpha}_{\\text{ML}}\\) has a systematic tendency to underestimate the true value of \\(\\alpha\\). Vasconcellos et al. [4] computed a first order approximation of such bias for a closely related model, and our results are in agreement with those. The estimator based on the Triangular distance \\(\\widehat{\\alpha}_{\\text{T}}\\) compensates for this bias. Figure 3 shows the sample mean squared error of the estimates under the same situation, i.e., uncontaminated data. In most cases, all estimators have very similar \\(\\operatorname{mse}\\), and it is not possible to say that one of them is systematically the best. This is encouraging, since it provides evidence that \\(\\widehat{\\alpha}_{\\text{T}}\\) exhibits the first property of a good robust estimator, i.e., not being unacceptable under the true model. The mean processing times, measured in seconds, for each method and each case were computed; as an example, the mean processing times for \\(L=1\\) and \\(n=81\\) are presented in Table I. It can be seen that the new method has a higher computational cost. The other cases are consistent with this table. The details of the computer platform are presented in the appendix. Figures 4 and 5 show, respectively, the sample mean and mean squared error of the estimates under Case 1 contamination with \\(\\alpha_{2}=-15\\), \\(\\epsilon=0.01\\), and varying \\(n\\) and \\(L\\). This type of contamination injects, with probability \\(\\epsilon=0.01\\), observations with almost no texture into the sample under analysis. As expected, the influence of such perturbation is more noticeable in those situations where the underlying model is further away from the contamination, i.e., for larger values of \\(\\alpha\\). This is particularly clear in Fig. 5, which shows that the mean squared errors of \\(\\widehat{\\alpha}_{\\text{ML}}\\), \\(\\widehat{\\alpha}_{\\text{Mom12}}\\) and \\(\\widehat{\\alpha}_{\\text{LCum}}\\) are larger than that of \\(\\widehat{\\alpha}_{\\text{T}}\\) for \\(L=3,8\\), with no clear distinction for \\(L=1\\), except that \\(\\widehat{\\alpha}_{\\text{T}}\\) is at least very competitive in the \\(\\alpha=-3,-5\\) cases. Figures 6 and 7 present, respectively, the sample mean and sample mean squared error of the estimates under Case 2 contamination with \\(\\epsilon=0.001\\) and \\(C=100\\). This type of contamination injects a constant value (\\(C=100\\)), with probability \\(\\epsilon=0.001\\), in place of observations from the \\(\\mathcal{G}_{I}^{0}\\) distribution. Since we are considering samples with unitary mean, this is a large contamination.
In this case, \\(\\widehat{\\alpha}_{\\text{T}}\\) is, in mean, closer to the true value than the other methods, and its mean squared error is the smallest. Figures 8 and 9 show, respectively, the sample mean and mean squared error of the estimates under Case 3 with \\(\\epsilon=0.005\\) and \\(k=2\\). This kind of contamination draws, with probability \\(\\epsilon=0.005\\), an observation from a \\(\\mathcal{G}_{I}^{0}\\) distribution with a scale one hundred times larger than that of the \"correct\" model. The behavior of the estimators follows the same pattern for \\(L=3,8\\): \\(\\widehat{\\alpha}_{\\text{T}}\\) produces the closest estimates to the true value with reduced mean squared error. There is no good estimator for the single-look case with this case of contamination. In some situations under contamination, not all estimators converged; when this happened, the estimate was discarded together with those obtained with the other methods in the corresponding iteration, so the number of elements used for calculating the mean \\(\\overline{\\widehat{\\alpha}}\\), the bias and the mean squared error is lower than \\(1000\\). As an example, Table II informs the number of such situations for Case 1 and \\(\\alpha_{2}=-15,\\epsilon=0.01\\). These results are consistent with other situations under contamination, and they suggest that these methods are progressively more prone to fail in more heterogeneous areas. Data from a single-look, L-band, HH polarization, intensity format E-SAR [35] image were used in the following. Figure 10 shows the regions used for estimating the texture parameter. Table III shows the results of estimating the \\(\\alpha\\) parameter for each rectangular region, where \\(NA\\) means that the corresponding estimator is not available.
Fig. 3: Sample mean squared error of estimates under uncontaminated data.
The Kolmogorov-Smirnov test (KS-test) is applied to two samples: \\(\\mathbf{x}\\), from the image, and \\(\\mathbf{y}\\), simulated. Babu and Feigelson [36] warn about the use of the same sample for parameter estimation and for performing a KS test between the data and the estimated cumulative distribution function. We then took the real sample \\(\\mathbf{x}\\), used to estimate the parameters with the four methods under assessment, and then samples of the same size \\(\\mathbf{y}\\) were drawn from the \\(\\mathcal{G}_{I}^{0}\\) law with those parameters. The KS test was then performed between \\(\\mathbf{x}\\) and \\(\\mathbf{y}\\) for the null hypothesis \\(H_{0}\\) \"both samples come from the same distribution,\" and the complementary alternative hypothesis. Table IV shows the sample \\(p\\)-values. It is observed that the null hypothesis is not rejected with a significance level of \\(5\\)% in any of the cases. This result justifies the adequacy of the model for the data. Fig. 11 shows the regions used for estimating the texture parameter under the influence of a corner reflector. Table V shows the estimates of \\(\\alpha\\) for each rectangular region in the image of Fig. 11(b). The Maximum Likelihood, \\(\\frac{1}{2}\\)-moment and Log-Cumulant estimators are unable to produce any estimate in small samples. They only produce sensible values when the sample is of at least \\(90\\) observations. The Log-Cumulant method is the one which requires the largest sample size to produce an acceptable estimate. The estimator based on the Triangular distance yields plausible values under contamination even with very small samples. Table VI presents the \\(p\\)-values of the KS test, applied as previously described to the samples of Fig. 11(b).
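The adequacy check just described can be sketched in a few lines of R; the use of the two-sample ks.test from base R and of the sampler and estimator sketched earlier are our assumed implementation details.

```r
# Minimal sketch of the check behind Tables IV and VI: estimate alpha from the
# observed sample x, draw a synthetic sample y of the same size from the fitted
# GI0 law, and compare both samples with a two-sample Kolmogorov-Smirnov test
# (following Babu and Feigelson's caveat about reusing the same sample).
ks_check <- function(x, alpha_hat, gam_hat, L) {
  y <- rGI0(length(x), alpha_hat, gam_hat, L)
  ks.test(x, y)          # H0: both samples come from the same distribution
}

# Hypothetical usage for one region with known L and an assumed scale:
# a_hat <- est_tri(x, gam0 = 2, L0 = 1)
# ks_check(x, a_hat, gam_hat = 2, L = 1)$p.value
```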
Estimators based on the \\(\\frac{1}{2}\\)-moments and on Log-Cumulants failed to produce estimates in two samples and, therefore, it was not possible to apply the procedure.
Fig. 4: Sample mean of estimates, Case 1 with \\(\\alpha_{2}=-15\\) and \\(\\epsilon=0.01\\).
The other samples did not fail to pass the KS test, except for the Blue sample under the Maximum Likelihood estimator. These results lead us to conclude that the underlying model is a safe choice regardless of the presence of contamination and of the estimation procedure.
## V Conclusions
We proposed a new estimator for the texture parameter of the \\(\\mathcal{G}_{I}^{0}\\) distribution based on the minimization of a stochastic distance between the model (which is indexed by the parameter being estimated) and an estimate of the probability density function built with asymmetric kernels. We defined three models of contamination inspired by real situations in order to assess the impact of outliers on the performance of the estimators. The difficulties of estimating the texture parameter of the \\(\\mathcal{G}_{I}^{0}\\) distribution were further investigated and justified through new theoretical results regarding its heavy-tailedness. Regarding the impact of the contamination on the performance of estimators, we observed the following. Only the Fractional Moment and Log-Cumulant estimators fail to converge.
* Case 1: Regardless of the intensity of the contamination, the bigger the number of looks, the smaller the percentage of situations for which no convergence is achieved.
* Cases 2 and 3: the percentage of situations for which there is no convergence increases with the level of contamination, and decreases with \\(\\alpha\\).
Fig. 5: Sample mean squared error of estimates, Case 1 with \\(\\alpha_{2}=-15\\) and \\(\\epsilon=0.01\\).
In the single-look case the proposed estimator does not present excellent results, but it never fails to converge, whereas the others are prone to produce useless output. The new estimator presents good properties as measured by its bias and mean squared error. It is competitive with the Maximum Likelihood, Fractional Moment and Log-Cumulant estimators in situations without contamination, and outperforms the other techniques even in the presence of small levels of contamination. For this reason, it would be advisable to use \\(\\widehat{\\alpha}_{\\text{T}}\\) in every situation, especially when small samples are used and/or when there is the possibility of having contaminated data. The extra computational cost incurred in using this estimator is, at most, twenty times that required to compute \\(\\widehat{\\alpha}_{\\text{ML}}\\), but its advantages outweigh these extra computer cycles.
## Appendix
Simulations were performed using the R language and environment for statistical computing [37] version 3.0.2. The adaptIntegrate function from the cubature package was used to perform the numerical integration required to evaluate the Triangular distance; the algorithm utilized is an adaptive multidimensional integration over hypercubes. In order to numerically find \\(\\widehat{\\alpha}_{\\text{LCum}}\\) we used the function uniroot implemented in R to solve (7). The computer platform is an Intel(R) Core i7, with \\(8\\,\\)GB of memory and 64-bit Windows 7. Codes and data are available upon request from the corresponding author.
## References
* [1] A. C. Frery, H.-J. Muller, C. C. F. Yanasse, and S. J. S.
Sant'Anna, \"A model for extremely heterogeneous clutter,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 35, no. 3, pp. 648-659, 1997. * [2] M. Mejail, J. C. Jacobo-Berlles, A. C. Frery, and O. H. Bustos, \"Classification of SAR images using a general and tractable multiplicative model,\" _International Journal of Remote Sensing_, vol. 24, no. 18, pp. 3565-3582, 2003. * [3] G. Gao, \"Statistical modeling of SAR images: A survey,\" _Sensors_, vol. 10, no. 1, pp. 775-795, 2010. * [4] K. L. P. Vasconcellos, A. C. Frery, and L. B. Silva, \"Improving estimation in speckled imagery,\" _Computational Statistics_, vol. 20, no. 3, pp. 503-519, 2005. * [5] M. Silva, F. Cribari-Neto, and A. C. Frery, \"Improved likelihood inference for the roughness parameter of the GA0 distribution,\" _Environmetrics_, vol. 19, no. 4, pp. 347-368, 2008. * [6] F. Cribari-Neto, A. C. Frery, and M. F. Silva, \"Improved estimation of clutter properties in speckled imagery,\" _Computational Statistics and Data Analysis_, vol. 40, no. 4, pp. 801-824, 2002. * [7] H. Allende, A. C. Frery, J. Galbiati, and L. Pizarro, \"M-estimators with asymmetric influence functions: the GA0 distribution case,\" _Journal of Statistical Computation and Simulation_, vol. 76, no. 11, pp. 941-956, 2006. * [8] O. H. Bustos, M. M. Lucini, and A. C. Frery, \"M-estimators of roughness and scale for GA0-modelled SAR imagery,\" _EURASIP Journal on Advances in Signal Processing_, vol. 2002, no. 1, pp. 105-114, 2002. * [9] S. Anfinsen and T. Eltoft, \"Application of the matrix-variate Mellin transform to analysis of polarimetric radar images,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 49, no. 6, pp. 2281-2295, 2011.
Fig. 7: Sample mean squared error of estimates, Case 2 with \\(C=100\\), \\(\\epsilon=0.001\\).
* [10] F. Bujor, E. Trouvé, L. Valet, J.-M. Nicolas, and J.-P. Rudant, \"Application of log-cumulants to the detection of spatiotemporal discontinuities in multitemporal SAR images,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 42, no. 10, pp. 2073-2084, 2004. * [11] S. Khan and R. Guida, \"Application of Mellin-kind statistics to polarimetric \\(\\mathcal{G}\\) distribution for SAR data,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 52, no. 6, pp. 3513-3528, June 2014. * [12] A. C. Frery, F. Cribari-Neto, and M. O. Souza, \"Analysis of minute features in speckled imagery with maximum likelihood estimation,\" _EURASIP Journal on Advances in Signal Processing_, vol. 2004, no. 16, pp. 2476-2491, 2004. * [13] D. M. Pianto and F. Cribari-Neto, \"Dealing with monotone likelihood in a model for speckled data,\" _Computational Statistics and Data Analysis_, vol. 55, pp. 1394-1409, 2011. * [14] C. Arndt, _Information Measures: Information and Its Description in Science and Engineering_. Springer, 2004. * [15] S. Aviyente, F. Ahmad, and M. G. Amin, \"Information theoretic measures for change detection in urban sensing applications,\" in _IEEE Workshop on Signal Processing Applications for Public Security and Forensics_, 2007, pp. 1-6. * [16] B. Vemuri, M. Liu, S.-I. Amari, and F. Nielsen, \"Total Bregman divergence and its applications to DTI analysis,\" _IEEE Transactions on Medical Imaging_, vol. 30, no. 2, pp. 475-483, 2011. * [17] A. Nascimento, R. Cintra, and A. Frery, \"Hypothesis testing in speckled data with stochastic distances,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 48, no. 1, pp. 373-385, 2010. * [18] W. B. Silva, C. C. Freitas, S. J. S. Sant'Anna, and A. C.
Frery, \"Classification of segments in PolSAR imagery by minimum stochastic distances between Wishart distributions,\" _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, vol. 6, no. 3, pp. 1263-1273, 2013. * [19] A. D. C. Nascimento, M. M. Horta, A. C. Frery, and R. J. Cintra, \"Comparing edge detection methods based on stochastic entropies and distances for PolSAR imagery,\" _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, vol. 7, no. 2, pp. 648-663, Feb. 2014. * [20] R. C. P. Marques, F. N. Medeiros, and J. Santos Nobre, \"SAR image segmentation based on level set approach and GA0 model,\" _IEEE Transactions on Pattern Analysis and Machine Intelligence_, vol. 34, no. 10, pp. 2046-2057, 2012. * [21] F. Liese and I. Vajda, \"On divergences and informations in statistics and information theory,\" _IEEE Transactions on Information Theory_, vol. 52, no. 10, pp. 4394-4412, 2006. * [22] J. Cassetti, J. Gambini, and A. C. Frery, \"Parameter estimation in SAR imagery using stochastic distances,\" in _Proceedings of The 4th Asia-Pacific Conference on Synthetic Aperture Radar (APSAR)_, Tsukuba, Japan, 2013, pp. 573-576.
Fig. 8: Sample mean of estimates, Case 3 with \\(\\epsilon=0.005\\) and \\(k=2\\).
* [23] B. Han, D. Comaniciu, Y. Zhu, and L. Davis, \"Sequential kernel density approximation and its application to real-time visual tracking,\" _IEEE Transactions on Pattern Analysis and Machine Intelligence_, vol. 30, no. 7, pp. 1186-1197, 2008. * [24] S. Chen, \"Probability density functions estimation using Gamma kernels,\" _Annals of the Institute of Statistical Mathematics_, vol. 52, no. 3, pp. 471-480, 2000. * [25] O. Scaillet, \"Density estimation using inverse and reciprocal inverse Gaussian kernels,\" _Journal of Nonparametric Statistics_, vol. 16, no. 1-2, pp. 217-226, 2004. * [26] T. Bouezmarni and O. Scaillet, \"Consistency of asymmetric kernel density estimators and smoothed histograms with application to income data,\" _Econometric Theory_, pp. 390-412, 2005. * [27] A. Achim, P. Tsakalides, and A. Bezerianos, \"SAR image denoising via Bayesian wavelet shrinkage based on heavy-tailed modeling,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 41, pp. 1773-1784, 2003. * [28] M. E. Mejail, A. C. Frery, J. Jacobo-Berlles, and O. H. Bustos, \"Approximation of distributions for SAR images: Proposal, evaluation and practical consequences,\" _Latin American Applied Research_, vol. 31, pp. 83-92, 2001. * [29] J. Gambini, M. Mejail, J. J. Berlles, and A. Frery, \"Accuracy of local edge detection in speckled imagery,\" _Statistics & Computing_, vol. 18, no. 1, pp. 15-26, 2008. * [30] C. Tison, J.-M. Nicolas, F. Tupin, and H. Maitre, \"A new statistical model for Markovian classification of urban areas in high-resolution SAR images,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 42, no. 10, pp. 2046-2057, 2004.
Fig. 9: Sample mean squared error of estimates, Case 3 with \\(\\epsilon=0.005\\) and \\(k=2\\).
* [31] R. F. Green, \"Outlier-prone and outlier-resistant distributions,\" _Journal of the American Statistical Association_, vol. 71, no. 354, pp. 502-505, 1976. * [32] J. Danielsson, B. N. Jorgensen, M. Sarma, and C. G. de Vries, \"Comparing downside risk measures for heavy tailed distributions,\" _Economics Letters_, vol. 92, no. 2, pp. 202-208, 2006. * [33] J. Rojo, \"Heavy-tailed densities,\" _Wiley Interdisciplinary Reviews: Computational Statistics_, vol. 5, no. 10, pp. 30-40, 2013. * [34] A. C. Frery, S. Ferrero, and O. H.
Bustos, \"The influence of training errors, context and number of hands in the accuracy of image classification,\" _International Journal of Remote Sensing_, vol. 30, no. 6, pp. 1425-1440, 2009. * [35] R. Horn, \"The DLR airborne SAR project E-SAR,\" in _Proceedings IEEE International Geoscience and Remote Sensing Symposium_, vol. 3. IEEE Press, 1996, pp. 1624-1628. * [36] G. J. Babu and E. D. Feigelson, \"Astrostatistics: Goodness-of-fit and all that!\" in _Astronomical Data Analysis Software and Systems XV ASP Conference Series_, vol. 351, 2006, pp. 127-136. * [37] R Core Team, _R: A Language and Environment for Statistical Computing_, R Foundation for Statistical Computing, Vienna, Austria, 2013. [Online]. Available: [http://www.R-project.org/](http://www.R-project.org/) \\begin{tabular}{c c} & Juliana Gambini received the B.S. degree in Mathematics and the Ph.D. degree in Computer Science both from Universidad de Buenos Aires (UBA), Argentina. She is currently Professor at the Instituto Tecnologico de Buenos Aires (ITBA), Buenos Aires, and Professor at Universidad Nacional de Tres de Febrero, Pcia. de Buenos Aires. Her research interests include SAR image processing, video processing and image recognition. \\\\ \\end{tabular} Fig. 11: Samples of several sizes in a real SAR image with a corner reflector, used to estimate the \\(\\alpha\\)-parameter. Fig. 10: Real SAR image and the regions used to estimate the \\(\\alpha\\)-parameter. \\\\ \\end{tabular} \\begin{tabular}{c c c c c c c c c c} \\hline \\hline Color & size & \\(\\widehat{\\alpha}_{\\text{MV}}\\) & \\(\\widehat{\\alpha}_{\\text{T}}\\) & \\(\\widehat{\\alpha}_{\\text{Mon12}}\\) & \\(\\widehat{\\alpha}_{\\text{LCom}}\\) & time MV & time DT & time \\(\\frac{1}{2}\\)Mom & time LCum \\\\ \\hline Magenta & \\(100\\) & \\(-1.9\\) & \\(-2.7\\) & \\(-1.9\\) & \\(-1.7\\) & \\(0.03\\) & \\(5.85\\) & \\(0.03\\) & \\(0.02\\) \\\\ Green & \\(48\\) & \\(-2.5\\) & \\(-2.5\\) & \\(-2.9\\) & \\(-3.1\\) & \\(0.00\\) & \\(3.31\\) & \\(0.00\\) & \\(0.00\\) \\\\ Blue & \\(25\\) & \\(-4.9\\) & \\(-3.0\\) & \\(NA\\) & \\(NA\\) & \\(0.00\\) & \\(2.08\\) & \\(0.00\\) & \\(0.00\\) \\\\ Yellow & \\(90\\) & \\(-6.2\\) & \\(-5.1\\) & \\(-6.6\\) & \\(-6.8\\) & \\(0.00\\) & \\(5.16\\) & \\(0.00\\) & \\(0.00\\) \\\\ Red & \\(64\\) & \\(-1.8\\) & \\(-1.9\\) & \\(-1.9\\) & \\(-1.8\\) & \\(0.00\\) & \\(4.17\\) & \\(0.00\\) & \\(0.00\\) \\\\ \\hline \\hline \\end{tabular} \\begin{tabular}{c c c c} \\hline \\hline \\multicolumn{1}{c}{\\(p\\)-value} \\\\ \\hline \\multirow{2}{*}{Color} & \\multirow{2}{*}{TestMV} & \\multirow{2}{*}{TestDT} & \\multirow{2}{*}{TestMon} & \\multirow{2}{*}{TestLogCum} \\\\ \\hline Magenta & \\(0.38\\) & \\(0.93\\) & \\(NA\\) & \\(NA\\) \\\\ Green & \\(0.11\\) & \\(0.93\\) & \\(NA\\) & \\(NA\\) \\\\ Blue & \\(0.01\\) & \\(0.11\\) & \\(0.400\\) & \\(0.11\\) \\\\ Yellow & \\(0.19\\) & \\(0.31\\) & \\(0.460\\) & \\(0.15\\) \\\\ Red & \\(0.23\\) & \\(0.12\\) & \\(0.008\\) & \\(0.02\\) \\\\ \\hline \\hline \\end{tabular} \\begin{tabular}{c c c c} \\hline \\hline \\multicolumn{1}{c}{\\(p\\)-value} \\\\ \\hline \\multirow{2}{*}{Color} & \\multirow{2}{*}{TestMV} & \\multirow{2}{*}{TestDT} & \\multirow{2}{*}{TestMom} & \\multirow{2}{*}{TestLCom} \\\\ \\hline Magenta & \\(0.46\\) & \\(0.58\\) & \\(0.28\\) & \\(0.69\\) \\\\ Green & \\(0.37\\) & \\(0.85\\) & \\(0.37\\) & \\(0.37\\) \\\\ Blue & \\(0.15\\) & \\(0.07\\) & \\(NA\\) & \\(NA\\) \\\\ Yellow & \\(0.63\\) & \\(0.98\\) & \\(0.76\\) & \\(0.22\\) \\\\ Red & \\(0.30\\) & \\(0.30\\) & 
\\(0.21\\) & \\(0.99\\) \\\\ \\hline \\hline \\end{tabular}
In this paper we analyze several strategies for the estimation of the roughness parameter of the \\(\\mathcal{G}_{I}^{0}\\) distribution. It has been shown that this distribution is able to characterize a large number of targets in monopolarized SAR imagery, deserving the denomination of \"Universal Model.\" It is indexed by three parameters: the number of looks (which can be estimated in the whole image), a scale parameter, and the roughness or texture parameter. The latter is closely related to the number of elementary backscatterers in each pixel, one of the reasons for it receiving attention in the literature. Although there are efforts in providing improved and robust estimates for such a quantity, its dependable estimation still poses numerical problems in practice. We discuss estimators based on the minimization of stochastic distances between empirical and theoretical densities, and argue in favor of using an estimator based on the Triangular distance and asymmetric kernels built with Inverse Gaussian densities. We also provide new results regarding the heavy-tailedness of the \\(\\mathcal{G}_{I}^{0}\\) distribution. Index terms: feature extraction, image texture analysis, statistics, synthetic aperture radar, speckle.
# Is It Time for a Reset in Arctic Governance? Oran R. Young 1Bren School of Environmental Science and Management, University of California (Santa Barbara), Santa Barbara, CA 93103, USA; [email protected] or [email protected] Received: 2 August 2019; Accepted: 15 August 2019; Published: 20 August 2019 ## 1 Introduction The architecture of Arctic governance, centered on the role of the Arctic Council treated as a \"high-level forum\" designed to promote \"cooperation, coordination and interaction among the Arctic states,\" reflects conditions prevailing in the 1990s, a period marked by the end of the cold war, the collapse of the Soviet Union, and the reduction in tension in the High North that followed these events [1]. Most analysts agree that the Arctic Council has performed well during the 25 years that have elapsed since its establishment under the terms of the 1996 Ottawa Declaration on the Establishment of the Arctic Council. The accomplishments of the council have exceeded the expectations of most of us who participated in the processes eventuating in the adoption of the Ministerial Declaration on 19 September 1996. In a moment of enthusiasm, the Arctic Council Ministerial Meeting held in Kiruna, Sweden on 15 May 2013 at the close of the Swedish chairmanship went so far as to adopt a statement declaring that \" the Arctic Council has become the pre-eminent high-level forum of the Arctic region and we have made this region into an area of unique international cooperation\" [2]. In support of this expansive assertion, the ministers proclaimed that: \"We have achieved mutual understanding and trust, addressed issues of common concern, strengthened our co-operation, influenced international action, established a standing secretariat and, under the auspices of the Council, Arctic States have concluded legally binding agreements. We have also documented the importance of science and traditional knowledge for understanding our region and for informed decision-making in the Arctic\" [2]. Even discounting for a normal dose of inflated rhetoric, this statement reflects a remarkable degree of confidence regarding the achievements of an institution rooted in nothing more than the provisions of a ministerial declaration. Yet, conditions in the Arctic today differ dramatically from those prevailing at the time of the creation of the Arctic Council [3]. In the 1990s, the Arctic was a peripheral region, no longer critical as the front line of the cold war and not yet regarded as a central focus among those concerned with the threat of climate change or the prospects for extracting raw materials needed to supply advancedindustrial systems. For the most part, non-Arctic states did not object to claims on the part of the Arctic states to dominance in the Arctic arising from their self-proclaimed \"sovereignty, sovereign rights, and jurisdiction\" in the region [4]. So long as Arctic concerns did not spill over into global arenas, in other words, the rest of the world saw no harm in leaving the affairs of the Arctic to the Arctic states. In recent years, however, the Arctic has moved from the periphery to the center with regard to matters of global concern. The effects of climate change are surfacing more rapidly in the Arctic than in any other part of the Earth system. 
Due to the operation of feedback mechanisms involving the recession and thinning of sea ice, the melting of permafrost, and shifts in the mass balance of the Greenland ice sheet, these effects of climate change are producing dramatic consequences on a global scale. In addition, the increasing accessibility of the Arctic has generated a surge of interest in extracting the Arctic's natural resources, and especially, its world-class reserves of oil and natural gas. To take a single example, the extraction of natural gas on Russia's Yamal Peninsula and its shipment to Asian and European markets using specially designed LNG tankers has become a focus of attention in the financial capitals of the world. Some see the Arctic as an arena that will attract investments on the order of $1 US trillion during the coming years [5]. Broader geopolitical shifts have served to heighten the significance of these developments. A resurgent Russia has taken steps to reclaim its status as a great power, articulating renewed claims to a leading role in the affairs of the Far North and strengthening military assets in the Russian Arctic. China, as an emerging superpower, has taken steps to include the Arctic within the scope of its Belt and Road Initiative, leading among other things, to a series of bilateral moves relating to economic investments or proposed investments in Russia, Finland, Iceland, Greenland, and Canada. The United States has become sensitive about its superpower status and begun to treat the Arctic activities of others as hostile initiatives threatening America's interests. All these developments are playing out within a shifting global context characterized by the decline of the postwar world order but, so far at least, a lack of clarity regarding the global order of the future. The relative tranquillity associated with the Arctic's peripheral status has come to an end. Whereas an interest in the Arctic once seemed quixotic to students of international politics, not a week goes by today without one or more international conferences in which numerous people advance speculative projections regarding the future of the Arctic. It, therefore, makes sense to ask whether the architecture of Arctic governance, put in place during the 1990s, is well-suited to address needs for governance arising under the conditions prevailing in the 2020s. Can we make incremental and pragmatic adjustments in the existing arrangements to accommodate changing conditions? Will we find ourselves faced with a need to consider changes in the existing architecture that are constitutive in character? Will the new conditions prevailing in the Arctic impede all efforts to adjust or restructure existing governance arrangements? I set forth my responses to these questions in three steps. The next section describes and clarifies the premises and practices that define the architecture of Arctic governance put in place during the 1990s. The following section then asks whether these premises and practices can provide a viable platform for coming to terms with needs for governance regarding Arctic issues in the 2020s. The final substantive section then turns to the issue of institutional reconfiguration, considering innovations designed to meet the needs of the 2020s, with particular reference to the role of the Arctic Council in what is often described as the 'new' Arctic [6]. 
## 2 The Existing Architecture: Premises and Practices We normally look to the provisions of formal documents, like the 1996 Ottawa Declaration, in identifying the principal elements of governance systems. But it is important to recognize that these systems rest on fundamental premises or assumptions that provide the analytic context for specific institutional arrangements, even though they may remain unstated in formal documents. The Arctic is no exception in these terms. Three major premises are embedded in the thinking of those who formulated the specific provisions of the Ottawa Declaration. _The Arctic is a distinctive, low-tension international region with a policy agenda of its own centered on issues of environmental protection and sustainable development_. It is not self-evident that the circumpolar Arctic constitutes a distinctive international region that makes sense in thinking about matters of public policy. Yet during the mid-1980s, a group of policy analysts began to make the case for treating the Arctic as a distinctive region [7]. This effort received powerful support in Mikhail Gorbachev's 1 October 1987 Murmansk speech in which the then president of the Soviet Union said \"[1]et the North of the globe, the Arctic, become a zone of peace,\" called for international cooperation to pursue this goal, and proposed a suite of cooperative initiatives dealing with arms control, economic development, the opening of the Northern Sea Route, environmental protection, and scientific research [8]. Brian Mulroney, then prime minister of Canada, added momentum in a speech in Leningrad (now St. Petersburg) on 24 November 1989 in which he said: \" and why not a council of Arctic countries eventually coming into existence to coordinate and promote cooperation among them\" [9]. This set the stage for the so-called Finnish Initiative bringing together the eight Arctic states in a collaborative effort leading to the 14 June 1991 Rovaniemi Declaration on the Protection of the Arctic Environment launching \" a joint Action Plan of the Arctic Environmental Protection Strategy\" [10; 11]. This declaration established the practice of treating the circumpolar Arctic as a distinctive international region and paved the way for the 19 September 1996 Ottawa Declaration on the Establishment of the Arctic Council, not only calling on the Arctic states to cooperate and coordinate on \"issues of sustainable development and environmental protection in the Arctic\" but also (famously) specifying that the \"Arctic Council should not deal with matters related to military security\" [1]. _The interests of the Arctic states are paramount when it comes to addressing needs for governance in the region_. In the process, the Arctic states made it clear that they could and should control the international relations of the Arctic region by virtue not only of their de facto control but also of their de jure authority in the region. There is an internal debate that surfaces from time to time regarding the division of authority and control between the five Arctic Ocean coastal states (Canada, Denmark, Norway, Russia, and the United States) and the eight Arctic states that are members of the Arctic Council (the five plus Finland, Iceland, and Sweden). Most recently, the Arctic five have taken the lead in negotiating the terms of the 2018 Central Arctic Ocean fisheries agreement. 
Nevertheless, all the Arctic states stand together when it comes to matters regarding the primacy of the Arctic states in addressing issues pertinent to Arctic governance. In justifying this stand, the Arctic states can point to their jurisdiction over Arctic lands, to general provisions of the law of the sea, and to several specific provisions of the 1982 UN Convention on the Law of the Sea dealing with the role of coastal states relating to maritime governance. Ultimately, however, the claims of the Arctic states concerning the primacy of their interests in addressing matters of Arctic governance rest on the political realities of the region. _The Arctic is not a vacuum when it comes to arrangements required to address specific needs for governance._ Given the tendency of outsiders to compare the Arctic and the Antarctic and to conclude that there is a need for an Arctic Treaty analogous to the 1959 Antarctic Treaty, it is understandable that the Arctic states have made a concerted effort to demonstrate that the Arctic is not the \"wild west\" regarding matters of governance [12]. In fact, they argue, the Arctic already has an extensive system of governance, dating back to the 1920 Spitsbergen Treaty and the 1973 Agreement on the Conservation of Polar Bears. The terrestrial parts of the region are subject to the domestic laws of the individual Arctic states. But even in the case of the marine parts of the region, there are applicable constitutive provisions of the law of the sea coupled with a raft of specific arrangements dealing with matters like commercial shipping, fishing, and marine mammals. The role of the Arctic Council, on this account, is not to assume responsibility for some functionally defined set of issues but to provide a policy forum for the Arctic states to consider matters of coordination relating to this complex web of governance arrangements. This explains why the Arctic states launched the council through a simple ministerial declaration rather than through the negotiation of an international legally binding instrument. Given these premises, it is relatively easy to understand the rationale for the practices articulated initially in the provisions of the Ottawa Declaration and refined through the subsequent workings of the Arctic Council. The members of the council are Canada, Russia, the United States and the five Nordic states described as \"the Arctic States.\" There are no provisions for accepting additional members, though the fact that the Ottawa Declaration is not a legally binding instrument means legally that it could be restructured or even replaced through the adoption of a new ministerial declaration with different provisions. As widely noted, the designation of Indigenous peoples' organizations as Permanent Participants to \"provide for active participation and full consultation with the Arctic indigenous representatives within the Arctic Council\" constitutes an innovation that would have been hard to incorporate in the terms of an international legally binding instrument [1]. All others--non-Arctic states, intergovernmental organizations, and nongovernmental organizations--are relegated to the status of Observers. As the Ottawa Declaration indicates and as the practice of the council has made clear, the member states control decisions regarding all rules of procedure relating to the activities of the Observers [13]. 
Given the membership of the Arctic Council, it makes sense that the spatial scope of the council's activities extends to the circumpolar Arctic. But there are two interesting complications regarding this provision pertaining to spatial coverage. One arises from a clear distinction between the Western Arctic and the Eurasian Arctic regarding the southern boundary of the region. In the Western Arctic, the council's remit extends southward to 60\\({}^{\\circ}\\)N and even a bit further in the case of Alaska. A similar boundary in the Eurasian Arctic would pass close to Oslo, Stockholm, Helsinki, and St. Petersburg, an arrangement unacceptable to decisionmakers in these realms. Although this is partly a simple matter arising from the configuration of land masses in the Northern Hemisphere, it introduces a distinct asymmetry into the constitutive provisions of the Arctic Council. In addition, many issues of environmental protection and sustainable development in the Arctic involve actions taking place outside the region. This is true whether we are concerned with matters relating to persistent organic pollutants and heavy metals or to matters arising from the extraction of hydrocarbons in the region. As a result, there is an arbitrary element to any effort to draw a clear spatial distinction between the Arctic region and the outside world. The substantive remit of the Arctic Council articulated in the Ottawa Declaration flows directly from the idea of the Arctic as a low-tension region that merits treatment as a zone of peace. The council is to promote \"cooperation, coordination and interaction among the Arctic states on common Arctic issues, in particular issues of sustainable development and environmental protection in the Arctic\" [1]. Addressing environmental protection was easy, since the council inherited the fully operational working groups of the AEPS. Sustainable development presents a greater challenge. Not only is environmental protection generally regarded as one of the pillars of sustainable development; the reach of sustainable development into the spheres of economic prosperity and sociocultural well-being is also anything but clear. Still, it is clear that the founders of the Arctic Council meant to direct attention toward the domain of low politics and to insulate the council from complications associated with efforts to address matters of high politics. An interesting artifact of this practice involves the idea that the Arctic Council sometimes can serve as a forum for constructive contacts among representatives of states (e.g., Canada, Russia, the United States) that are engaged simultaneously in relatively sharp conflicts in other domains. ## 3 The Challenges of the 21st Century Such are the premises and practices underlying the regime the Arctic states put in place during the 1990s. Whatever the suitability of this architecture for a regime addressing needs for governance at that time, we need to ask hard questions about the fit between this architecture and the challenges arising in the Arctic today and likely to come into focus during the foreseeable future. A straightforward way to proceed is to revisit the premises and practices described above in the light of both current developments in the Arctic itself and the evolution of links between the Arctic and the outside world. Some have argued that the narrative of the circumpolar Arctic as a distinctive region with a policy agenda of its own was never persuasive. 
They point to sharp differences in the political histories of the North American Arctic, Fennoscandia, and the Russian Arctic as evidence for this observation and conclude that the concept of the circumpolar Arctic as an international region is a flawed construct [14; 15]. Be that as it may, there are good reasons today to question both the distinctiveness of the Arctic as an international region and the proposition that the Arctic is a low-tension region that fits the description of a zone of peace. Above all, the impacts of climate change, a global development rooted in the activities of industrial societies located in the mid-latitudes, have emerged as dominant forces in the circumpolar Arctic. Feedback processes triggered by these impacts are amplifying the effects of climate change on a global scale. In biophysical terms, there is compelling evidence that the Arctic is experiencing what scientists call a bifurcation or a state change giving rise to what observers now describe as the 'new' Arctic. This state change is generating profound consequences both for the human residents of the Arctic itself and for outside actors who have begun to think about the Arctic through a new lens. Each of these developments deserves the attention of those concerned about meeting the challenges of Arctic governance in the 21st century. Increasingly, Arctic residents and their communities are facing challenges driven by the global forces of climate change. These include coastal erosion producing pressure to relocate communities, melting permafrost causing increasingly severe disruption of infrastructure, ecological changes affecting the availability of both marine and terrestrial mammals, widespread fires impeding normal activities in sizable areas, and volatile weather patterns complicating day-to-day decisions about the conduct of subsistence practices. While Arctic residents can endeavor to inform policymakers operating in global arenas (e.g., the UNFCCC COPs) about these dramatic developments, they must find ways to adapt to the impacts of these global forces in securing their own well-being. At the same time, scientists have documented the importance of Arctic feedback mechanisms (e.g., increased absorption of solar radiation following the melting of sea ice, release of methane associated with melting permafrost) in accelerating the pace of climate change on a global scale [16]. With regard to climate change, the Arctic is ground zero on a global scale. In this respect, regional and global concerns have merged into an integrated policy agenda [17]. Similar remarks are in order regarding issues arising from increased access to the natural resources of the Arctic under the conditions associated with the 'new' Arctic. While all estimates need to be treated with a healthy dose of caution, the Arctic certainly contains a sizable fraction of the world's recoverable reserves of hydrocarbons as well as a variety of minerals ranging from staples like lead, zinc and iron ore to more esoteric resources like rare earths. The prospect of developing these resources is appealing both to economic decisionmakers and public policymakers within the Arctic and beyond. The government of Russia has accorded top priority to the exploitation of Arctic hydrocarbons in its drive to reestablish the country as a great power capable of playing a prominent role on a global scale. 
Chinese leaders have identified the Arctic as a component of the Belt and Road Initiative designed to create a far-flung network of coordinated economic relationships expected to underpin China's global strategy as an emerging superpower. Chinese, Dutch, and French corporations, as well as corporations based in the Arctic states, are among the leaders in initiatives aimed at the extraction of Arctic resources on a large scale. The profitability of these economic initiatives is subject to global market forces and global policy actions. Arctic resources are expensive to produce and to ship to southern markets. Fluctuations in world market prices can make even large deposits of oil and gas in the Arctic (e.g., the supergiant Shtokman gas field in the Russian segment of the Barents Sea) unprofitable to develop. Any serious effort to come to terms with the global problem of climate change would make Arctic hydrocarbons less and less attractive in both economic and political terms. Advocates of vigorous action to address climate change argue that Arctic hydrocarbons must remain untapped. In short, the Arctic agenda and the global agenda have merged with respect to matters of political economy as well as the challenge of climate change. An important consequence of these developments is that the Arctic is once again subject to the interplay of high politics. This does not mean we are witnessing the emergence of a new 'great game' in the Arctic or that the circumpolar Arctic is likely to become the scene of armed clashes during the foreseeable future. Yet, the Arctic is critical to Russia's strategy for reasserting its global status as a great power. China has made clear its intention to include the Arctic in its strategy for exercising its influence as an emerging superpower. Reacting defensively, the United States is making assertions about the need to protect its interests in the Arctic, even as its attention is focused on hot spots in other parts of the world including Iran, North Korea, and Venezuela. Regarding Arctic governance, two important observations emerge from this account. One is that the Arctic is no longer a peripheral region with a policy agenda centered exclusively on issues of environmental protection and sustainable development that can be addressed largely in regional terms. The other related observation is that major non-Arctic states like China and intergovernmental organizations like the European Union are no longer willing to be content with the role of Observers in the Arctic Council when it comes to the pursuit of their Arctic interests. An obvious implication of these developments is that they raise questions about the Arctic states' claim to paramountcy regarding Arctic issues based on their \"sovereign rights, and jurisdiction.\" No one denies the special place of the Arctic states regarding Arctic issues, though much of the territory of all these states, with the exception of Iceland, lies outside the Arctic, and the major drivers of public policy in these states are not Arctic in character. In fact, most of the non-Arctic states interested in the Arctic make a point of professing respect for the interests of the Arctic states in the region. 
However, these states are also making expansive claims to being legitimate stakeholders in the Arctic and, in China's case, to being a \"near-Arctic state.\" Combined with the restrictiveness of the role of Observer in the Arctic Council, the effect of this situation is to encourage non-Arctic states to look to avenues for pursuing their Arctic interests that offer more scope for the fulfillment of their goals. These include the negotiation of bilateral agreements pertaining to matters of mutual interest (e.g., China's role in the development of the Yamal LNG project and the associated port of Sabetta in the Russian Arctic), participation in multilateral forums that are more welcoming than the Arctic Council (e.g., the annual Arctic Circle event held in Reykjavik, Iceland), and engagement in intergovernmental agreements relating to Arctic issues not developed under the auspices of the Arctic Council (e.g., the Central Arctic Ocean fisheries agreement). The overall effect of these developments is to raise serious questions about the characterization of the Arctic Council as the \"preeminent high-level forum of the Arctic region.\" A particularly striking development in this regard is the failure of the council, for the first time ever and due largely to American opposition to any reference to climate change, to reach agreement on the terms of a Ministerial Declaration at its 7 May 2019 meeting in Rovaniemi, marking the close of the Finnish Chairmanship. These observations do not call into question the proposition that the Arctic is not a vacuum when it comes to the establishment and operation of arrangements designed to address needs for governance. We are witnessing the creation of a growing complex of arrangements dealing with Arctic issues. The terms of some of these arrangements have been worked out under the auspices of the Arctic Council, though the council itself is an informal body lacking the legal authority to adopt substantive agreements. These include the legally binding agreements on search and rescue (2011), oil spill preparedness and response (2013), and the enhancement of scientific cooperation (2017). The formal parties to all these agreements are the eight Arctic states. However, what is more striking is the development of governance arrangements that are not linked to the Arctic Council and that allow for the participation of non-Arctic states as members. Among the most significant are the Polar Code adopted by the International Maritime Organization to address issues relating to commercial shipping in the Arctic (2014/2015); the forum of science ministers from countries interested in Arctic research (initiated in 2016); and the regime dealing with fisheries in the Central Arctic Ocean (2018). As the 2015 Conference on Global Leadership in the Arctic: Innovation, Engagement, and Resilience (GLACIER), bringing together officials from over 20 countries in an event organized by the Obama Administration in the United States in preparation for the negotiation of the Paris Climate Agreement in December of that year, makes clear, Arctic developments also have acquired a prominent place in efforts to respond to needs for governance relating to climate on a global scale. Thus, we are witnessing a proliferation of regimes dealing with a range of needs for governance in the Arctic or relating to matters affecting the Arctic in important ways. What is less clear is what to make of the resultant assemblage of arrangements. 
Are we witnessing increasing institutional fragmentation that will have the effect of detracting from the performance of efforts to address specific needs for governance? Are there opportunities to manage the resultant complex in such a way that the whole is greater than the sum of the parts? What is the appropriate role for the Arctic Council? Is there a need to make more or less significant adjustments in key features of the Arctic Council to allow it to perform effectively in this role? ## 4 Managing the Arctic Regime Complex The rapidly growing literature on international governance encompasses several lines of analysis that may help in addressing these questions. Within the Earth System Governance community, there is an extensive literature on what analysts call institutional fragmentation [18]. There are many situations in which multiple regimes deal with matters arising in the same issue area or spatial area in the absence of any well-defined mechanism for integrating or at least coordinating their activities. The implication embedded in characterizing these situations as fragmented is that this is an undesirable condition from the perspective of effective governance and that finding ways to reduce fragmentation should be a priority objective in any effort to improve the performance of governance systems. The classic recipe for pursuing this goal is to negotiate a comprehensive treaty (e.g., the Stockholm Convention on persistent organic pollutants) or to create an umbrella convention within which to nest a variety of linked protocols dealing with specific issues. A familiar example is the 1979 Convention on Long-Range Transboundary Air Pollution [19], which has a number of protocols dealing with specific pollutants such as sulfur dioxide, nitrogen oxides, and volatile organic compounds. Conditions prevailing in the Arctic, however, suggest this is not a suitable strategy for coming to terms with the challenges of the 21st century. Numerous proposals for the development of an Arctic Treaty in the 2000s failed to gain traction; the debate about these proposals made it clear that an Arctic Treaty would have a number of drawbacks, even in the unlikely event that the relevant players agreed to enter into such a compact [20]. Fortunately, the literature on international governance suggests another way to think about the operation of multiple regimes that seems more promising as an approach to Arctic governance. The key concept here is the idea of a regime complex treated as a collection of distinct institutional arrangements dealing with related matters but not organized into a hierarchical structure [21]. Such complexes may deal with identifiable issue areas (e.g., the regime complexes for plant genetic resources and for climate) or with spatial areas (e.g., the regime complex for Antarctica) [22; 23; 24]. Research on regime complexes has produced two major findings of interest in this discussion of Arctic governance. One is that interactions between or among the individual elements of a regime complex need not give rise to conflicts that are difficult to resolve. Analysts have concluded both that these interactions often proceed without generating conflicts or tensions among distinct elements and that such interactions sometimes are synergistic in the sense that the various elements work together to address needs for governance in ways that would not be possible in the absence of such interactions [25]. 
The other significant finding is that it is possible, under some conditions, to manage interactions among the elements of regime complexes in ways that not only alleviate the danger of conflicts but also enhance the capacity of the complexes to meet needs for governance arising in the relevant issue area or spatial area [26]. An awareness of this prospect among those responsible for designing and administering individual elements can help. So, there is no need for immediate alarm in response to the observation that we are witnessing a proliferation of issue-specific arrangements dealing with a collection of Arctic or, in some cases, polar issues. These arrangements fall into two broad categories. Some focus on the governance of human activities occurring within the Arctic itself. The arrangements dealing with polar bears, search and rescue, oil spill preparedness and response, commercial shipping, the Central Arctic Ocean fisheries, Arctic marine-protected areas, and the conduct of science in the Arctic all belong to this category. The other category includes regimes that are global in scope but deal with issues that are of intense interest to those concerned with the well-being of the Arctic's human residents and biophysical systems. The international regimes dealing with ozone-depleting substances, persistent organic pollutants, heavy metals, greenhouse gases, and highly migratory species of birds, fish, and great whales all belong to this category. Any effort to assess the performance of the Arctic regime complex must cover elements belonging to both broad categories [27]. Yet, the two types of cases not only differ in terms of architecture, they also present different challenges for those concerned with Arctic governance. In the first category, the challenge is to make sure that the individual elements of the complex do not interfere with one another. For example, it is obviously important to ensure that shipping is managed in such a way that it does not disrupt the migration of marine mammals or harm marine-protected areas. In the second category, the goal is to make sure that global deliberations are informed by observations regarding Arctic developments and are as responsive as possible to conditions prevailing in the Arctic [28]. For example, global efforts to come to terms with climate change need to be informed by credible evidence regarding the impacts of climate change in the Arctic and the nature of the feedback processes linking Arctic developments (e.g., the recession of sea ice, the melting of permafrost) to the global climate system. What mechanisms are available to take on these managerial roles regarding the Arctic regime complex during the foreseeable future? The obvious answer involves looking to the contributions of the Arctic Council either in its current configuration or in some revised form [29]. In thinking about the activities of the council, it is important to draw a clear distinction between two distinct approaches. One approach treats the Arctic Council as the voice of the Arctic states. On this account, the basic purpose of the council is to defend the primacy of these states in the realm of Arctic governance based on their \"sovereignty, sovereign interests and jurisdiction\" in the region. This approach amounts to circling the wagons, emphasizing solidarity among the eight Arctic states and holding the line against rising interests in Arctic issues on the part of non-Arctic states and other influential actors like the European Union. 
The other approach looks to the Arctic Council to play an important managerial role relating to the Arctic governance complex as a whole, including arrangements in which a variety of non-Arctic states and nonstate actors are active participants. Maximizing the effectiveness of the council in playing this role would require adjusting some of the constitutive provisions set forth in the Ottawa Declaration, a point I return to below. Which of these options makes the most sense under the conditions prevailing today and likely to arise during the foreseeable future? There is no correct answer to this question. The desire of the Arctic states to protect their position of primacy regarding the treatment of Arctic issues is easy enough to understand. Considering their physical location and their longstanding interest in issues arising in the circumpolar Arctic, the clear sense among policymakers in the eight Arctic states that the rest of the world should acknowledge their primacy in the region is understandable. However, the critical question is whether hanging on to this position in the light of ongoing developments in the world at large, as well as in the Arctic more specifically, is realistic as an approach to Arctic governance going forward. More specifically, will a policy of using the Arctic Council as a bulwark against pressures emanating from non-Arctic states prove successful in holding the line or will it fail to produce the desired results and, at the same time, erode the legitimacy of the council? My own sense is that we should be thinking hard at this stage about possible adjustments in the constitutive provisions of the Arctic Council that would enhance its capacity to manage the evolving Arctic regime complex in an effective manner. The good news in this regard is that these provisions are set forth in a ministerial declaration rather than an international legally binding instrument. Any changes all parties deem both politically desirable and practically feasible could be introduced in the form of a new ministerial declaration superseding (elements of) the Ottawa Declaration without any need to go through the laborious process of amending an existing treaty and waiting for new provisions to enter into force. In today's world of complex and dynamic systems, this is a substantial advantage. It allows us to avoid lock-in regarding the various elements of the Arctic governance system. This also means that any revisions in the constitutive provisions of the Arctic Council the parties were to adopt at this stage could be revisited in the light of continuing changes in needs for governance and adjusted in a comparatively easy manner. The real question, then, is whether it is possible to reach consensus in political terms on adjustments in the constitutive provisions of the Arctic Council. In my judgment, any serious effort to address this matter must focus on two issues: membership in the council and the framing of the council's remit. Taking the question of membership first, a simple solution would be to expand the category of members of the council, much as the Antarctic Treaty System has done in adopting a relaxed interpretation of the provisions regarding qualifications for membership in Article 9 of the Antarctic Treaty to enlarge the group of states accepted as Antarctic Treaty Consultative Parties. However, any proposal along these lines would encounter determined opposition from existing Arctic Council members and most likely from the Permanent Participants. 
This suggests the need to develop more innovative approaches to the issue of membership in the council. For example, there may be a useful distinction between terrestrial issues of interest mainly to the eight Arctic states and marine and atmospheric issues of interest to a larger membership. There may be room for creating a bicameral system in which there is some recognized division of authority between a council open to a broader membership and a regional board whose members are the eight Arctic states. Or it may be possible to devise a system of weighted membership based on some measure of contribution to the work of the council, as in the case of the World Bank. The point is not to promote the adoption of any one of these options. Rather, it would make good sense to devote time and energy to developing innovative approaches to membership that would make sense in enhancing the capacity of the Arctic Council to manage the Arctic regime complex in the coming years, without running into intense opposition from any major stakeholder groups including the Permanent Participants. In some respects, adjusting the Arctic Council's remit strikes me as an easier (though still complex) issue to address. The Ottawa Declaration directs attention to matters of environmental protection and sustainable development, while saying explicitly that the council should not deal with matters of military security. The obvious motivation underlying this provision was a desire on the part of the signatories to focus on matters of low politics in the interests of maintaining the Arctic as a zone of peace and cooperation [30]. Understandable at the time, it turns out to be impossible to adhere to this formulation as a matter of practice in today's world. In part, this has to do with the fact that sustainable development is an inclusive category, which subsumes environmental protection and encompasses the vast array of other concerns included within the UN's Sustainable Development Goals. Partly, the difficulty arises from the fact that with the reemergence of Russia as a great power, the rising interest in the Arctic on the part of China as an emerging superpower, and the defensive posture the United States has adopted, high politics have returned to the Arctic. As I have already indicated, this does not mean that we should expect the occurrence of armed clashes in the circumpolar Arctic anytime soon. However, it is apparent that efforts to address a wide range of Arctic issues now have a political dimension they lacked before and that this aspect of the international relations of the Arctic is destined to become even more salient in the coming years. To some degree, the Arctic Council has begun to adapt to this development in practice, though it has avoided any formal recognition of such a shift. The creation of related bodies like the Arctic Economic Council and the Arctic Coast Guard Forum constitutes a significant development in the practices of the council implicitly if not explicitly. This is not a bad thing, though it might make sense at some stage to acknowledge this development explicitly and to consider its implications systematically rather than making believe publicly that no such adjustments are being made. 
One helpful step would be to acknowledge that sustainable development is an overarching concern encompassing the three pillars of environmental integrity, economic prosperity, and sociocultural well-being and to take steps to adjust or even reorganize the council's activities on the basis of this conceptualization of its remit. Making adjustments in the constitutive provisions of the Arctic Council could enhance the capacity of the council to manage the Arctic regime complex in several significant ways. In concrete terms, it would open up opportunities to make progress regarding issues that have stumped the council in recent years. A striking example involves the efforts of the Task Force on Arctic Marine Cooperation (TFAMC), created by the council in 2015 at the start of the US Chairmanship and extended in 2017 at the start of the Finnish Chairmanship to develop a comprehensive and coherent approach to marine issues in the Arctic. To be frank, the TFAMC has failed. The reasons are not difficult to identify. In its current configuration, the remit of the council is not broad enough to tackle an issue involving areas located outside the jurisdiction of the Arctic states (e.g., the Central Arctic Ocean), as well as the activities of organizations that are not subject to supervision on the part of the Arctic Council (e.g., the International Maritime Organization). This means the TFAMC was doomed from the outset. But this does not mean the relevant issues have gone away. A reconfigured Arctic Council with a more explicit remit to operate as the manager/coordinator of the Arctic regime complex might fare better in addressing this issue. The key to the role of the council under this scenario would be an emphasis on facilitating interactions and even promoting synergistic interplay among the various elements of the Arctic regime complex concerned with marine issues rather than attempting to come to terms with the issue on its own. More broadly, a reconfigured Arctic Council could focus on the challenge of developing a new narrative relating to Arctic governance to adjust or replace the Arctic zone of peace narrative that played an important role in earlier years. To be clear, we should still be emphasizing peaceful engagement and coordination in addressing needs for governance in the Arctic. However, conditions today differ in important respects from conditions prevailing in the 1990s. The Arctic agenda and the global agenda have merged regarding a range of critical issues. High politics have returned to the interactions among major players in the Arctic. What might become major premises of a new narrative? It is not the purpose of this article to provide an answer to this question. Any useful answer should emerge from a concerted effort involving both practitioners and analysts to consider a range of possibilities. Nevertheless, it seems clear already that a suitable narrative to undergird the next stage in the evolution of Arctic governance should emphasize the importance of paying attention to the idea of stewardship in orchestrating efforts to maintain the integrity of the Arctic's biophysical, economic and cultural systems [31]; the need to devise creative ways to handle interactions between the Arctic and the global system; and the usefulness of approaching governance as a matter of managing the Arctic regime complex rather than endeavoring to negotiate a comprehensive Arctic treaty. ## 5 Conclusions My answer to the question posed in the title of this article is affirmative. 
As someone who was present at the creation of the Arctic Council and who has followed the work of the council closely over the intervening 25 years, I have no hesitancy in concluding that the council has performed well during this period. However, times have changed; the premises on which the council was founded have eroded and will continue to erode during the near future. The members of the council could endeavor to make use of this body to hold the line against the impact of the changes now unfolding. This would mean resisting adjustments in the constitutive provisions of the council and making an effort to use the council as a bulwark, supporting claims to primacy based on the assertion of \"sovereignty, sovereign interests, and jurisdiction\" on the part of the Arctic states. In my judgment, this strategy is not only likely to prove unsuccessful as a means of resisting the pressure of outside forces affecting the Arctic; it is also likely to fail as an approach to managing the increasingly complex collection of governance arrangements relating to the Arctic in an effective manner. It would be better, I believe, to take seriously the issues arising when we focus on possible adjustments in the constitutive provisions of the Arctic Council relating both to membership and to the characterization of the council's remit. Any effort to achieve success in this endeavor would face an array of complex challenges. Given the return of high politics to the Arctic, specific efforts to come to terms with these challenges could well fail. This makes it critical to avoid naive expectations regarding what can be accomplished in this realm over a limited period of time. Still, this is not an excuse for choosing to avoid the issue or assuming that efforts to promote constructive reforms in Arctic governance are bound to fail. Social institutions that perform the function of governance are not meant to be treated as sacred constructs to be preserved unchanged in the face of all pressures arising from major alterations in the political, socioeconomic and biophysical settings within which they are embedded. Restructuring the terms of governance systems at the first sign of adversity is not a virtue. However, refusing to consider adjustments in the face of profound changes is not a recipe for success. The Pan-Arctic Options Project, funded under NSF/Belmont Forum Award No. 1660449, supported the preparation of this article. The author declares no conflict of interest. ## References * Arctic Council (1996) Arctic Council. Declaration on the Establishment of the Arctic Council. Adopted on 19 September 1996. Available online: [https://paarchive.arctic-council.org/handle/11374/83/discover](https://paarchive.arctic-council.org/handle/11374/83/discover) (accessed on 26 July 2019). * Arctic Council (2013) Arctic Council. Vision for the Arctic. Adopted at the Arctic Council Ministerial Meeting in Kiruna, Sweden on 15 May 2013. Available online: http:hdl.handle.new/11374/287 (accessed on 26 July 2019). * Young (2019) Young, O.R. Constructing the 'New' Arctic: The Future of the Circumpolar North in a Changing Global Order. _Outl. Glob. Transform. Politics Econ. Law_**2019**. forthcoming. * Ilulissat Declaration (2008) Ilulissat Declaration. Declaration of the Arctic Ocean Conference. Adopted on 28 May 2008. Available online: [https://cil.nus.edu.sg/wp-content/uploads/2017/07/2008-Ilulissat-Declaration.pdf](https://cil.nus.edu.sg/wp-content/uploads/2017/07/2008-Ilulissat-Declaration.pdf) (accessed on 26 July 2019). 
* Roston (2016) Roston, E. _The World Has Discovered a $1 Trillion Ocean_; Bloomberg Business: New York, NY, USA, 2016. * Anderson (2009) Anderson, A. _After the Ice: Life, Death, and Geopolitics in the New Arctic_; Smithsonian Books: New York, NY, USA, 2009. * Young (1985) Young, O.R. The Age of the Arctic. _Foreign Policy_**1985**, 61, 160-179. [CrossRef] * Gorbachev (1987) Gorbachev, M. Speech in Murmansk on the Occasion of the Presentation of the Order of Lenin and the Gold Star to the City of Murmansk. Delivered on 1 October 1987. Available online: [https://www.barentsinfo.fi/docs/gorbachev_speech.pdf](https://www.barentsinfo.fi/docs/gorbachev_speech.pdf) (accessed on 26 July 2019). * Mulroney (1989) Mulroney, B.; Office of the Prime Minister. Notes for an Address by the Right Honourable Brian Mulroney, Prime Minister of Canada. In Proceedings of the Conference on Security and Co-operation in Europe Summit, Paris, France, 24 November 1989. * Rovaniemi Declaration (1991) Rovaniemi Declaration on the Protection of the Arctic Environment. Adopted on 14 June 1991. Available online: [https://arcticcircle.uconn.edu.NatTesources/Policy/rovanemi.html](https://arcticcircle.uconn.edu.NatTesources/Policy/rovanemi.html) (accessed on 26 July 2019). * Young (1998) Young, O.R. _Creating Regimes: Arctic Accords and International Governance_; Cornell University Press: Ithaca, NY, USA, 1998. * Corell (2009) Corell, H. The Arctic: An Opportunity to Cooperate and to Demonstrate Statesmanship. _Vanderbilt J. Transnall. Law_**2009**, 42, 1065-1079. * Graczyk (2011) Graczyk, P. Observers in the Arctic Council: Evolution and Prospects. _Yearb. Polar Int. Law_**2011**, 3, 575-633. [CrossRef] * Keskitalo (2004) Keskitalo, C. _Negotiating the Arctic: The Construction of an International Region_; Routledge: London, UK, 2004. * Keskitalo and Carina (2019) Keskitalo, E.; Carina, H. _The Politics of Arctic Resources: Change and Continuity in the 'Old North' of Northern Europe_; Routledge: London, UK, 2019. * Serreze (2018) Serreze, M.C. _Brave New Arctic: The Untold Story of the Melting North_; Princeton University Press: Princeton, NJ, USA, 2018. * Whiteman and Dmitry (2018) Whiteman, G.; Dmitry, Y. Poles Apart: The Arctic & Management Studies. _J. Manag. Stud._**2018**, _55_, 873-879. * (18) Zelli, F.; van Asselt, H. The institutional fragmentation of global environmental governance. _Glob. Environ. Politics_**2013**, _13_, 1-13. [CrossRef] * (19) Munton, D.; Soroos, M.; Nikitina, E.; Levy, M. Acid Rain in Europe and North America. In _The Effectiveness of International Environmental Regimes_; Young, O.R., Ed.; MIT Press: Cambridge, MA, USA, 1999; pp. 155-247. * (20) Arctic Governance Project 2010. Arctic Governance in an Era of Transformative Change: Critical Questions, Governance Principles, Ways Forward. Report of the Arctic Governance Project. Distributed on 14 April 2010. Available online: [https://www.arcticgovernance.org](https://www.arcticgovernance.org) (accessed on 26 July 2019). * (21) Alter, K.J.; Kal, R. The Rise of International Regime Complexity. _Annu. Rev. Law Soc. Sci._**2018**, _14_, 329-349. [CrossRef] * (22) Raustiala, K.; David, G.V. The Regime Complex for Plant Genetic Resources. _Int. Organ._**2004**, _55_, 277-309. [CrossRef] * (23) Keohane, R.O.; David, G.V. The Regime Complex for Climate. _Perspect. Politics_**2011**, \\(9\\), 7-23. [CrossRef] * (24) Berkman, P.A.; Lang, M.A.; Walton, D.W.H.; Young, O.R. 
_Science Diplomacy: Antarctica, Science, and the Governance of International Spaces_; Smithsonian Institution Scholarly Press: Washington, DC, USA, 2011. * (25) Oberthur, S.; Gehring, T.G. _Institutional Interaction in Global Environmental Governance: Synergy and Conflict among International and EU Policies_; MIT Press: Cambridge, MA, USA, 2006. * (26) Oberthur, S.; Stokke, O.S. _Managing Institutional Complexity: Regime Interplay and Global Environmental Change_; MIT Press: Cambridge, MA, USA, 2011. * (27) Young, O.R. Building an international regime complex for the Arctic: Current status and next steps. _Polar J._**2012**, \\(2\\), 391-407. [CrossRef] * (28) Stone, D.P. _The Changing Arctic Environment: The Arctic Messenger_; Cambridge University Press: Cambridge, UK, 2015. * (29) Balton, D.; Fran, U. _A Strategic Plan for the Arctic Council: Recommendations for Moving Forward_; The Arctic Initiative of the Belfer Center for Science and International Affairs of Harvard University and the Polar Institute of the Woodrow Wilson International Center for Scholars: Washington, DC, USA, 2019. * (30) English, J. _Ice and Water: Politics, Peoples, and the Arctic Council_; Allen Lane: Toronto, ON, Canada, 2013. * (31) Chapin, F.S., III; Sommerkorn, M.; Robards, M.D.; Hillmer-Pegram, K. Ecosystem stewardship: A resilience framework for arctic conservation. _Glob. Environ. Chang._**2015**, _24_, 207-217. [CrossRef]
Conditions in the Arctic today differ from those prevailing during the 1990s in ways that have far-reaching implications for the architecture of Arctic governance. What was once a peripheral region regarded as a zone of peace has turned into ground zero for climate change on a global scale and a scene of geopolitical maneuvering in which Russia is flexing its muscles as a resurgent great power, China is launching economic initiatives, and the United States is reacting defensively as an embattled but still potent hegemon. This article explores the consequences of these developments for Arctic governance and specifically for the role of the Arctic Council. The article canvasses options for adjusting the council's membership and its substantive remit. It pays particular attention to opportunities for the council to play a role in managing the increasingly complex Arctic regime complex. Keywords: climate change; geopolitics; governance; regime complex
Convolutional Neural Networks for Reflective Event Detection and Characterization in Fiber Optical Links Given Noisy OTDR Signals Khouloud Abdelli _Advanced Technology & Chair of Communications_ _ADVA & Kiel University_ 82152 Munich, Germany [email protected] Helmut Griesser _Advanced Technology & Chair of Communications_ _ADVA Optical Networking SE_ 82152 Munich, Germany [email protected] Stephan Pachnicke _Chair of Communications_ _Kiel University_ 24143 Kiel, Germany [email protected] ## I Introduction Real-time fiber link monitoring and diagnosis is of crucial importance. It enables to quickly discover and pinpoint the faults in fiber optics and thereby helps to reduce operation-and-maintenance expenses (OPEX), to minimize the mean time to repair (MTTR) and to enhance the network quality. Fiber optic link monitoring has been widely performed using optical time domain reflectometry (OTDR), an optoelectronic instrument commonly used to measure fiber characteristics and to detect and locate cable faults. The OTDR operates like an optical radar. By injecting a series of optical pulses into the fiber under test, these pulses are partially reflected and scattered back towards the source due to the Rayleigh scattering. The strength of the reflected signals is recorded as a function of propagation time of the light pulse, which can be converted into the length of the optical fiber [1]. Consequently, with the recorded OTDR trace (or waveform) the positions of the faults, including fiber misalignment/mismatch, fiber breaks, angular faults, dirt on connectors and macro-bends [2] along the fiber can be identified. These events can be broadly categorized as either reflective or non-reflective. The reflective events are Fresnel peaks and result from sudden changes in the density of the material, usually fiber-to-air transitions. The non-reflective events are results of mode field diameter (MFD) variations caused by geometric changes or differences in the glass fiber and lead to attenuation but no reflection [3]. Analyzing OTDR traces can be tricky, even for experienced field engineers, mainly due to the noise overwhelming the signal, leading to inaccurate or unreliable event detection and localization. Averaging multiple OTDR measurements helps to reduce the noise and thus to improve the performance of OTDR event analysis approaches in terms of event detection and fault localization accuracy. However, the averaging process is time consuming. As the optical fiber real-time detection is considered as the industry standard, it is essential to have an automated reliable technique that detects and locates events in a timely manner and with high accuracy while processing noisy OTDR data without the need to perform long averaging and without requiring support by trained personnel. Conventionally, the OTDR event analysis technique is based on a two-point method combined with the least square approximation technique that calculates the best fit line between two markers placed on the section of the OTDR trace to calculate the distance to the event and loss at an event (a connector or splice) between the two markers [4]. Although this method is very simple, it is coarse and noise-sensitive [5]. Some OTDR event detection and localization approaches based on either the Gabor transform [6] or the Wavelet analysis [1] have been proposed. However, they are either numerically complex or locate the faults inaccurately, particularly for OTDR signals with low SNR levels. 
Recently, data-driven approaches based on machine learning (ML) have shown great potential in processing fault diagnostics given sequential data. We have presented ML models for laser failure detection and prediction given noisy current sensor data [7]-[9]. In this paper, we propose a novel diagnostic model based on convolutional neural networks (CNNs) for fiber reflective event detection, localization, and characterization in terms of reflectance. The overview of the proposed approach is shown in Fig. 1. Our approach is applied to simulated OTDR data modelling the reflective fault patterns for different SNR values ranging from 0 to 30 dB. It takes as inputs the preprocessed OTDR sequences and predicts the characteristics of the identified event. The results show that the presented approach detects and locates the reflective events with higher accuracy compared to the conventional OTDR techniques thanks to the capability of the CNNs to learn and extract the optimal features underlying the reflective event pattern even at low SNR values. The remainder of the paper is organized as follows. Section 2 describes the simulation setup for data generation and the architecture of the proposed approach. The results showing the performance evaluation of the proposed model as well as its comparison with a conventional OTDR approach are presented in Section 3. Conclusions are drawn in Section 4. ## II Setup & Configurations ### _Data Generation_ To train the ML approach, synthetic OTDR data is generated. We simulate the operational principle of OTDR by modelling the different OTDR components like pulse generator, coupler, photodiode. A reflector is used to induce a reflective event in the fiber link. For our simulation, the SNR is considered as the relevant figure-of-merit since the different simulation parameters namely the laser power, the length of the fiber, the attenuation and the receiver photodiode sensitivity influence the SNR. The OTDR simulation is conducted using VPIphotonics Transmission Maker. The data generation model is depicted in Fig. 2. The pulse generator module creates rectangular pulses of pulse width \\(T\\) filtered by a Bessel low pass filter with a bandwidth of \\(\\frac{0.6}{T}\\). An EAM imprints the filtered pulses on the laser light. The laser and the modulator are assumed to be ideal. The optical pulses are then launched into the bidirectional fiber. By hitting the reflector (95% reflection), the signal is sent back to the photodiode that converts them to an electrical current. The model assumes that the photodiode is ideal (i.e. a simple squaring device) and that all noise is white Gaussian and added at the transimpedance amplifier (TIA) of the photodiode, resulting in a certain SNR value. The SNR (in dB) is defined as \\[\\text{SNR}=10\\ \\text{log}_{10}\\frac{\\text{reflective event energy}}{\\text{noise energy at the event position}} \\tag{1}\\] The fiber model is linear, assuming that the optical launch power is low. Due to the high reflection at the end of the fiber, Rayleigh scattering in the fiber is negligible and not considered in the fiber model. The pulse width \\(T\\) is fixed to 100 ns, whereas the SNR is chosen from uniform distributions. 30,000 OTDR traces are generated. The OTDR signals are then segmented into fixed sliding windows of length 35. We randomly select from each segmented trace 8 sequences: 4 sequences containing no reflective event and 4 sequences including a part or the whole peak (i.e. reflective fault) pattern. 
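To make the data-generation and windowing step just described concrete, the following is a minimal Python sketch rather than the authors' actual simulation (which uses VPIphotonics Transmission Maker): it builds a toy trace containing a single rectangular reflection peak whose noise level is set from the SNR definition in Eq. (1), and then slices the trace into the fixed-length windows of 35 samples used as model inputs. The trace length, pulse width and all names are illustrative assumptions, and the windowing here is non-overlapping, whereas the sliding-window segmentation in the paper may differ.

```python
import numpy as np

def synthetic_trace(n_samples=1000, event_pos=400, pulse_width=10,
                    snr_db=15.0, seed=None):
    """Toy OTDR-like trace: a rectangular reflection peak plus white Gaussian noise.

    The noise level is derived from one reading of Eq. (1): the ratio (in dB)
    of the reflective-event energy to the noise energy at the event position.
    """
    rng = np.random.default_rng(seed)
    trace = np.zeros(n_samples)
    trace[event_pos:event_pos + pulse_width] = 1.0              # unit-height reflection
    event_energy = np.sum(trace[event_pos:event_pos + pulse_width] ** 2)
    noise_energy = event_energy / (10 ** (snr_db / 10))          # invert Eq. (1)
    sigma = np.sqrt(noise_energy / pulse_width)                   # per-sample noise std over the event
    return trace + rng.normal(0.0, sigma, n_samples)

def fixed_windows(trace, length=35):
    """Segment a trace into non-overlapping fixed-length windows."""
    n_windows = len(trace) // length
    return trace[:n_windows * length].reshape(n_windows, length)

trace = synthetic_trace(snr_db=10.0, seed=0)
windows = fixed_windows(trace)      # candidate input sequences of length 35
print(windows.shape)                # e.g. (28, 35)
```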
In total, a data set comprised of 240,000 sequences is built. Our approach takes as input the sequence of signal power values and outputs the \\(Class_{id}\\) (0: no reflective event, 1: reflective event), the reflective event position index within the sequence, and the reflectance \\(R\\). The reflectance can be derived from the peak height compared to reference measurements for different cleaved fiber ends (see Section VI in [10]). The generated data is normalized and divided into a 60% training dataset, a 20% validation dataset and a 20% test dataset.

Fig. 1: Overview of the proposed approach with offline training and inference.

Fig. 2: Data generation process.

### _CNN based Model_ _Overview of CNN:_ CNNs are a type of artificial neural network, biologically inspired by the modular structure of the human visual cortex. They have been widely used in computer vision and have become the state of the art for several object recognition tasks such as handwritten digit recognition. They are powerful at extracting features. The architecture of a CNN comprises different hidden layers, namely the convolutional, the pooling and the fully connected layers, stacked on top of each other in sequence. The convolutional layers are composed of filters (the neurons of the layer) and the feature maps. The output of a filter is applied to the previous layer and used to extract features. The pooling layers down-sample the previous layer's feature map by consolidating the features learned and expressed in it. The fully connected layers are standard feedforward layers used to create final nonlinear combinations of features and to make the network's predictions. _Proposed CNN-based Model:_ The overall structure of the proposed model is depicted in Fig. 3. The architecture of our approach consists mainly of 4 convolutional layers, a max pooling layer, a flattened layer and three fully connected feedforward layers defined for the three tasks, namely reflective event detection \\(T_{1}\\), event position estimation \\(T_{2}\\) and reflectance prediction \\(T_{3}\\). The convolutional layers contain 64, 32, 32 and 16 filters, respectively, followed by a dropout layer to avoid overfitting. The features extracted by the convolutional and max pooling layers are flattened into a one-dimensional vector before being transferred to the three fully connected feedforward layers composed of 16 neurons. The overall loss function used to update the weights of the model based on the error between the predicted and the desired output can be formulated as \\[Loss_{total}=\\sum_{l=1}^{3}\\lambda_{l}\\cdot\\mathit{loss}_{T_{l}} \\tag{2}\\] where \\(\\mathit{loss}_{T_{l}}\\) denotes the loss of task \\(T_{l}\\); the first task loss is the binary cross-entropy loss, whereas the other losses are regression losses (mean squared error). The loss weights \\(\\lambda_{l}\\) are hyperparameters to be tuned. For our experiments, the weight of each task loss is set to 0.33.
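The description above can be turned into a runnable sketch. The following TensorFlow/Keras snippet is one plausible reading of the architecture, not the authors' exact network: the filter counts (64, 32, 32, 16), the single max-pooling layer, the three 16-neuron task heads and the 0.33 loss weights follow the text, while the kernel sizes, dropout rate, pooling size and output activations are assumptions made for illustration.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_multitask_cnn(seq_len=35):
    """Sketch of the multi-task 1D CNN described above (hyperparameters assumed)."""
    inp = layers.Input(shape=(seq_len, 1))
    x = inp
    for n_filters in (64, 32, 32, 16):          # four convolutional layers
        x = layers.Conv1D(n_filters, kernel_size=3, padding="same",
                          activation="relu")(x)
        x = layers.Dropout(0.2)(x)              # dropout against overfitting (placement assumed)
    x = layers.MaxPooling1D(pool_size=2)(x)
    x = layers.Flatten()(x)

    # one 16-neuron fully connected head per task
    det = layers.Dense(16, activation="relu")(x)
    detection = layers.Dense(1, activation="sigmoid", name="detection")(det)
    pos = layers.Dense(16, activation="relu")(x)
    position = layers.Dense(1, name="position")(pos)
    refl = layers.Dense(16, activation="relu")(x)
    reflectance = layers.Dense(1, name="reflectance")(refl)

    model = Model(inp, [detection, position, reflectance])
    model.compile(optimizer="adam",
                  loss={"detection": "binary_crossentropy",
                        "position": "mse",
                        "reflectance": "mse"},
                  loss_weights={"detection": 0.33, "position": 0.33,
                                "reflectance": 0.33})
    return model

model = build_multitask_cnn()
model.summary()
```

Framing the three outputs as named heads lets Keras combine the binary cross-entropy and MSE losses with the per-task weights of Eq. (2).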
## III Results and Discussions ### _ML Model Performance Evaluation_ To evaluate the performance of the proposed model, several metrics were used, including the detection rate (i.e. detection probability) (\\(P_{d}\\)) for the reflective event detection task and the root mean square error (RMSE) for the event position estimation and reflectance prediction tasks. \\(P_{d}\\) is the portion of the total number of reflective events that were correctly detected. It is defined as follows: \\[P_{d}=\\frac{TP}{TP+FN} \\tag{3}\\] where TP is the number of true reflective event detections, and FN is the number of false negatives. The false alarm probability (\\(P_{FA}\\)) is expressed as: \\[P_{FA}=\\frac{FP}{FP+TN} \\tag{4}\\] where FP is the number of false positives, and TN is the number of true negatives. Figure 4 shows the effects of the SNR on the reflective event detection capability of our approach for different levels of \\(P_{FA}\\). As expected, \\(P_{d}\\) increases with SNR. For SNR values higher than 13 dB, \\(P_{d}\\) approaches 1. For lower SNR values, the performance is worse, as it is difficult to differentiate the reflective event from the noise and the misclassification rate is therefore higher. To investigate the influence of the SNR on the event localization accuracy, we evaluated the event position estimation capability of our model with three test datasets: one dataset containing the whole reflective event pattern, one dataset comprising just a part of the reflective event, and one dataset including a mix of the whole and partial reflective event patterns. The results of the evaluation are depicted in Fig. 5. The RMSE is higher for the sequences containing only part of the event pattern, and it falls below 2 m for SNR values higher than 21 dB. Figure 6 shows that the RMSE of the reflectance prediction decreases as SNR increases. For lower SNR values (SNR \\(\\leq\\) 10 dB), the RMSE is higher than 10 dB, and for higher SNR values, it can be as low as 3 dB. ### _ML Model versus Conventional method_ It should be noted that the developed ML model is trained with data including partial and whole peak sequences since the model should be applied to arbitrarily segmented OTDR sequences. For a fair comparison of the developed ML model with a conventional rank-1 matched subspace detector (R1MSDE [11]), an unseen test dataset, containing only the complete reflective event pattern or no event, was generated. R1MSDE uses the theory of matched subspace detection and associated maximum likelihood estimation procedures to distinguish connection splice events from noise and the Rayleigh component in the OTDR data. As in [11], the reflective event is modelled by a rectangular pulse with prior knowledge of the duration of the event. Given that the matched filter is optimum for unipolar modulation detection over a linear channel with additive Gaussian noise, and that the optimum OTDR detector for single pulses (i.e. single reflective events) cannot be better due to the unknown peak position, the optimum unipolar modulation performance serves as an upper bound for an optimum peak detector. The detection rate \\(P_{d}\\) for the case of optimum detection is calculated as follows: \\[P_{d}=\\frac{1}{2}\\text{erfc}\\left(2(\\delta-1)\\sqrt{\\frac{1}{2}SNR_{lin}}\\right) \\tag{5}\\] where erfc denotes the complementary error function, \\(SNR_{lin}\\) is the linear SNR and \\(\\delta\\) is the detection threshold expressed as a function of \\(P_{FA}\\): \\[\\delta=\\frac{\\text{erfcinv}\\left(\\frac{2\\,P_{FA}}{\\sqrt{2\\,SNR_{lin}}}\\right)}{\\sqrt{2\\,SNR_{lin}}} \\tag{6}\\] A comparison of the different detectors in terms of \\(P_{d}\\) for a \\(P_{FA}\\) of 0.1, as shown in Fig. 7, demonstrates that the ML model outperforms the R1MSDE and is closer to the upper bound of the optimum detector. The results of the comparison of the ML model performance with R1MSDE in terms of RMSE of the event position are shown in Fig. 8. 
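As a numerical illustration, the analytical upper bound of Eqs. (5) and (6) can be evaluated directly with SciPy. The snippet below simply transcribes the two equations as printed and does not reproduce the R1MSDE or ML results shown in Fig. 7; the SNR grid is arbitrary and \\(P_{FA}=0.1\\) matches the comparison described above.

```python
import numpy as np
from scipy.special import erfc, erfcinv

def optimum_detection_rate(snr_db, p_fa=0.1):
    """Upper bound on the detection rate, transcribed from Eqs. (5) and (6)."""
    snr_lin = 10 ** (np.asarray(snr_db, dtype=float) / 10)
    delta = erfcinv(2 * p_fa / np.sqrt(2 * snr_lin)) / np.sqrt(2 * snr_lin)  # Eq. (6) as printed
    return 0.5 * erfc(2 * (delta - 1) * np.sqrt(0.5 * snr_lin))              # Eq. (5)

for snr in (0, 5, 10, 13, 20):
    print(snr, "dB ->", float(optimum_detection_rate(snr)))
```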
For lower SNR values, the ML model outperforms the R1MSDE by achieving a smaller peak position error of less than 5 m. As SNR increases, the peak position error of R1MSDE gets closer but remains worse than that of the proposed ML model.

Fig. 6: Reflectance prediction error (RMSE) of the ML model.

Fig. 7: Comparison of the detection rate for various algorithms.

Fig. 8: Peak position estimation error (RMSE) for the ML model vs. R1MSDE.

## IV Conclusions An ML model based on a CNN is proposed to detect and locate reflective events and estimate their reflectance given noisy OTDR data. The results showed that the ML model outperformed the conventional OTDR technique. Future work will include improving the reflectance prediction capability. ## References * [1] X. Gu et al., \"Estimation and detection in OTDR using analyzing wavelets,\" Proceedings of IEEE-SP International Symposium on Time-Frequency and Time-Scale Analysis, Philadelphia, PA, USA, pp. 353-356, 1994. * [2] M.-M. Rad et al., \"Passive optical network monitoring: challenges and requirements,\" IEEE Communications Magazine, vol. 49, no. 2, pp. S45-S52, 2011. * [3] R. Ellis, \"Explanation of Reflection Features in Optical Fiber as Sometimes Observed in OTDR Measurement Traces,\" white paper, May 2007. * [4] The Fiber Optic Association, Inc., Optical Time Domain Reflectometer (OTDR), The Fiber Optic Association, Inc., 2013. Accessed on: Apr. 4, 2021. [Online]. Available: [https://www.thefoa.org/tech/ref/testing/OTDR/OTDR.html](https://www.thefoa.org/tech/ref/testing/OTDR/OTDR.html). * [5] H. Kong et al., \"Events Detection in OTDR Data Based on a Method Combining Correlation Matching with STFT,\" ACP, 2014. * [6] F. Liu et al., \"Detection and Estimation of Connection Splice Events in Fiber Optics Given Noisy OTDR Data -- Part I: GSR/MDL Method,\" in IEEE Transactions on Instrumentation and Measurement, vol. 50, no. 1, pp. 47-58, 2001. * [7] K. Abdelli et al., \"Machine Learning Based Laser Failure Mode Detection,\" ICTON, 2019. * [8] K. Abdelli et al., \"Lifetime Prediction of 1550 nm DFB Laser using Machine Learning Techniques,\" OFC, March 2020. * [9] K. Abdelli et al., \"Machine Learning based Data Driven Diagnostic and Prognostic Approach for Laser Reliability Enhancement,\" ICTON, 2020. * [10] F. P. Kapron et al., \"Fiber-optic reflection measurements using OCWR and OTDR techniques,\" Journal of Lightwave Technology, vol. 7, no. 8, pp. 1234-1241, 1989. * [11] F. Liu et al., \"Detection and location of connection splice events in fiber optics given noisy OTDR data. Part II. R1MSDE method,\" in IEEE Transactions on Instrumentation and Measurement, vol. 53, no. 2, pp. 546-556, April 2004.
Fast and accurate fault detection and localization in fiber optic cables is extremely important to ensure optical network survivability and reliability. Hence there exists a crucial need to develop an automatic and reliable algorithm for real-time optical fiber fault detection and diagnosis leveraging the telemetry data obtained by an optical time domain reflectometry (OTDR) instrument. In this paper, we propose a novel data-driven approach based on convolutional neural networks (CNNs) to detect and characterize fiber reflective faults given noisy simulated OTDR data, whose SNR (signal-to-noise ratio) values vary from 0 dB to 30 dB, incorporating reflective event patterns. In our simulations, we achieved a higher detection capability with a low false alarm rate and greater localization accuracy even for low SNR values compared to conventionally employed techniques. Keywords: fiber fault diagnosis, optical time domain reflectometry, machine learning, long short-term memory
# On the Distribution of Probe Traffic Volume Estimated from Their Footprints [ Imaucho Otomo Takashima, Shiga 520-1635, Japan [ Department of Engineering Sciences, University of Agder Jon Lilletuns vei 9 4879 Grimstad, Norway [ Zachry Department of Civil and Environmental Engineering, Texas A&M University 3127 TAMU College Station, Texas 77843-3127, United States [ Zachry Department of Civil and Environmental Engineering, Texas A&M University 3127 TAMU College Station, Texas 77843-3127, United States ## 1 Introduction Traffic volume is a fundamental element of transportation engineering (Greenshields, 1934), urban planning, real estate valuation, air pollution models (Luria et al., 1990; Okamoto et al., 1990), wildlife protection (Seiler and Heldlin, 2006), and marketing (Alexander et al., 2005). Traffic counts are typically performed at fixed locations using equipment such as pneumatic tubes, loop coils, radars, ultrasonic sensors, video cameras, and light detection and ranging (LiDAR) systems (Zhao et al., 2019). While conventional traffic counts are believed to have acceptable precision, traffic counts at fixed locations are constrained in space, time, and budget. For this reason, average annual daily traffic (AADT), which is one of the basic traffic metrics in traffic engineering, is often estimated based on 24- or 48-hour traffic counts with temporal adjustments (Jessberger et al., 2016; Krile and Schroeder, 2016; Ritchie, 1986). Nevertheless, this scalability constraint still places transportation professionals in a leash. For example, researchers have pointed out a lack of reliable traffic volume data in substantive road safety analyses (Chen et al., 2019; El-Basyouny and Sayed, 2010; Mitra and Washington, 2012; Zarei and Hellinga, 2022). To maximise the value of limited numbers of traffic counts, extensive research efforts have been devoted to developing traffic volume estimation methods focused on calibration and its accuracy. Such approaches include travel demand modelling (Zhong and Hanson, 2009), spatial kriging (Selby and Kockelman, 2013), support vector machines (Sun and Das, 2015), linear and logistic regressions (Apronti et al., 2016), geographically weighted regression (Pulugurtha and Mathew, 2021), locally weighted power curves (Chang and Cheon, 2019), and clustering (Sfyridis and Agnolucci, 2020). ### Probe Data in Traffic Volume Estimation With the advancements in information technology, expectations for traffic volume availability have increased. In the United States, for example, the Highway Safety Improvement Program (HSIP) asks state departments of transportation to prepare traffic volume data even on low-volume roads (Federal Highway Administration, 2016). As mobile devices compatible with global navigation satellite systems (GNSSs) have spread throughout our daily lives, opportunities to estimate traffic volumes based on passively collected location data have gained industry attention (Caceres et al., 2008; Harrison et al., 2020). Road agencies have started exploring the feasibility of using probe data to estimate traffic volumes (Codjoe et al., 2020; Fish et al., 2021; Krile and Slone, 2021; Macfarlane and Copley, 2020; Zhang et al., 2019) because probe volumes and traffic volumes tend to be positively correlated. 
In proprietary products providing AADT estimations, reports have found negative correlations between true traffic volumes and estimation accuracy as measured by percentage errors (Barrios and Casburn, 2019; Roll, 2019; Schewel et al., 2021; Tsapakis et al., 2020, 2021; Turner et al., 2020; Yang et al., 2020). Machine learning methods have become popular calibration tools for traffic volume estimation by using probe location data. For instance, Meng et al. (2017) and Zhan et al. (2017) applied spatio-temporal semi-supervised learning and an unsupervised graphical model, respectively, to taxi trajectories in Chinese cities to estimate citywide traffic volumes. With a Maryland probe dataset, Sekula et al. (2018), for example, showed that neural networks could significantly improve estimation accuracy. In Kentucky, Zhang and Chen (2020) used annual average daily probes (AADP) and betweenness centrality to estimate AADTs across the state. Using random forest, they found that an AADP of 53 was the lower threshold for having a mean absolute percentage error (MAPE) of less than 20 % to 25 %. Machine learning methods, including support vector machines and gradient boosting, provide practical solutions for calibrating high-dimensional data and improving estimation accuracy (Schewel et al., 2021). ### Types of Probe Data Figure 1 illustrates different types of probe data: point data (Figure 0(a)) and line data (Figure 0(b)). Point data refer to data that contain information to identify a point location (e.g., geographic coordinates) on a surface, such as the Earth's ellipsoid. Location data are usually first recorded and stored as point data. In contrast, line data, also called trajectories, paths, or routes, consist of a series of point data of an entity connected chronologically (Markovic et al., 2019). Conventional traffic counts require information on passing objects over a cross-section at a fixed location. With probe data, one can count the number of probes passing through a specific location based on trajectories reconstructed from point data (e.g., GPS Exchange Format (GPX)) when the point data meet all of the following conditions: * Each probe has a pseudonym (e.g., device identifier). * Each point data has a timestamp in the ordinal scale or a higher level of measurement. * The recording interval is small enough to determine a route. In other words, data that meet these conditions have less anonymity because one can track each probe's locations and time simultaneously (de Montjoye et al., 2013). In fact, all of the aforementioned studies used line data of probes to estimate traffic volumes. However, some point data are unsuitable for the precise reconstruction of line data. Sparsely recorded probe data are an example (Sun et al., 2013). In addition, agencies might not be able to obtain detailed line data in which they can identify a probe's geographic coordinates and timestamps at once, depending on privacy regulations and data providers' policies. To relax such constraints on probe data availability in traffic volume estimation, this paper presents a method for estimating probe traffic volumes using passively collected point location data without route reconstruction. In addition, we describe the exact distribution of the unbiased estimator with which one can assess the estimation precision. On the other hand, we will hardly tap into detailed calibration methods against known traffic volumes, which ultimately influences traffic volume estimation accuracy. 
In the following sections, we derive analytical relationships between traffic variables and estimated probe counts with example calculations. Numerical and microscopic traffic simulations further demonstrate the conformity of the model. Finally, we discuss the characteristics, limitations, applications, and opportunities of the model. ## 2 Theory This section describes the problem, provides our findings with proofs, and offers illustrative examples. We adhere to the International System of Units throughout the paper unless stated otherwise. ### Problem Statement We define a \"probe\" as a device that records its position as point data in the Earth's spatial reference system (e.g., geographic coordinates). For instance, a smartphone and connected vehicle can be a probe1. Let \\(m\\)\\(\\{m\\in\\mathbf{Z}^{nonneg}\\}\\) and \\(\\hat{m}\\) denote the number of probes passing through a unit segment during an observation period and its estimator, respectively. We present the distribution of \\(\\hat{m}\\) under the following conditions. Footnote 1: By this definition, the number of probes does not necessarily match the number of vehicles (e.g., multiple smartphones on a vehicle). Assume that each probe traverses the Earth's surface at a speed of \\(S\\)\\(\\{S\\in\\mathbf{R}^{+}\\}\\) m/s, where \\(S\\) is an independent and identically distributed (i.i.d.) continuous random variable. We denote the realised value of \\(S\\) as \\(s\\)\\(\\{s\\in\\mathbf{R}^{+}\\}\\). Let \\(g(s)\\)\\(\\{g(s)\\in\\mathbf{R}^{nonneg}\\mid 0\\leq g(s);\\int_{0}^{\\infty}g(s)\\mathrm{d}s=1\\}\\) be the probability density function (PDF) of the probe speed population. The population of \\(s\\) is a hypothetical infinite group of \\(s\\). Therefore, the possibility that multiple probes are carried by one vehicle at the same \\(s\\) is already considered in \\(g(s)\\) as a part of the distribution. All probes share the same data recording interval \\(t\\)\\(\\{t\\in\\mathbf{R}^{+}\\}\\) s. In a uniform motion, each probe records its position as point data (i.e., \"_footprints_\") at an interval of \\(t\\) s. The probe speed is recorded during this process. Probe identifiers \\(i\\) or detailed timestamps are not necessarily recorded, but data points have at least nominal information to identify a recorded time range of interest (e.g., a label of \"July 2023\"). We assume that there are no errors or failures in the positioning or recording. After the probe point location data are recorded, an analyst draws a \\(d\\)-m virtual cordon (\\(\\{d\\in\\mathbf{R}^{+}\\}\\)) over the data measured along the road segment of interest. This spatial data cropping procedure makes each probe record its first location in the virtual cordon at a uniformly distributed random timing within any nonnegative seconds less than \\(t\\) after a probe enters the cordon. The analyst may extract data within the time range of interest, as needed. The virtual cordon will contain \\(n\\)\\(\\{n\\in\\mathbf{Z}^{nonneg}\\}\\) data points at a speed of \\(s_{a}\\) where \\(a\\)\\(\\{a\\in\\mathbf{Z}^{nonneg}\\mid a\\leq n\\}\\) is a record identifier. Figure 2 shows an example of a virtual cordon containing eight data points. Although the figure distinguishes the two probes, this work does not assume that analysts have information to identify individual probes. Figure 1: Illustrated virtual cordons over probe point data and line data (reconstructed trajectories). 
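The recording model in the problem statement can be made concrete with a short sketch. The following Julia listing is illustrative only (the function name and the explicit uniform-phase mechanism are our own framing of the stated assumptions, not code from the study): a probe crosses a \(d\)-m cordon at constant speed \(s\), logging a footprint every \(t\) s, with the first in-cordon record occurring at a uniformly distributed time within \(t\) s of entry.

```julia
# Minimal sketch of the recording process assumed in Section 2.1.
using Statistics

function count_footprints(s, d, t)
    phase = rand() * t          # time from cordon entry to the first record, uniform on [0, t)
    x = s * phase               # position of the first record inside the cordon
    n = 0
    while x < d                 # count records that fall inside the d-m cordon
        n += 1
        x += s * t              # the next record is s*t metres further along
    end
    return n
end

# Example: probe B from Figure 2 (s = 30 m/s, d = 100 m, t = 1 s).
ns = [count_footprints(30.0, 100.0, 1.0) for _ in 1:100_000]
println(mean(ns))   # ≈ 100/(30·1) = 3.33: three records plus a fourth with probability ≈ 0.33
```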
### Unbiased Estimator of \(m\)

**Lemma 1**.: If we define \(\hat{m}\) as

\[\hat{m}=\frac{t}{d}\sum_{a=1}^{n}s_{a},\forall m,d,t,n,s \tag{1}\]

\(\hat{m}\) is an unbiased estimator of the true probe traffic volume \(m\) (Equation 2).

\[\text{E}[\hat{m}]=m,\forall m \tag{2}\]

Proof.: Because uniform motion is assumed, \(s_{i}=s_{a}\) within any probe and \(s_{i}t\) is the distance along a cordon the \(i\)th probe traverses in \(t\) s. Using \(n_{i}\) as the number of data points within a cordon from the probe, Equation 1 can be reduced to

\[\hat{m}=\frac{ts_{i}n_{i}}{d} \tag{3}\]

for the \(i\)th probe. In Equations 1 and 3, \(n_{i}\) can be broken down into \(n_{i}=\tilde{n}_{i}+K_{i}\), where \(\tilde{n}_{i}\) {\(\tilde{n}\in\mathbf{Z}^{nonneg}\)} is the minimum number of data points that could be recorded in the virtual cordon. It is calculated with the floor function as

\[\tilde{n}_{i}=\left\lfloor\frac{d}{s_{i}t}\right\rfloor \tag{4}\]

Here, \(K_{i}\) is a Bernoulli random variable representing the number of additional data points per probe {\(K\in\{0,1\}\)} observed in addition to \(\tilde{n}_{i}\) data points. Because uniform motion is assumed and a probe leaves its first record in the cordon at a random time within \(t\) s of entering the cordon, an additional data point is recorded with a probability equal to the fractional part of \(d/(s_{i}t)\). When we define the fractional part as \(p_{i}\) {\(p\in\mathbf{R}^{nonneg}\mid 0\leq p<1\)},

\[p_{i}=\frac{d}{s_{i}t}\mod 1 \tag{5}\]

Because \(K_{i}\) follows the Bernoulli distribution \(\text{Ber}(p_{i})\), its expected value \(\text{E}[K_{i}]\) is \(p_{i}\). From Equations 3, 4, and 5, \(\text{E}[\hat{m}]\), the expected value of \(\hat{m}\), is

\[\text{E}[\hat{m}]=\frac{s_{i}t}{d}\left[\left\lfloor\frac{d}{s_{i}t}\right\rfloor+\left(\frac{d}{s_{i}t}\mod 1\right)\right]=1 \tag{6}\]

when \(m=1\). Accordingly, \(\text{E}[\hat{m}]=m\) for any \(m\). Therefore, \(\hat{m}\) is an unbiased estimator of \(m\).

Figure 2: An illustrated virtual cordon over point data (\(m=2\)).

#### 2.2.1 Example 1

We assume \(d=100\) and \(t=1\) in Figure 2. The expected number of records within the segment from probe B (\(s_{i}=30\)) is \(100/(30\cdot 1)\approx 3.333\); therefore, at least three records are observed (i.e., \(\tilde{n}=3\)). Since it is impossible to observe 3.333 records, one more record is observed with a probability of approximately 0.333 (i.e., \(p_{i}\approx 0.333\)). In Figure 2, \(m=2\), \(\mathrm{E}[\hat{m}]=2\) and \(\hat{m}=1.9\). If the cordon had contained the data points only from probe A, \(m=1\), \(\mathrm{E}[\hat{m}]=1\) and \(\hat{m}=1\). If the cordon had included the data points only from probe B, \(m=1\), \(\mathrm{E}[\hat{m}]=1\) and \(\hat{m}=0.9\).
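Example 1 can be reproduced with a short Monte Carlo check. The sketch below is not from the original study: it reuses `count_footprints` from the previous listing, and the assumption that probe A travels at 20 m/s (so that it leaves exactly five records, consistent with the eight data points in Figure 2) is ours.

```julia
# Hedged sketch of Equation 1 applied to simulated footprints.
using Statistics

estimate_m(speeds, d, t) = (t / d) * sum(speeds)   # Equation 1

d, t = 100.0, 1.0
mhat = [begin
            nA = count_footprints(20.0, d, t)   # probe A (assumed 20 m/s): always 5 records
            nB = count_footprints(30.0, d, t)   # probe B (30 m/s): 3 or 4 records
            estimate_m(vcat(fill(20.0, nA), fill(30.0, nB)), d, t)
        end for _ in 1:100_000]

println(mean(mhat))   # ≈ 2 = m, illustrating E[m̂] = m (Lemma 1)
```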
### Variance of \(\hat{m}\)

**Lemma 2**.: When we denote the variance of \(\hat{m}\) as \(\mathrm{Var}[\hat{m}]\):

\[\mathrm{Var}[\hat{m}]=\frac{mt^{2}}{d^{2}}\int_{0}^{\infty}b(s,d,t)g(s)\mathrm{d}s \tag{7}\]

where

\[b(s,d,t)=s^{2}p(1-p)=s^{2}\left(\frac{d}{st}\mod 1\right)\left[1-\left(\frac{d}{st}\mod 1\right)\right] \tag{8}\]

Proof.: The variance of \(\hat{m}\) originates from the discreteness of the number of recorded data points, namely, the Bernoulli random variable \(K\). From Equation 3 and the multiplication rule of probability, \(\mathrm{Var}[\hat{m}\mid S=s_{i}]\) is proportional to the variance of the Bernoulli distribution, \(p(1-p)\), multiplied by the square of the scaling factor \(st/d\). Because \(S\sim g(s)\), integrating \(s^{2}t^{2}p(1-p)g(s)/d^{2}\) over \(s\) gives the variance of \(\hat{m}\) per probe. Because \(S\) is assumed to be i.i.d., \(\mathrm{Var}[\hat{m}]\propto m\) from the additivity of variances.

#### 2.3.1 Example 2

Hereafter, we use a finite mixture of normal distributions by Park et al. (2010) as \(g(s)\) unless stated otherwise. The speed distribution had been fitted to data collected on Interstate Highway 35 (I-35) in Texas. It comprises four normal distributions \(N(\mu,\sigma^{2})\) defined by \(\mu=(27.042,24.000,9.394,4.294)\), \(\sigma=(1.831,4.797,3.167,1.686)\), \(w=(0.647,0.223,0.055,0.074)\), and \(\sum w_{j}=1\), where \(\mu\in\mathbf{R}\) is a tuple (i.e., a finite ordered list) of mean speeds in m/s, \(\sigma\left\{\sigma\in\mathbf{R}^{nonneg}\right\}\) is a tuple of standard deviations in m/s before truncation, and \(w\)\(\left\{w\in\mathbf{R}^{nonneg}\,|\,w\leq 1\right\}\) defines the proportions of the normal distributions within the mixture. The distribution was truncated at \(s=0\) and \(s=40\). The resulting \(g(s)\) is a mixture of four truncated normal distributions, defined by the following equations (Figure 3a):

\[g(s\mid\mu,\sigma,0,40)=\begin{cases}\sum_{j=1}^{4}w_{j}\psi(s\mid\mu_{j},\sigma_{j},0,40),&0<s\leq 40\\ 0,&\text{otherwise}\end{cases} \tag{9}\]

where \(\alpha<\beta\), \(0<\sigma\), and

\[\psi(x\mid\mu,\sigma,\alpha,\beta)=\frac{\phi\left(\frac{x-\mu}{\sigma}\right)}{\sigma\left[\Phi\left(\frac{\beta-\mu}{\sigma}\right)-\Phi\left(\frac{\alpha-\mu}{\sigma}\right)\right]} \tag{10}\]

\[\phi(x)=\frac{e^{-x^{2}/2}}{\sqrt{2\pi}} \tag{11}\]

\[\Phi(x)=\frac{1}{2}\left[1+\mathrm{erf}\left(\frac{x}{\sqrt{2}}\right)\right] \tag{12}\]

Assuming \(d=300\) and \(t=4\), Figure 3b displays \(4^{2}/300^{2}\cdot b(s,300,4)\), the variance in the estimated probe traffic volume as a function of \(s\) (Equation 8). If \(S\) were uniformly distributed between 0 and 40 (i.e., \(S\sim U(0,40]\)), the area under the function in Figure 3b would have been proportional to the variance of the estimated probe traffic volume (i.e., \(\mathrm{Var}[\hat{m}\mid S=s_{i}]\)). Here, we want to weight \(4^{2}/300^{2}\cdot b(s,300,4)\) by \(g(s)\) because \(S\sim g(s)\). The operation gives Figure 3c, where the area under the function, 0.019, is the theoretical variance of \(\hat{m}\) from a probe (Equation 7).

Figure 3: Variance derivation when \(d=300\), \(t=4\), and \(S\sim g(s)\).
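The 0.019 figure can be checked numerically. The following sketch is illustrative only: the use of Distributions.jl for the truncated normal components and the midpoint-rule integration grid are our assumptions, while the parameter values are those quoted in Example 2.

```julia
# Numerical check of Equation 7 under the speed mixture of Equations 9-12.
using Distributions

μ = [27.042, 24.000, 9.394, 4.294]
σ = [1.831, 4.797, 3.167, 1.686]
w = [0.647, 0.223, 0.055, 0.074]
comps = [truncated(Normal(μ[j], σ[j]), 0.0, 40.0) for j in 1:4]
g(s) = sum(w[j] * pdf(comps[j], s) for j in 1:4)        # Equation 9

function b(s, d, t)                                      # Equation 8
    p = mod(d / (s * t), 1)
    return s^2 * p * (1 - p)
end

function var_mhat(m, d, t; step = 0.001)
    grid = step/2:step:40.0                              # midpoint rule on (0, 40]
    return m * t^2 / d^2 * sum(b(s, d, t) * g(s) * step for s in grid)
end

println(var_mhat(1, 300.0, 4.0))   # expected ≈ 0.019 (Example 2; Table 1, scenario 1)
```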
The function \\(f^{\\prime}(\\hat{m})\\) is defined as \\[f^{\\prime}(\\hat{m})=\\sum_{u=0}^{\\infty}\\sum_{k=0}^{1}h(\\hat{m};t,d,u,k) \\tag{14}\\] where \\[h(\\hat{m};t,d,u,k)=\\begin{cases}g\\left(\\dfrac{d\\hat{m}}{t(u+k)}\\right)\\dfrac{ p^{k}(1-p)^{1-k}d}{t(u+k)},&(u=0\\wedge k\ eq 0)\\vee\\left(u\ eq 0\\wedge\\dfrac{u+k} {u+1}<\\hat{m}\\leq\\dfrac{u+k}{u}\\right)\\\\ 0,&\\text{otherwise}\\end{cases} \\tag{15}\\] Proof.: From Equations 4 and 5, \\(s\\) uniquely determines \\(\\tilde{n}\\) and \\(p\\) once \\(d\\) and \\(t\\) are determined. In addition, any single \\(s\\) has a mutually exclusive set of \\(k\\) as the outcome of a Bernoulli trial. In Equation 3, \\(\\hat{m}\\) is a linear function of \\(s\\) with slope \\(t(\\tilde{n}+k)/d\\). Because the probe speed \\(S\\) is i.i.d., the summation of all relative frequencies for possible occurrences of \\(\\tilde{n}\\) and \\(k\\) by \\(\\hat{m}\\) gives the PDF of \\(\\hat{m}\\); therefore, the PDF of \\(\\hat{m}\\) contains the joint probability function \\(g(s)p^{k}(1-p)^{1-k}\\). In Equations 14 and 15, \\(u\\) substitutes for \\(\\tilde{n}\\). Let \\(x\\) be a nonnegative real number \\(\\left\\{x\\in\\mathbf{R}^{nonneg}\\right\\}\\) and \\(\\delta\\) be an infinitesimal interval. The probability that \\(\\hat{m}\\) takes a value in the interval \\((x,x+\\delta]\\) is calculated by integrating the PDF of \\(\\hat{m}\\) over the interval. From Equation 3, \\(m=0\\) when \\(u+k=0\\); otherwise, the interval of \\(s\\) corresponding to \\((x,x+\\delta]\\) is \\((s,s+\\delta^{\\prime}]=(dx/\\left[t(u+k)\\right],dx/\\left[t(u+k)\\right]+\\delta d /\\left[t(u+k)\\right]]\\), where \\(dx/\\left[t(u+k)\\right]\\) is \\(s\\) as a function of \\(\\hat{m}\\) and \\(d/\\left[t(u+k)\\right]\\) is the reciprocal of the slope of \\(\\hat{m}\\) as a function of \\(s\\) (e.g., Figure 4). However, the interval of \\(s\\) must be constant regardless of \\(\\hat{m}\\) in the PDF of \\(\\hat{m}\\) because \\(\\hat{m}\\) results from \\(S\\), but not vice versa. Therefore, the joint probability of \\(u\\) and \\(k\\), in fact, must be multiplied by \\(d/\\left[t(u+k)\\right]\\), which is the reciprocal of the slope of \\(s\\) as a function of \\(\\hat{m}\\). When \\(S\\) is i.i.d., \\(\\hat{m}\\) is also i.i.d. (Equation 3). Hence, the PDF of \\(\\hat{m}\\) emerges as an \\(m\\)-fold self-convolution of the PDF where \\(m=1\\) (Equation 13). **Corollary 1**.: As \\(m\\) approaches infinity, the shape of \\(f(\\hat{m};m)\\) converges to that of a normal distribution: \\[\\lim_{m\\rightarrow\\infty}f(\\hat{m};m)=N\\left(m,\\dfrac{mt^{2}}{d^{2}}\\int_{0} ^{\\infty}b(s,d,t)g(s)\\mathrm{d}s\\right) \\tag{16}\\] Proof.: Because \\(\\hat{m}\\) is i.i.d., Equation 16 is derived from the classical central limit theorem on lemmata 1 and 2. #### 2.4.1 Example 3 Assuming \\(d=300\\) and \\(t=4\\), Figure 4 plots Equation 3 (i.e., when \\(m=1\\)). The combinations of \\(\\tilde{n}\\) and \\(k\\) fall into an infinite periodic pattern along the \\(s\\)-axis because \\(\\tilde{n}\\) increases towards infinity as \\(s\\) approaches \\(0\\). Because \\(S\\sim g(s)\\) Figure 3: Variance derivation when \\(d=300\\), \\(t=4\\), and \\(S\\sim g(s)\\). we want to take the relative frequency of speed and each \\(k\\) by multiplying the probability mass function (PMF) of \\(Ber(p)\\) by \\(g(s)\\). After this operation, we obtain the overall frequency of the combination of \\(\\tilde{n}\\) and \\(k\\) by \\(s\\) (Figure 5). 
### Optimum Cordon Length

Equation 7 shows that \(d\) determines \(\text{Var}[\hat{m}]\) when \(t\) and \(g(s)\) are already fixed. Considering that \(d\) is often the only parameter that an analyst can control, the art of estimation error minimisation lies in setting a good cordon length \(d\). That said, how long should \(d\) be, and under what conditions? Modelling the relationships between \(\hat{m}\) and the other variables gives us a hint on choosing a good cordon length \(d\).

**Corollary 2**.: Let \(\max(d)\) denote the maximum feasible \(d\) within a given segment. When \(\max(d)\) exists, there can be a cordon length \(d\) shorter than \(\max(d)\) that maximises the precision of estimating \(m\). Such \(d\) can be sought by \(\underset{0<d\leq\max(d)}{\text{argmin}}\)\(obj(d)\), where \(obj(d)\) is an objective function such as the variance-to-mean ratio (VMR)

\[\text{VMR}[\hat{m}]=\frac{\text{Var}[\hat{m}]}{\text{E}[\hat{m}]}=\frac{t^{2}}{d^{2}}\int_{0}^{\infty}b(s,d,t)g(s)\text{d}s \tag{17}\]

or the coefficient of variation (CV)

\[\text{CV}[\hat{m}]=\frac{\sqrt{\text{Var}[\hat{m}]}}{\text{E}[\hat{m}]}=\frac{t}{d}\sqrt{\frac{1}{m}\int_{0}^{\infty}b(s,d,t)g(s)\text{d}s} \tag{18}\]

Proof.: Assume that Corollary 2 is false. In other words, assume that \(\text{CV}[\hat{m}]\) always monotonically decreases as \(d\) increases. When \(m=1\), \(t=4\) and \(S\sim g(s)\) defined by Equations 9-12, \(\text{CV}[\hat{m}]=0.310\) when \(d=150\), whereas \(\text{CV}[\hat{m}]=0.230\) when \(d=110\). Because there is a counterexample to the assumed proposition, Corollary 2 is true.

#### 2.5.1 Example 4

This example provides graphical descriptions of the proof of Corollary 2. Figure 7 displays an example2: \(b(s,d,4)g(s)\) and \(4^{2}/d^{2}\cdot b(s,d,4)g(s)\) as functions of \(s\) and \(d\) when \(S\sim g(s)\). In Figure 7a, \(b(s,d,t)g(s)\) has a periodic pattern along the \(d\)-axis. Figure 7b is an extension of Figure 3c to the \(d\)-axis, where \(b(s,d,t)g(s)\) is scaled by \(t^{2}/d^{2}\) to plot Equation 17 when \(m=1\). Because \(\text{VMR}[\hat{m}]\) is inversely proportional to \(d^{2}\), a larger \(d\) tends to result in a better precision in \(\hat{m}\). This is intuitive considering \(\text{Var}[\hat{m}]\) arises from the discreteness of the observed number of records. The ratio of the additional number of records \(K\), a Bernoulli random variable, to the total number of records \(n\) decreases as the cordon captures more data points, owing to a larger \(d\).

Footnote 2: Figure 7 plots the function given this specific combination of \(t\) and \(g(s)\) and will look different with a different set of input values.

However, \(\text{VMR}[\hat{m}]\) or \(\text{CV}[\hat{m}]\) does not always exhibit a monotonic decrease over \(d\). As seen in Figure 8a, the non-monotonicity of \(\text{CV}[\hat{m}]\) as a function of \(d\) indicates the potential existence of a \(d\) that locally minimises the VMR or CV when the maximum \(d\) exists. When some road geometry dictates the maximum \(d\) to be 150 m (e.g., a 150-m road segment immediately bounded by intersections, beyond which traffic volumes can be different) in the condition of Figure 8a, it would be better to set a 110-m \(d\) (\(\text{CV}=23.048\) %) than to set a 150-m \(d\) (\(\text{CV}=30.999\) %). Figure 8b plots \(\text{CV}[\hat{m}]\) as a function of \(t\) when \(d=300\). \(\text{CV}[\hat{m}]\) tends to increase as \(t\) increases, but this relationship is not always monotonic.
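In practice, the search suggested by Corollary 2 can be automated. The sketch below is our illustration (it reuses `var_mhat` from the earlier sketch): it scans candidate cordon lengths up to \(\max(d)\) and returns the one with the lowest CV.

```julia
# Grid search over cordon lengths per Corollary 2, with Equation 18 as the objective.
cv_mhat(m, d, t) = sqrt(var_mhat(m, d, t)) / m

function best_cordon(maxd; m = 1, t = 4.0, step = 1.0)
    dgrid = step:step:maxd
    cvs = [cv_mhat(m, d, t) for d in dgrid]
    i = argmin(cvs)
    return dgrid[i], cvs[i]
end

println(best_cordon(150.0))   # the selected d should beat d = 150 m (cf. proof of Corollary 2)
```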
## 3 Simulations

To supplement discussions, we illustrate the proposed model through numerical and microscopic traffic simulations.

### Particle Simulations

We compared numerically simulated distributions of \(\hat{m}\) with the theoretical distributions of \(\hat{m}\).

#### 3.1.1 Method

In Julia 1.8.5, the number of probe footprints was modelled as a series of particles with independent uniform linear motion along a road segment. In this experiment, the emergence of the Bernoulli distributions (Equation 5) was assumed to be trivial. The Distributions.jl package was used to generate statistical distributions under the following two scenarios: scenario 1 (\(d=300\) and \(t=4\)) and scenario 2 (\(d=40\) and \(t=1\)). In each scenario, \(m\in\{1,2,4,8\}\) and \(S\sim g(s)\) as shown in Figure 3a. We performed one million simulations using Equation 1 for each combination of scenario and \(m\).

#### 3.1.2 Results

Table 1 exhibits the descriptive statistics of the simulations and the theory, while Figure 9 shows the histograms of simulated \(\hat{m}\) and the theoretical PDFs of \(\hat{m}\) calculated by Equation 13. The simulation results showed a good match in descriptive statistics between simulated and theoretical values. As seen in Figure 9, \(\hat{m}\) is distributed around \(m\), but the PDFs are not necessarily line-symmetric with respect to \(\hat{m}=m\). The PDFs approached normal distributions as \(m\) increased.
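Readers who wish to replicate the particle experiment can follow the same logic with the compact sketch below. It is our re-implementation rather than the original code; it reuses `count_footprints`, `w`, and `comps` from the earlier sketches and samples speeds by first picking a mixture component.

```julia
# Compact particle simulation of m̂ for scenario 1 (d = 300, t = 4).
using Random, Statistics

function draw_speed()
    r = rand() * sum(w)                       # guard: weights may not sum exactly to 1
    j = searchsortedfirst(cumsum(w), r)       # pick a mixture component
    return rand(comps[j])                     # sample its truncated normal
end

function simulate_mhat(m, d, t)
    total = 0.0
    for _ in 1:m
        s = draw_speed()
        total += s * count_footprints(s, d, t)
    end
    return t / d * total                      # Equation 1
end

Random.seed!(1)
for m in (1, 2, 4, 8)
    est = [simulate_mhat(m, 300.0, 4.0) for _ in 1:100_000]
    println((m, round(mean(est), digits = 3), round(var(est), digits = 3)))
end
# Means should sit near m and variances near 0.019·m, as in Table 1, scenario 1.
```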
\begin{table}
\begin{tabular}{c c c c c c}
\hline \hline
Scenario & \(m\) & Item & E\([\hat{m}]\) & Var\([\hat{m}]\) & CV\([\hat{m}]\) \\
\hline
1 & 1 & Simulated & 1.000 & 0.019 & 0.137 \\
 &  & Theoretical & 1 & 0.019 & 0.137 \\
 & 2 & Simulated & 2.000 & 0.037 & 0.097 \\
 &  & Theoretical & 2 & 0.037 & 0.097 \\
 & 4 & Simulated & 4.000 & 0.075 & 0.068 \\
 &  & Theoretical & 4 & 0.075 & 0.068 \\
 & 8 & Simulated & 8.000 & 0.150 & 0.048 \\
 &  & Theoretical & 8 & 0.149 & 0.048 \\
2 & 1 & Simulated & 1.000 & 0.088 & 0.297 \\
 &  & Theoretical & 1 & 0.088 & 0.297 \\
 & 2 & Simulated & 2.000 & 0.177 & 0.210 \\
 &  & Theoretical & 2 & 0.177 & 0.210 \\
 & 4 & Simulated & 4.000 & 0.353 & 0.148 \\
 &  & Theoretical & 4 & 0.353 & 0.149 \\
 & 8 & Simulated & 7.999 & 0.706 & 0.105 \\
 &  & Theoretical & 8 & 0.706 & 0.105 \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Descriptive Statistics of \(\hat{m}\) in Simulations and Theory

Figure 8: CV\([\hat{m}]\) as a function of \(d\) and \(t\) when the other variables are fixed.

Figure 9: Histograms of Simulated \(\hat{m}\) and PDFs of \(\hat{m}\).

### Microscopic Traffic Simulations

#### 3.2.1 Method

We used PTV Vissim 11.00-02 to microscopically simulate vehicular traffic on one-lane straight road links 300 m in length. There were 730 links, each carrying 500 vehicles over 24 h. Vehicle speed distributions were \(N(26.82,5.00)\) for 365 links and \(N(13.41,5.00)\) for the remaining 365 links (Oppenlander, 1963). The recording interval was \(t=1\). Thirty-four annual average daily traffic (AADT) values of fewer than 1,000 recorded in Hudspeth County, Texas, in 2021 (Texas Department of Transportation, 2022) were used as the ground truth for average daily traffic (ADT) (Table 2). The probe penetration rate was assumed to be 2 % everywhere, and the observation period was seven days; therefore, \(m\approx(\text{ADT})\times 0.02\times 7\) in this experiment. A random integer between 5 and 70 was set as \(d\) at each location. Table 2 summarises the modelled speed distribution, ADT, \(m\), \(d\), and theoretical VMR\([\hat{m}]\) (Equation 17) by site. Based on \(m\), virtual probes were randomly assigned to vehicles that had already been simulated. For instance, at site 5121, one of the 365 links simulating the speed of \(N(13.41,5.00)\) was randomly chosen to represent this location. Within the link, a virtual cordon (\(d=41\)) was set at a random location, and trajectories from 75 randomly chosen vehicles were captured within the virtual cordon to compute \(\hat{m}\) using Equation 1. This procedure was performed at all the sites in each trial.

To estimate ADTs based on \(\hat{m}\), the experiment assumed that two out of the 34 ADTs were known (e.g., when traffic counters existed) and calculated the ratios of ADT to \(\hat{m}\) at these sites. In each trial, regression models were created using Julia's GLM.jl package for all possible combinations (i.e., \({}_{34}\)C\({}_{2}=561\)) of "known" ADT counts. For every combination of known ADTs, an ordinary least squares (OLS) regression and a weighted least squares (WLS) regression using the reciprocal of VMR\([\hat{m}]\) (Equation 17) as weights were performed while assuming the intercept was 0. We performed 2,023 trials. In each trial, the MAPE was calculated among the 32 estimated ADTs (i.e., excluding the pair used to develop each regression model) against the ground truth ADTs with every possible combination of "known" ADTs. The mean MAPE of the 561 combinations was considered as the average MAPE of the trial. If our model is correct, linear regressions on observed \(\hat{m}\) inversely weighted by VMR\([\hat{m}]\) should estimate the traffic volumes better than OLS does.
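The weighting step can be illustrated with a closed-form, no-intercept fit. The numbers below are placeholders rather than the Hudspeth County data, and the study's actual fits used GLM.jl; this is only a sketch of the idea.

```julia
# OLS vs. variance-weighted (1/VMR) no-intercept calibration of ADT ≈ β·m̂.
ols_slope(x, y)      = sum(x .* y) / sum(x .^ 2)
wls_slope(x, y, wts) = sum(wts .* x .* y) / sum(wts .* x .^ 2)

mhat = [1.2, 3.9, 8.1, 15.8]           # estimated probe volumes at four sites (toy values)
adt  = [90.0, 280.0, 600.0, 1100.0]    # "known" ADTs at those sites (toy values)
vmr  = [0.12, 0.05, 0.03, 0.02]        # VMR[m̂] per site from Equation 17 (toy values)

β_ols = ols_slope(mhat, adt)
β_wls = wls_slope(mhat, adt, 1 ./ vmr)
println((β_ols, β_wls))                # calibration factors: estimated ADT = β · m̂
```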
#### 3.2.2 Results

The mean average coefficients of determination were \(R^{2}=0.970\) for OLS and \(R^{2}=0.986\) for WLS. The mean average MAPE of OLS was 0.097, whereas that of WLS was 0.086. Although both methods estimated ADTs well, the results should be interpreted relatively. Figure 10 shows sorted differences in average MAPE between OLS and WLS, where positive values indicate improvements in WLS (i.e., the average MAPE of WLS subtracted from that of OLS). The WLS yielded a better average MAPE than OLS in 2,001 out of the 2,023 (98.91 %) trials. As we had predicted, the results exemplified the efficacy of Equation 17 for developing a traffic volume estimation model.

## 4 Implication of the Model

This paper documented the distribution of \(\hat{m}\), the probe count estimated from point probe location data recorded at fixed intervals. The final section discusses the model's implications regarding theory, applications, and opportunities.

Figure 10: Sorted differences in MAPE between OLS and WLS. Positive values indicate improvements in WLS.

### Model Characteristics

Practitioners can use \(\hat{m}\) as an unbiased estimator of probe traffic volumes in any timeframe. The more probes are present, the more closely the distribution of \(\hat{m}\) can be approximated by a normal distribution. The estimation precision measured as \(\text{CV}[\hat{m}]\) is inversely proportional to the square root of the actual probe volume \(m\), roughly proportional to the recording interval \(t\), and roughly inversely proportional to the cordon length \(d\) (Equation 18). In other words, the higher the probe volume, the more precise the volume estimates are likely to be, while the degree of marginal improvement decreases as the traffic volume increases. A lower probe speed also tends to result in better precision when other conditions remain the same. In reality, the speed distribution \(g(s)\) can change along with \(d\) unless \(S\) truly follows uniform motion; therefore, the theoretical optimal cordon length \(d\) should be considered a suggestion rather than a perfect means of optimisation. It is therefore a reasonable strategy to set the longest possible \(d\) that fits the road segment carrying a single probe traffic volume when an analyst does not know the probe data recording interval \(t\) or the speed distribution \(g(s)\). However, the relationship between \(d\) and \(\text{CV}[\hat{m}]\) is not always monotonic. Depending on the recording interval and speed distribution, there is a local optimum cordon length \(d\) that maximises the precision of \(\hat{m}\) estimation (Figure 8a). Although the authors are unaware of the exact data processing methods used in proprietary traffic volume estimation software, the estimation precision is likely to improve by setting an optimal cordon length \(d\) in these products if they inherently rely on probe point data with speed information.

In developing a traffic volume estimation model, calibration of \(\hat{m}\) against known traffic volumes is required to convert the values into traffic volume estimates.
Because probes, in reality, are not likely to be distributed homogeneously among road users, this procedure ultimately determines traffic volume estimation accuracy. In this process, modellers can use the theoretical variance to effectively weight \(\hat{m}\) (Figure 10). Knowledge of how the distribution emerges can improve traffic volume estimation models, as shown in Figure 8. Our method equips modellers with the capability to incorporate even low traffic volumes into their calibration models, as the distribution of \(\hat{m}\) cannot always be approximated by a normal distribution when the actual probe volume is low. The theoretical PDF of the estimated probe traffic volume allows modellers or analysts to perform interval estimation on \(m\). Depending on the calibration model, probe traffic volume estimates with confidence intervals (CIs) can also be used to improve the calibration accuracy against known traffic volumes.

Practitioners should be aware of some other elements when applying the proposed method to probe point location data. First, spatial characteristics should be considered when drawing virtual cordons. For example, a modeller must pay attention to grade-separated facilities, tunnels, crosswalks, sidewalks, and cell phone location data from flying objects. Sometimes, probe data need to be coded to avoid capturing location data from unintended road users, as we truncated high speeds in our example calculation. With real traffic, the variance of \(\hat{m}\) can become larger than the theoretical variance because GNSS is not free from systematic and random errors (Markovic et al., 2019). Although centimetre-level positioning is available with some GNSS (Choy et al., 2015), most GNSS augmentations are associated with horizontal errors varying up to 3-15 meters (Merry and Bettinger, 2019; Zandbergen and Barbeau, 2011). As a result, speed measurement is also associated with some errors (Guido et al., 2014). Because the speed distribution plays a crucial role in estimating traffic volumes in the proposed method, it is essential to make an effort to reduce speed bias (Ahsani et al., 2019) in the data acquisition process.

#### 4.1.1 Limitations

Although we addressed the theoretical aspect of traffic volume estimation from probe point data, the proposed model is not free from limitations in practical settings. One limitation is that our model assumes i.i.d. speeds and uniform motion among the probes. As actual traffic conditions may not necessarily suit these assumptions, modellers and analysts should recognise this limitation. With careful selection of cordon locations (both spatially and temporally), biases from these assumptions can be weakened. In traffic volume estimation, another limitation of the model is that the PDF formulation (Equation 13) of \(\hat{m}\) includes the true probe volume \(m\) itself. Although this does not prevent the computation of \(\hat{m}\) (Equation 1) or \(\text{VMR}[\hat{m}]\) (Equation 17), this recursion is sometimes not ideal, because the probe volume is usually estimated when the probe volume \(m\) is unknown. In this context, this study is descriptive and may not be a silver bullet for issues that some readers might have expected it to solve. Nevertheless, the theoretical elements of the estimated probe traffic volume still contribute to the lineage of traffic volume estimation research in that we described how the distribution of \(\hat{m}\) emerges.
### Applications The proposed method can contribute to various aspects of traffic volume estimation. First, it allows agencies to use marginal point probe data without pseudonyms or granular timestamps. For example, they can enhance the quality of traffic volume estimation by utilising sparsely recorded probe data, which would have been ignored without our method. Depending on how much marginal probe point data are available compared with the line data already available, probe location data without pseudonyms can be a sleeping lion. Furthermore, the model predicts \"the economy of scale\", encompassing probe data valuation. A higher recording frequency (\\(\\because\\) Equation 18) and homogeneity make the traffic volume estimation more precise and accurate, respectively. As a result, probe location data with a high recording frequency and homogeneity are more valuable for traffic volume estimation. Thus, agencies could perform cost-benefit analyses based on the specific goals they want to achieve. Another economy of scale arises from the synergistic effect of acquiring traffic counts at fixed locations. Probe traffic volumes can be used to estimate traffic volumes at many locations. This fact does not smear the importance of fixed-location traffic counts, because it is impossible to calibrate the values against traffic volumes without ground truths. A higher density of reliable traffic count data from conventional devices can enhance the proposed method by providing additional calibration points. Therefore, governments investing in continuous traffic monitoring infrastructures can expect an even larger return on investment (ROI) than expected. As reported by Turner (2021), the evaluation of big data quality and valuation has been of concern among transportation professionals, as machine learning models can quickly become black boxes for data users, including decision-makers. In addition to the data availability enhancement, the distribution of \\(\\hat{m}\\) can be used to calculate the valuation of probe point data. From Equation 18, it, for example, may be reasonable to formulate the value of point probe data somewhat inversely proportional to the data recording interval \\(t\\). ### Opportunities The proposed technique can positively impact society, as transportation systems are woven into daily human activities. On a global scale, traffic volume estimations based on probe point data can positively impact agencies and nations with limited financial and human resources (Lord et al., 2003; Yannis et al., 2014). In particular, the method will be useful for low-volume rural roads, where traditional or passive traffic recording tools may not be cost efficient (Das, 2021). Because remote highways tend to have long uninterrupted segments, drawing long virtual cordons would help transportation professionals estimate probe traffic volumes quite precisely. Such traffic volume information along rural highways can be used to develop safety performance functions (SPFs) more thoroughly and continuously than ever before (Tsapakis et al., 2021). Because traffic volume estimation using probe data is in its infancy, there are many research opportunities in this field. Future research related to traffic volume estimation from probe point data would include the relaxation of the i.i.d. 
and uniform motion constraints in the distribution of \\(\\hat{m}\\), the development of universal indices to describe the homogeneity of probe data, a framework to evaluate the transferability of the data, cost-benefit analyses of probe location data, and real-time crash hotspot identification. Our model paves the way for unleashing probe point data as a means of social good. In the 1940s, Greenshields (1947) analysed traffic using a series of aerial photographs taken at fixed intervals. Decades later, we have opportunities to improve the quality of transportation through \"snapshots\" of probes recorded at a fixed interval but with unprecedented scalability. Interorganizational collaborations, including cooperation between the public and private sectors, will be crucial in bringing technology to life to tackle various societal challenges. ## Acknowledgements We would like to express our sincere gratitude to Traf-IQ, Inc. for providing the first author with access to microscopic traffic simulation software. The first author would like to express gratitude to Dr. Daniel Romero at the University of Agder for his valuable advice in the field of statistics. ## Funding Source Declaration This research was funded in part by the A.P. and Florence Wiley Faculty Fellow provided by the College of Engineering at Texas A&M University. ## References * Ahsani et al. (2019) Ahsani, V., M. Amin-Naseri, S. Knickerbocker, and A. Sharma (2019). Quantitative Analysis of Probe Data Characteristics: Coverage, Speed Bias and Congestion Detection Precision. _Journal of Intelligent Transportation Systems__23_(2), 103-119. [https://doi.org/10.1080/15472450.2018.1502667](https://doi.org/10.1080/15472450.2018.1502667). * Alexander et al. (2005) Alexander, S. M., N. M. Waters, and P. C. Paquet (2005). Traffic Volume and Highway Permeability for a Mammalian Community in the Canadian Rocky Mountains. _The Canadian Geographer / Le Geographe canadien__49_(4), 321-331. [https://doi.org/10.1111/j.0008-3658.2005.00099.x](https://doi.org/10.1111/j.0008-3658.2005.00099.x). * Apronti et al. (2016) Apronti, D., K. Kasibati, K. Gerow, and J. J. Hepner (2016). Estimating Traffic Volume on Wyoming Low Volume Roads Using Linear and Logistic Regression Methods. _Journal of Traffic and Transportation Engineering (English Edition)__3_(6), 493-506. [https://doi.org/10.1016/j.jtte.2016.02.004](https://doi.org/10.1016/j.jtte.2016.02.004). * Barrios and Casburn (2019) Barrios, J. and R. Casburn (2019). Estimating Turning Movement Counts from Probe Data. Technical report, Kittleson & Associates, Inc., Portland, OR. Accessed March 12, 2023, [https://www.kittelson.com/wp-content/uploads/2019/11/Estimating-Turning-Movement-Counts-from-Probe-Data_Kittelson.pdf](https://www.kittelson.com/wp-content/uploads/2019/11/Estimating-Turning-Movement-Counts-from-Probe-Data_Kittelson.pdf). * Caceres et al. (2008) Caceres, N., J. Wideberg, and F. G. Benitez (2008). Review of Traffic Data Estimations Extracted from Cellular Networks. _IET Intelligent Transport Systems__2_(3), 179-192. [https://doi.org/10.1049/iet-its:20080003](https://doi.org/10.1049/iet-its:20080003). * Chang and Cheon (2019) Chang, H.-h. and S.-h. Cheon (2019). The Potential Use of Big Vehicle GPS Data for Estimations of Annual Average Daily Traffic for Unmeasured Road Segments. _Transportation__46_(3), 1011-1032. [https://doi.org/10.1007/s11116-018-9903-6](https://doi.org/10.1007/s11116-018-9903-6). * Chen et al. (2019) Chen, P., S. Hu, Q. Shen, H. Lin, and C. Xie (2019). 
Estimating Traffic Volume for Local Streets with Imbalanced Data. _Transportation Research Record__2673_(3), 598-610. [https://doi.org/10.1177/036119811983347](https://doi.org/10.1177/036119811983347). * Choy et al. (2015) Choy, S., K. Harima, Y. Li, M. Choudhury, C. Rizos, Y. Wakabayashi, and S. Kogure (2015). GPS Precise Point Positioning with the Japanese Quasi-Zenith Satellite System LEX Augmentation Corrections. _Journal of Navigation__68_(4), 769-783. [https://doi.org/10.1017/S0373463314000915](https://doi.org/10.1017/S0373463314000915). * Codjoe et al. (2020) Codjoe, J., R. Thapa, and A. S. Yeboah (2020, December). Exploring Non-Traditional Methods of Obtaining Vehicle Volumes. Technical Report FHWA/LA.20/635, Louisiana Transportation Research Center, Baton Rouge, LA. [https://rosap.ntl.bts.gov/view/dot/58337](https://rosap.ntl.bts.gov/view/dot/58337). * Das (2021) Das, S. (2021). Traffic Volume Prediction on Low-volume Roadways: A Cubist Approach. _Transportation Planning and Technology__44_(1), 93-110. [https://doi.org/10.1080/03081060.2020.1851452](https://doi.org/10.1080/03081060.2020.1851452). * de Montjoye et al. (2013) de Montjoye, Y.-A., C. A. Hidalgo, M. Verleysen, and V. D. Blondel (2013). Unique in the crowd: The privacy bounds of human mobility. _Scientific Reports__3_(1), 1376. [https://doi.org/10.1038/srep01376](https://doi.org/10.1038/srep01376). * El-Basyouny and Sayed (2010) El-Basyouny, K. and T. Sayed (2010). Safety Performance Functions with Measurement Errors in Traffic Volume. _Safety Science__48_(10), 1339-1344. [https://doi.org/10.1016/j.ssci.2010.05.005](https://doi.org/10.1016/j.ssci.2010.05.005). * Federal Highway Administration (2016) Federal Highway Administration (2016, April). Federal Register Volume 81, Number 50. [https://www.govinfo.gov/content/pkg/FR-2016-03-15/html/2016-05190.htm](https://www.govinfo.gov/content/pkg/FR-2016-03-15/html/2016-05190.htm). * Fish et al. (2021) Fish, J. K., S. E. Young, A. Wilson, and B. Borlaug (2021, September). Validation of Non-Traditional Approaches to Annual Average Daily Traffic (AADT) Volume Estimation. Technical Report FHWA-PL-21-033, National Renewable Energy Laboratory, Golden, CO. [https://rosap.ntl.bts.gov/view/dot/64900](https://rosap.ntl.bts.gov/view/dot/64900). * Greenshields (1934) Greenshields, B. D. (1934). The Photographic Method of Studying Traffic Behavior. In _Proceedings of the Thirteenth Annual Meeting of the Highway Research Board Held at Washington, D.C. December 7-8, 1933. Part I: Reports of Research Committees and Papers_, Volume 13, pp. 382-396. Highway Research Board. [https://onlinepubs.trb.org/Onlinepubs/hrbproceedings/13/13-026.pdf](https://onlinepubs.trb.org/Onlinepubs/hrbproceedings/13/13-026.pdf). * Greenshields (1947) Greenshields, B. D. (1947). The Potential Use of Aerial Photographs in Traffic Analysis. In _Proceedings of the Twenty-Seventh Annual Meeting of the Highway Research Board Held at Washington, D.C. December 2-5, 1947_, Washington, D.C., pp. 291-297. Highway Research Board. [https://onlinepubs.trb.org/Onlinepubs/hrbproceedings/27/27-028.pdf](https://onlinepubs.trb.org/Onlinepubs/hrbproceedings/27/27-028.pdf). * Guido et al. (2014) Guido, G., V. Gallelli, F. Saccomanno, A. Vitale, D. Rogano, and D. Festa (2014). Treating Uncertainty in the Estimation of Speed from Smartphone Traffic Probes. _Transportation Research Part C: Emerging Technologies__47_, 100-112. [https://doi.org/10.1016/j.trc.2014.07.003](https://doi.org/10.1016/j.trc.2014.07.003). * Harrison et al. 
(2020) Harrison, G., S. M. Grant-Muller, and F. C. Hodgson (2020). New and Emerging Data Forms in Transportation Planning and Policy: Opportunities and Challenges for \"Track and Trace\" Data. _Transportation Research Part C: Emerging Technologies__117_, 102672. [https://doi.org/10.1016/j.trc.2020.102672](https://doi.org/10.1016/j.trc.2020.102672). Jessberger, S., R. Krile, J. Schroeder, F. Todt, and J. Feng (2016). Improved Annual Average Daily Traffic Estimation Processes. _Transportation Research Record__2593_(1), 103-109. [https://doi.org/10.3141/2593-13](https://doi.org/10.3141/2593-13). * Krile and Schroeder (2016) Krile, R. and J. Schroeder (2016, February). Assessing Roadway Traffic Count Duration and Frequency Impacts on Annual Average Daily Traffic Estimation: Evaluating Special Event, Recreational Travel, and Holiday Traffic Variability. Technical Report FHWA-PL-16-016, Battelle, Columbus, OH. [https://rosap.ntl.bts.gov/view/dot/58038](https://rosap.ntl.bts.gov/view/dot/58038). * Final Report A. Technical Report FHWA-PL-021-040, Battelle, Columbus, OH. [https://rosap.ntl.bts.gov/view/dot/64901](https://rosap.ntl.bts.gov/view/dot/64901). * Lord et al. (2003) Lord, D., H. M. Abdou, A. N'Zue, G. Dionne, and C. Laberge-Nadeau (2003). Traffic Safety Diagnostics and Application of Countermeasures for Rural Roads in Burkina Faso. _Transportation Research Record__1846_(1), 39-43. [https://doi.org/10.3141/1846-07](https://doi.org/10.3141/1846-07). * Luria et al. (1990) Luria, M., R. Weisinger, and M. Peleg (1990). CO and NOx Levels at the Center of City Roads in Jerusalem. _Atmospheric Environment. Part B. Urban Atmosphere__24_(1), 93-99. [https://doi.org/10.1016/0957-1272](https://doi.org/10.1016/0957-1272)(90)90014-L. * Macfarlane and Copley (2020) Macfarlane, G. S. and M. J. Copley (2020, December). A Synthesis of Passive Third-Party Data Sets Used for Transportation Planning. Technical Report UT-20.20, Brigham Young University, Provo, UT. [https://rosap.ntl.bts.gov/view/dot/54890](https://rosap.ntl.bts.gov/view/dot/54890). * Markovic et al. (2019) Markovic, N., P. Sekula, Z. Vander Laan, G. Andrienko, and N. Andrienko (2019). Applications of Trajectory Data From the Perspective of a Road Transportation Agency: Literature Review and Maryland Case Study. _IEEE Transactions on Intelligent Transportation Systems__20_(5), 1858-1869. [https://doi.org/10.1109/TITS.2018.2843298](https://doi.org/10.1109/TITS.2018.2843298). * Meng et al. (2017) Meng, C., X. Yi, L. Su, J. Gao, and Y. Zheng (2017). City-Wide Traffic Volume Inference with Loop Detector Data and Taxi Trajectories. In _Proceedings of the 25th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems_, SIGSPATIAL '17, New York, NY. Association for Computing Machinery. [https://doi.org/10.1145/3139958.3139984](https://doi.org/10.1145/3139958.3139984). * Merry and Bettinger (2019) Merry, K. and P. Bettinger (2019). Smartphone GPS Accuracy Study in an Urban Environment. _PLOS ONE__14_(7), e0219890. [https://doi.org/10.1371/journal.pone.0219890](https://doi.org/10.1371/journal.pone.0219890). * Mitra and Washington (2012) Mitra, S. and S. Washington (2012). On the Significance of Omitted Variables in Intersection Crash Modeling. _Accident Analysis & Prevention__49_, 439-448. [https://doi.org/10.1016/j.aap.2012.03.014](https://doi.org/10.1016/j.aap.2012.03.014). * Okamoto et al. (1990) Okamoto, S., K. Kobayashi, N. Ono, K. Kitabayashi, and N. Katatani (1990). 
Comparative Study on Estimation Methods for NOx Emissions from a Roadway. _Atmospheric Environment. Part A. General Topics__24_(6), 1535-1544. [https://doi.org/10.1016/0960-1686](https://doi.org/10.1016/0960-1686)(90)90062-R. * Oppenlander (1963) Oppenlander, J. C. (1963). Sample Size Determination for Spot-Speed Studies at Rural, Intermediate, and Urban Locations. _Highway Research Record: Journal of the Highway Research Board_ (35), 78-80. [https://onlinepubs.trb.org/Onlinepubs/hrr/1963/35/35-004.pdf](https://onlinepubs.trb.org/Onlinepubs/hrr/1963/35/35-004.pdf). * Park et al. (2010) Park, B.-J., Y. Zhang, and D. Lord (2010). Bayesian Mixture Modeling Approach to Account for Heterogeneity in Speed Data. _Transportation Research Part B: Methodological__44_(5), 662-673. [https://doi.org/10.1016/j.trb.2010.02.004](https://doi.org/10.1016/j.trb.2010.02.004). * Pulugurtha and Mathew (2021) Pulugurtha, S. S. and S. Mathew (2021). Modeling AADT on Local Functionally Classified Roads Using Land Use, Road Density, and Nearest Nonlocal Road Data. _Journal of Transport Geography__93_, 103071. [https://doi.org/10.1016/j.jtrangeo.2021.103071](https://doi.org/10.1016/j.jtrangeo.2021.103071). * Ritchie (1986) Ritchie, S. G. (1986). Statistical Approach to Statewide Traffic Counting. _Transportation Research Record__1090_, 14-21. [https://onlinepubs.trb.org/Onlinepubs/trr/1986/1090/1090-003.pdf](https://onlinepubs.trb.org/Onlinepubs/trr/1986/1090/1090-003.pdf). * Roll (2019) Roll, J. (2019). Evaluating Streetlight Estimates of Annual Average Daily Traffic in Oregon. Technical Report OR-RD-19-11, Oregon Department of Transportation, Salem, OR. [https://www.oregon.gov/odot/Programs/ResearchDocuments/StreetlightEvaluation.pdf](https://www.oregon.gov/odot/Programs/ResearchDocuments/StreetlightEvaluation.pdf). * Schewel et al. (2021) Schewel, L., S. Co, C. Willoughby, L. Yan, N. Clarke, and J. Wergin (2021, September). Non-Traditional Methods to Obtain Annual Average Daily Traffic (AADT). Technical Report FHWA-PL-21-030, StreetLight Data, San Francisco, CA. [https://rosap.ntl.bts.gov/view/dot/64897](https://rosap.ntl.bts.gov/view/dot/64897). * Seiler and Helldin (2006) Seiler, A. and J. O. Helldin (2006). Mortality in Wildlife Due to Transportation. In J. Davenport and J. L. Davenport (Eds.), _The Ecology of Transportation: Managing Mobility for the Environment_, pp. 165-189. Dordrecht, Netherlands: Springer Netherlands. [https://doi.org/10.1007/1-4020-4504-2_8](https://doi.org/10.1007/1-4020-4504-2_8). * S. S. * Sekula et al. (2018) Sekula, P., N. Markovic, Z. Vander Laan, and K. F. Sadabadi (2018). Estimating Historical Hourly Traffic Volumes via Machine Learning and Vehicle Probe Data: A Maryland Case Study. _Transportation Research Part C: Emerging Technologies__97_, 147-158. [https://doi.org/https://doi.org/10.1016/j.trc.2018.10.012](https://doi.org/https://doi.org/10.1016/j.trc.2018.10.012). * Selby and Kockelman (2013) Selby, B. and K. M. Kockelman (2013). Spatial Prediction of Traffic Levels in Unmeasured Locations: Applications of Universal Kriging and Geographically Weighted Regression. _Journal of Transport Geography__29_, 24-32. [https://doi.org/10.1016/j.jtrangeo.2012.12.009](https://doi.org/10.1016/j.jtrangeo.2012.12.009). * Sfyridis and Agnolucci (2020) Sfyridis, A. and P. Agnolucci (2020). Annual Average Daily Traffic Estimation in England and Wales: An application of Clustering and Regression Modelling. _Journal of Transport Geography__83_, 102658. 
[https://doi.org/10.1016/j.trangeo.2020.102658](https://doi.org/10.1016/j.trangeo.2020.102658). * Sun and Das (2015) Sun, X. and S. Das (2015, July). Developing a Method for Estimating AADT on All Louisiana Roads. Technical Report FHWA/LA.14/548, University of Louisiana at Lafayette, Lafayette, LA. [https://rosap.ntl.bts.gov/view/dot/29681](https://rosap.ntl.bts.gov/view/dot/29681). * Sun et al. (2013) Sun, Z., B. Zan, X. J. Ban, and M. Gruteser (2013). Privacy Protection Method for Fine-grained Urban Traffic Modeling Using Mobile Sensors. _Transportation Research Part B: Methodological__56_, 50-69. [https://doi.org/10.1016/j.trb.2013.07.010](https://doi.org/10.1016/j.trb.2013.07.010). * Texas Department of Transportation (2022, December). TxDOT AADT Annuals. Accessed March 8, 2023, [https://gis-txdot.opendata.arcgis.com/datasets/txdot-aadt-annuals/explore](https://gis-txdot.opendata.arcgis.com/datasets/txdot-aadt-annuals/explore). * Tsapakis et al. (2020) Tsapakis, I., L. Cornejo, and A. Sanchez (2020). Accuracy of Probe-Based Annual Average Daily Traffic (AADT) Estimates in Border Regions. Technical report, Texas A&M Transportation Institute, El Paso, Texas. [https://static.tti.tamu.edu/tti.tamu.edu/documents/TTI-2020-1.pdf](https://static.tti.tamu.edu/tti.tamu.edu/documents/TTI-2020-1.pdf). * Tsapakis et al. (2021) Tsapakis, I., S. Das, A. Khodadadi, D. Lord, J. Morris, and E. Li (2021, March). Use of Disruptive Technologies to Support Safety Analysis and Meet New Federal Requirements. Technical report, Texas A&M Transportation Institute, College Station, TX. [https://safed.vti.vt.edu/wp-content/uploads/2021/04/Final-Version-04-113-Use-of-Disruptive-Technologies-to-Support-Safety-Analysis-and-Meet-New-Federal-Requirements.pdf](https://safed.vti.vt.edu/wp-content/uploads/2021/04/Final-Version-04-113-Use-of-Disruptive-Technologies-to-Support-Safety-Analysis-and-Meet-New-Federal-Requirements.pdf). * Tsapakis et al. (2021) Tsapakis, I., S. Turner, P. Koeneman, and P. R. Anderson (2021, September). Independent Evaluation of a Probe-Based Method to Estimate Annual Average Daily Traffic Volume. Technical Report FHWA-PL-21-032, Texas A&M Transportation Institute, College Station, TX. [https://rosap.ntl.bts.gov/view/dot/64899](https://rosap.ntl.bts.gov/view/dot/64899). * Turner et al. (2020) Turner, S., I. Tsapakis, and P. Koeneman (2020, November). Evaluation of StreetLight Data's Traffic Count Estimates From Mobile Device Data. Technical Report MN 2020-30, Texas A&M Transportation Institute, College Station, TX. [https://rosap.ntl.bts.gov/view/dot/57948](https://rosap.ntl.bts.gov/view/dot/57948). * Turner (2021) Turner, S. M. (2021). Making the Most of Big Data and Data Analytics. _ITE Journal__91_(2), 24-26. * Yang et al. (2020) Yang, H., M. Cetin, and Q. Ma (2020, March). Guidelines for Using StreetLight Data for Planning Tasks. Technical Report FHWA/VTRC 20-R23, Virginia Transportation Research Council, Charlottesville, VA. [https://rosap.ntl.bts.gov/view/dot/55501](https://rosap.ntl.bts.gov/view/dot/55501). * Yannis et al. (2014) Yannis, G., E. Papadimitriou, and K. Folla (2014). Effect of GDP Changes on Road Traffic Fatalities. _Safety Science__63_, 42-49. [https://doi.org/10.1016/j.ssci.2013.10.017](https://doi.org/10.1016/j.ssci.2013.10.017). * Zandbergen and Barbeau (2011) Zandbergen, P. A. and S. J. Barbeau (2011). Positional Accuracy of Assisted GPS Data from High-sensitivity GPS-enabled Mobile Phones. _Journal of Navigation__64_(3), 381-399. 
[https://doi.org/10.1017/s0373463311000051](https://doi.org/10.1017/s0373463311000051). * Zarei and Hellinga (2022) Zarei, M. and B. Hellinga (2022). Method for Estimating the Monetary Benefit of Improving Annual Average Daily Traffic Accuracy in the Context of Road Safety Network Screening. _Transportation Research Record_, 03611981221115720. [https://doi.org/10.1177/03611981221115720](https://doi.org/10.1177/03611981221115720). * Zhan et al. (2017) Zhan, X., Y. Zheng, X. Yi, and S. V. Ukkusuri (2017). Citywide Traffic Volume Estimation Using Trajectory Data. _IEEE Transactions on Knowledge and Data Engineering__29_(2), 272-285. [https://doi.org/10.1109/TKDE.2016.2621104](https://doi.org/10.1109/TKDE.2016.2621104). * Zhang and Chen (2020) Zhang, X. and M. Chen (2020). Enhancing Statewide Annual Average Daily Traffic Estimation with Ubiquitous Probe Vehicle Data. _Transportation Research Record__2674_(9), 649-660. [https://doi.org/10.1177/0361198120931100](https://doi.org/10.1177/0361198120931100). * Zhang et al. (2019) Zhang, X., C. V. Dyke, G. Erhardt, and M. Chen (2019). _Practices on Acquiring Proprietary Data for Transportation_. Washington, D.C.: The National Academies Press. [https://doi.org/10.17226/25519](https://doi.org/10.17226/25519). * Zhao et al. (2019) Zhao, J., H. Xu, H. Liu, J. Wu, Y. Zheng, and D. Wu (2019). Detection and Tracking of Pedestrians and Vehicles Using Roadside LiDAR Sensors. _Transportation Research Part C: Emerging Technologies__100_, 68-87. [https://doi.org/10.1016/j.trc.2019.01.007](https://doi.org/10.1016/j.trc.2019.01.007). * Zhong and Hanson (2009) Zhong, M. and B. L. Hanson (2009). GIS-based Travel Demand Modeling for Estimating Traffic on Low-class Roads. _Transportation Planning and Technology__32_(5), 423-439. [https://doi.org/10.1080/03081060903257053](https://doi.org/10.1080/03081060903257053).
## Abstract

Collecting traffic volume data is a vital but costly piece of transportation engineering and urban planning. In recent years, efforts have been made to estimate traffic volumes using passively collected probe data that contain spatiotemporal information. However, the feasibility and underlying principles of traffic volume estimation based on probe data without pseudonyms have not been examined thoroughly. In this paper, we present the exact distribution of the estimated probe traffic volume passing through a road segment based on probe point data without trajectory reconstruction. The distribution of the estimated probe traffic volume can exhibit multimodality, without necessarily being line-symmetric with respect to the actual probe traffic volume. As more probes are present, the distribution approaches a normal distribution. The conformity of the distribution was demonstrated through numerical and microscopic traffic simulations. Theoretically, with a well-calibrated probe penetration rate, traffic volumes in a road segment can be estimated using probe point data with high precision even at a low probe penetration rate. Furthermore, sometimes there is a local optimum cordon length that maximises estimation precision. The theoretical variance of the estimated probe traffic volume can address heteroscedasticity in the modelling of traffic volume estimates.

Keywords: Traffic Volume, AADT, Probe Data, Point Data, Telematics, Privacy Protection

Authors: Kentaro Iio (ORCID 0000-0002-8870-780X), Gulshan Noorsumar (ORCID 0000-0002-4882-780X), Dominique Lord (ORCID 0000-0002-4882-780X), Yunlong Zhang (ORCID 0000-0002-1873-340X)
# Reprocessing close range terrestrial and UAV photogrammetric projects with the DBAT toolbox for independent verification and quality control

A. Murtiyoso (corresponding author), P. Grussenmeyer, N. Borlin

Photogrammetry and Geomatics Group, ICube Laboratory UMR 7357, INSA Strasbourg, France - (armadi.murtiyoso, pierre.grussenmeyer)@insa-strasbourg.fr

Department of Computing Science, Umeå University, Sweden - [email protected]

## 1 Introduction

Close range photogrammetry has often been used to acquire 3D data (e.g. shape, position, and size) from images (Grussenmeyer et al., 2002). The rise in the use of UAVs (Unmanned Aerial Vehicles) and rapid developments in imaging technology and image processing have increased the use of close range photogrammetry for mapping purposes (Murtiyoso and Grussenmeyer, 2017). This relatively low cost solution (Barasnari et al., 2014) for mapping and reality-based 3D modelling is often complemented by commercial, easy-to-use photogrammetric and/or SfM (Structure-from-Motion) software packages. Although some open source software alternatives exist (Pierrot-Deseilligny and Clery, 2012; Gonzalez-Aguilera et al., 2016), commercial software packages such as Eos Systems' PhotoModeler, Pix4D, and Agisoft Photoscan remain very popular, especially outside the photogrammetric community, due to their simplicity in creating fairly accurate results (Grussenmeyer et al., 2002; Remondino et al., 2014; Burns and Delparte, 2017).

Commercial solutions typically hide the algorithms and show a simplified interface to the user in order to make it easy to generate the desired result. This is an advantage for many users, especially those who are not used to the classical photogrammetric workflow. At the same time, it complicates a transparent and independent check of the result of each stage of the workflow. One main and important aspect of the photogrammetric workflow is the external orientation or camera pose estimation step, in which the positions and rotational attitudes of each of the camera stations are determined. The camera pose estimation problem is often resolved using a bundle adjustment process with initial values coming from several possible approaches such as relative orientation, spatial resection, Direct Linear Transformation (DLT), etc. (Luhmann et al., 2014).

The primary aim of this paper is to test whether the open source toolbox DBAT (Damped Bundle Adjustment Toolbox) (Borlin and Grussenmeyer, 2013) can be used to perform the bundle adjustment reprocessing of terrestrial and UAV photogrammetric projects that were previously processed using the commercial software Agisoft Photoscan (PS). Compared to classical aerial photography, images provided by terrestrial and UAV close range acquisitions present a particular problem absent in traditional aerial photography, in that the image and control point configuration is often irregular. It is therefore in the interest of some users to understand the results in a more detailed manner. In addition, the projects used in this paper contain up to a few hundred images, which is higher than the number of images tested for DBAT in previously published studies. A secondary aim of this paper is thus to present a larger case study for DBAT.

## 2 Software and Related Work

The UAV was originally designed for military purposes, but has since seen many civilian uses in recent years. Photogrammetry by UAV opens many possibilities for its application in close-range situations.
It complements terrestrial acquisition of 3D information (Nex and Remondino, 2014). In its role as an aid to photogrammetric work, the UAV has seen many applications in various fields, such as disaster management (Achille et al.,2015; Baiocchi et al., 2013), 3D building reconstruction (Roca et al., 2013), surveying/mapping (Cramer, 2013), and heritage documentation (Chiabrando et al., 2015; Murtiyoso and Grussenmeyer, 2017). The Damped Bundle Adjustment Tools (DBAT) is a series of functions developed in the Matlab language that enables the user to reprocess bundle adjustment projects generated by Photomodeler or Photoscan. While the toolbox was originally developed to test various techniques (\"damping\") to improve the convergence radius for the bundle adjustment process (Bordin and Grussenmeyer, 2013), DBAT also provide comprehensive statistics once converged such as posterior variance and correlations between estimated parameters. DBAT has been tested for several cases, including camera calibration (field calibration and laboratory-based, coded-target plate calibration) (Bordin and Grussenmeyer, 2014), large scale aerial photographs (small image sample) (Bordin and Grussenmeyer, 2016), as well as several tests in close range configurations (Dall'Asta et al., 2015). In this paper, the use of DBAT on PS projects is emphasized, since PS generates few statistics and is therefore more difficult to verify on its own. This paper uses DBAT version 0.6.5.5. Agsifot Photoscan (PS) is a 3D reconstruction software which may be used for both aerial and terrestrial images. PS offers an easy-to-use graphical interface and workflow, with further possibilities to perform automation using Python scripts. PS employs a computer vision-leaning approach to generate 3D models. This presents a particular challenge to compare it to photogrammetric conventions since some terms (Granshaw, 2016) and definitions (Hastedt and Luhmann, 2015; Murtiyoso et al., 2017a) are different. DBAT has recently been developed to accommodate these differences between photogrammetry and computer vision, particularly in terms of lens distortion parameters. This paper uses the PS version 1.3.4 build 5067. ## 3 Research design ### Data sets Two data sets were used in this paper; a UAV data set and a terrestrial close range data set. The UAV data set was of the StPierre-le-Jeune church which has previously been modelled using several software solutions (Murtiyoso et al., 2017a) and will serve as a basis for the reprocessing using DBAT. The StPierre-le-Jeune church is situated at the recently enlisted UNESCO World Heritage Site of Neustadt, in the city of Strasbourg, France. Although the church in its entirety has been documented in 3D, for the purposes of this research only the principal facade will be reprocessed in DBAT. The St-Pierre data set consisted of 239 images (Figure 1) each with a 38 MP resolution. Among these images, 67 were taken from a perpendicular point of view while the rest were oblique images taken with the sensor oriented upwards, downwards, to the left, and to the right. This configuration was used in order to take into account the geometric requirements of a convergent photogrammetric block, as well as to cover difficult parts of the object during the dense matching step. The used UAV was the Sensely Abloris, which has the capability to maintain an approximate distance to the object. 
This enabled the data set to have a roughly constant camera-to-object distance and therefore constant theoretical Ground Sampling Distance (GSD). In this case, the theoretical GSD is 1.4 mm for a distance of 8 meters. A total of 9 ground control points (GCPs) were measured on the fagade, with a precision of 5 mm. From these 9 GCPs, 3 were selected as check points (CPs). The choice of GCPs and CPs follows the convention usually used in classical aerial photogrammetry (Kraus and Waldhausl, 1998). In addition to this field acquisition, the Abloris sensor was calibrated beforehand using a set of coded targets that was put in a dedicated room. The coded-targets were measured using a total station in order to give a rigorous setup for the calibration. The sensor was then calibrated in PS. The precalibrated values were used in one of the scenarios tested in this paper, while tests using approximate values derived from the images' EXIF file were also performed. The second data set, \"Lacey\", was a terrestrial close range acquisition of several World War I graffiti in the Maison Blanche underground site in Neuville-Saint-Vaast, northern France. The graffitis have a special historical interest as they were made by Canadian soldiers several days prior to the Battle of Vimy Ridge on April 1917. The entire Maison Blanche has similarly been modelled in a previous research (Murtiyoso et al., 2017b), and in this paper only a segment will be reprocessed in DBAT. The Lacey comprises of 346 images at 50 MP acquired by a Canon EOS 5DR camera with a 28 mm lens. Five coded-target GCPs were measured on this data set. In order to perform verifications for the results, two additional check points were measured indirectly from the laser scanning point cloud of the same site. The laser scanner used to this end was a FARO Focus X330. The Lacey data set was taken from an average camera-to-object distance of 2 meters, giving a theoretical GSD of 0.3 mm. Figure 1: (a) One image of the main facade in the St-Pierre data set. (b) An orthophoto of the facade. Red triangles denote GCPs. Green triangles denote CPs. (c) The site for the Lacey data set ### Experiments Several test scenarios were performed in this research in order to test DBAT's ability to reprocess photogrammetric projects in different conditions. The main difference between the scenarios lies in the self-calibration parameter configuration. The scenarios marked beginning with an \"S\" signify the processing of the St-Pierre data set, while those marked with \"L\" at the beginning signify the processing of the Lacey data set (see Table 1). All scenarios were recreated in DBAT and the results were compared. The quality criteria of interest were chosen to be the RMS values of the GCP errors, and the RMS values of the CP errors. In addition, the estimated calibration parameters were also analysed in order to see if DBAT could recreate the project. The GCP RMS may be seen as a measure of internal bundle adjustment precision of the respective algorithms, while the CP RMS may give an idea on the accuracy of the solution compared to ground truth data. The precision of the GCP measurements was taken into account during the bundle adjustment in PS and DBAT as weighting factors. In addition, the a priori marking precision for both manual and automatic object points (OPs) in both data sets were fixed at 1 pixel. The choice of this value was done in order to facilitate the comparison between PS and DBAT. 
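Since the comparison that follows is based on the RMS of the GCP and CP errors, it is worth making the metric explicit. The short sketch below (in Python, not DBAT's Matlab code) computes the 3D RMS error from estimated and surveyed coordinates; the arrays and values are illustrative placeholders, not data from the two projects.

```python
import numpy as np

def rms_3d_error(estimated, reference):
    """Root-mean-square of the 3D point errors, in the units of the input coordinates."""
    diff = np.asarray(estimated) - np.asarray(reference)
    point_errors = np.linalg.norm(diff, axis=1)        # 3D error of each point
    return float(np.sqrt(np.mean(point_errors ** 2)))

# Illustrative check-point coordinates in metres (placeholders, not the real GCPs/CPs).
cp_reference = np.array([[10.000, 5.000, 2.000],
                         [12.500, 4.800, 2.100],
                         [ 9.700, 6.200, 1.950]])
cp_estimated = cp_reference + np.array([[ 0.004, -0.003,  0.002],
                                        [-0.005,  0.006,  0.001],
                                        [ 0.003,  0.002, -0.004]])

print(f"CP RMS: {rms_3d_error(cp_estimated, cp_reference) * 1000:.1f} mm")
```

The same computation applied to the GCPs gives the internal precision measure, while applying it to the independent CPs gives the accuracy measure discussed above.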
## 4 Results and discussions An illustration of the result of the bundle adjustment in the two algorithms tested is shown by Figure 2. In general, all algorithms managed to reach convergence in their computation and orient all images in all of the proposed scenarios. Several analyses regarding the results of the self-calibration process and the performance of each algorithm will be described in this section. The self-calibration analysis of the UAV sensor will be emphasized in this paper, while the GCP and CP analysis will be performed for both data sets. The UAV sensor self-calibration is of particular interest since it consists of a small low-cost sensor in a parallel geometry acquisition, compared to the convergent geometry of the DSLR camera in the Lacey data set. ### Self-calibration results Detailed results of the self-calibration for the St-Pierre data set can be seen in Table 2. As a comparison, a column containing the precalibrated values was also added to Table 3. In general DBAT has successfully reprocessed the PS projects in terms of camera calibration values. For the focal length, DBAT has managed to calculate values with an average difference against PS of 6.25 um. As for the principal point offset, DBAT's results were virtually the same as PS's, within 3 significant numbers. \\begin{table} \\begin{tabular}{c|c|c} \\hline **Scenario** & **Dataset** & **Description** \\\\ \\hline S1 & St-Pierre & Self-calibration with K1, K2, K3, P1, and \\\\ L1 & Lacey & P- using EXIF initial values \\\\ \\hline S2 & St-Pierre & Self-calibration with K1, K2, K3, and K3, \\\\ \\hline L2 & Lacey & using EXIF initial values \\\\ \\hline S3 & St-Pierre & Self-calibration with K1, K2, P1, and P2 \\\\ \\hline L3 & Lacey & using EXIF initial values \\\\ \\hline S4 & St-Pierre & Self-calibration with K1, K2, K3, P1, and P2, using precalibrated initial values \\\\ \\hline \\end{tabular} \\end{table} Table 1: Four self-calibration scenarios were tested. In three cases, the EXIF value for the focal length was used as initial value. The EXIF cases were used on both data sets. A fourth scenario with precalibrated initial values was used for the St-Pierre data set Figure 2: Results of the bundle adjustment process showing the orientation of the photos in PS (top row) and DBAT (bottom row). The St-Pierre results are shown in (a) and (c). The Lacey results are shown in (b) and (d)Differences in terms of the distortion parameters were more difficult to ascertain. To this end, the radial distortion curves have been plotted in Figure 3. In the cases of S1, S2, and S4, DBAT managed to generate a similar distortion profile as that of PS, with small differences beginning at the radial distance of 4.25 mm from the projective centre. These minor differences may come from slight errors due to the conversion from PS to DBAT distortion coefficient format. The DBAT format follows the Photomodeler convention in presenting distortion parameters as polynomial coefficients scaled by the focal length, while PS calculates the normalised value of these parameters. 
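The radial distortion comparison above can be reproduced with a short script once the coefficients are expressed in a common convention. The sketch below assumes the Brown-type polynomial \(\Delta r = K_{1}r^{3} + K_{2}r^{5} + K_{3}r^{7}\) with \(r\) in mm for the Photomodeler/DBAT format, and the frequently used rescaling \(K_{i} = k_{i}/f^{2i}\) for coefficients given in normalised coordinates; the exact conventions should be checked against the software documentation, and the coefficient values below are placeholders rather than the estimated parameters of Table 2.

```python
import numpy as np
import matplotlib.pyplot as plt

def radial_distortion_mm(r_mm, k1, k2, k3):
    """Brown-type radial distortion: dr = K1*r^3 + K2*r^5 + K3*r^7 (everything in mm)."""
    return k1 * r_mm**3 + k2 * r_mm**5 + k3 * r_mm**7

def normalised_to_mm(k_norm, focal_mm):
    """Rescale normalised coefficients (k1, k2, k3) to mm-based ones, assuming r_norm = r_mm / f."""
    k1, k2, k3 = k_norm
    return k1 / focal_mm**2, k2 / focal_mm**4, k3 / focal_mm**6

r = np.linspace(0.0, 5.0, 200)                 # radial distance on the sensor in mm
k_a = (3.9e-3, -1.7e-4, 2.1e-6)                # placeholder coefficient set A (mm-based)
k_b = (3.8e-3, -1.6e-4, 2.0e-6)                # placeholder coefficient set B (mm-based)

plt.plot(r, 1e3 * radial_distortion_mm(r, *k_a), label="set A")
plt.plot(r, 1e3 * radial_distortion_mm(r, *k_b), label="set B")
plt.xlabel("radial distance [mm]")
plt.ylabel("radial distortion [um]")
plt.legend()
plt.show()
```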
Differences with the precalibrated values are to be \\begin{table} \\begin{tabular}{|c|r|r|r|r|r|r|r|r|} \\hline & \\multicolumn{3}{c|}{**S1**} & \\multicolumn{3}{c|}{**S2**} \\\\ \\cline{2-9} & **P5** & **o (mm)** & **DBAT** & **o (mm)** & **PS** & **o (mm)** & **DBAT** & **o (mm)** \\\\ \\hline **f (mm)** & 7.927 & N/A & 7.921 & 0.0001 & 7.927 & N/A & 7.921 & 0.0001 \\\\ \\hline **X0 (mm)** & 5.057 & N/A & 5.057 & 0.0002 & 5.053 & N/A & 5.053 & 0.0001 \\\\ \\hline **Y0 (mm)** & 3.798 & N/A & 3.798 & 0.0002 & 3.801 & N/A & 3.801 & 0.0001 \\\\ \\hline **K1** & 3.963E-03 & N/A & 3.880E-03 & 2.090E-06 & 3.963E-03 & N/A & 3.880E-03 & 2.090E-06 \\\\ \\hline **K2** & -1.696E-04 & N/A & -1.629E-04 & 1.350E-07 & -1.696E-04 & N/A & -1.629E-04 & 1.350E-07 \\\\ \\hline **K3** & 2.144E-06 & N/A & 2.038E-06 & 2.60E-09 & 2.144E-06 & N/A & 2.038E-06 & 2.610E-09 \\\\ \\hline **P1** & -1.875E-05 & N/A & -2.002E-05 & 1.030E-06 & & & & \\\\ \\hline **P2** & 1.624E-05 & N/A & -1.743E-05 & 1.020E-06 & & & & \\\\ \\hline \\end{tabular} \\end{table} Table 2: The estimated parameters and standard deviations for four scenarios for the St-Pierre data set Figure 3: The radial distortion curves corresponding to the estimated K\\({}_{1}\\)-K\\({}_{3}\\) parameters for S1 (a), S2 (b), S3 (c), and S4 (d) expected since the conditions during the calibration are not exactly the same as the conditions during the real acquisition. Furthermore, some difference between PS and DBAT may be expected because PS most probably performs a free network adjustment followed by a conformal 3D transformation, whereas DBAT includes GCPs in its bundle adjustment computation. It is also interesting to note that PS and DBAT arrived at the same calibration values in S1 (with EXIF initial values) and S4 (with precalibrated values as initial values). However, it should be noted that the case of the St-Pierre data set presents a particular case where oblique photos were also included in the bundle adjustment process; this increases the strength of the acquisition network geometry. S3 presents an interesting observation on its distortion curve. By not calculating K3 in the self-calibration process, DBAT and PS's curve diverge almost from the 1 mm radial distance. Furthermore, the \\(\\sigma_{0}\\) value of S3 in DBAT gives a value of 2.070 which presents an anomaly compared to the other cases (see also Table 3). PS also gave a reprojection error of 2.700 pixels. Even though the fact that K3 is not calculated suppresses the correlation (see Table 4) between the estimated calibration parameters, this may indicate that for this particular sensor K3 is nevertheless an important factor. ### GCP and CP verification Comparison of the GCP and CP RMS for the different scenarios tested in this experiment can be seen in Table 3. It should be noted that in this experiment, in order to compare both algorithms, the GCPs were weighted using their precision of 5 mm, while all markings whether automatic or manual were weighted using a uniform marking precision of 1 pixel. Results of the bundle adjustment show that for the UAV St-Pierre data set a maximum difference of 0.3 mm for the GCP RMS between DBAT and PS were observed. The maximum difference of CP RMS for the same data set was also 0.3 mm, for a theoretical GSD of 1.4 mm. For the Lacey data set, the maximum GCP RMS difference was 0.5 mm and 0.7 mm for CP RMS for a theoretical GSD of 0.3 mm. 
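The weighting mentioned above follows the usual least-squares convention: each observation is weighted by the inverse of its a priori variance, and the \(\sigma_{0}\) values reported in this section are, in the usual convention, a posteriori standard deviations of unit weight. A minimal, generic sketch of that relation is given below (illustrative only, not DBAT code):

```python
import numpy as np

def sigma0_hat(residuals, sigmas_prior, redundancy):
    """A posteriori standard deviation of unit weight.

    residuals:    observation residuals after the adjustment
    sigmas_prior: a priori standard deviations of the observations (same units as residuals)
    redundancy:   number of observations minus number of unknowns
    """
    weights = 1.0 / np.asarray(sigmas_prior) ** 2     # weights from a priori precisions
    v = np.asarray(residuals)
    return float(np.sqrt(np.sum(weights * v ** 2) / redundancy))

# Made-up example: image-point residuals of about 0.5 px, weighted with a 1 px a priori precision.
rng = np.random.default_rng(0)
v = rng.normal(scale=0.5, size=1000)
print(sigma0_hat(v, np.full(1000, 1.0), redundancy=700))   # roughly 0.6
```

A \(\sigma_{0}\) far from 1 indicates a mismatch between the assumed observation precisions and the residuals actually obtained, which is how the values in Table 3 are interpreted below.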
In the Lacey data set, a higher value of RMS can be explained by the fact that the GCPs were measured using a coded-target, which would have had a very small marking precision (the default PS value was 0.1 pixels). This influences the weighting in the bundle adjustment, as shown also on the value of the \\(\\sigma_{0}\\). Indeed, the \\(\\sigma_{0}\\) value of 4.5 on average indicates that one of the a priori standard deviation was heavily underestimated, which is true in this case. Furthermore, the even higher values for the CP RMS in the Lacey data set is due to the fact that the CPs were indirectly measured from the laser scanner data set, and concern points of interest rather than clear artificial marks or coded targets. The centimetric result for the CP RMS is therefore expected. In the St-Pierre data set, several other factors also contribute to the final RMS result. The GCPs were distributed evenly on the facade; however the lack of depth variation between the GCPs may contribute to the final RMS. Furthermore, the noise present on the images also generates another source of error. However, the main objective of the experiment was to compare the performance of PS and DBAT. In this regard, DBAT has managed to reprocess PS projects under approximately the same conditions and weighting, although a slight difference is always to be expected when dealing with a black-box solution. It may then be used as a tool to verify PS's results and perform a quality control on it. ### Quality control using DBAT The experiments and analyses in sections 4.1 and 4.2 indicate that DBAT may be used to reprocess photogrammetric project in the case of UAV data (St-Pierre) and also classical terrestrial close range data (Lacey). One advantage of DBAT lies in the metrics that it provides the user at the end of the bundle adjustment process. Several metrics can therefore be used to assess the quality of the St-Pierre and Lacey projects, and eventually to determine if in some way their quality can be improved. In terms of correlation values, Table 4 shows the high correlation values between the different calibration parameters as well as the number of automatic tie points with high correlation values in all the scenarios tested. In the cases where K3 is calculated in both data sets, the results show a strong correlation between the radial distortion coefficients. The standard deviation values given by DBAT for the calibration parameters are also useful to assess the quality of the self-calibration process. As regards to the automatic tie points, the St-Pierre data set shows that over a quarter of its tie points have a strong correlation of more than 95%. The Lacey data set only has 0.24% of its tie points which has a correlation of more than 95%. A strong correlation may mean that the parameters in the bundle adjustment were not solved correctly. This can therefore be an indication to the quality of the image matching and feature detection process in PS for this data set, or the presence of tie points with few rays at very small angles. Based on this information, a filtering of the automatic tie points for the St-Pierre data set could be performed in order to increase the quality of the bundle adjustment. Indeed, by performing this filtering in DBAT, the high correlations disappeared. 
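The correlation figures above are derived from the covariance matrix of the estimated parameters that the bundle adjustment provides. The generic computation, with the 95% threshold used in the text, can be sketched as follows (illustrative Python with a made-up covariance matrix; not DBAT code):

```python
import numpy as np

def correlation_matrix(cov):
    """Turn a covariance matrix of estimated parameters into a correlation matrix."""
    std = np.sqrt(np.diag(cov))
    return cov / np.outer(std, std)

def high_correlation_pairs(cov, names, threshold=0.95):
    """List parameter pairs whose absolute correlation exceeds the threshold."""
    corr = correlation_matrix(cov)
    pairs = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(corr[i, j]) > threshold:
                pairs.append((names[i], names[j], round(float(corr[i, j]), 3)))
    return pairs

# Made-up covariance for K1, K2, K3, illustrating a strong coupling between the radial terms.
cov = np.array([[ 1.0e-6, -9.7e-7,  9.6e-7],
                [-9.7e-7,  1.0e-6, -9.6e-7],
                [ 9.6e-7, -9.6e-7,  1.0e-6]])
print(high_correlation_pairs(cov, ["K1", "K2", "K3"]))
```

The same test applied to the estimated tie-point coordinates yields percentages of the kind reported in Table 4.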
Table 4: Correlations in the processed projects in DBAT. For the automatic points, the value denotes the percentage of automatic tie points with high correlation values (rows S1–S4 for St-Pierre and L1–L3 for Lacey).

Table 3: Results for the different scenarios, showing the \(\sigma_{0}\), reprojection error RMS, GCP error RMS, and CP error RMS.

Another example of metrics which can be derived from DBAT is the standard deviations of the exterior orientation parameters. This may be useful in some cases to help users sort out images which worsen the results of the bundle adjustment. For the data sets tested, these metrics are shown in histogram form in Figure 4. In the case of S4, a slight increase in rotational standard deviation can be seen for the images numbered around 100 and 120, whereas for L1 a significant spike can be observed in the rotational standard deviation histogram for the images numbered around 320 and 330. This indicates that the orientation of these images was not precise, and may be a clue to reassess these images in PS and, in the worst case, suppress them from the project altogether.

## 5 Conclusions

This paper aims to reprocess terrestrial and UAV-based photogrammetric projects which have been processed in PS using DBAT. The purpose of such reprocessing is to recreate PS's results as closely as possible, and thereafter derive various statistics which can then be used for quality control. In this paper, two data sets (one UAV and one terrestrial) have been tested. The experiments have shown that DBAT can be used to reprocess PS projects.
DBAT has managed to perform well in terms of self-calibration as well as bundle adjustment, as shown by the experiments comparing DBAT and PS's calibration, CP, and GCP results. Although the algorithm behind PS remains hidden, this method of \"dissection\" permits users to have an idea on the results that they receive from PS as well as their quality. The paper has also shown how DBAT was used as a quality control tool for PS. Correlations and exterior orientation standard deviations are only two among other metrics which may be interesting to PS users in order to assess their photogrammetric project. Based on this extra information, that was otherwise minimal in PS, the user may take decisions to perform modifications on the project as it fits the requirements of the project. The use of black-box like software solutions presents a big advantage to many users as they eliminate processing parameters to the bare necessities. This fact, together with developments in photogrammetric and computer vision algorithms, has largely supported the growth of more commercial and user-friendly software. However, in some cases where precision and robustness is required, a black-box solution may not be enough. It is therefore important to have an open tool which enables users to look into their photogrammetric projects in more detail and to perform quality control and assessment. ## Availability DBAT is an open source toolbox for bundle adjustment based on the Matlab programming language. More information about the toolbox as well as a download of the codes can be accessed from the following link: [https://github.com/niclasborlin/dbat](https://github.com/niclasborlin/dbat). ## References * methodological strategies for the after-quake survey of vertical structures in Mantua (Italy). _Sensors_, 15, pp. 15520-15539. * Baiocchi et al. (2013) Baiocchi, V., Dominici, D. and Mormile, M., 2013. UAV application in post-seismic environment. _The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, 40(1/W2), pp. 21-25. * Barsanti et al. (2014) Barsanti, S.G., Remondino, F., Fenandez-Palacios, B.J. and Visintini, D., 2014. Critical factors and guidelines for 3D surveying and modelling in Cultural Heritage. _International Journal of Heritage in the Digital Era_, 3, pp. 141-158. * Borlin and Grussenmeyer (2013) Borlin, N. and Grussenmeyer, P., 2013. Bundle adjustment with and without damping. _The Photogrammetric Record_, 28, pp. 396-415. * Borlin and Grussenmeyer (2014) Borlin, N. and Grussenmeyer, P., 2014. Camera Calibration using the Damped Bundle Adjustment Toolbox. _ISPRS Annals of the Photogrammetry, Remote Sensing, and Spatial Information Sciences_, II, pp. 89-96. * Borlin and Grussenmeyer (2016) Borlin, N. and Grussenmeyer, P., 2016. External Verification of the Bundle Adjustment in Photogrammetric Software Using the Damped Bundle Adjustment Toolbox. _The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, 41(B5), pp. 7-14. * Burns and Delparte (2017) Burns, J.H.R. and Delparte, D., 2017. Comparison of Commercial Structure-From-Motion Photogrammetry Software Used for Underwater Three-Dimensional Modeling of Coral Reef Environments. _The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, 42(2/W3), pp. 127-131. Figure 4: Histogram of the exterior orientation standard deviations for S4 (a) and L1 (b) Chiabrando, F., Donadio, E. and Rinaudo, F., 2015. 
SIM for orthophoto generation: a winning approach for cultural heritage knowledge. _The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, 40(5/W7), pp. 91-98. * a NMCA case study. _Photogrammetric Week_, pp. 165-179. * Dall'Asta et al. (2015) Dall'Asta, E., Thoeni, K., Santise, M., Forlani, G., Giacomini, A. and Roncella, R., 2015. Network design and quality checks in automatic orientation of close-range photogrammetric blocks. _Sensors_, 15, pp. 7985-8008. * Gonzalez-Aguilera et al. (2016) Gonzalez-Aguilera, D., Lopez-Fernandez, L., Rodriguez-Gonzalvez, P., Guerrero, D., Hernandez-Lopez, D., Remondino, F., Menna, F., Nocerino, E., Toschi, I., Ballabeni, A. and Giani, M., 2016. Development of an all-purpose free photogrammetric tool. _The International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences_, 41, pp. 31-38. * Granshaw (2016) Granshaw, S.I., 2016. Photogrammetric Terminology: Third Edition. _The Photogrammetric Record_, 31, pp. 210-252. * Grussenmeyer et al. (2002) Grussenmeyer, P., Hanke, K., Strelelein, A., 2002. Architectural Photogrammetry. _Digital Photogrammetry_, Kasser, M. and Egels, Y. (Eds.), Taylor & Francis, pp. 300-339. * Hasteldt & Luhmann (2015) Hasteldt, H. and Luhmann, T., 2015. Investigations on the Quality of the Interior Orientation and Its Impact in Object Space for UAV Photogrammetry. _The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, 40(1/W4), pp. 321-328. * Kraus & Waldhusl (1998) Kraus, K. and Waldhusl, P., 1998. _Manuel de photogrammetrie_. Hermes, Paris. * Luhmann et al. (2014) Luhmann, T., Robson, S., Kyle, S. and Boehm, J., 2014. _Close-Range Photogrammetry and 3D Imaging_, 2nd ed., De Gruyter. * Murtiyoso & Grussenmeyer (2017) Murtiyoso, A. and Grussenmeyer, P., 2017. Documentation of heritage buildings using close-range UAV images: dense matching issues, comparison and case studies. _The Photogrammetric Record_, 32, pp. 206-229. * Murtiyoso et al. (2017a) Murtiyoso, A., Grussenmeyer, P. and Freville, T., 2017a. Close Range UAV Accurate Recording and Modeling of St-Pierre-Ie-Jeune Neo-Romanesque Church in Strasbourg (France). _The International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences_, 42(2/W3), pp. 519-526. * Murtiyoso et al. (2017b) Murtiyoso, A., Grussenmeyer, P., Guillemin, S., Prilaux, G., 2017b. Centenary of the Battle of Viny (France, 1917): Preserving the Memory of the Great War through 3D recording of the Maison Blanche souterraine. _ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences_, IV-2/W2, pp. 171-177. * Nex & Remondino (2014) Nex, F. and Remondino, F., 2014. UAV: platforms, regulations, data acquisition and processing. _3D Recording and Modelling in Archaeology and Cultural Heritage: Theory and Best Practices_, Remondino, F. and Campana, S. (Eds.), Archaeopress, Oxford, England, pp. 73-86. * Pierrot-Deseilligy & Clery (2012) Pierrot-Deseilligy, M. and Clery, I., 2012. Apero, an Open Source Bundle Adjustment Software for Automatic Calibration and Orientation of Set of Images. _The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, 38, pp. 269-276. * Remondino et al. (2014) Remondino, F., Spera, M.G., Nocerino, E., Menna, F. and Nex, F., 2014. State of the art in high density image matching. _The Photogrammetric Record_, 29, pp. 144-166. * Roca et al. 
(2013) Roca, D., Lapuela, S., Diaz-Vilarino, L., Armesto, J. and Arias, P., 2013. Low-cost aerial unit for outdoor inspection of building facades. _Automation in Construction_, 36, pp. 128-135.
Photogrammetry has recently seen a rapid increase in many applications, thanks to developments in computing power and algorithms. Furthermore, with the democratisation of UAVs (Unmanned Aerial Vehicles), close range photogrammetry has seen more and more use, owing to the easier acquisition of aerial close range images. In terms of photogrammetric processing, many commercial software solutions exist on the market that offer results from user-friendly environments. However, in most commercial solutions, a black-box approach to photogrammetric calculations is often used. This is understandable in light of the proprietary nature of the algorithms, but it may pose a problem if the results need to be validated in an independent manner. In this paper, the Damped Bundle Adjustment Toolbox (DBAT) developed for Matlab was used to reprocess photogrammetric projects that had previously been processed using the commercial software Agisoft Photoscan. Several scenarios were tested in order to assess the performance of DBAT in reprocessing terrestrial and UAV close range photogrammetric projects under several self-calibration configurations. Results show that DBAT managed to reprocess the PS projects and generate metrics which can be useful for project verification.

Keywords: close range, UAV, bundle adjustment, quality control, photogrammetry, software, DBAT
# LidarCLIP or: How I Learned to Talk to Point Clouds Georg Hess\\({}^{{\\dagger},1,2}\\) Adam Tonderski\\({}^{{\\dagger},1,3}\\) Christoffer Petersson\\({}^{1,2}\\) Kalle Astrom\\({}^{3}\\) Lennart Svensson\\({}^{2}\\) \\({}^{1}\\)Zenseact \\({}^{2}\\)Chalmers University of Technology \\({}^{3}\\)Lund University {first.last}@zenseact.com [email protected] [email protected] These authors contributed equally to this work. ## 1 Introduction Connecting natural language processing (NLP) and computer vision (CV) has been a long-standing challenge in the research community. Recently, OpenAI released CLIP [33], a model trained on 400 million web-scraped text-image pairs, that produces powerful text and image representations. Besides impressive zero-shot classification performance, CLIP enables interaction with the image domain in a diverse and intuitive way by using human language. These capabilities have resulted in a surge of work building upon the CLIP embeddings within multiple applications, such as image captioning [31], image retrieval [2, 18], semantic segmentation [48], text-to-image generation [34, 36], and referring image segmentation [24, 42]. While most works trying to bridge the gap between NLP and CV have focused on a single visual modality, namely images, other visual modalities, such as lidar point clouds, have received far less attention. Existing attempts to connect NLP and point clouds are often limited to a single application [5, 37, 47] or designed for synthetic data [46]. This is a natural consequence due to the lack of large-scale text-lidar datasets required for training flexible models such as CLIP in a new domain. However, it has been shown that the CLIP embedding space can be extended to new languages [4] and new modalities, such as audio [43], without the need for huge datasets and extensive computational resources. This raises the question if such techniques can be applied Figure 1: Like CLIP, LidarCLIP has many applications, including retrieval for data curation. Here, we demonstrate that the two can be combined through different queries to retrieve potentially safety-critical scenes that a camera-based system may handle poorly. Such scenes are nearly impossible to retrieve with a single modality. to point clouds as well, and consequently open up a body of research on point cloud understanding, similar to what has emerged for images [42, 48]. We propose LidarCLIP, a method to connect the CLIP embedding space to the lidar point cloud domain. While combined text and point cloud datasets are not easily accessible, many robotics applications capture images and point clouds simultaneously. One example is autonomous driving, where data is both openly available and large scale. To this end, we supervise a lidar encoder with a frozen CLIP image encoder using pairs of images and point clouds from the large-scale automotive dataset ONCE [28]. This way, the image encoder's rich and diverse semantic understanding is transferred to the point cloud domain. At inference, we can compare LidarCLIP's embedding of a point cloud with the embeddings from either CLIP's text encoder, image encoder, or both, enabling various applications. While conceptually simple, we demonstrate LidarCLIP's fine-grained semantic understanding for a wide range of applications. LidarCLIP outperforms prior works applying CLIP in the point cloud domain [19, 46] on both zero-shot classification and retrieval. 
Furthermore, we demonstrate that LidarCLIP can be combined with regular CLIP to perform targeted searches for rare and difficult traffic scenarios,, a person crossing the road while hidden by water drops on the camera lens, see Fig. 1. Finally, LidarCLIP's capabilities are extended to point cloud captioning and lidar-to-image generation using established CLIP-based methods [31, 8]. In summary, our contributions are the following: * We propose LidarCLIP, a new method for embedding lidar point clouds into an existing CLIP space. * We demonstrate the effectiveness of LidarCLIP for retrieval and zero-shot classification, where it outperforms existing CLIP-based methods. * We show that LidarCLIP is complementary to its CLIP teacher and even outperforms it in certain retrieval categories. By combining both methods, we further improve performance and enable retrieval of safety-critical scenes in challenging sensing conditions. * Finally, we show that our approach enables a multitude of applications off-the-shelf, such as point cloud captioning and lidar-to-image generation. ## 2 Related work **CLIP and its applications.** CLIP [33] is a model with a joint embedding space for images and text. The model consists of two encoders, a text encoder \\(\\mathcal{F}_{T}\\) and an image encoder \\(\\mathcal{F}_{I}\\), both of which yield a single feature vector describing their input. Using contrastive learning, these feature vectors have been supervised to map to a common language-visual space where images and text are similar if they describe the same scene. By training on 400 million text-image pairs collected from the internet, the model has a diverse textual understanding. The shared text-image space can be used for many tasks. For instance, to do zero-shot classification with \\(K\\) classes, one constructs \\(K\\) text prompts,, \"a photo of a \\(\\langle\\)class name\\(\\rangle\\)\". These are embedded individually by the text encoder, yielding a feature map \\(Z_{T}\\in\\mathbb{R}^{K\\times d}\\). The logits for an image \\(I\\) are calculated by comparing the image embedding, \\(\\mathbf{z}_{I}\\in\\mathbb{R}^{d}\\), with the feature map for the text prompts, \\(Z_{T}\\), and class probabilities \\(p\\) are found using the softmax function, softmax\\((Z_{T}\\mathbf{z}_{I})\\). In theory, any concept encountered in the millions of text-image pairs could be classified with this approach. Further, by comparing a single prompt to multiple images, CLIP can also be used for retrieving images from a database. Multiple works have built upon the CLIP embeddings for various applications. DALL-E 2 [34] and Stable Diffusion [36] are two methods that use the CLIP space for conditioning diffusion models for text-to-image generation. Other works have recently shown how to use text-image embeddings to generate single 3D objects [38] and neural radiance fields [41] from text. In [48], CLIP is used for zero-shot semantic segmentation without any labels. Similarly, [42] extracts pixel-level information for referring semantic segmentation,, segmenting the part of an image referred to via a natural linguistic expression. We hope that LidarCLIP can spur similar applications for 3D data. **CLIP outside the language-image domain.** Besides new applications, multiple works have aimed to extend CLIP to new domains, and achieved impressive performance in their respective domains. For videos, CLIP has been used for tasks like video clip retrieval [25, 27] and video question answering [45]. 
In contrast to our work, these methods rely on large amounts of text-video pairs for training. Meanwhile, WAV2CLIP [43] and AudioCLIP [16] extend CLIP to audio data for audio classification, tagging, and retrieval. Both methods use contrastive learning, which typically requires large batch sizes for convergence [6]. The scale of automotive point clouds would require extensive computational resources for contrastive learning, hence we supervise LidarCLIP with a simple mean squared error, which works well for smaller batch sizes and has been shown to promote the learning of richer features [4]. **Point clouds and natural language.** Recently, there has been increasing interest in connecting point clouds and natural language, as it enables an intuitive interface for the 3D domain and opens up possibilities for open-vocabulary zero-shot learning. In [7] and [29], classifiers are supervised with pre-trained word embeddings to enable zero-shot learning. Part2word [39] explores 3d shape retrieval by mapping scans of single objects and descriptive texts to a joint embedding space. However, a key limitation of these approaches is their need for dense annotations [7, 29] or detailed textual descriptions [39], which makes them unable to leverage the vast amount of raw automotive data considered in this paper. Other methods, such as PointCLIP [46] and CLIP2Point [19], use CLIP to bypass the need for text-lidar pairs entirely. Instead of processing the point clouds directly, they render the point cloud from multiple viewpoints and apply the image encoder to these renderings. While this works well with dense point clouds of a single object, the approach is not feasible for sparse automotive data with heavy occlusions. In contrast, our method relies on an encoder specifically designed for the point cloud domain, avoiding the overhead introduced by multiple renderings and allowing for more flexibility in the model choice. ## 3 LidarCLIP In this work, we encode lidar point clouds into the existing CLIP embedding space. As there are no datasets with text-lidar pairs, we cannot rely on the same contrastive learning strategy as the original CLIP model to directly relate point clouds to text. Instead, we leverage that automotive datasets contain millions of image-lidar pairs. By training a point cloud encoder to mimic the features of a frozen CLIP image encoder, the images act as intermediaries for connecting text and point clouds, see Fig. 2. Each training pair consists of an image \\(\\mathbf{x}_{I}\\) and the corresponding point cloud \\(\\mathbf{x}_{L}\\). Regular CLIP does not perform alignment between the pairs, but some preprocessing is needed for point clouds. To align the contents of both modalities, we transform the point cloud to the camera coordinate system and drop all points that are not visible in the image. As a consequence, we only perform inference on frustums of the point cloud, corresponding to a typical camera field of view. We note that this preprocessing is susceptible to errors in sensor calibration and time synchronization, especially for objects along the edge of the field of view. Further, the preprocessing does not handle differences in visibility due to sensor mounting positions,, lidars are typically mounted higher than cameras in data collection vehicles, thus seeing over some vehicles or static objects. However, using millions of training pairs reduces the impact from such noise sources. The training itself is straightforward. 
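Before stating the objective, the field-of-view filtering described above can be made concrete. The sketch below assumes a pinhole camera with known intrinsics and a lidar-to-camera rigid transform; the matrices and image size are illustrative placeholders, and details such as axis conventions or lens distortion handling may differ in the actual implementation.

```python
import numpy as np

def points_in_camera_frustum(points_lidar, T_cam_from_lidar, K, image_size):
    """Keep only the lidar points that project inside the image.

    points_lidar:     (N, 3) xyz coordinates in the lidar frame
    T_cam_from_lidar: (4, 4) rigid transform from lidar to camera coordinates
    K:                (3, 3) camera intrinsic matrix
    image_size:       (width, height) in pixels
    """
    n = points_lidar.shape[0]
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])       # homogeneous coordinates
    pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]          # transform into the camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 1e-6]                  # keep points in front of the camera
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                              # perspective division
    w, h = image_size
    in_image = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return pts_cam[in_image]

# Illustrative calibration (identity extrinsics, generic intrinsics), not the ONCE values.
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])
cloud = np.random.uniform(-50.0, 50.0, size=(100_000, 3))
print(points_in_camera_frustum(cloud, np.eye(4), K, (1920, 1080)).shape)
```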
An image is passed through the frozen image encoder \\(\\mathcal{F}_{I}\\) to produce the target embedding, \\[\\mathbf{z}_{I}=\\mathcal{F}_{I}(\\mathbf{x}_{I}), \\tag{1}\\] whereas the lidar encoder \\(\\mathcal{F}_{L}\\) embeds a point cloud, \\[\\mathbf{z}_{L}=\\mathcal{F}_{L}(\\mathbf{x}_{L}). \\tag{2}\\] We train \\(\\mathcal{F}_{L}\\) to maximize the similarity between \\(\\mathbf{z}_{I}\\) and \\(\\mathbf{z}_{L}\\). Figure 2: Overview of LidarCLIP. We use existing CLIP image and text encoders (top left), and learn to embed point clouds into the same feature space (bottom left). To that end, we train a lidar encoder to match the features of the frozen image encoder on a large automotive dataset with image-lidar pairs. This enables a wide range of applications, such as scenario retrieval (top right), zero-shot classification, as well as lidar-to-text and lidar-to-image generation (bottom right). To this end, we adopt either the mean squared error (MSE), \\[\\mathcal{L}_{\\text{MSE}}=\\frac{1}{d}(\\mathbf{z}_{I}-\\mathbf{z}_{L})^{T}(\\mathbf{ z}_{I}-\\mathbf{z}_{L}), \\tag{3}\\] or the cosine similarity loss, \\[\\mathcal{L}_{\\text{cos}}=-\\frac{\\mathbf{z}_{I}^{T}\\mathbf{z}_{L}}{\\|\\mathbf{z }_{I}\\|\\|\\mathbf{z}_{L}\\|}, \\tag{4}\\] where \\(\\mathbf{z}_{I},\\mathbf{z}_{L}\\in\\mathbb{R}^{d}\\). The main advantage of using a similarity loss that only considers positive pairs, as opposed to using a contrastive loss, is that we avoid the need for large batch sizes [6] and the accompanying computational requirements. Furthermore, the benefits of contrastive learning are reduced in our settings, since we only care about mapping a new modality into an existing feature space, rather than learning an expressive feature space from scratch. ### Joint retrieval Retrieval is one of the most successful applications of CLIP and is highly relevant for the automotive industry. By retrieval, we mean the process of finding samples that best match a given natural language prompt out of all the samples in a large database. In an automotive setting, it is used to sift through the abundant raw data for valuable samples. While CLIP works well for retrieval out of the box, it inherits the fundamental limitations of the camera modality, such as poor performance in darkness, glare, or water spray. LidarCLIP can increase robustness by leveraging the complementary properties of lidar. Relevant samples are retrieved by computing the similarity between a text query and each sample in the database, in the CLIP embedding space, and identifying the samples with the highest similarity. These calculations may seem expensive, but the embeddings only need to be computed once per sample, after which they can be cached and reused for every text query. Following prior work [33], we compute the retrieval score using cosine similarity for both image and lidar \\[s_{I}=\\frac{\\mathbf{z}_{T}^{T}\\mathbf{z}_{I}}{\\|\\mathbf{z}_{T}\\|\\|\\mathbf{z }_{I}\\|},\\qquad s_{L}=\\frac{\\mathbf{z}_{T}^{T}\\mathbf{z}_{L}}{\\|\\mathbf{z}_{T} \\|\\|\\mathbf{z}_{L}\\|}, \\tag{5}\\] where \\(\\mathbf{z}_{T}\\) is the text embedding. If a database only contains images or point clouds, we use the corresponding score (\\(s_{I}\\) or \\(s_{L}\\)) for retrieval. However, if we have access to both images and point clouds, we can jointly consider the lidar and image embeddings to leverage their respective strengths. We consider various methods of performing joint retrieval. 
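The objective in Eqs. (1)-(4) and the single-modality scores in Eq. (5), which the joint strategies discussed next build on, amount to only a few lines of PyTorch. In the sketch below, the two encoders are small stand-in modules so that the snippet runs on its own; in practice the image encoder is the frozen CLIP vision tower and the lidar encoder is the SST-based model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d = 768  # embedding dimension of CLIP ViT-L/14

# Stand-ins that keep the sketch self-contained; see the text for the real encoders.
image_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, d))
lidar_encoder = nn.Sequential(nn.Linear(4, 128), nn.ReLU(), nn.Linear(128, d))

for p in image_encoder.parameters():          # the CLIP image encoder stays frozen
    p.requires_grad = False

optimizer = torch.optim.Adam(lidar_encoder.parameters(), lr=1e-4)

def training_step(image, point_cloud, use_mse=True):
    with torch.no_grad():
        z_img = image_encoder(image)                       # Eq. (1): target embedding
    z_lid = lidar_encoder(point_cloud).mean(dim=1)         # Eq. (2): pooled point features
    if use_mse:
        loss = F.mse_loss(z_lid, z_img)                    # Eq. (3)
    else:
        loss = -F.cosine_similarity(z_lid, z_img).mean()   # Eq. (4)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def retrieval_scores(z_text, z_cached):
    """Eq. (5): cosine similarity between one text embedding and cached scene embeddings."""
    return F.cosine_similarity(z_text.unsqueeze(0), z_cached, dim=-1)

# One illustrative step on random data: 2 images and 2 point clouds with (x, y, z, intensity).
print(training_step(torch.rand(2, 3, 224, 224), torch.rand(2, 1024, 4)))
```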
Inspired by the literature on ensembles [12], we can directly combine the image and lidar similarity scores, \\(s_{I+L}=s_{L}+s_{I}\\). We can also fuse the modalities even earlier by combining the features, \\(\\mathbf{z}_{I+L}=\\mathbf{z}_{I}+\\mathbf{z}_{L}\\). We also consider methods to aggregate independent rankings for each modality. One such approach is to consider the joint rank to be the mean rank across the modalities. Another approach we have evaluated is two-step re-ranking [30], where one modality selects a set of candidates which are then ranked by the other modality. One of the most exciting aspects of joint retrieval is the possibility of using different queries for each modality. For example, imagine trying to find scenes where a large white truck is almost invisible in the image due to extreme sun glare. In this case, one can search for scenes where the image embedding matches \"an image with extreme sun glare\" and re-rank the top-K results by their lidar embeddings similarity to \"a scene containing a large truck\". This kind of scene would be almost impossible to retrieve using a single modality. ## 4 Experiments **Datasets.** Training and most of the evaluation is done on the large-scale ONCE dataset [28], with roughly 1 million scenes. Each scene consists of a lidar sweep and 7 camera images, which results in \\(\\sim\\)\\(7\\) million unique training pairs. We withhold the validation and test sets and use these for the results presented below. **Implementation details.** We use the official CLIP package and models, specifically the most capable vision encoder, ViT-L/14, which has a feature dimension \\(d=768\\). As our lidar encoder, we use the Single-stride Sparse Transformer (SST) [10] (randomly initialized). Due to computational constraints, our version of SST is down-scaled and contains about 8.5M parameters, which can be compared to the \\(\\sim\\)\\(85\\)M and \\(\\sim\\)\\(300\\)M parameters of the text and vision encoders of CLIP. The specific choice of backbone is not key to our approach; similar to the variety of CLIP image encoders, one could use a variety of different lidar encoders. However, we choose a transformer-based encoder, inspired by the findings that CLIP transformers perform better than CLIP ResNets [33]. SST is trained for 3 epochs, corresponding to \\(\\sim\\)\\(20\\) million training examples, using the Adam optimizer and the one-cycle learning rate policy. For full details, we refer to our code. **Retrieval ground truth & prompts.** One difficulty in quantitatively evaluating the retrieval capabilities of LidarCLIP is the lack of direct ground truth for the task. Instead, automotive datasets typically have fine-grained annotations for each scene, such as object bounding boxes, segmentation masks, etc. This is also true for ONCE, which contains annotations in terms of 2D and 3D bounding boxes for 5 classes, and metadata for the time of day and weather. We leverage these detailed annotations and available metadata, to create as many retrieval categories as possible. For object retrieval, we consider a scene positive if it contains one or more instances of that object. To probe the spatial under standing of the model, we also propose a \"nearby\" category, searching specifically for objects closer than \\(15\\,\\mathrm{m}\\). We verify that the conclusions hold for thresholds between \\(10\\,\\mathrm{m}\\) and \\(25\\,\\mathrm{m}\\). 
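The joint retrieval strategies of Sec. 3.1 that are evaluated in this section reduce to a few array operations once the text, image, and lidar embeddings are cached. A minimal sketch with random placeholder embeddings is given below; the function and variable names are illustrative rather than taken from the released code.

```python
import numpy as np

def cosine(query, bank):
    """Cosine similarity between one query vector and every row of an embedding bank."""
    return (bank @ query) / (np.linalg.norm(bank, axis=1) * np.linalg.norm(query) + 1e-12)

def joint_retrieval(z_text, z_img, z_lid, top_k=10, method="feature_sum"):
    s_img, s_lid = cosine(z_text, z_img), cosine(z_text, z_lid)
    if method == "score_sum":            # s_{I+L} = s_I + s_L
        score = s_img + s_lid
    elif method == "feature_sum":        # z_{I+L} = z_I + z_L, scored once
        score = cosine(z_text, z_img + z_lid)
    elif method == "mean_rank":          # average the per-modality ranks
        rank_img = np.argsort(np.argsort(-s_img))
        rank_lid = np.argsort(np.argsort(-s_lid))
        score = -(rank_img + rank_lid) / 2.0
    else:
        raise ValueError(method)
    return np.argsort(-score)[:top_k]

def rerank(z_text_img, z_text_lid, z_img, z_lid, candidates=100, top_k=10):
    """Two-step re-ranking with separate prompts: select by image score, re-rank by lidar score."""
    first = np.argsort(-cosine(z_text_img, z_img))[:candidates]
    second = np.argsort(-cosine(z_text_lid, z_lid[first]))[:top_k]
    return first[second]

# Random placeholder embeddings for 1000 cached scenes (d = 768 for ViT-L/14).
rng = np.random.default_rng(0)
z_img, z_lid = rng.normal(size=(1000, 768)), rng.normal(size=(1000, 768))
print(joint_retrieval(rng.normal(size=768), z_img, z_lid))
```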
Finally, to minimize the effect of prompt engineering, we follow [15] and average multiple text embeddings to improve results and reduce variability. For object retrieval, we use the same 85 prompt templates as in [15], and for the other retrieval categories, we use similar patterns to generate numerous relevant prompts templates. The exact prompts are provided in the source code. ### Zero-shot classification CLIP's [33] strong open-vocabulary capabilities have made it popular for zero-shot classification. We construct this task by treating each annotated object in ONCE as a separate classification sample. Typically, LidarCLIP outputs a set of voxel features that are pooled into a single, global, CLIP feature. We construct the object embeddings by only pooling features for voxels inside the corresponding bounding box, without any object-specific training/fine-tuning. We compare our performance to PointCLIP [46] and CLIP2Point [19], two works that transfer CLIP to 3d by rendering point clouds from multiple viewpoints and then apply the CLIP image encoder and pool the features from different views. For CLIP2Point, we also include their provided weights, which have been pre-trained on ModelNet40. We create their object embeddings by extracting only points within the annotated bounding boxes and adapt their standard evaluation protocol. Results for LidarCLIP and their best-performing model are shown in Sec. 4.1 where we report top-1 accuracy average over objects and classes, as the data contains a few majority classes. The results demonstrate the gain from training a modality specific encoder rather than transferring point clouds into the image domain. Further, we should note that LidarCLIP's instance-level classification performance is achieved without any dense annotations in 3d or 2d. Applications of LidarCLIP for zero-shot point cloud semantic segmentation is left for future work. ### Retrieval To evaluate retrieval, we report the commonly used Precision at rank K (P@K) [14, 26, 35], for \\(K=10,100\\), which measures the fraction of positive samples within the top K predictions. Recall at K is another commonly used metric [14, 35], however, it is hard to interpret when the number of positives is in the thousands, as is the case here. We evaluate the performance of three approaches: lidar-only, camera-only, and the joint approach proposed in Sec. 3.1. We perform retrieval for scenes containing various kinds of objects and present the results in Tab. 2. We also evaluate PointCLIP [46] and CLIP2Point [19], however, their methods are not suited for the large-scale point clouds considered here and consequently barely outperform random guessing. **Object-level.** Interestingly, lidar retrieval performs slightly better than image retrieval on average, despite being trained to mimic the image features, with the biggest improvements in the cyclist class. A possible explanation is that cyclist features are more invariant in the point cloud than in the image, allowing the lidar encoder to generalize to cyclists that go undetected by the image encoder. A noteworthy problem for lidar retrieval is trucks. Upon qualitative inspection, we find that the lidar encoder confuses trucks with buses, which is a drawback with certain objects' features being more invariant in lidar data. We also attempt to retrieve scenes where objects of the given class appear close to the ego vehicle. 
Here, we can see that joint retrieval truly shines, greatly outperforming single-modality retrieval in classes like truck and pedestrian. One interpretation is that the lidar is more reliable at determining distance, while the image can be leveraged to distinguish between classes (such as trucks and buses) based on textures and other fine details only visible in the image. **Scene-level.** Object-centric retrieval is focused on _local_ details of a scene and should trigger even for a single occluded pedestrian on the side of the road. Therefore, we run another set of experiments focusing on _global_ properties such as weather, time of day, and general 'busyness' of the scene. In Tab. 3, we see that the lidar is outperformed by the camera for determining light conditions. This seems quite expected, and if anything, it is somewhat surprising that lidar can do significantly better than random in these categories. Again, we see that joint retrieval consistently gets the best of both worlds and, in some cases, such as empty scenes, clearly outperforms both single-modality methods. **Separate prompts.** Inspired by the success of joint retrieval, and the complementary aspects of sensing in lidar and camera, we present some qualitative examples where different prompts are used for each modality. Thus, we can find scenes that are difficult to identify with a single modality. Fig. 3 shows retrieval examples where the image was prompted for glare, extreme blur, water on the lens, corruption, and lack of objects in the scene. At the same time, the lidar was prompted for nearby objects such as cars, trucks, and pedestrians. As seen in Fig. 3, the examples indicate \\begin{table} \\begin{tabular}{l l l} \\hline \\hline & Cls. & Obj. \\\\ \\hline PointCLIP [46] & \\(23.8\\%\\) & \\(34.3\\%\\) \\\\ CLIP2Point [19] & \\(32.1\\%\\) & \\(26.2\\%\\) \\\\ CLIP2Point w/ pre-train [19] & \\(29.8\\%\\) & \\(28.2\\%\\) \\\\ LidarCLIP & \\(\\mathbf{43.6}\\%\\) & \\(\\mathbf{62.1}\\%\\) \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 1: Zero-shot classification on ONCE _val_, top-1 accuracy averaged over classes/objects. that we can retrieve scenes where these objects are almost completely invisible in the image. Such samples are highly valuable both for the training and validation of autonomous driving systems. **Domain transfer.** For studying the robustness of LidarCLIP under domain shift, we evaluate its retrieval performance on a different dataset than it was trained on, namely, the nuScenes dataset [3]. Compared to ONCE, the nuScenes lidar sensor has fewer beams (32 vs 40), lower horizontal resolution, and different intensity characteristics. Further, nuScenes is collected in Boston and Singapore, while ONCE is collected in Chinese cities. The challenge of transferring between these datasets has been shown in unsupervised domain adaptation [28]. Similarly to the ONCE retrieval task, we generate ground truth using the object annotations. We compare the model trained on ONCE with a reference model trained directly on nuScenes in Tab. 6. As expected, the differences in sensor characteristics hamper the ability to perform lidar-only retrieval on the target dataset. Notably, we find that the joint method is robust to this effect, showing almost no domain transfer gap, and outperforming camera-only retrieval even with the ONCE-trained lidar encoder. **Ablations.** As described in Sec. 
3, we have two primary candidates for the training loss function. MSE encourages the model to embed the point cloud in the same position as the image, whereas cosine similarity only cares about matching the directions of the two embeddings. We compare the retrieval performance of two models trained using these losses in Tab. 4. To reduce training time, we use the ViT-B/32 CLIP version, rather than the heavier ViT-L/14. The results show that using MSE leads to significantly better retrieval, even though retrieval uses cosine similarity as the scoring function. We also perform ablations on the different approaches for joint retrieval described in Sec. 3.1. As shown in Tab. 5, the simple approach of averaging the camera and lidar features gives the best performance, and it is thus the approach used throughout the paper.

Table 2: Retrieval for scenes containing various object categories (rows: Image, Lidar, Joint). We report precision at ranks 10 and 100. Notice the joint retrieval is superior overall, but there are two categories (bus and cyclist), where using only lidar is advantageous.

Table 3: Retrieval of scenes with various global conditions (rows: Image, Lidar, Joint). We report precision at ranks 10 and 100.

\begin{table}
\begin{tabular}{l|c c}
\hline
Loss function & P@10 & P@100 \\
\hline
Mean squared error & **0.869** & **0.810** \\
Cosine similarity & 0.781 & 0.748 \\
\hline
\end{tabular}
\end{table}
Table 4: Ablation of the LidarCLIP training loss. We report precision at ranks 10 and 100, averaged over all prompts. Training with MSE leads to better retrieval performance.

\begin{table}
\begin{tabular}{l|c c|c c}
\hline
Train set & \multicolumn{2}{c|}{ONCE} & \multicolumn{2}{c}{nuScenes} \\
P@K & 10 & 100 & 10 & 100 \\
\hline
Image & 0.69 & 0.65 & 0.69 & 0.65 \\
Lidar & 0.46 & 0.40 & 0.79 & 0.64 \\
Joint & **0.74** & **0.69** & **0.81** & **0.70** \\
\hline
\end{tabular}
\end{table}
Table 6: nuScenes _val_ retrieval with different _train_ sets. Performance is averaged over classes. LidarCLIP supports the joint retrieval, even when trained and evaluated on separate datasets.

**Investigating lidar sensing capabilities.** Besides its usefulness for retrieval, LidarCLIP can offer more understanding of what concepts can be captured with a lidar sensor. While lidar data is often used in tasks such as object detection [44], panoptic/semantic segmentation [1, 21], and localization [9], research into capturing more abstract concepts with lidar data is limited and focused mainly on weather classification [17, 40]. However, we show that LidarCLIP can indeed capture complex scene concepts, as already demonstrated in Tab. 3. Inspired by this, we investigate the ability of LidarCLIP to extract color information, by retrieving scenes with "a \(\langle\)color\(\rangle\) car". As illustrated in Figure 4, while LidarCLIP struggles to capture specific colors accurately, it consistently differentiates between bright and dark colors. Such partial color information may be valuable for systems fusing lidar and camera information. Additionally, LidarCLIP learns meaningful features for overall scene lighting conditions, as illustrated in Figure 5. It can retrieve scenes based on the time of day, and is even able to distinguish scenes with many headlights from regular night scenes. Notably, all retrieved scenes are sparsely populated, indicating that LidarCLIP does not rely on biases associated with street congestion at different times of the day.

Figure 4: Top-5 retrieved examples from LidarCLIP for different colors. Note that images are only for visualization, point clouds were used for retrieval. LidarCLIP consistently differentiates black and white but struggles with specific colors.

Figure 5: Top-5 retrieved examples from LidarCLIP for different lighting conditions (image only for visualization).
LidarCLIP is surprisingly good at understanding the lighting of the scene, to the point of picking up on oncoming headlights with great accuracy. Figure 3: Example of retrieval using separate prompts for image and lidar. We query for images with blur, water spray, glare, corruption, and lack of objects, and for point clouds with nearby trucks, pedestrians, cars, etc. By combining the scores of these separate queries, we can find edge cases that are extremely valuable during the training/validation of a camera-based perception system. These valuable objects are highlighted in red, both in the image and point cloud. ### Generative applications To demonstrate the flexibility of LidarCLIP, we integrate it with two existing CLIP-based generative models. For lidar-to-text generation, we utilize an image captioning model called ClipCap [31], and for lidar-to-image generation, we use CLIP-guided Stable Diffusion1[36]. In both cases, we replace the expected text or image embeddings with our point cloud embedding. Footnote 1: [https://github.com/huggingface/diffusers/tree/main/examples/community](https://github.com/huggingface/diffusers/tree/main/examples/community) We evaluate image generation with the widely used Frechet Inception Distance (FID) [32]. For this, we randomly select \\(\\approx\\)6000 images from ONCE _val_ and generate images using CLIP-generated captions, CLIP features, or a combination of both. While FID is widely used, it has been shown to sometimes align poorly with human judgment [23]. To complement this evaluation, we use CLIP-FID, with a different CLIP model to avoid any bias. We also implement pix2pix [20] as a baseline for lidar-to-image generation. Notably, this setting not only evaluates the image generation performance but also serves as a proxy for assessing the captioning quality. Our results, presented in Tab. 7, demonstrate that incorporating captions significantly improves the photo-realism of the generated images. Interestingly, LidarCLIP with captions even outperforms image CLIP without captions, underscoring the effectiveness of our approach in generating high-quality images from point cloud data. Some qualitative results are shown in Fig. 6. We find that both generative tasks work fairly well out of the box. The generated images are not entirely realistic, partly due to a lack of tuning on our side, but there are clear similarities with the reference images. This demonstrates that our lidar embeddings can capture a surprising amount of detail. We hypothesize that guiding the diffusion process locally, by projecting regions of the point cloud into the image plane, would result in more realistic images. We hope that future work can investigate this avenue. Similarly, the captions can pick up the specifics of the scene. However, we notice that more 'generic' images result in captions with very low diversity, such as \"several cars driving down a street next to tall buildings\". This is likely an artifact of the fact that the captioning model was trained on COCO, which only contains a few automotive images and has a limited vocabulary. ## 5 Limitations For the training of LidarCLIP, a single automotive dataset was used. While ONCE [28] contains millions of image-lidar pairs, there are only around 1,000 densely sampled sequences, meaning that the dataset lacks diversity when compared to the 400 million text-image pairs used to train CLIP [33]. 
As an effect, LidarCLIP has mainly transferred CLIP's knowledge within an automotive setting and is not expected to work in a more general setting, such as indoors. Hence, an interesting future direction would be to train LidarCLIP on multiple datasets, with a variety of lidar sensors, scene conditions, and geographic locations. ## 6 Conclusions We propose LidarCLIP, which encodes lidar data into an existing text-image embedding space. Our method is trained using image-lidar pairs and enables multi-modal reasoning, connecting lidar to both text and images. While conceptually simple, LidarCLIP performs well over a range of tasks. For retrieval, we present a method for combining lidar and image features, outperforming their single-modality equivalents. Moreover, we use the joint retrieval method for finding challenging scenes under adverse sensor conditions. We also demonstrate that LidarCLIP enables several interesting applications off-the-shelf, including point cloud captioning and lidar-to-image generation. We hope LidarCLIP can inspire future work to dive deeper into connections between text and point cloud understanding, and explore tasks such as referring object detection and \\begin{table} \\begin{tabular}{l|c c c c c c c} \\hline & C (L) & C (I) & L & I & L+C & I+C & [20] \\\\ \\hline FID \\(\\downarrow\\) & 83.0 & 81.7 & 68.7 & 58.7 & 53.7 & 46.9 & 114.2 \\\\ CLIP-FID \\(\\downarrow\\) & 33.4 & 31.2 & 20.1 & 15.4 & 15.1 & 11.4 & 25.0 \\\\ \\hline \\end{tabular} \\end{table} Table 7: FID and CLIP-FID (ViT-B/32) for \\(\\approx\\)6k generated images from the ONCE _val_. L=lidar, I=image, C (L/I)=caption only, from L/I. Figure 6: Example of generative application of LidarCLIP. A point cloud is embedded into the CLIP space (left, image only for reference) and used to generate text (top) and images. The image generation can be guided with only the lidar embedding (middle) or with both the lidar embedding and the generated caption (right). open-set semantic segmentation. ## Acknowledgements This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation. Computational resources were provided by the Swedish National Infrastructure for Computing at C3SE and NSC, partially funded by the Swedish Research Council, grant agreement no. 2018-05973. ## References * [1] Mehmet Aygun, Aljosa Osep, Mark Weber, Maxim Maximov, Cyrill Stachniss, Jens Behley, and Laura Leal-Taixe. 4d panoptic lidar segmentation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 5527-5537, 2021. * [2] Alberto Baldrait, Marco Bertini, Tiberio Uricchio, and Alberto Del Bimbo. Effective conditioned and composed image retrieval combining clip-based features. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 21466-21474, 2022. * [3] Holger Caesar, Varun Bankiti, Alex H Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, and Oscar Beijbom. nuscenes: A multimodal dataset for autonomous driving. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pages 11621-11631, 2020. * [4] Fredrik Carlsson, Philipp Eisen, Faton Rekathati, and Magnus Sahlgren. Cross-lingual and multilingual clip. In _Proceedings of the Thirteenth Language Resources and Evaluation Conference_, pages 6848-6854, 2022. * [5] Dave Zhenyu Chen, Angel X Chang, and Matthias Niessner. 
Scanferfer: 3d object localization in rgb-d scans using natural language. In _Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XX_, pages 202-221. Springer, 2020. * [6] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In _International conference on machine learning_, pages 1597-1607. PMLR, 2020. * [7] Ali Cheraghian, Shafin Rahman, Townim F Chowdhury, Dylan Campbell, and Lars Petersson. Zero-shot learning on 3d point cloud objects and beyond. _International Journal of Computer Vision_, 130(10):2364-2384, 2022. * [8] Prafulla Dhariwal and Alexander Quinn Nichol. Diffusion models beat GANs on image synthesis. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan, editors, _Advances in Neural Information Processing Systems_, 2021. * [9] Mahdi Elhousni and Xinming Huang. A survey on 3d lidar localization for autonomous vehicles. In _2020 IEEE Intelligent Vehicles Symposium (IV)_, pages 1879-1884, 2020. * [10] Lue Fan, Ziqi Pang, Tianyuan Zhang, Yu-Xiong Wang, Hang Zhao, Feng Wang, Naiyan Wang, and Zhaoxiang Zhang. Embracing single stride 3d object detector with sparse transformer. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 8458-8468, 2022. * [11] Lue Fan, Ziqi Pang, Tianyuan Zhang, Yu-Xiong Wang, Hang Zhao, Feng Wang, Naiyan Wang, and Zhaoxiang Zhang. Embracing single stride 3d object detector with sparse transformer. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 8458-8468, 2022. * [12] M.A. Ganae, Minghui Hu, A.K. Malik, M. Tanveer, and P.N. Suganthan. Ensemble deep learning: A review. _Engineering Applications of Artificial Intelligence_, 115:105151, oct 2022. * [13] Chuanxing Geng, Sheng-jun Huang, and Songcan Chen. Recent advances in open set recognition: A survey. _IEEE transactions on pattern analysis and machine intelligence_, 43(10):3614-3631, 2020. * [14] M Rami Ghorab, Dong Zhou, Alexander O'connor, and Vincent Wade. Personalised information retrieval: survey and classification. _User Modeling and User-Adapted Interaction_, 23(4):381-443, 2013. * [15] Xiuye Gu, Tsung-Yi Lin, Weicheng Kuo, and Yin Cui. Open-vocabulary object detection via vision and language knowledge distillation. In _International Conference on Learning Representations_, 2022. * [16] Andrey Guzhov, Federico Raue, Jorn Hees, and Andreas Dengel. Audioclip: Extending clip to image, text and audio. In _IEEE International Conference on Acoustics, Speech and Signal Processing_, pages 976-980, 2022. * [17] Robin Heinzler, Philipp Schindler, Jurgen Seekircher, Werner Ritter, and Wilhelm Stork. Weather influence and classification with automotive lidar sensors. In _2019 IEEE intelligent vehicles symposium (IV)_, pages 1527-1534. IEEE, 2019. * [18] Mariya Hendriksen, Maurits Bleeker, Svitlana Vakulenko, Nanne van Noord, Ernst Kuiper, and Maarten de Rijke. Extending clip for category-to-image retrieval in e-commerce. In _European Conference on Information Retrieval_, pages 289-303. Springer, 2022. * [19] Tianyu Huang, Bowen Dong, Yunhan Yang, Xiaoshui Huang, Rynson WH Lau, Wanli Ouyang, and Wangmeng Zuo. Clip2point: Transfer clip to point cloud classification with image-depth pre-training. _arXiv preprint arXiv:2210.01055_, 2022. * [20] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. 
In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 1125-1134, 2017. * [21] Alok Jhaldiyal and Navendu Chaudhary. Semantic segmentation of 3d lidar data using deep learning: a review of projection-based methods. _Applied Intelligence_, pages 1-12, 2022. * [22] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Yoshua Bengio and Yann LeCun, editors, _3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings_, 2015. * [23] Tuomas Kynkanniemi, Tero Karras, Minka Aittala, Timo Aila, and Jaakko Lehtinen. The role of imagenet classes in frechet inception distance. In _Proceedings of International Conference on Learning Representations, ICLR_, 2023. * [24] Timo Luddecke and Alexander Ecker. Image segmentation using text and image prompts. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 7086-7096, 2022. * [25] Huaishao Luo, Lei Ji, Ming Zhong, Yang Chen, Wen Lei, Nan Duan, and Tianrui Li. Clip4clip: An empirical study of clip for end to end video clip retrieval and captioning. _Neurocomputing_, 508:293-304, 2022. * [26] Haoyu Ma, Handong Zhao, Zhe Lin, Ajinkya Kale, Zhangyang Wang, Tong Yu, Jiuxiang Gu, Sunav Choudhary, and Xiaohui Xie. Ei-clip: Entity-aware interventional contrastive learning for e-commerce cross-modal retrieval. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 18051-18061, 2022. * [27] Yiwei Ma, Guohai Xu, Xiaoshuai Sun, Ming Yan, Ji Zhang, and Rongrong Ji. X-clip: End-to-end multi-grained contrastive learning for video-text retrieval. In _Proceedings of the 30th ACM International Conference on Multimedia_, pages 638-647, 2022. * [28] Jiageng Mao, Minzhe Niu, Chenhan Jiang, hanxue liang, Jingheng Chen, Xiaodan Liang, Yamin Li, Chaoqiang Ye, Wei Zhang, Zhenguo Li, Jie Yu, Hang Xu, and Chunjing Xu. One million scenes for autonomous driving: ONCE dataset. In _Thirty-fjith Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1)_, 2021. * [29] Bjorn Michele, Alexandre Boulch, Gilles Puy, Maxime Bucher, and Renaud Marlet. Generative zero-shot learning for semantic segmentation of 3d point clouds. In _2021 International Conference on 3D Vision (3DV)_, pages 992-1002. IEEE, 2021. * [30] Sewon Min, Kenton Lee, Ming-Wei Chang, Kristina Toutanova, and Hannaneh Hajishirzi. Joint passage ranking for diverse multi-answer retrieval. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_, pages 6997-7008, Online and Punta Cana, Dominican Republic, Nov. 2021. Association for Computational Linguistics. * [31] Ron Mokady, Amir Hertz, and Amit H Bermano. Clipcap: Clip prefix for image captioning. _arXiv preprint arXiv:2111.09734_, 2021. * [32] Gaurav Parmar et al. On aliased resizing and surprising subtleties in GAN evaluation. In _CVPR_, 2022. * [33] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In _International Conference on Machine Learning_, pages 8748-8763. PMLR, 2021. * [34] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with CLIP latents. _arXiv preprint arXiv:2204.06125_, 2022. 
* [35] Mehwish Rehman, Muhammad Iqbal, Muhammad Sharif, and Mudassar Raza. Content based image retrieval: survey. _World Applied Sciences Journal_, 19(3):404-412, 2012. * [36] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bjorn Ommer. High-resolution image synthesis with latent diffusion models. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 10684-10695, 2022. * [37] David Rozenbergszki, Or Litany, and Angela Dai. Language-grounded indoor 3d semantic segmentation in the wild. In _Computer Vision-ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XXXIII_, pages 125-141. Springer, 2022. * [38] Aditya Sanghi, Hang Chu, Joseph G. Lambourne, Ye Wang, Chin-Yi Cheng, Marco Fumero, and Kamal Rahimi Makeshan. Clip-forge: Towards zero-shot text-to-shape generation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 18603-18613, June 2022. * [39] Chuan Tang, Xi Yang, Bojian Wu, Zhizhong Han, and Yi Chang. Part2word: Learning joint embedding of point clouds and text by matching parts to words. _arXiv preprint arXiv:2107.01872_, 2021. * [40] Jose Roberto Vargas Rivero, Thiemo Gerbich, Valentina Teiluf, Boris Buschardt, and Jia Chen. Weather classification using an automotive lidar sensor based on detections on asphalt and atmosphere. _Sensors_, 20(15), 2020. * [41] Can Wang, Menglei Chai, Mingming He, Dongdong Chen, and Jing Liao. Clip-nerf: Text-and-image driven manipulation of neural radiance fields. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 3835-3844, June 2022. * [42] Zhaoqing Wang, Yu Lu, Qiang Li, Xunqiang Tao, Yandong Guo, Mingming Gong, and Tongliang Liu. Cris: Clip-driven referring image segmentation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 11686-11695, 2022. * [43] Ho-Hsiang Wu, Prem Seetharaman, Kundan Kumar, and Juan Pablo Bello. Wav2clip: Learning robust audio representations from clip. In _IEEE International Conference on Acoustics, Speech and Signal Processing_, pages 4563-4567. IEEE, 2022. * [44] Yutian Wu, Yueyu Wang, Shuwei Zhang, and Harutoshi Ogai. Deep 3d object detection networks using lidar data: A review. _IEEE Sensors Journal_, 21(2):1152-1171, 2021. * [45] Hu Xu, Gargi Ghosh, Po-Yao Huang, Dmytro Okhonko, Armen Aghajanyan, Florian Metze, Luke Zettlemoyer, and Christoph Feichtenhofer. VideoCLIP: Contrastive pretraining for zero-shot video-text understanding. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)_, Online, Nov. 2021. Association for Computational Linguistics. * [46] Renrui Zhang, Ziyu Guo, Wei Zhang, Kunchang Li, Xupeng Miao, Bin Cui, Yu Qiao, Peng Gao, and Hongsheng Li. Pointclip: Point cloud understanding by CLIP. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 8552-8562, 2022. * [47] Lichen Zhao, Daigang Cai, Lu Sheng, and Dong Xu. 3dvq-transformer: Relation modeling for visual grounding on point clouds. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 2928-2937, 2021. * [48] Chong Zhou, Chen Change Loy, and Bo Dai. Extract free dense labels from clip. In _Proceedings of the European Conference on Computer Vision_, pages 696-712. Springer, 2022. A Supplementary material ### B Model details Tab. 
8 shows the hyperparameters for the SST encoder [11] used for embedding point clouds in the CLIP space. Window shape refers to the number of voxels in each window. Other hyperparameters are used as is from the original implementation. The encoded voxel features from SST are further pooled with a multi-head self-attention layer to extract a single feature vector. Specifically, the CLIP embedding is initialized as the mean of all features and then attends to said features. The pooling uses 8 attention heads, learned positional embeddings, and a feature dimension that matches the current CLIP model, meaning 768 for ViT-L/14 and 512 for ViT-B/32. ## Appendix C Training details All LidarCLIP models are trained on the union of the _train_ and _raw_large_ splits which are defined in the ONCE development kit [28]. The ViT-L/14 version is trained for 3 epochs, while ablations with ViT-B/32 used only 1 epoch for training. We use the Adam optimizer [22] with a base learning rate of \\(10^{-5}\\). The learning rate follows a one-cycle learning rate scheduler with a maximum learning rate of \\(10^{-3}\\) and cosine annealing and spends the first 10% of the training increasing the learning rate. The training was done on four NVIDIA A100s with a batch size of 128, requiring about 27 hours for 3 epochs. ## Appendix D Additional results ### Retrieval In Tab. 9 we show class-wise performance when querying specifically for objects close to the ego-vehicle, as the main manuscript only displayed the average result. We find that lidar retrieval outperforms image retrieval for most classes, especially on Cyclists. **Separate prompts.** Fig. 7 shows additional results for the joint retrieval with separate prompts for the image and lidar encoder. Again, these scenes are close to impossible to retrieve using a single modality. **nuScenes qualitative results.** In Fig. 11, we show qualitative retrieval results on nuScenes using a ONCE-trained lidar encoder. As expected from the quantitative results, LidarCLIP does not generalize well overall. However, its performance is decent for distinct, large, objects, and it has some notion of distance. ### Zero-shot scene classification Here, we provide additional examples of zero-shot classification using LidarCLIP. However, rather than object-level classification, we do zero-shot classification on entire scenes. We compare the performance of image-only, lidar-only, and the joint approach. Fig. 8 shows a diverse set of samples from the validation and test set. In many cases, LidarCLIP and image-based CLIP give similar results, highlighting the transfer of knowledge to the lidar domain. In some cases, however, the two models give contradictory classifications. For instance, LidarCLIP misclassifies the cyclist as a pedestrian, potentially due to the upright position and fewer points for the bike than the person. While their disagreement influences the joint method, \"cyclist\" remains the dominating class. Another interesting example is the image of the dog. None of the models manage to confidently classify the presence of an animal in the scene. This also highlights a shortcoming with our approach, where the image encoder's capacity may limit what the lidar encoder can learn. This can be circumvented to some extent by using even larger datasets, but a more effective approach could be local supervision. For instance, using CLIP features on a patch level to supervise frustums of voxel features, thus improving the understanding of fine-grained details. 
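The zero-shot comparisons above follow CLIP's standard prompt-and-softmax recipe. The sketch below shows one way to score a scene embedding, which can be the image embedding, the lidar embedding, or their average for the joint variant, against a set of class prompts. It assumes OpenAI's `clip` package for the text side; the prompt template and the logit scale are illustrative rather than the exact evaluation settings.

```python
import torch
import torch.nn.functional as F
import clip  # OpenAI CLIP package

@torch.no_grad()
def zero_shot_scene_probs(clip_model, scene_embs, class_names, device="cuda"):
    """Classify scene embeddings living in the CLIP space against text prompts.

    scene_embs: (N, D) image, lidar, or averaged embeddings
    Returns an (N, C) matrix of class probabilities.
    """
    prompts = [f"a photo of a scene with a {c}" for c in class_names]  # illustrative template
    tokens = clip.tokenize(prompts).to(device)
    text_embs = clip_model.encode_text(tokens).float()

    scene_embs = F.normalize(scene_embs.float(), dim=-1)
    text_embs = F.normalize(text_embs, dim=-1)

    logits = 100.0 * scene_embs @ text_embs.t()   # 100 is roughly CLIP's learned logit scale
    return logits.softmax(dim=-1)

# Joint classification simply averages the image and lidar embeddings before scoring:
# probs = zero_shot_scene_probs(model, (image_embs + lidar_embs) / 2, class_names)
```

Averaging at the feature level, as shown in the comment, mirrors the joint retrieval strategy; averaging the two softmax outputs instead would be an equally simple alternative.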
\\begin{table} \\begin{tabular}{l l l} \\hline \\hline Parameter & Value & Unit \\\\ \\hline Voxel size & (0.5, 0.5, 6) & \\(\\mathrm{m}\\) \\\\ Window shape & (12, 12, 1) & - \\\\ Point cloud range & (0, -20, -2, 40, 20, 4) & \\(\\mathrm{m}\\) \\\\ No. encoder layers & 4 & - \\\\ \\(d\\) model & 128 & - \\\\ \\(d\\) feedforward & 256 & - \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 8: Hyperparameters for SST encoder. Figure 7: Example of retrieval using separate prompts for image and lidar. We query for images with blur, water spray, glare, corruption, and lack of objects, and for point clouds with nearby trucks, pedestrians, cars, etc. By combining the scores of these separate queries, we can find edge cases that are extremely valuable during the training/validation of a camera-based perception system. These valuable objects are highlighted in red, both in the image and point cloud. In Fig. 9 we highlight the importance, and problem, of including reasonable classes for zero-shot classification. The example scene contains a three-wheeler driving down the street. For the left sub-figure, none of the text prompts contains the word three-wheeler. Consequently, the model is confused between car, truck, cyclist, and pedestrian as none of these are perfect for describing the scene. When including \"three-wheeler\" as a separate class in the right sub-figure, the model accurately classifies the main subject of the image. To avoid such issues, we would like to create a class for unclassified or unknown objects, such that the model can express that none of the provided prompts is a good fit. Optimally, the model should be able to express what this class is, either by providing a caption or retrieving similar scenes, which can guide a human in naming and including additional classes. We hope that future work, potentially inspired by open-set recognition [13], can study this more closely. ### LidarCLIP for lidar sensing capabilities In Fig. 10, we show additional examples of retrieved scenes for various colors. Note that images are only shown for reference. Similar to results in Sec. 4.2, LidarCLIP has no understanding of distinct colors but can discriminate between dark and bright. For instance, \"a gray car\" returns dark grey cars, while none of the cars for \"a yellow car\" are yellow. ### Lidar to image and text We provide additional examples of generative applications of LidarCLIP in Fig. 12. These are randomly picked scenes with no tuning of the generative process. The latter is especially important for images, where small changes in guidance scale2 and number of diffusion steps have a massive impact on the quality of the generated images. Furthermore, to isolate the impact of our lidar embedding, we use the same parameters and random seeds for the different scenes. This leads to similar large-scale structures for images with the same seed, which is especially apparent in the rightmost column. Footnote 2: The guidance scale is a parameter controlling how much the image generation should be guided by CLIP, which may stand in contrast to photorealism. In most of these cases, the generated scene captures at least some key aspect of the embedded scene. In the first row, all generated scenes show an empty road with many road paintings. There is also a tendency to generate red lights. Interestingly, several images show localized blurry artifacts, similar to the raindrops in the source image. 
The second row shows very little similarity with the embedded scene, only picking up on minor details like umbrellas and dividers. In the third row, the focus is clearly on the bus, which is present in the caption and three out of four generated images. In the fourth row, the storefronts are the main subject, but the generated images do not contain any cars, unlike the caption. In the final scene, we see that the model picks up the highway arch in three out of the four generated images, but the caption hallucinates a red stoplight, which is not present in the source. Figure 8: Qualitative zero-shot classification on the ONCE validation/test set. Figure 10: Top-5 retrieved examples from LidarCLIP for different colors. Note that we show images only for visualization, point clouds were used for retrieval. Figure 9: Example of zero-shot classification that demonstrates the importance of picking good class prompts. When excluding the most appropriate class, “three-wheeler”, the model is highly confused between the remaining classes. Figure 11: Top-5 retrieved examples when transferring LidarCLIP to nuScenes. LidarCLIP generalizes decently for distinct, large, objects, and even maintains some concept of distances. However, smaller objects and less uniform objects, like pedestrians and vegetation, do not transfer well. Figure 12: Example of generative application of LidarCLIP. A point cloud is embedded into the clip space (left, image only for reference) and used to generate text (top) and images (right). All four images are only generated with guidance from the lidar embedding, the caption was not used for guidance.
Research connecting text and images has recently seen several breakthroughs, with models like CLIP, DALL-E 2, and Stable Diffusion. However, the connection between text and other visual modalities, such as lidar data, has received less attention, prohibited by the lack of text-lidar datasets. In this work, we propose LidarCLIP, a mapping from automotive point clouds to a pre-existing CLIP embedding space. Using image-lidar pairs, we supervise a point cloud encoder with the image CLIP embeddings, effectively relating text and lidar data with the image domain as an intermediary. We show the effectiveness of LidarCLIP by demonstrating that lidar-based retrieval is generally on par with image-based retrieval, but with complementary strengths and weaknesses. By combining image and lidar features, we improve upon both single-modality methods and enable a targeted search for challenging detection scenarios under adverse sensor conditions. We also explore zero-shot classification and show that LidarCLIP outperforms existing attempts to use CLIP for point clouds by a large margin. Finally, we leverage our compatibility with CLIP to explore a range of applications, such as point cloud captioning and lidar-to-image generation, without any additional training. Code and pre-trained models are available at github.com/atomderski/lidarclip.
# ELiTe: Efficient Image-to-LiDAR Knowledge Transfer for Semantic Segmentation Zhibo Zhang _School of Computer Science_ _Fudan University_ Shanghai, China [email protected] Ximing Yang _School of Computer Science_ _Fudan University_ Shanghai, China [email protected] Weizhong Zhang _School of Computer Science_ _Fudan University_ Shanghai, China [email protected] Cheng Jin* _School of Computer Science_ _Fudan University_ Shanghai, China [email protected] ## I Introduction LiDAR semantic segmentation in autonomous driving relies on recent deep learning advancements [1]. Cross-modal knowledge transfer [2, 3, 4] enhances representation learning by leveraging semantic information from LiDAR and other modalities. However, the performance gap between teacher and student models in cross-modal transfer limits effectiveness due to potential weaknesses in the teacher model. One reason is the data characteristic of the car-mounted camera images obtained during LiDAR data collection [5]. These images usually depict the scene from a single viewpoint, resulting in high repeatability, limited diversity, and a monotonous style, as shown in Figure 1. Additionally, there are often significant disparities in terms of dataset sizes. For instance, the SAM [6] is trained on a dataset consisting of approximately 11M images, while the widely used LiDAR dataset KITTI [7] contains only about 19K image frames. Therefore, compared to models trained on many open images, training strong teacher models on car camera images becomes challenging because they are difficult to obtain comprehensive representations. The weak supervision in training teacher models is another issue. Current methods [3] use sparse mapping of images to LiDAR through perspective projection, creating 2D ground truth. This sparse mapping leads to weak supervision for teacher models, as seen in Figure 2. The lack of dense and precise semantic labels makes it challenging for the teacher model to learn semantic representations effectively. In addition, the promising performance achieved by cross-modal knowledge transfer based methods [3, 8] are always accompanied by a notable increase in the model size over the single-modal LiDAR semantic segmentation approaches [8, 9, 10], in which modern neural networks with massive parameters have already been used. Therefore, the efficient optimization of large-scale parameters during the training process naturally emerges as a significant challenge. To overcome the mentioned limitations, we introduce an Efficient image-to-LiDAR knowledge **T**ransfer (ELiTe) paradigm for semantic segmentation. ELiTe incorporates Patch-to-Point Multi-Stage Knowledge Distillation (PPM-SKD), a key component facilitating knowledge transfer from a Vision Foundation Model (VFM), specifically the Segment Anything Model (SAM) [6]. ELiTe employs Parameter-Efficient Fine-Tuning (PEFT) [11] to expedite large-scale model training of VFM and enhance the teacher model. To address weak supervision caused by sparse, incomplete, and inaccurate labels, ELiTe introduces an efficient semantic pseudo-label generation strategy, SAM-based Pseudo-Label Generation (SAM-PLG). SAM-PLG effectively converts sparse labels into dense and accurate ones, as depicted in Figure 2. Notably, SAM-PLG exhibits appealing features: 1. At the instance level, it compensates for missing sparse Fig. 1: Car Camera Images and Open World Images. mask labels on the car window area; 2. 
At the semantic level, it produces clearer boundaries compared to sparse labels that exhibit inaccurate overlap between the tree and the building behind it; 3. Dense labels provide richer supervisory information, mitigating the impact of inaccurate ground truth annotations derived from point-to-pixel correspondence. The resulting dense pseudo-label transforms weak supervision into strong supervision, reducing training difficulty and enhancing the teacher model's semantic representation learning. ELiTe significantly improves LiDAR semantic segmentation using multi-modal data, particularly LiDAR point clouds. It achieves impressive performance even with a lightweight model for real-time inference, showcasing strong competitiveness on the SemanticKITTI benchmark. Our main contributions can be summarized as follows: * Introducing ELiTe: an Effective Image-to-LiDAR Knowledge Transfer paradigm using PPMSKD and PEFT. This enhances the teacher model's encoding ability, addressing performance disparity in existing studies between teacher and student models. * Our strategy, SAM-PLG, effectively transforms low-quality sparse labels into high-quality dense labels, mitigating the weak supervision issue and improving the teacher model's encoder capability. * ELiTe demonstrates state-of-the-art results on the SemanticKITTI benchmark, achieving real-time inference efficiency with significantly fewer parameters. ## II Related Work ### _LiDAR Point Cloud Semantic Segmentation_ Recent research emphasizes **voxel-based methods** for efficiency and effectiveness. For example, Cylinder3D [9] uses cylindrical voxels and an asymmetrical network for better performance. **Multi-representation methods** are also trending, combining points, projection images, and voxels, and fusing features across branches. SPVNAS [12] deploys point-voxel fusion with point-wise MLPs and NAS for architecture optimization. RPVNet [10] fuses range-point-voxel representations. Point-Voxel-KD [8] transfers knowledge from point and voxel levels to distill from a larger teacher to a compact student network. Notably, these methods consider only sparse LiDAR data, omitting appearance and texture cues from camera images. **Multi-modal fusion methods** have emerged to leverage the strengths of both cameras and LiDAR, combining information from these complementary sensors. RGBAL [13] maps RGB images using a polar-grid representation and fuses them at different levels. PMF [14] collaboratively fuses data in camera coordinates. However, these methods require multi-sensor inputs during training and inference, posing computational challenges. Obtaining paired multi-modal data is also impractical in real-world scenarios. Most methods aim for models with growing parameters, raising training costs. Ongoing efforts focus on compressing inference model parameters, but achieving practical real-time inference speed remains challenging. ### _2D-to-3D Transfer Learning_ Transfer learning aims to leverage knowledge from data-rich domains to assist learning in data-scarce domains. The idea of distilling knowledge from 2D to 3D has been widely explored in the 3D community. While 2DPASS [3] enhances semantic segmentation performance through 2D prior-related knowledge distillation, its teacher model is weak, and knowledge transfer is inefficient. In general vision tasks, pretraining significantly benefits various downstream tasks. 
Image2Point [2] transfers knowledge by replicating or inflating weights pre-trained on ImageNet [15] to a similar architecture of 3D models, efficiently adapting the domain by fine-tuning only specific layers. PointCLIP [16] transfers representations learned from 2D images to different 3D point clouds but limits knowledge transfer to simple tasks like object classification due to reliance on point cloud projection for modal alignment. Our work prioritizes real-time, large-scale LiDAR segmentation with minimal parameters and high component decoupling for efficient handling of complex challenges. Fig. 2: Existed one-stage Point-to-Pixel Correspondence based Ground Truth Generation (**PPC-GTG**) and our two-stage SAM-based Pseudo-Label Generation (**SAM-PLG**). The sparse mask labels generated by one-stage PPC-GTG are incomplete, inaccurate, and low-quality. The dense mask labels generated by two-stage SAM-PLG are more accurate, complete, and high quality. ## III Method In this section, we outline our proposed method, ELiTe, including the teacher-student model, cross-modal knowledge distillation (PPMSKD), VFM fine-tuning (PEFT), and pseudo-label generation (SAM-PLG). ### _Framework of ELiTe_ This paper proposes ELiTe, an efficient image-to-LiDAR knowledge transfer paradigm for semantic segmentation. ELiTe leverages rich color and texture information from image knowledge. The workflow includes three main components: the teacher network, the student network, and the distillation network. The student network uses PPMSKD in the distillation network to acquire image domain knowledge from the teacher network. The teacher network undergoes domain-adaptive fine-tuning with PEFT and is supervised by pseudo-labels from SAM-PLG. During inference, both the teacher and distillation networks are discarded to avoid additional computational burden in practical applications. ### _VFM Teacher and Lightweight Student_ In Figure 3, both the VFM teacher network and the lightweight student network are utilized to encode both the image and the LiDAR point cloud. The teacher network utilizes the Vision Transformer (ViT) encoder [17], trained with SAM [6], which leverages attention mechanisms. Attention [18], proven effective across various vision tasks [19, 23, 20, 21, 22, 19, 20, 22, 23], is a key component of ViT. Within these networks, \\(L\\) feature maps are derived from various stages to extract 2D features \\(\\{F_{l}^{Patch}\\}_{l=1}^{L}\\) and 3D features \\(\\{F_{l}^{Point}\\}_{l=1}^{L}\\). Single-stage features are utilized for knowledge distillation, whereas the concatenation of multi-stage features contributes to obtaining the final prediction scores. ### _Patch-to-Point Multi-Stage Knowledge Distillation_ Our PPMSKD is a cross-modal knowledge transfer technique and its patch-to-point distillation scheme enables us to transfer rich knowledge from SAM trained on massive open-world images to LiDAR semantic segmentation. As shown in Figure 3, it comprises \\(L\\) individual single-stage decoders used for both the teacher and student networks. These decoders restore downsampled feature maps to their original sizes and obtain prediction scores for knowledge distillation. In LiDAR networks, **point decoders** are standard classifiers. \\(L\\) point-level single-stage features are upsampled to the original point cloud size and input into single-stage classifiers to obtain prediction scores \\(\\{S_{l}^{Point}\\}_{l=1}^{L}\\). 
Within the teacher network based on the ViT architecture, the patch-level image features are extracted from the output of \\(L\\) global attention blocks. In **patch decoders**, patch-level features are initially upsampled and feature dimensionality reduced through an upscaling layer, and then further upsampled to the original image size through interpolation. Finally, through pixel-to-point mapping, we can obtain pixels corresponding to LiDAR point clouds, thereby acquiring pixel-level predictions denoted as \\(\\{S_{l}^{Patch}\\}_{l=1}^{L}\\), which correspond one-to-one with point-level features. Key to knowledge transfer, PPMSKD enhances 3D representation at each stage through **multi-stage knowledge distillation** with auxiliary VFM priors. PPMSKD aligns image features with point cloud features, ensuring improved point cloud features. Vanilla KL divergence is the core distillation loss (\\(L_{KD}\\)), with multi-stage distillation losses as follows: \\[L_{KD}=\\sum_{l=1}^{L}D_{KL}(S_{l}^{Patch}||S_{l}^{Point}).\\] ### _Parameter-Efficient Fine-Tuning for VFM Teacher_ In fine-tuning large models, common choices include Full Fine-Tuning, Linear Probing, Robust Fine-Tuning [29, 30], and PEFT [31, 31]. Recently, PEFT in NLP efficiently leverages pre-trained models for downstream applications, reducing the need for extensive fine-tuning and significantly cutting computational and storage demands. Notably, PEFT achieves comparable performance to full fine-tuning, optimizing cost-efficiency. Fig. 3: Framework Overview. It comprises three main components: VFM teacher, lightweight student, and Patch-to-Point Multi-Stage Knowledge Distillation(PPMSKD) networks. The teacher and student networks process image and LiDAR inputs, extracting multi-stage features. In the PPMSKD network, the knowledge from the teacher is transferred to the student. The VFM teacher network undergoes domain-adaptive fine-tuning via PEFT and is supervised by pseudo-labels generated by SAM-PLG. In this figure, TB(WA) and TB(GA) denote Transformer Blocks employing window and global attention, respectively, ”Patch Dec.” signifies the Patch Decoder, and \\(\\odot\\) represents concatenation. Solid lines delineate the data flow, while dashed lines represent the backpropagation supervisory signal. Given a pre-trained weight matrix \\(W(0)\\), LoRA [11], a PEFT technique compatible with ViT, parameterizes \\(\\Delta\\) as a low-rank matrix by the product of two much smaller matrices: \\[W=W^{(0)}+\\Delta=W^{(0)}+BA\\] where \\(W^{(0)},\\Delta\\in R^{d_{1}\\times d_{2}},A\\in R^{r\\times d_{2}}\\) and \\(B\\in R^{d_{1}\\times r}\\) with \\(r\\ll\\{d_{1},d_{2}\\}\\). During fine-tuning, only \\(A\\) and \\(B\\) are updated. We chose to employ AdaLoRA [31] for facilitating domain adaptation within the context of VFM teacher. AdaLoRA subsequently allocates LoRA parameter budgets adaptively based on importance scores. This dynamically adaptive allocation strategy significantly improves both model performance and parameter efficiency, effectively reinforcing the VFM teacher's capacity for domain adaptation. ### _SAM-based Pseudo-Label Generation_ Inspired by SAM [6] and its promptable segmentation using sparse points and low-resolution dense masks, we introduce SAM-PLG for sparse-label image segmentation. This innovative strategy addresses the challenge of weak supervision. 
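The distillation objective \(L_{KD}\) and the LoRA update above map directly onto a few lines of PyTorch. The sketch below is illustrative rather than the released implementation: it shows (i) a LoRA-reparameterized linear layer in which the pre-trained weight is frozen and only the low-rank factors \(A\) and \(B\) are trained, and (ii) the multi-stage patch-to-point KL loss, assuming the per-stage logits have already been gathered through the pixel-to-point mapping so that corresponding rows refer to the same LiDAR point.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRALinear(nn.Module):
    """W = W0 + B A: the pre-trained weight W0 is frozen, only A and B are trained."""
    def __init__(self, base: nn.Linear, r: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                   # freeze the pre-trained weight
        d_out, d_in = base.weight.shape
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, r))  # zero init: training starts from the pre-trained behavior

    def forward(self, x):
        return self.base(x) + F.linear(x, self.B @ self.A)

def ppmskd_loss(patch_scores, point_scores):
    """L_KD = sum_l KL( S_l^Patch || S_l^Point ), one (N, C) logit tensor per stage."""
    loss = 0.0
    for s_patch, s_point in zip(patch_scores, point_scores):
        teacher = F.softmax(s_patch, dim=-1)           # patch-level (teacher) distribution
        student_log = F.log_softmax(s_point, dim=-1)   # point-level (student) log-probabilities
        loss = loss + F.kl_div(student_log, teacher, reduction="batchmean")
    return loss
```

AdaLoRA differs only in how the rank budget is reallocated across layers during training according to importance scores; the layer itself keeps the same form.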
In the SAM predictor framework, each inference input is defined as a pair \\((P_{i},M_{i})=((X,Y),(H_{lr}\\times W_{lr}))\\), with the corresponding output comprising three valid masks. Here, \\(P_{i}\\) represents the pixel coordinates of the prompt query, and \\(M_{i}\\) denotes a low-resolution dense embedding mask at a quarter of the original resolution. In the dense mask, a positive value indicates the presence of a foreground pixel for the instance prompted by pixel \\(P_{i}\\), while a negative value signifies a background pixel. The absolute value of each pixel's numerical value reflects the confidence level associated with its classification as foreground or background. We initially generate sparse labels based on the correspondence between points and pixels. For efficiency, these labels are downsampled to 1/4 resolution and employed as prompt points after resizing to the shape \\(1024\\times 1024\\) specified by SAM. Furthermore, in pursuit of more precise masks, we generate sparse masks based on semantic and instance labels, providing supplementary guiding information. Moving on, we feed the image and output to SAM and remove low-quality masks based on the computed stability scores. Following SAM, mask boxes are computed, and box-based non-maximal suppression (NMS) is implemented to mitigate excessive mask overlaps. For each valid mask, we assign semantic and instance labels by associating it with the predominant ground truth category contained within. Finally, following the ascending order of predicted IoU scores, masks are progressively overlapped to yield the final high-quality pseudo-labels. ## IV Experiments ### _Dataset, Metrics, and Implementation Details_ We evaluate our approach using the large-scale outdoor benchmark SemanticKITTI [5]. This dataset furnishes comprehensive semantic annotations for scans in sequences 00-10 of the KITTI dataset [7]. In line with official guidelines, sequence 08 constitutes the validation split, with the others forming the training split. For the test set, SemanticKITTI employs sequences 11-21 from the KITTI dataset, keeping labels concealed for blind online testing. Our primary evaluation metric is the mean Intersection over Union (mIoU), which calculates the average IoU across all classes. We also assess the method's practicality by measuring its inference speed in frames per second (FPS). Training parameters are used as an indirect metric of training efficiency. We employ the cross-entropy and Lovasz losses as [3] for both image and LiDAR semantic segmentation. We use a SAM pre-trained ViT base [17] encoder as the VFM Teacher network, and the features output from each global attention block are selected for knowledge transfer. ### _Comparison Results_ Table I presents the performance under the single scan configuration. Compared to our baseline 2DPASS-base, ELiTe brings significant improvements (71.4 vs. 67.7). ELiTe enables the use of a student model with only 1.9M parameters for real-time inference (24Hz) and highly competitive performance (71.4). In comparison with real-time inference methods, our performance (71.4 vs. 71.2), speed (24Hz vs. 20Hz), and training efficiency are state-of-the-art, demonstrating the practicality and balance of ELiTe. Notably, our speed significantly surpasses other real-time inference methods (24Hz vs. 12-20Hz). Additionally, compared to non-real-time methods with a large number of parameters, we outperform them in speed (24Hz vs. 7Hz) and training efficiency (6.8M vs. 
74.3M), remaining highly competitive in performance. ### _Ablation Studies_ Table II summarizes ablation studies on the SemanticKITTI validation set. The student network baseline resulted in a relatively lower mIoU of 62.6 (_a_). The use of vanilla knowledge distillation alone yielded marginal improvement, with a mere 0.9 increase in mIoU (_b_ vs. _a_). PPMSKD introduced more efficient knowledge transfer, contributing an additional 2.0 improvement (_c_ vs. _a_). Additionally, for memory conservation, certain patch-level padding features were discarded, causing a minor performance dip of 0.4 (_d_ vs. _c_). In the subsequent comparisons, we explore effective Patch Encoder forms to optimize PPMSKD. Finally, by employing a feature dimension reduction and downsampling approach similar to SAM's decoder, using an upscale-interpolate method, we achieved superior results (_g_ vs. _c, e, f_). This outcome was attained while adhering to memory optimization considerations (_g_ vs. _d_). Due to memory and computational constraints, we employed three fine-tuning methods rather than fully training the teacher model. These methods improved model performance (_h, i, j_ vs. _d_), including partial teacher layer fine-tuning, LoRA, and AdaLoRA. Importantly, it is observed that LoRA-based approaches yielded the most favorable results (_i, j_ vs. _h_). In the end, with optimal PPMSKD and VFM training, we achieved a notable score of 66.2 (_k_ vs. _g, j_). Additionally, the integration of pseudo-labels SAM-PLG generated further enhanced performance (_l_ vs. _k_), affirming the positive impact of strong teacher supervision on the student's progress. ### _Comprehensive Analysis_ In evaluating our approach to the teacher network, assessments were conducted using both pure 2D and fused-modal 2DPASS [3], along with the ELiTe methods. Results in Table III show that while fused teachers outperformed pure 2D counterparts (64.6 vs. 21.5), their actual benefit to the student network was limited (65.4 vs. 65.2). In the context of 2DPASS, due to the complexity of extracting effective 2D features, fused teacher features gradually resembled those of the LiDAR student network after attention-based weighting. This inadvertently led to a self-distillation loop, posing a challenge for the student to imitate image-specific features. Conversely, within the ELiTe framework, although the performance of the optimal VFM teacher still notably lags behind that of the student (34.0 vs. 64.6), the student's performance shows improvement compared to 2DPASS(66.2 vs. 65.4). Thiseffectively underscores a concept: **the student needs not only a brilliant teacher but also one who has unique and new knowledge.** Furthermore, by incorporating fused teachers into ELiTe, its results are similar to vanilla 2DPASS (64.9 & 65.5 vs. 64.6 & 65.4), this principle is reiterated, underscoring that the top-performing teacher does not necessarily guarantee the best student performance. ## V Conclusion In this study, we present ELiTe, an Efficient Image-to-LiDAR Knowledge Transfer paradigm. ELiTe effectively addresses weak teacher model challenges in cross-modal knowledge transfer for LiDAR semantic segmentation with significantly few parameters. It combines VFM Teacher, PPMSKD, PEFT, and SAM-PLG methods to optimize teacher model encoding, promote domain adaptation, facilitate knowledge transfer to a lightweight student model, and employ a pioneering pseudo-label generation strategy to enhance semantic representations' robustness. 
Through comprehensive SemanticKITTI benchmark evaluation, ELiTe demonstrates superior real-time inference performance, showcasing the potential for enhancing LiDAR-based perception across applications. In this appendix, we present additional details of implementations, experiments, and visualizations for a better understanding. ### _Sam-Plg_ We demonstrate the specific automatic generation process in Algorithm Stage 1 and 2. In Stage 1, we initially generate sparse labels based on the correspondence between points and pixels. For efficiency, these labels are downsampled to 1/4 resolution and employed as prompt points after resizing to the shape specified by SAM (\\(1024{\\times}1024\\)). Furthermore, in pursuit of more precise masks, we generate sparse masks based on semantic and instance labels, providing supplementary guiding information. Moving on to Stage 2, we feed the image and output of Stage 1 to SAM and remove low-quality masks based on the computed stability scores. Following SAM, mask boxes are computed, and box-based non-maximal suppression (NMS) is implemented to mitigate excessive mask overlaps. For each valid mask, we assign semantic and instance labels by associating it with the predominant ground truth category contained within. Finally, following the ascending order of predicted IoU scores, masks are progressively overlapped to yield the final high-quality pseudo-labels. ``` 0: Sparse semantic level label \\(L_{s}\\in R^{H\\times W}\\), instance level label \\(L_{i}\\in R^{H\\times W}\\) 0: The low resolution size \\([H_{lr},W_{lr}]\\), high and low confidence score \\(\\theta_{h},\\theta_{l}\\) for dense mask 0: Point prompts \\(P\\in R^{N_{lr}\\times 2}\\), low resolution mask input \\(M\\in R^{N_{lr}\\times H_{lr}\\times W_{lr}}\\) 1: Let \\(L_{s}^{lr},L_{s}^{lr}\\gets downsampling(L_{s},L_{i})\\); 2: Let \\(P\\gets where(L_{s}^{lr}\\) is not ignore label); 3: Let \\(N_{lr}\\gets len(P)\\); 4: Let \\(M\\gets zeros([N_{lr},H_{lr},W_{lr}])\\); 5:for\\(i\\) in \\(range(N_{lr})\\)do 6:if\\(L_{s}^{lr}[P[i]]\\) is not ignore label then 7:\\(M[i][L_{s}^{lr}[P[i]]\ eq L_{s}^{lr}[P[i]]]\\leftarrow-\\theta_{h}\\); 8:if\\(L_{s}^{lr}[P[i]]\\) is valid then 9:\\(M[i][L_{i}^{lr}[P]=L_{i}^{lr}[P[i]]]\\leftarrow\\theta_{h}\\); 10:else 11:\\(M[i][L_{s}^{lr}[P]=L_{s}^{lr}[P[i]]]\\leftarrow\\theta_{l}\\); 12:endif 13:endif 14:endfor 15:return\\(P\\), \\(M\\) ``` **Algorithm 1** SAM-based Pseudo-Label Generation (Stg. 1) ### _Implementation Details_ #### Iv-B1 Lightweight Student The LiDAR student encoder is a modified SPVCNN [12] with few parameters, whose hidden dimensions are 64 to speed up the network and the number of layers is 4. 
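The efficiency figures quoted for this lightweight student, its parameter count and its frames-per-second rate, follow the measurement protocol given in the experiment details below: a single GPU, no batching, and `torch.cuda.synchronize()` around the timed region. The following is a generic sketch of that protocol; the warm-up length and the sample list are placeholders rather than the exact evaluation script.

```python
import time
import torch

@torch.no_grad()
def measure_fps(model, samples, warmup=10):
    """Average single-sample inference speed; `samples` are assumed to be
    preprocessed inputs that already live on the GPU (no batching)."""
    model.eval()
    for s in samples[:warmup]:        # warm-up excludes one-off CUDA initialization cost
        model(s)
    torch.cuda.synchronize()          # wait for queued kernels before starting the clock
    start = time.time()
    for s in samples:
        model(s)
        torch.cuda.synchronize()      # the GPU must finish before the frame is counted
    return len(samples) / (time.time() - start)

def count_parameters(model):
    # Same quantity as the total-parameter expression given in the experiment details below.
    return sum(p.nelement() for p in model.parameters())
```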
**Input**: Car camera image \\(I\\in R^{H\\times W\\timesrgb}\\), point prompts \\(P\\), low resolution mask input \\(M\\) **Parameter**: The SAM model \\(SAM\\), the stability score filtering threshold \\(\\theta_{stability}\\), the box IoU cutoff used by non-maximal suppression \\(\\theta_{box\\_nms}\\) **Output**: Semantic pseudo-label \\(PL_{s}\\in R^{H\\times W}\\), instance pseudo-label \\(PL_{i}\\in R^{H\\times W}\\) ``` 1: Let \\(data\\leftarrow\\{\\}\\); 2:for\\(i\\) in \\(range(N_{lr})\\)do 3:\\(masks,iou\\_predictions\\gets SAM(I,P[i],M[i])\\); 4:\\(S_{stability}\\gets calculate\\_stability\\_score(masks)\\); 5:for\\(mask\\) in \\(masks\\)do 6:if\\(S_{stability}>\\theta_{stability}\\)then 7:\\(box\\gets mask\\_to\\_box(mask)\\) 8:\\(data+=\\{mask,iou\\_prediction,box\\}\\) 9:endif 10:endfor 11:endfor 12:endfor 13:\\(data\\gets non\\_maximum\\_suppression(data,\\theta_{box\\_nms})\\) 14:Sort \\(data\\) in ascending order by \\(iou\\_prediction\\) 15: Let \\(PL_{s},PL_{i}\\gets zeros([H,W]),zeros([H,W])\\); 16:for\\(d\\) in \\(data\\)do 17:\\(PL_{s}[d[mask]]\\gets most\\_common(L_{s}[d[mask]])\\); 18:\\(PL_{i}[d[mask]]\\gets most\\_common(L_{i}[d[mask]])\\); 19:endfor 20:return\\(PL_{s},PL_{i}\\) ``` **Algorithm 2** SAM-based Pseudo-Label Generation (Stg. 2) #### Vi-B2 Loss Function We employ the cross-entropy and Lovasz losses as [3] for both 2D and 3D semantic segmentation: \\(L_{seg}=L_{wce}+L_{lovasz}\\). Due to the utilization of segmentation heads in both the teacher and student networks, a total of \\(2L+2\\) segmentation losses are employed. Among them, the weights of the \\(2L\\) single-stage losses are uniformly set to \\(1/L\\). Apart from the segmentation and distillation losses, an additional loss for computing orthogonal regularization is incorporated to enhance the AdaLoRA optimization process. #### Vi-B3 Training and Inference Only 64 training epochs were used in the ablation experiment and comprehensive analysis. Test-time augmentation is only applied during the inference on test split. #### Vi-B4 Sam-Plg In SAM-PLG, we adopted the pre-trained parameters from SAM with ViT-Huge. The low-resolution size for dense masks is set at [256, 256], with high and low confidence scores \\(\\theta_{h},\\theta_{l}\\) of 16 and 1, respectively. The stability score filtering threshold \\(\\theta_{stability}\\) is set at 0.9, and the non-maximal suppression threshold \\(\\theta_{box\\_nms}\\) is set at 0.7. ### _More Experiment Details_ #### Vi-C1 Comparison Results The segmentation performance metrics in the \"Comparison Results\" section are sourced from the official SemanticKITTI [5] website. However, the exception is the \"2DPASS-base\" results, which are obtained using the provided open-source checkpoint following official settings. For the FPS (frames per second) speed metric, testing was conducted on a single NVIDIA GeForce RTX 3090 card on the validation set without using batch size. The \\(torch.cuda.synchronize()\\) function was used during testing to ensure accuracy. #### Vi-C2 Parameters Details For the model parameters, some values are taken from the results reported, while others are derived from the official open-source code's models. The specific calculation method for the total parameters is: \\[sum([x.nebement()\\ for\\ x\\ in\\ model.parameters()]).\\] As shown in Table IV, we present a more detailed depiction of the parameter distribution for ELiTe. 
Besides, in Table 1 of the main content, it is noted that the training parameters for SPVNAS and Point-Voxel-KD might exceed the inference parameters, hence they have not been recorded.
### _More Visualization Details_
#### Vi-D1 Car Camera Images and Open World Images
The "Open World Images" are sourced from the SAM [6] demo. The "Car Camera Images" are derived from the KITTI [7] dataset's sequence 08, comprising frames numbered 000000-000002, 001000-001002, 002000-002002, and 003000-003002. For ease of viewing, these images have undergone cropping and stretching. The sequential car images exhibit significant redundancy across adjacent frames, resulting in a lack of overall diversity and a monotonous style.
#### Vi-D2 Feature Visualization
We utilize t-SNE to visualize features from the image teacher and LiDAR student. t-SNE effectively captures neural network clustering patterns, as depicted in Figure 4(b). Notably, LiDAR student features form distinct compact clusters, while the image segmentation teacher's features align more closely with conventional image classification features [2], resulting in expansive and continuous clusters. We speculate that the sparse clustering of LiDAR student features could be attributed to constrained data diversity and weaker model generalization, thus avoiding the conventional neural collapse phenomena [32]. The distinct clustering pattern of features emphasizes how the inclusion of image teachers provides distinct image features for the students to emulate. However, it is important to note that the student's feature clustering does not precisely replicate the teacher's, suggesting that traditional knowledge distillation might not facilitate effective cross-modal transfer. This underscores a limitation in our approach. In Figure 4(a)(c), we further employ t-SNE to visualize features from the baseline and from the 2D-3D fused teacher, along with its corresponding student. It is noteworthy that the features from the LiDAR baseline or student consistently form distinct and compact clusters, while the teacher's features exhibit a closer resemblance to conventional image classification features [2], displaying relatively continuous clustering patterns. These observations yield the same conclusion as above: the student's features do not accurately mimic the teacher's clustering, indicating that vanilla knowledge distillation still exhibits shortcomings in cross-modal knowledge transfer.
#### Vi-D3 Pseudo-Label Visualization
We illustrate SAM-PLG's strengths through Figure 5, highlighting our achievement in generating comprehensive and dense pseudo-labels. However, we acknowledge sporadic gaps in pseudo-label density and occasional imprecision in segmentation boundaries. These limitations stem both from the inherent inaccuracies within the initial ground truth labels we rely on, and from our strategic trade-off between efficiency and absolute precision. The images are sourced from the KITTI [7] dataset sequence 00, encompassing frames with sequential numbers 000000, 001000, 002000, 003000, and 004000. Randomly selected images exhibit high-quality pseudo-labels, effectively encompassing the majority of primary categories and excluding out-of-domain classes such as the sky.
#### Vi-D4 Result Visualization
In Figure 6, we provide a bird's-eye-view comparison of the segmentation errors of the baseline and ELiTe. ELiTe shows less error at close range.
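The t-SNE plots discussed in the feature visualization above (Fig. 4) can be reproduced with scikit-learn. Below is a small sketch that assumes the per-point student features, the corresponding teacher features, and integer class labels have been exported as NumPy arrays; whether the two feature sets are embedded jointly, as done here, or separately is a presentation choice.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(teacher_feats, student_feats, labels, out_path="tsne.png"):
    """2-D t-SNE of teacher (patch) vs. student (point) features, colored by class."""
    feats = np.concatenate([teacher_feats, student_feats], axis=0)
    emb = TSNE(n_components=2, init="pca", perplexity=30).fit_transform(feats)
    n = len(teacher_feats)

    fig, axes = plt.subplots(1, 2, figsize=(10, 5))
    axes[0].scatter(emb[:n, 0], emb[:n, 1], c=labels, s=2, cmap="tab20")
    axes[0].set_title("VFM teacher (patch features)")
    axes[1].scatter(emb[n:, 0], emb[n:, 1], c=labels, s=2, cmap="tab20")
    axes[1].set_title("LiDAR student (point features)")
    fig.savefig(out_path, dpi=200)
```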
Recently, the concept of **Vision Foundation Model (VFM)** has also emerged in computer vision. Among them, the Segment Anything Model (SAM) [6] is renowned for its powerful image zero-shot instance segmentation. With the emergence of SAM, there have been many attempts [35][36][37] to use it for 3D understanding. They have all achieved remarkable results in their respective fields, but they mainly focus on the results of using SAM directly, rather than incorporating SAM into the training process. ## References * [1]B. Gao, Y. Pan, C. Li, S. Geng, and H. Zhao (2022) Are we hungry for 3d lidar data for semantic segmentation? a survey of datasets and methods. IEEE Trans. Intell. Transp. Syst.23 (7), pp. 6063-6081. Cited by: SSI. * [2]C. Xu, S. Yang, T. Galanti, B. Wu, X. Yue, B. Zhai, W. Zhan, P. Vajda, K. Keutzer, and M. Tomizuka (2022) Image2point: 3d point-cloud understanding with 2d image pretrained models. In ECCV, pp. 638-656. Cited by: SSI. * [3]C. Xu, S. Yang, T. Galanti, B. Wu, X. Yue, B. Zhai, W. Zhan, P. Vajda, K. Keutzer, and M. Tomizuka (2022) Image2point: 3d point-cloud understanding with 2d image pretrained models. In ECCV, pp. 13697, pp. 638-656. Cited by: SSI. * [4]J. Behley, M. Garbade, A. Milioto, J. Quenzel, S. Behnke, C. Stachniss, and J. Gall (2019) SemanticKitKit: a dataset for semantic scene understanding of lidar sequences. In ICCV, pp. 9296-9306. Cited by: SSI. [MISSING_PAGE_POST] * [14] Zhuangwei Zhuang, Rong Li, Kui Jia, Qicheng Wang, Yuanqing Li, and Mingkui Tan, \"Perception-aware multi-sensor fusion for 3d lidar semantic segmentation,\" in _ICCV_, 2021, pp. 16260-16270. * [15] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei, \"Imagenet: A large-scale hierarchical image database,\" in _CVPR_, 2009, pp. 248-255. * [16] Renrui Zhang, Ziyu Guo, Wei Zhang, Kunchang Li, Xupeng Miao, Bin Cui, Yu Qiao, Peng Gao, and Hongsheng Li, \"PoinClip: Point cloud understanding by CLIP,\" in _CVPR_, 2022, pp. 8542-8552. * [17] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby, \"An image is worth 16x16 words: Transformers for image recognition at scale,\" in _ICLR_, 2021. * [18] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin, \"Attention is all you need,\" in _Advances in Neural Information Processing Systems_, 2017, pp. 5998-6008. * [19] Kaiyi Zhang, Ximing Yang, Yuan Wu, and Cheng Jin, \"Attention-based transformation from latent features to point clouds,\" in _AAAI_, 2022, pp. 3291-3299. * [20] Rui Cao, Kaiyi Zhang, Yang Chen, Ximing Yang, and Cheng Jin, \"Point cloud completion via multi-scale edge convolution and attention,\" in _MM_, 2022, pp. 6183-6192. * [21] Yibin Wang, Yuchao Feng, Jie Wu, Honghui Xu, and Jianwei Zheng, \"CA-GAN: object placement via coalescing attention based generative adversarial network,\" in _ICME_, 2023, pp. 2375-2380. * [22] Yibin Wang, Haixia Long, Tao Bo, and Jianwei Zheng, \"Residual graph transformer for autism spectrum disorder prediction,\" _Computer Methods and Programs in Biomedicine_, p. 108065, 2024. * [23] Ximing Yang, Zhibo Zhang, Zhenqi He, and Cheng Jin, \"Generate point clouds with multiscale details from graph-represented structures,\" _arXiv preprint arXiv:2112.06433_, 2021. 
* [24] Martin Gerdzhev, Ryan Razani, Ehsan Taghavi, and Bingbing Liu, \"Torrado-net: multiview total variation semantic segmentation with diamond inception module,\" in _ICRA_, 2021, pp. 9543-9549. * [25] Ran Cheng, Ryan Razani, Ehsan Taghavi, Enxu Li, and Bingbing Liu, \"(af)2-s3net: Attentive feature fusion with adaptive feature selection for sparse semantic segmentation network,\" in _CVPR_, 2021, pp. 12547-12556. * [26] Andres Milioto, Ignacio Vizzo, Jens Behley, and Cyrill Stachniss, \"Rangenet ++: Fast and accurate lidar semantic segmentation,\" in _IROS_, 2019, pp. 4213-4220. * [27] Yang Zhang, Zixiang Zhou, Philip David, Xiangyu Yue, Zerong Xi, Boqing Gong, and Hassan Foroosh, \"Polarnet: An improved grid representation for online lidar point clouds semantic segmentation,\" in _CVPR_, 2020, pp. 9598-9607. * [28] Ryan Razani, Ran Cheng, Ehsan Taghavi, and Bingbing Liu, \"Lite-hdseg: Lidar semantic segmentation using lite harmonic dense convolutions,\" in _ICRA_, 2021, pp. 9550-9556. * [29] Mitchell Wortsman, Gabriel Illarco, Jong Wook Kim, Mike Li, Simon Kornblith, Rebecca Roelofs, Raphael Gontijo Lopes, Hannaneh Hajishirzi, Ali Farhadi, Hongseok Namkoong, and Ludwig Schmidt, \"Robust fine-tuning of zero-shot models,\" in _CVPR_, 2022, pp. 7949-7961. * [30] Zhibo Zhang, Ximing Yang, Weizhong Zhang, and Cheng Jin, \"Robust fine-tuning for pre-trained 3d point cloud models,\" _arXiv preprint arXiv:2404.16422_, 2024. * [31] Qingru Zhang, Minshuo Chen, Alexander Bukharin, Pengcheng He, Yu Cheng, Weizhu Chen, and Tuo Zhao, \"Adaptive budget allocation for parameter-efficient fine-tuning,\" in _ICLR_, 2023. * [32] Tomer Galanti, Andris Gyorgy, and Marcus Hutter, \"On the role of neural collapse in transfer learning,\" in _ICLR_, 2022. * [33] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al., \"Gpt-4 technical report,\" _arXiv preprint arXiv:2303.08774_, 2023. * [34] Hugo Touvron, Thibaut Lavali, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothee Lacroix, Baptise Roziere, Naman Goyal, Eric Hanhpo, Faisal Azhar, et al., \"Llama: Open and efficient foundation language models,\" _arXiv preprint arXiv:2302.13971_, 2023. * [35] Dingyuan Zhang, Dingkang Liang, Hongcheng Yang, Zhikang Zou, Xiaoqing Ye, Zhe Liu, and Xiang Bai, \"Sam3d: Zero-shot 3d object detection via segment anything model,\" _arXiv preprint arXiv:2306.02245_, 2023. * [36] Yunhan Yang, Xiaoyang Wu, Tong He, Hengshuang Zhao, and Xihui Liu, \"Sam3d: Segment anything in 3d scenes,\" _arXiv preprint arXiv:2306.03908_, 2023. * [37] Jiahong Cen, Zunwei Zhou, Jiemin Fang, Chen Yang, Wei Shen, Lingxi Xie, Dongsheng Jiang, Xiaopeng Zhang, and Qi Tian, \"Segment anything in 3d with nerfs,\" in _NeurIPS_, 2023. Fig. 6: More Result Visualization.
Cross-modal knowledge transfer enhances point cloud representation learning in LiDAR semantic segmentation. Despite its potential, the _weak teacher challenge_ arises due to repetitive and non-diverse car camera images and sparse, inaccurate ground truth labels. To address this, we propose the Efficient Image-to-LiDAR Knowledge Transfer (ELiTe) paradigm. ELiTe introduces Patch-to-Point Multi-Stage Knowledge Distillation, transferring comprehensive knowledge from the Vision Foundation Model (VFM), extensively trained on diverse open-world images. This enables effective knowledge transfer to a lightweight student model across modalities. ELiTe employs Parameter-Efficient Fine-Tuning to strengthen the VFM teacher and expedite large-scale model training with minimal costs. Additionally, we introduce the Segment Anything Model based Pseudo-Label Generation approach to enhance low-quality image labels, facilitating robust semantic representations. Efficient knowledge transfer in ELiTe yields state-of-the-art results on the SemanticKITTI benchmark, outperforming real-time inference models. Our approach achieves this with significantly fewer parameters, confirming its effectiveness and efficiency. Index terms: 3D point cloud, scene understanding, semantic segmentation, knowledge distillation
Summarize the following text.
231
arxiv-format/2207_14674v1.md
# Enhanced Laser-Scan Matching with Online Error Estimation for Highway and Tunnel Driving Matthew McDermott, Jason Rife, _Tufts University_ ## I Introduction This paper introduces the Iterative Closest Ellipse Transform (ICET), a novel lidar scan matching algorithm. ICET is superior to current state-of-the-art methods of scan matching due to improvements in performance and interpretability. To the best of our knowledge, there is no existing algorithm that can, in addition to estimating transformation between point clouds, both produce online error estimates and predict ambiguous directions as a function of scene geometry. To understand the utility of the contributions of ICET, it is important to consider existing algorithms for lidar scan matching. One of the earliest scan matching methods is the Iterative Closest Point (ICP) algorithm [1]. ICP works by finding transformations that minimize the distances between corresponding points in two scans [2]. Though simple, ICP comes with a number of drawbacks, most importantly the challenge of data association, where each point in the second scan must be matched correctly to a point in the first scan. Point-to-Plane ICP simplifies data association by fitting planes to groups of points and matching those planes between scans [3]; however, the assumption of planar structure is not often representative of real-world scenes. The Normal Distribution Transform is an alternative to ICP and ICP Point-to-Plane that simplifies data association using an unstructured representation of the scene: a voxel grid [4]. The grid enforces spatial relationships among grid clusters without imposing specific geometric models to represent objects in the scene. NDT also replaces individual points with a density function thereby reducing sensitivity to noise, particularly during data association. The most simple implementation of NDT involves optimizing correspondences between distributions in the reference scan and individual points in the later scan (P2D-NDT), though more recent implementations have shown an increase in computational speed with similar performance by optimizing the correspondences between distribution centers of the two scans (D2D-NDT) [5]. One common disadvantage of all variants of NDT, however, is the lack of a quantitative indication of solution quality. In many applications, machine-learning based approaches have been shown to be superior to traditional geometric analyses. With regards to scan matching, recent machine learning approaches such as LO-Net have achieved performance in dead reckoningsimilar to that of geometric based approaches like ICP and NDT [6]. Unfortunately, the nature of end-to-end neural networks results in uninterpretable models where it is difficult to predict patterns of error. In the automotive application, it is particularly important to preserve some level of human readable information in order to develop a rigorous safety case. Our new algorithm ICET addresses limitations of ICP, NDT and LO-Net. To overcome the data association issues of ICP, ICET uses a voxelized density function (much like NDT). To enable performance estimation, however, ICET leverages a nonlinear least-squares solution (as opposed to the optimization methods used in NDT). Error covariance matrices can be obtained from least-squares processing, but these covariance matrices are only made meaningful by introducing a new scene interpretation and dimension reduction step, a step not used in either ICP or NDT. 
The resulting algorithm enhances accuracy and characterizes errors; moreover, in contrast with LO-Net, ICET remains fully interpretable at every step. The remainder of this paper presents the ICET algorithm and simulation results demonstrating its function. Section II highlights how geometric ambiguity arises and undermines existing scan matching methods. Section III describes our approach to scan matching using a linear least-squares formulation. Section IV introduces a technique to eliminate ambiguities from contributing to scan matching solution. Section V describes simulations to illustrate the algorithm's performance. Results of the simulation are discussed in Section VI. A final section summarizes the paper. ## II Geometric ambiguity The effectiveness of a scan matching technique is highly dependant on both the transformation between the two scans and the underlying geometry of the scene. While it is intuitive that too large a transformation between subsequent scans may make accurate estimation of their relationship impossible, it should be noted that the same is true even for small displacements in scenes that lack distinctive features in one or more direction. Figure 2 illustrates an example. Figure 1: Visualization of ICP, ICP point-to-plane, and NDT. Dotted lines represent correspondences between various features in each scan. Figure 2: Geometrically ambiguous extended surface The figure visualizes an extended feature, which in this case is a wall. Two snapshots are taken at different locations along the wall. A scan matching algorithm would most likely suggest that the two snapshots describe essentially the same physical location in the world, even though they are in fact drawn from distinct, non-overlapping locations. However, the same ambiguity would exist even if the scans were mildly overlapping, because the scans can only reliably be localized in the direction perpendicular to the wall and not along it. Similar scenes occur during driving on some highways, in urban canyons, and in tunnels. In these environments, dominant terrain features (like the wall in Figure 2) may be aligned with the road such that scan matching algorithms are only reliable perpendicular to the road and not along it. An example of an environment in which the dominant features are aligned with the road is shown in Figure 3. This ambiguity associated with extended objects, sometimes called the aperture problem [7], is commonly observed in vision-based and lidar-based localization. Although the aperture problem is well understood in some domains, its impact on lidar scan matching algorithms like NDT is not well understood. One of the primary goals of this paper is to introduce a new approach to mitigate the aperture problem when lidar scan points are voxelized as shown in Figure 4. Figure 4: Voxelization of Lidar scans for three objects of different sizes: small, voxel-sized, and extended. Figure 3: Road section with potential for high geometric ambiguity along the axis aligned with the road, courtesy of Rene Schwietzke [8] The flaw in the NDT algorithm is in its assumption that the lidar points within a voxel belong to a random distribution that lies entirely inside the voxel. This flaw is twofold in that the distribution is neither entirely random nor necessarily confined within a given voxel. These issues are visualized in Figure 4, which visualizes three objects (top row), simulated lidar scans without noise (middle row), and lidar scans with simulated noise (last row). 
For a point-like object, one much smaller than the size of a grid voxel, the NDT assumptions are reasonable. In this case (left column of figure), the distribution of lidar samples is governed almost entirely by random sensor noise. For an object on the scale of a voxel (middle column), it becomes more clear that the distribution of scan points has a largely deterministic shape, a right angle shape for the vehicle visualized in the figure. In this middle case, NDT's assumption that the distribution is driven by purely random noise begins to break down; however, the Gaussian random-noise distribution assumed by NDT still provides a meaningful description of the object. When the object is extended (right column), however, NDT's assumptions break down completely. NDT represents the wall as multiple local Guassian distributions, each independent from its neighbors. Along the wall direction, localization errors larger than a voxel result in aliasing, with bad data association caused by lidar points slipping into the next cell along the wall. This aliasing problem means that NDT implicitly creates an artificial upper bound for the noise along the length of the extended object. Because maximum distribution length is clipped by the length of a voxel, solutions from NDT are calculated with an implied standard deviation of error in the extended direction of less than one half of the voxel width. This issue compounds the fact that no single error metric is generated by the NDT algorithm. In fact, without resolving the aperture problem, it is clear that it would be very difficult to extract a meaningful accuracy estimate directly from NDT. Our approach is to resolve these issues with a least-squares based approach that incorporates a mechanism for mitigating the aperture problem and that, thereby, enables meaningful online estimates of the accuracy of a lidar scan match. ## III Scan matching as a linearized least-squares problem This section re-envisions voxelized scan matching, replacing the optimization approach of NDT with a least-squares solution in order to introduce an analytical approach for estimating solution accuracy. Obtaining an accuracy estimate for a voxelized lidar scan match is the first novel feature of our ICET algorithm. ICET works by estimating the solution vector \\(\\mathbf{x}\\) that best represents the transformation between two point clouds. This paper will focus on the 2D case, where \\(\\mathbf{x}\\) consists of one rotation angle and two scalar translations: \\(\\theta\\), \\(x\\) and \\(y\\). \\[\\mathbf{x}=\\begin{bmatrix}x\\\\ y\\\\ \\theta\\end{bmatrix} \\tag{1}\\] For the 3D case, by comparison, the vector would consist of a generalized rotation description (e.g. Euler angles or quaternions) and a 3D translation vector. In both the 2D and 3D cases, the transformation vector \\(\\mathbf{x}\\) maps the points in the second (or _new_) scan into the coordinate system used by the first (or _reference_) scan. We assign 2D positions \\({}^{(i)}\\mathbf{p}\\) to describe the location of each point \\(i\\) in the new scan using Cartesian {x,y} coordinates defined in the body-fixed frame at the time of the new scan. The goal of the scan match is to transform the coordinates \\({}^{(i)}\\mathbf{p}\\) into an alternative set of coordinates \\({}^{(i)}\\mathbf{q}\\), corresponding to the original body-fixed frame (at the time reference scan). The \\({}^{(i)}\\mathbf{q}\\) are related to the states in (1) as described below. 
\\[{}^{(i)}\\mathbf{q}(\\mathbf{x})=\\begin{bmatrix}cos(\\theta)&-sin(\\theta)\\\\ sin(\\theta)&cos(\\theta)\\end{bmatrix}{}^{(i)}\\mathbf{p}-\\begin{bmatrix}x\\\\ y\\end{bmatrix} \\tag{2}\\] Averaging the \\({}^{(i)}\\mathbf{q}(\\mathbf{x})\\) over a voxel gives the voxel mean. Concatenating all of those voxel means into a larger vector gives a nonlinear observer function \\(\\mathbf{y}=\\mathbf{h}(\\mathbf{x})\\), which can be compared to a similar concatenated voxel-mean vector \\(\\mathbf{y}_{0}\\) from the original image. Note that here, and subsequently in this paper, variable pairs like \\(\\mathbf{y}\\) and \\(\\mathbf{y}_{0}\\) are defined such that the case with no subscript indicates the new scan and the case with the \"\\(0\\)\" subscript, the reference scan. Ideally, the voxel-mean vectors are identical for both scans (\\(\\mathbf{y}=\\mathbf{y}_{0}\\)), so \\(\\mathbf{x}\\) can be estimated by inverting the observation equation to solve \\[\\mathbf{h}(\\mathbf{x})=\\mathbf{y}_{0}. \\tag{3}\\] The goal of ICET is to solve (3) using an iterative Newton-Raphson approach. The approach works by linearizing the nonlinear equation via a Taylor series expansion. \\[\\mathbf{y}_{0}=\\mathbf{h}(\\hat{\\mathbf{x}})+\\mathbf{H}\\delta\\mathbf{x}+ \\mathcal{O}(\\delta\\mathbf{x}^{2}) \\tag{4}\\]Here \\(\\mathbf{H}\\) is the _Jacobian_, a matrix of first derivatives of the function \\(\\mathbf{h}\\) with respect to each element of \\(\\mathbf{x}\\). In the Taylor Series expansion, the current best estimate is \\(\\hat{\\mathbf{x}}\\). A linear correction \\(\\delta\\mathbf{x}\\) can be computed assuming that the higher-order terms are zero: \\[\\mathbf{H}\\delta\\mathbf{x}=\\Delta\\mathbf{y} \\tag{5}\\] Here \\(\\Delta\\mathbf{y}=\\mathbf{y}_{0}-\\mathbf{h}(\\hat{\\mathbf{x}})\\). Introducing a weighting matrix \\(\\mathbf{W}\\), the standard weighted least-squares solution [9] is \\[\\delta\\mathbf{x}=(\\mathbf{H}^{T}\\mathbf{W}\\mathbf{H})^{-1}\\mathbf{H}^{T} \\mathbf{W}\\Delta\\mathbf{y}. \\tag{6}\\] After computing \\(\\delta\\mathbf{x}\\) using (6), we can update the estimate \\(\\hat{\\mathbf{x}}\\) according to the following equation, and iterate to convergence. \\[\\hat{\\mathbf{x}}\\rightarrow\\hat{\\mathbf{x}}+\\delta\\mathbf{x} \\tag{7}\\] In order to implement this approach, it is necessary to specify certain terms used above, by providing the precise definitions of the voxel means \\(\\mathbf{y}\\) and \\(\\mathbf{y}_{0}\\), the Jacobian \\(\\mathbf{H}\\), and the weighting matrix \\(\\mathbf{W}\\). First consider the voxel means. A subset of points \\(i\\) from a scan belong to a given voxel \\(j\\). For the original scan these are \\(i\\in^{(j)}\\mathcal{I}_{0}\\) and for the new scan, \\(i\\in^{(j)}\\mathcal{I}\\). For a given voxel, the mean \\({}^{(j)}\\mathbf{y}_{0}\\) of the new-scan points and the mean \\({}^{(j)}\\mathbf{y}\\) of the reference-scan points are computed as follows. \\[{}^{(j)}\\mathbf{y}_{0}=\\frac{1}{|^{(j)}\\mathcal{I}_{0}|}\\sum_{i\\in^{(j)} \\mathcal{I}_{0}}{}^{(i)}\\mathbf{q}_{0} \\tag{8}\\] \\[{}^{(j)}\\mathbf{y}(\\hat{\\mathbf{x}})=\\frac{1}{|^{(j)}\\mathcal{I}|}\\sum_{i\\in^ {(j)}\\mathcal{I}}{}^{(i)}\\mathbf{q}(\\mathbf{\\hat{x}}) \\tag{9}\\] The number of points in each subset is the cardinality of that subset, indicated by the \"\\(|*|\\)\" notation. 
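To make these definitions concrete, a minimal numpy sketch (our illustration, not the authors' code) of the 2D transform in (2) and the per-voxel means in (8)-(9) is given below; the flooring-based voxel indexing is an assumption made for illustration, and any consistent gridding of the scan plays the same role.

```python
import numpy as np

def transform_points(points, x, y, theta):
    """Eq. (2): map new-scan points p (N x 2) into the reference frame."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return points @ R.T - np.array([x, y])

def voxel_means(points, voxel_size):
    """Eqs. (8)-(9): group points by 2D voxel index and average per voxel."""
    keys = np.floor(points / voxel_size).astype(int)
    sums = {}
    for key, pt in zip(map(tuple, keys), points):
        s, n = sums.get(key, (np.zeros(2), 0))
        sums[key] = (s + pt, n + 1)
    return {k: s / n for k, (s, n) in sums.items()}
```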
Statistics are only computed for those voxels where the number of lidar points exceeds a minimum cutoff; correspondences are then generated for voxels, such that there remains a subset of voxels, indexed \\(j\\in J\\), matched between the new and reference scan. The corresponding means (for voxels \\(j\\in J\\)) are concatenated to form the larger observation vectors: \\(\\mathbf{y}_{0}\\) and \\(\\mathbf{y}=\\mathbf{h}(\\hat{\\mathbf{x}})\\). \\[\\mathbf{y}_{0}=\\begin{bmatrix}{}^{(1)}\\mathbf{y}_{0}\\\\ {}^{(2)}\\mathbf{y}_{0}\\\\ \\vdots\\\\ {}^{(J)}\\mathbf{y}_{0}\\end{bmatrix} \\tag{10}\\] \\[\\mathbf{h}(\\hat{\\mathbf{x}})=\\begin{bmatrix}{}^{(1)}\\mathbf{y}(\\hat{\\mathbf{x}})\\\\ {}^{(2)}\\mathbf{y}(\\hat{\\mathbf{x}})\\\\ \\vdots\\\\ {}^{(J)}\\mathbf{y}(\\hat{\\mathbf{x}})\\end{bmatrix} \\tag{11}\\] Computing the \\(\\delta\\mathbf{x}\\) correction in (6) requires differencing the above two vectors to give \\(\\Delta\\mathbf{y}=\\mathbf{y}_{0}-\\mathbf{h}(\\hat{\\mathbf{x}})\\). If the dimension of the scan is \\(N\\), with \\(N=2\\) in the 2D case and \\(N=3\\) in the 3D case, then \\(\\Delta\\mathbf{y}\\in\\mathbb{R}^{JN}\\). Taking the derivative of (11) with respect to each variable in \\(\\mathbf{x}\\), we can write the Jacobian \\(\\mathbf{H}\\) in terms of a submatrix \\({}^{(j)}\\mathbf{H}\\) for each \\(j\\in J\\). \\[\\mathbf{H}=\\begin{bmatrix}{}^{(1)}\\mathbf{H}\\\\ \\vdots\\\\ {}^{(J)}\\mathbf{H}\\end{bmatrix} \\tag{12}\\] For the 2D case, the \\({}^{(j)}\\mathbf{H}\\) are obtained by substituting (2) into (9) and taking derivatives with respect to each state in (1). \\[{}^{(j)}\\mathbf{H}=\\left(\\begin{array}{ccc}-1&0&{}^{(j)}\\mathbf{H}_{\\theta} \\\\ 0&-1&\\end{array}\\right) \\tag{13}\\]Here the vector \\({}^{(j)}\\mathbf{H}_{\\theta}\\) is computed by evaluating derivatives of (2) with respect to \\(\\theta\\) and summing over \\(i\\in{}^{(j)}\\mathcal{I}\\). \\[{}^{(j)}\\mathbf{H}_{\\theta}=\\frac{1}{|{}^{(j)}\\mathcal{I}|}\\sum_{i\\in{}^{(j)} \\mathcal{I}}\\begin{bmatrix}-sin(\\theta)&-cos(\\theta)\\\\ cos(\\theta)&-sin(\\theta)\\end{bmatrix}{}^{(i)}\\mathbf{p} \\tag{14}\\] To evaluate the \\(\\delta\\mathbf{x}\\) correction in (6), we must still define the weighting matrix \\(\\mathbf{W}\\). For an optimal solution [9], \\(\\Delta\\mathbf{y}\\) should be unbiased and the covariance of \\(\\Delta\\mathbf{y}\\) should be inverted to form the weighting matrix \\(\\mathbf{W}\\). The covariance of the observation vector is typically called the _sensor-noise covariance_ and labeled \\(\\mathbf{R}\\). Thus we expect the weighting matrix to have this form: \\[\\mathbf{W}=\\mathbf{R}^{-1} \\tag{15}\\] In many navigation applications, the matrix \\(\\mathbf{R}\\) is determined strictly from a model of sensor noise. In the case of scan matching, we have a luxury in that each voxel mean is computed statistically from many data points, and so we can directly estimate the covariance from the data. For a given voxel \\(j\\), define the reference-scan covariance estimate (computed statistically as the central second moment over the point locations) to be \\({}^{(j)}\\mathbf{Q}_{0}\\), and define the new-scan estimate to be \\({}^{(j)}\\mathbf{Q}\\).
\\[{}^{(j)}\\mathbf{Q}_{0}=\\frac{1}{|{}^{(j)}\\mathcal{I}_{0}|-1}\\sum_{i\\in{}^{(j)} \\mathcal{I}_{0}}\\Bigl{(}{}^{(i)}\\mathbf{q}_{0}-\\mathbf{y}_{0}\\Bigr{)}\\Bigl{(}{ }^{(i)}\\mathbf{q}_{0}-\\mathbf{y}_{0}\\Bigr{)}^{T} \\tag{16}\\] \\[{}^{(j)}\\mathbf{Q}(\\hat{\\mathbf{x}})=\\frac{1}{|{}^{(j)}\\mathcal{I}|-1}\\sum_{i \\in{}^{(j)}\\mathcal{I}}\\Bigl{(}{}^{(i)}\\mathbf{q}(\\hat{\\mathbf{x}})-\\mathbf{y }\\Bigr{)}\\Bigl{(}{}^{(i)}\\mathbf{q}(\\hat{\\mathbf{x}})-\\mathbf{y}\\Bigr{)}^{T} \\tag{17}\\] A well-known result is that the covariance \\({}^{(j)}\\boldsymbol{\\Sigma}_{y_{0}}\\) of the sample mean vector \\({}^{(j)}\\mathbf{y}_{0}\\) can be related to the true covariance, using (8) and the central limit theorem [10]. Given a sufficient number of samples in a voxel, the estimated covariances of (16) and (17) can be substituted for the true covariances. Taking this approach, we estimate \\({}^{(j)}\\boldsymbol{\\Sigma}_{y_{0}}={}^{(j)}\\mathbf{Q}_{0}/|{}^{(j)} \\mathcal{I}_{0}|\\) and \\({}^{(j)}\\boldsymbol{\\Sigma}_{y}={}^{(j)}\\mathbf{Q}/|{}^{(j)}\\mathcal{I}|\\). Next, we note \\(\\mathbf{R}\\) describes the variance of \\(\\Delta\\mathbf{y}\\), which is formed by differencing and concatenating the mean vectors \\({}^{(j)}\\mathbf{y}\\) and \\({}^{(j)}\\mathbf{y}_{0}\\) for all voxels \\(j\\in J\\). For each voxel, a local covariance \\({}^{(j)}\\mathbf{R}\\) can be defined considering contributions from \\({}^{(j)}\\mathbf{y}\\) and \\({}^{(j)}\\mathbf{y}_{0}\\). In other words, \\({}^{(j)}\\mathbf{R}={}^{(j)}\\boldsymbol{\\Sigma}_{y_{0}}+{}^{(j)}\\boldsymbol{ \\Sigma}_{y}\\). Assuming point distributions are uncorrelated across voxels, then the full covariance matrix \\(\\mathbf{R}\\) is block diagonal with diagonal elements \\({}^{(j)}\\mathbf{R}\\). That is: \\[\\mathbf{R}=\\begin{bmatrix}{}^{(1)}\\mathbf{R}&\\mathbf{0}&\\dots&\\mathbf{0}\\\\ \\mathbf{0}&{}^{(2)}\\mathbf{R}&\\dots&\\mathbf{0}\\\\ \\vdots&\\vdots&\\ddots&\\vdots\\\\ \\mathbf{0}&\\mathbf{0}&\\dots&{}^{(J)}\\mathbf{R}\\end{bmatrix} \\tag{18}\\] When the number of voxels is large, the covariance matrix \\(\\mathbf{R}\\) is rather sparse, so it is inefficient to construct the full matrix. Instead, a more efficient mechanism for evaluating (6) is to recognize that \\[\\mathbf{H}^{T}\\mathbf{W}\\mathbf{H}=\\sum_{j\\in J}\\Biggl{[}{}^{(j)}\\mathbf{H}^{ T}\\ \\ {}^{(j)}\\mathbf{R}^{-1}\\ \\ {}^{(j)}\\mathbf{H}\\Biggr{]} \\tag{19}\\] and \\[\\mathbf{H}^{T}\\mathbf{W}\\Delta\\mathbf{y}=\\sum_{j\\in J}\\Biggl{[}{}^{(j)} \\mathbf{H}^{T}\\ \\ {}^{(j)}\\mathbf{R}^{-1}\\ \\ \\Bigl{(}{}^{(j)}\\mathbf{y}_{0}-{}^{(j)} \\mathbf{y}(\\hat{\\mathbf{x}})\\Bigr{)}\\Biggr{]}. \\tag{20}\\] Substituting (19) and (20) into (6) greatly increases efficiency for computing the correction \\(\\delta\\mathbf{x}\\). Equation (19) is also very useful for computing the covariance \\(\\mathbf{P}\\), which describes the error of the estimated state \\(\\hat{\\mathbf{x}}\\) relative to the true state \\(\\mathbf{x}_{true}\\). \\[\\mathbf{P}=E[(\\hat{\\mathbf{x}}-\\mathbf{x}_{true})(\\hat{\\mathbf{x}}-\\mathbf{x} _{true})^{T}] \\tag{21}\\]Assuming errors are small perturbations around the solution, it is well known [9] that the linearized equations result in the following expression for the covariance. 
\\[\\mathbf{P}=(\\mathbf{H}^{T}\\mathbf{W}\\mathbf{H})^{-1} \\tag{22}\\] This last equation represents ICET's prediction of its own accuracy, a prediction that automatically accounts for the geometric distribution of useful voxels and for the quality of the scan points in those voxels. ## IV Ambiguity Management via Dimension Reduction One limitation of the prior section is that the methods assume the lidar points within each voxel are scattered about the mean due to purely random sensor noise. In fact, objects and natural terrain shape the point distribution, such that the observed scan has some degree of deterministic structure. The weighted least-squares solution is optimal only if the point distribution is random, so this deterministic structure degrades solution accuracy. Our approach to limiting the impact of unmodeled deterministic structure is to exclude measurements in directions where the covariance matrix approaches the size of the voxel. Deterministic structure is particularly problematic for objects that extend across multiple voxels (e.g. a long wall), because the within-voxel covariance provides a misleadingly optimistic estimate of measurement accuracy. To detect large variance values and identify their direction, we perform an eigendecomposition on the reference-scan covariance matrix \\({}^{(j)}\\mathbf{Q}_{0}\\) for each voxel \\(j\\in J\\). Each \\({}^{(j)}\\mathbf{Q}_{0}\\) can be coverted to an eigenvalue matrix \\({}^{(j)}\\mathbf{\\Lambda}\\) and an eigenvector matrix \\({}^{(j)}\\mathbf{U}\\) and decomposed as follows. \\[{}^{(j)}\\mathbf{Q}_{0}={}^{(j)}\\mathbf{U}\\ \\ ^{(j)}\\mathbf{\\Lambda}\\ \\ ^{(j)} \\mathbf{U}^{T} \\tag{23}\\] The eigenvalue matrix describes the principal axis lengths (squared) for the variance ellipse. The eigenvector matrix describes the directions of those principal axes. By testing each eigenvalue to see if it exceeds a reasonable threshold, we can identify overly extended distributions. The eigenvalue threshold \\(T\\) is based on the voxel width \\(a\\). It is well known that extended objects of uniform density along their length (e.g. a uniform-density bar) have a variance in that direction of \\(a^{2}/12\\). This result is tabulated in introductory mechanics textbooks like [11], where the variance, or central second moment, is called a _moment of inertia_. Noting covariance matrices computed from lidar data are stochastic, the threshold is reduced slightly to avoid missed detection of extended objects, to a value of \\(T=a^{2}/16\\). We also restrict the minimum voxel width to be significantly wider than the standard deviation of the lidar noise, so that the false detection risk is low for small objects and normal to surfaces. Without loss of generality, we can partition the eigenvalue matrix into two blocks, a first block \\(\\mathbf{\\Lambda}_{P}\\) that includes the values less than the threshold \\(T\\) and a second block \\(\\mathbf{\\Lambda}_{N}\\) that includes the values greater than or equal to the threshold. \\[\\mathbf{\\Lambda}=\\begin{bmatrix}\\mathbf{\\Lambda}_{P}&0\\\\ 0&\\mathbf{\\Lambda}_{N}\\end{bmatrix} \\tag{24}\\] Similarly, we can partition the corresponding eigenvectors into two submatrices, the eigenvectors \\(\\mathbf{U}_{P}\\) describing the dimensions preserved and the eigenvectors \\(\\mathbf{U}_{N}\\) describing the dimensions eliminated. 
\\[\\mathbf{U}=[\\mathbf{U}_{P}\\quad\\mathbf{U}_{N}] \\tag{25}\\] We can then project each voxel-mean vector into the preserved directions, eliminating the other directions. This operation is only necessary when the eigenvalue dimension is reduced yet not trivial (i.e. when \\(\\mathbf{\\Lambda}\\in\\mathbb{R}^{N\\times N}\\), \\(\\mathbf{\\Lambda}_{P}\\in\\mathbb{R}^{n\\times n}\\), and \\(0<n<N\\)). To account for all cases, we introduce a modified voxel mean, indicated by a tilde, as in \\({}^{(j)}\\tilde{\\mathbf{y}}\\). \\[{}^{(j)}\\tilde{\\mathbf{y}}=\\begin{cases}^{(j)}\\mathbf{y},&n=N\\\\ ^{(j)}\\mathbf{U}_{P}^{T}\\ \\ ^{(j)}\\mathbf{y},&0<n<N\\\\ \\emptyset,&n=0\\end{cases} \\tag{26}\\] The third instance of (26), the null result, occurs only for distributions wide in all directions, as in the case of loose foliage. To compute our solution via (6), the dimension-reduction process must also be applied to other variables including \\({}^{(j)}\\mathbf{y}_{0},{}^{(j)}\\mathbf{H}\\) and \\({}^{(j)}\\mathbf{R}\\). First, the voxel means \\({}^{(j)}\\mathbf{y}_{0}\\) for the reference scan become \\({}^{(j)}\\tilde{\\mathbf{y}}_{0}\\), which has a structure analagous to (26), but with \\({}^{(j)}\\mathbf{y}_{0}\\) substituted for \\({}^{(j)}\\mathbf{y}\\). Second, reducing the dimensions of \\({}^{(j)}\\mathbf{H}\\) and \\({}^{(j)}\\mathbf{R}\\) gives \\({}^{(j)}\\tilde{\\mathbf{H}}\\) and \\({}^{(j)}\\tilde{\\mathbf{R}}\\), defined as follows. \\[{}^{(j)}\\tilde{\\mathbf{H}}=\\begin{cases}^{(j)}\\mathbf{H},&n=N\\\\ ^{(j)}\\mathbf{U}_{P}^{T}\\ {}^{(j)}\\mathbf{H},&0<n<N\\\\ \\emptyset,&n=0\\end{cases} \\tag{27}\\] \\[{}^{(j)}\\tilde{\\mathbf{R}}=\\begin{cases}^{(j)}\\mathbf{R},&n=N\\\\ ^{(j)}\\mathbf{U}_{P}^{T}\\ {}^{(j)}\\mathbf{R}\\ {}^{(j)}\\mathbf{U},&0<n<N\\\\ \\emptyset,&n=0\\end{cases} \\tag{28}\\] In all instances, the _tilde_ indicates the dimension-reduction process. Using the dimension-reduced variables, solution steps (19) and (20) can be rewritten, respectively, as follows. \\[\\mathbf{H}^{T}\\mathbf{W}\\mathbf{H}=\\sum_{j\\in J}\\Biggl{[}^{(j)}\\tilde{\\mathbf{ H}}^{T}\\ \\ ^{(j)}\\tilde{\\mathbf{R}}^{-1}\\ ^{(j)}\\tilde{\\mathbf{H}}\\Biggr{]} \\tag{29}\\] \\[\\mathbf{H}^{T}\\mathbf{W}\\Delta\\mathbf{y}=\\sum_{j\\in J}\\Biggl{[}^{(j)}\\tilde{ \\mathbf{H}}^{T}\\ ^{(j)}\\tilde{\\mathbf{R}}^{-1}\\ \\left({}^{(j)}\\tilde{\\mathbf{y}}_{0}-{}^{(j)}\\tilde{ \\mathbf{y}}\\right)\\Biggr{]}. \\tag{30}\\] Equations (29) and (30) can be substituted into (6) and, in most cases, iterated to obtain the Newton-Raphson solution. There is one corner case to consider, however. If all of the information in one direction is removed (e.g. for the case where the only feature is a straight wall that stretches across the entire scan), then (29) will be rank deficient and therefore non-invertible. A method for handling this special case is discussed in the Appendix. The full algorithm, including both the basic solution and dimension reduction, is summarized in the table labeled Algorithm 1. 
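As a concrete illustration of the per-voxel reduction in (23)-(28), ahead of the full pseudocode in Algorithm 1, the following numpy sketch is provided (our illustration, not the authors' implementation). It applies the eigenvalue test with the threshold \\(T=a^{2}/16\\) and then projects the voxel quantities; note that it applies \\(\\mathbf{U}_{P}\\) on both sides of \\(\\mathbf{R}\\) so that the reduced covariance remains square and invertible, and it projects even when no dimension is dropped, which is equivalent up to a rotation.

```python
import numpy as np

def preserved_directions(Q0, voxel_width):
    """Eqs. (23)-(25): eigendecompose a per-voxel covariance and keep only
    directions whose eigenvalue is below the extended-object threshold."""
    T = voxel_width**2 / 16.0                 # threshold from Section IV
    eigvals, eigvecs = np.linalg.eigh(Q0)     # Q0 is symmetric
    keep = eigvals < T
    return eigvecs[:, keep]                   # columns form U_P (may be empty)

def reduce_voxel(U_P, y, y0, H, R):
    """Eqs. (26)-(28): project the voxel quantities onto the kept subspace."""
    if U_P.shape[1] == 0:                     # fully ambiguous voxel: discard
        return None
    return U_P.T @ y, U_P.T @ y0, U_P.T @ H, U_P.T @ R @ U_P
```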
``` 1:Initialize solution vector \\(\\mathbf{x}\\)\\(\\triangleright\\) (1) 2:Break down reference scan into voxels 3:Calculate mean and covariance matrix for voxels in reference scan with more than \\(\\tau\\) points \\(\\triangleright\\) (8), (16) 4:Calculate \\(\\mathbf{U}\\) and \\(\\mathbf{\\Lambda}\\) matrices for the reference scan in each qualifying voxel \\(\\triangleright\\) (23) 5:while\\(\\ \\delta\\mathbf{x}\\) is converging do 6: Transform second point cloud by \\(\\mathbf{x}\\)\\(\\triangleright\\) (2) 7:Voxelize transformed point cloud using the same grid used to subdivide the reference scan 8: Compute the new-scan mean and covariance of lidar points in each qualifying voxel \\(\\triangleright\\) (9),(17) 9: Define a nearest-neighbor correspondence between distributions in each scan 10:for each correspondence \\(j\\)do 11: Compute the reduced-dimension means \\({}^{(j)}\\tilde{\\mathbf{y}}_{0}\\) and \\({}^{(j)}\\tilde{\\mathbf{y}}\\) \\(\\triangleright\\) (26) 12: Calculate \\({}^{(j)}\\tilde{\\mathbf{H}}\\)\\(\\triangleright\\) (27) 13: Calculate the sensor-noise matrix \\({}^{(j)}\\tilde{\\mathbf{R}}\\)\\(\\triangleright\\) (28) 14: Compute intermediate quantities as running sums \\(\\triangleright\\) (29),(30) 15:endfor 16:Calculate condition number of \\(\\mathbf{H}^{T}\\mathbf{W}\\mathbf{H}\\)\\(\\triangleright\\) (31) 17:if condition number below cutoff then 18: Determine the state correction \\(\\delta\\mathbf{x}\\)\\(\\triangleright\\) (6) 19:\\(\\tilde{\\mathbf{x}}\\rightarrow\\tilde{\\mathbf{x}}+\\delta\\mathbf{x}\\)\\(\\triangleright\\) (7) 20: Calculate state-error covariance \\(\\mathbf{P}\\)\\(\\triangleright\\) (22) 21:else (See Appendix for details) 22: Determine the subspace correction \\(\\delta\\mathbf{z}\\) and update \\(\\mathbf{x}^{\\prime}\\)\\(\\triangleright\\) (33),(34) 23:\\(\\tilde{\\mathbf{x}}\\rightarrow\\tilde{\\mathbf{x}}+\\mathbf{x}^{\\prime}\\)\\(\\triangleright\\) (35) 24: Calculate subspace-error covariance \\(\\mathbf{\\Gamma}_{P}^{-1}\\)\\(\\triangleright\\) (37) 25:endif 26:endwhile ``` **Algorithm 1** ICET ## V Simulation To demonstrate the central contributions of this paper, ICET was benchmarked against NDT. An implementation of NDT was programmed from scratch using the standard algorithm and optimization routine provided by Biber [4]. Monte-Carlo (MC) simulations of NDT and ICET were conducted through two scenarios featuring simple, simulated driving environments, as illustrated in Figure 5. The first scenario represents a T-intersection. The second scenario represents a straight tunnel or roadway. The T-intersection case involves a scene in which there are both along-track and cross-track features present in the frame. The straight-tunnel case involves a scene of high geometric ambiguity in which there are only along-track features. Both cases were represented in two dimensions, with vertical walls assumed on the sides of the roadway. The units of the environment were somewhat arbitrary (but each unit length is meant to correspond to approximately 0.05 m of physical distance). Each MC simulation considered only one environment. For a given environment, each MC trial considered a pair of scans: a reference scan and a new scan depicting the scene after movement. Examples of scan pairs are shown in Figure 5, where the reference scan is green and the new scan, blue. Grid lines in the figure correspond to voxel boundaries, with voxels of about 50 units on a side (about 2.5 m).
Mean and covariance of lidar points in each voxel are illustrated by a two-sigma ellipse. The illustrations indicate that the motion was small, with a translation of about 5 units laterally and 10 units vertically (about 0.25 m laterally and 0.5 m vertically). The yaw rotation between scans was 0.1 radian. The same translation and rotation were applied between scans in all trials, but lidar noise was re-sampled for each trial. Simulated lidar scans were constructed with 4200 samples. Gaussian noise with a standard deviation of 2 units in the \\(x\\) and \\(y\\) directions was applied to each sample. Note that both the ICET and NDT algorithms were implemented with two correspondence methods. The baseline approach defines correspondences only for means co-located within the same grid cell. A modified approach defines correspondences using the _1-NN_ or "nearest neighbor" method, which provides more robust initialization. Simulations were run using both correspondence methods with no statistically significant difference in solution accuracy. Therefore, only results from simulations using the nearest-neighbor approach are presented below. ## VI Results and Discussion Results were computed from MC simulations with 1000 trials for each environment. In each trial, the error was computed as \\(\\hat{\\mathbf{x}}-\\mathbf{x}_{true}\\). Mean and variance for lateral (or \\(x\\)-direction) error, longitudinal (or \\(y\\)-direction) error, and rotational errors were computed for each environment, for both the NDT and ICET algorithms. A predicted state-error covariance matrix was also computed for ICET. The distributions appeared zero centered, with no evidence of a systematic mean error across trials. Standard deviations (labeled std error) are tabulated below (in simulation units) for the two translation directions and for rotation. The tunnel geometry was specially configured to suppress all geometric information in one dimension, in order to create a singularity in the computation of (6). To address this issue, a specialty solution was applied, as defined by (33) in the Appendix. Figure 5: Sample transformations for T-intersection and straight tunnel case. Voxels shown as square grid cells. Each ellipse represents the two-sigma covariance ellipse for a grid cell. Lines are drawn across dimensions of distributions in the first scan that have not been reduced. In all of the tunnel-geometry trials, ICET was able to correctly identify the direction of geometric ambiguity and suppress the solution along that axis. For this reason the \\(y\\) direction error is listed as N/A for ICET; the measurements were identified as ambiguous, so no solution was attempted in that direction. The error standard deviations for ICET were consistently better by nearly an order of magnitude as compared to NDT, presumably because the dimension reduction process eliminates processing in ambiguous directions, which NDT believes to be well-characterized (e.g. modeling point-cloud uncertainty based on the voxel dimension, even for environmental features that extend across many voxels). The extreme case of this trend occurred for the tunnel environment, where NDT delivered a longitudinal error two orders of magnitude larger than the lateral error, without providing any warning about a potential fault caused by ambiguity associated with extended wall features.
The predictive error-covariance matrices computed internally by the ICET algorithm (labeled as \"ICET Predicted\") were consistently representative of the true error distribution (labeled \"ICET Actual\"). **T-Intersection Case** \\begin{tabular}{|c|c|c|c|} \\hline Algorithm & std error \\(x\\) & std error \\(y\\) & std error \\(\\theta\\) (rad) \\\\ \\hline NDT Actual & 0.554 & 0.490 & 0.00251 \\\\ \\hline ICET Actual & 0.1047 & 0.0545 & 0.00035 \\\\ \\hline ICET Predicted & 0.101 & 0.0602 & 0.00035 \\\\ \\hline \\end{tabular} **Straight Tunnel Case (High Geometric Ambiguity)** \\begin{tabular}{|c|c|c|c|} \\hline Algorithm & std error \\(x\\) & std error \\(y\\) & std error \\(\\theta\\) (rad) \\\\ \\hline NDT Actual & 0.330 & 39.26 & 0.00280 \\\\ \\hline ICET Actual & 0.0437 & N/A & 0.00031 \\\\ \\hline ICET Predicted & 0.0454 & Excluded from Solution & 0.00032 \\\\ \\hline \\end{tabular} The results in the table indicate, at least in this simple test case, that ICET predicts accuracy well and that the algorithm identifies instances of geometric ambiguity, thereby verifying our two major claims about the ICET algorithm. It is interesting, too, that the accuracy of ICET, at least on this data set, is noticeably higher than the accuracy of NDT. While ICET effectively negates the effects of ambiguity from extended surfaces (e.g. walls), ambiguity may still occur from other geometric conditions. Regularly spaced features of equal size (e.g. columns) may contribute to aliasing. However, extended flat surfaces remain the most common form of geometric ambiguity in automotive applications [12]. The ability for ICET to output a predicted error covariance makes the algorithm particularly useful in the context of Bayesian filtering algorithms, such as the extended Kalman filter. Constructing a Bayesian filter requires modeling sensor noise. Currently error models are available for individual lidar data points [13] but relatively little information is available [14] to characterize the output of a lidar algorithm such as NDT. The ICET algorithm sidesteps this gap in the literature by directly estimating the accuracy of its output states. Moreover, the ICET accuracy prediction automatically adapts to reflect different environments and terrains, so it is not necessary to rely on the simplistic (though convenient) assumption that algorithm accuracy is the same under all conditions. One limitation of ICET is that performance is somewhat dependent on the resolution and alignment of the voxel grid. Rectangular voxels are an attractive choice in that their regularity makes it easy to index and assign lidar points to individual voxels; however, rectangular voxels also have practical limitations, particularly in cases in which extended world features are not aligned with the grid. This issue can be resolved for NDT by using an overlapping grid [15]. For ICET, a new problem is that our dimension reduction approach assumes the extended object cuts through the middle of a cell rather than, say, across its corner. The apparent width of a wall passing through the corner of a voxel would appear to be small, and the feature would not necessarily be excluded if the wall were aligned at an arbitrary angle relative to the grid. This is expected to somewhat reduce the benefits of ICET's dimension-reduction step when applied to arbitrary data sets. 
## VII Summary A new technique for matching point clouds, the Iterative Closest Ellipsoidal Transform (ICET), was introduced which provides two novel improvements over existing voxelized scan-match methods such as the Normal Distributions Transform (NDT). The first contribution of ICET is the ability to estimate solution-error covariance (and thus the expected standard deviation of error for each component of translation and rotation in the solution vector) as a function of scene geometry. This is accomplished by first subdividing each scan into voxel grids and treating the distribution of points in each voxel as a probability density function. A linear least-squares solution is then calculated to determine the optimal transformation to minimize the differences between distribution centers for the new scan and a reference scan. This process is iterated multiple times to account for nonlinearities. The second major contribution of ICET allows the algorithm to ignore contributions to the solution vector from locally ambiguous directions (such as along the surface of a long smooth wall) while keeping information in useful directions (such as normal to the wall). Performance of ICET was verified in a simulated 2D environment against a benchmark implementation of NDT. In Monte-Carlo simulations of 1000 trials, ICET achieved an 80% reduction in estimation error while successfully predicting the standard deviation of error for all components of rotation and translation. The ability to estimate accuracy has utility for sensor fusion algorithms (e.g. a Kalman Filter), especially since ICET can dynamically adjust accuracy estimates on the fly, which avoids reliance on static - and likely overconservative - accuracy estimates. Furthermore, in a pathological straight-tunnel simulation, ICET was able to successfully identify the ambiguous dimension in all trials and suppress the solution along that axis. By contrast, in the same situation, NDT naively provides inaccurate estimates without warning that an ambiguity is present. The ambiguity-handling properties of ICET are especially important for autonomous vehicle applications, where tunnels and urban canyons may introduce ambiguity in the form of the aperture problem. ## Acknowledgements The authors wish to acknowledge and thank the U.S. Department of Transportation Joint Program Office (ITS JPO) and the Office of the Assistant Secretary for Research and Technology (OST-R) for sponsorship of this work. Opinions discussed here are those of the authors and do not necessarily represent those of the DOT or other affiliated agencies. ## Appendix As discussed in Section IV, the dimension-reduction process can, in rare cases, result in a solution singularity. These special cases can be detected numerically by analyzing (29) to compute the matrix _condition number_, which for this symmetric and positive-definite matrix is the ratio of its maximum to minimum eigenvalue. Matrices with high condition number are not practical to invert, so when the condition number is high (larger than, say, \\(10^{5}\\)), the lowest eigenvalue of (29) can be eliminated iteratively until the condition number falls below the threshold. This process eliminates dimensions from the _solution_, in contrast with the prior dimension-reduction step, which eliminated dimensions from the _measurement_.
In order to reduce the dimension of the solution, start by defining an eigendecomposition for (29), involving the eigenvector matrix \\(\\mathbf{V}\\) and the eigenvalue matrix \\(\\mathbf{\\Gamma}\\). \\[\\mathbf{H}^{T}\\mathbf{W}\\mathbf{H}=\\mathbf{V}\\mathbf{\\Gamma}\\mathbf{V}^{T} \\tag{31}\\] Again the eigenvalues can be partitioned into a submatrix of preserved eigenvalues \\(\\mathbf{\\Gamma}_{P}\\), which satisfy the condition number requirement, and a submatrix of eliminated eigenvalues \\(\\mathbf{\\Gamma}_{N}\\), which includes the approximately singular eigenvalues, meaning the low eigenvalues removed to meet the condition number requirement. The eigenvector matrix \\(\\mathbf{V}\\) can also be partitioned into the preserved and eliminated eigenvectors, \\(\\mathbf{V}_{P}\\) and \\(\\mathbf{V}_{N}\\), respectively. We can then solve a modified version of (6) in the well-conditioned subspace. To this end, we define coordinates \\(\\delta\\mathbf{z}\\) for the well-conditioned subspace where \\[\\delta\\mathbf{z}=\\mathbf{V}_{P}^{T}\\,\\delta\\mathbf{x}. \\tag{32}\\] Substituting (31) into (6), multiplying by \\(\\mathbf{V}_{P}^{T}\\), and introducing (32), the iterative Newton-Raphson correction (6) becomes \\[\\delta\\mathbf{z}=\\mathbf{\\Gamma}_{P}^{-1}\\,\\mathbf{V}_{P}^{T}\\,(\\mathbf{H}^{ T}\\mathbf{W}\\Delta\\mathbf{y}) \\tag{33}\\] The corrections are accumulated into the variable \\(\\mathbf{x}^{\\prime}\\) (initialized to zero), such that \\[\\mathbf{x}^{\\prime}\\rightarrow\\mathbf{x}^{\\prime}+\\mathbf{V}_{P}\\,\\delta \\mathbf{z}. \\tag{34}\\] At each Newton-Raphson stage, the estimated state \\(\\hat{\\mathbf{x}}\\) is computed as the initial state \\(\\bar{\\mathbf{x}}\\) plus the correction from (34). \\[\\hat{\\mathbf{x}}=\\bar{\\mathbf{x}}+\\mathbf{x}^{\\prime} \\tag{35}\\] When the iterations converge, there is only confidence in the solution in the coordinates of the well-conditioned subspace, associated with the directions of the vectors \\(\\mathbf{V}_{P}\\). For this reason, it is helpful to map the correction term \\(\\mathbf{x}^{\\prime}\\) back into the well-conditioned subspace as \\(\\mathbf{z}\\), in order to accurately express the solution covariance. This mapping has the form of a general observation equation, characterized by an observation matrix \\(\\mathbf{C}\\). \\[\\mathbf{z}=\\mathbf{C}\\mathbf{x}^{\\prime} \\tag{36}\\] In this case, the observation matrix is simply the projection matrix \\(\\mathbf{C}=\\mathbf{V}_{P}^{T}\\). The uncertainty associated with the difference between the estimate \\(\\hat{\\mathbf{z}}\\) and the true value \\(\\mathbf{z}_{\\mathbf{true}}\\) is characterized by the following covariance. \\[E[(\\hat{\\mathbf{z}}-\\mathbf{z}_{true})(\\hat{\\mathbf{z}}-\\mathbf{z}_{true})^{ T}]=\\mathbf{\\Gamma}_{P}^{-1} \\tag{37}\\] Result (37) has a dimension that is smaller than the size of the full-state covariance (22), reflecting the fact that there is no confidence in the solution in the eliminated directions, those associated with \\(\\mathbf{V}_{N}\\). ## References * [1] P. J. Besl and N. D. McKay. "A method for registration of 3-D shapes". In: _IEEE Transactions on Pattern Analysis and Machine Intelligence_ 14.2 (1992), pp. 239-256. doi: 10.1109/34.121791. * [2] Y. Chen and G. Medioni. "Object modeling by registration of multiple range images". In: _Proceedings. 1991 IEEE International Conference on Robotics and Automation_. 1991, 2724-2729 vol.3. doi: 10.1109/ROBOT.1991.132043. * [3] K. L.
Low. \"Linear Least-Squares Optimization for Point-to-Plane ICP Surface Registration\". In: 2004. * [4] P. Biber and W. Strasser. \"The Normal Distributions Transform: A New Approach to Laser Scan Matching\". In: vol. 3. Nov. 2003, 2743-2748 vol.3. isbn: 0-7803-7860-1. doi: 10.1109/IROS.2003.1249285. * [5] Min Jiang et al. \"Scan registration for mechanical scanning imaging sonar using kD2D-NDT\". In: June 2018, pp. 6425-6430. doi: 10.1109/CCDC.2018.8408259. * [6] Q. Li et al. \"LO-Net: Deep Real-Time Lidar Odometry\". In: June 2019, pp. 8465-8474. doi: 10.1109/CVPR.2019.00867. * [7] Shinsuke Shimojo, Gerald H Silverman, and Ken Nakayama. \"Occlusion and the solution to the aperture problem for motion\". In: _Vision research_ 29.5 (1989), pp. 619-626. * [8] Rene Schwietzke. \"Interstate I-93 Tunnel in Boston, part of the Big Dig\". In: 2005. url: [https://commons.wikimedia.org/wiki/File:Tunnel-large.jpg](https://commons.wikimedia.org/wiki/File:Tunnel-large.jpg). * [9] D. Simon. _Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches_. Wiley & Sons, 2006. * [10] NIST. \"Normal Distribution\". In: _NIST/ SEMATECH e-Handbook of Statistical Methods_. Oct. 2013, p. 1.3.6.6. doi: 10.18434/M32189. * [11] Ferdinand Pierre Beer, Ralph E Flori, and E Russell Johnston. _Mechanics for engineers: dynamics_. McGraw-Hill, 2007. * [12] Y. S. Park, H. Jang, and A. Kim. \"I-LOAM: Intensity Enhanced LiDAR Odometry and Mapping\". In: _2020 17th International Conference on Ubiquitous Robots (UR)_. 2020, pp. 455-458. doi: 10.1109/UR49135.2020.9144987. * [13] Silvere Bonnabel, Martin Barczyk, and Francois Goulette. \"On the covariance of ICP-based scan-matching techniques\". In: _2016 American Control Conference (ACC)_. 2016, pp. 5498-5503. doi: 10.1109/ACC.2016.7526532. * [14] Ashwin Vivek Kanhere and Grace Xingxin Gao. \"LiDAR SLAM utilizing normal distribution transform and measurement consensus\". In: _Proceedings of the 32nd International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2019)_. 2019, pp. 2228-2240. * [15] H. Hong, H. Kim, and B. H. Lee. \"Accuracy Evaluation of Registration of 3D Normal Distributions Transforms Interpolated by Overlapped Regular Cells\". In: _2018 18th International Conference on Control, Automation and Systems (ICCAS)_. 2018, pp. 1616-1619.
Lidar data can be used to generate point clouds for the navigation of autonomous vehicles or mobile robotics platforms. Scan matching, the process of estimating the rigid transformation that best aligns two point clouds, is the basis for lidar odometry, a form of dead reckoning. Lidar odometry is particularly useful when absolute sensors, like GPS, are not available. Here we propose the Iterative Closest Ellipsoidal Transform (ICET), a scan matching algorithm which provides two novel improvements over the current state-of-the-art Normal Distributions Transform (NDT). Like NDT, ICET decomposes lidar data into voxels and fits a Gaussian distribution to the points within each voxel. The first innovation of ICET reduces geometric ambiguity along large flat surfaces by suppressing the solution along those directions. The second innovation of ICET is to infer the output error covariance associated with the position and orientation transformation between successive point clouds; the error covariance is particularly useful when ICET is incorporated into a state-estimation routine such as an extended Kalman filter. We constructed a simulation to compare the performance of ICET and NDT in 2D space both with and without geometric ambiguity and found that ICET produces superior estimates while accurately predicting solution accuracy.
Give a concise overview of the text below.
251
arxiv-format/2109_08263v1.md
# Neural Network Based Lidar Gesture Recognition for Realtime Robot Teleoperation Simon Chamorro, Jack Collier, Francois Grondin This work was conducted at Defence Research and Development Canada. Prof. Grondin provided subject matter advice to the research team. S. Chamorro was employed as a student researcher when this research was conducted.S. Chamorro and F. Grondin are with the Department of Electrical Engineering and Computer Engineering, Interdisciplinary Institute for Technological Innovation (3IT), 3000 boul. de l'Universite, Universite de Sherbrooke, Sherbrooke, Quebec (Canada) J1K GA5, {simon.chamorro,francois.grondin2}@usherbrooke.ca.J. Collier is with Defence Research and Development Canada, Suffield Research Centre, 4000 Stn Main, Medicine Hat, Alberta (Canada) T1A 8K6, [email protected].** **2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.** ## I Introduction Robotic systems are expected to play an increasingly important role in everyday life whether it be autonomous vehicles, personal assistant robotics, etc. Large-scale adoption of these systems into society is dependent, in part, on their ease of use. Human-robot interaction (HRI) mechanisms must be intuitive, robust and efficient without imposing a heavy cognitive burden on the user. An effective HRI interface will work regardless of the environment and without cumbersome specialized training. These HRI principles are especially true in defence applications where the soldier must be aware of their surroundings and be ready to react to any dangers. One potential HRI technology that may help alleviate cognitive burden while being intuitive is gesture-based HRI. Traditional gesture-based control uses instrumentation such as inertial measurement units (IMUs) or touch-sensitive gloves to detect gestures. These technologies require robust communications infrastructure and specialized equipment to ensure the robot receives commands. Camera-based systems use machine learning techniques to determine gestures but lack robustness to varying lighting conditions, are unusable in the dark, and often have a limited field of view. The HRI design goals and the deficiencies of current systems listed above motivated the authors at Defence R&D Canada to investigate lidar-based gesture recognition to enable soldier-robot teaming. In our initial work [1], we adapted the pedestrian tracking and classification work in [2] to address large gesture classification for robot control. Slice features [3], which essentially encode the contour profile of a person, were used to train a model using Adaboost and learn gestures for robot teleoperation. While basic control was observed, the algorithm did not have a temporal component and could only detect static gestures and lacked robustness at moderate distances from the lidar due to point cloud density on a per frame basis. The system also required the area to be pre-mapped using GraphSlam [4] to allow for effective clustering of humans before gesture classification. In this work, we take a neural-network approach which includes a temporal component allowing us to classify both static Fig. 1: Gesture-based vehicle teleoperation.
We develop a lidar-based gesture recognition system for soldier-robot teaming. The system is able to operate in all lighting conditions, with no specialized user instrumentation or communications hardware. Furthermore, the operator can control the robot anywhere within the lidar's 360 degree field of view. The system has been integrated onto a large ground robot, allowing for gesture-based teleoperation as well as autonomous leader-follower behaviours. To the knowledge of the authors, this is a novel approach to gesture-based robot control. In particular, the authors have not seen any other system which uses sparse lidar to enable teleoperation in outdoor environments using learned gesture recognition. The main contributions of the work are as follows: 1. A robust and low-complexity large gesture classification system that runs in real-time using a long short-term memory (LSTM) model. 2. A pose estimation pipeline from lidar that uses a convolutional neural network (CNN) to learn body-pose features. 3. A gesture-based teleoperation system for a large ground vehicle operating in outdoor environments.

## II Related Work

Gesture detection and activity recognition are active areas of research that can benefit numerous real-life applications [5]. For example, activity recognition is used for monitoring purposes in the healthcare, security and surveillance industries [6]. Similarly, gesture recognition is very promising for human-robot interaction (e.g. smart home applications [7], self-parking for cars [8] and even robot surgery [9]). In defence research, related work [10] looks at how a soldier interacts with a wheeled robot to accomplish a task. The authors developed a system that uses a 9-axis IMU to perform gesture-based teleoperation. The work is further integrated with speech in [11] to allow for multi-modal commands. Most of the systems introduced above either use wearable sensors to detect human gestures, or focus on hand gestures only. This limits their suitability for applications such as robot teleoperation, since they require the user to wear a dedicated device or to stand very close to the sensors for the hands to be detected. In this work, we propose a gesture recognition system for robot teleoperation that recognises large gestures such as arm waving. To make the system low-complexity such that it can run online on a mobile robot with limited computing resources, we opt for a modular two-step approach: 1) body pose estimation and 2) gesture classification.

**Body pose estimation** has been widely studied recently and is used for many applications such as HRI, motion analysis, behaviour prediction, and virtual reality [12]. It is often performed using RGB and RGB-D data [13, 14]. In [15], the authors propose a multi-modal system using RGB imagery and lidar scans to obtain a precise 3D pose estimation. However, these methods require large CNNs and large amounts of data to achieve good results, since they use high-dimensional inputs such as high-resolution images and dense point clouds to predict body poses. This makes them ill-suited for real-time usage with limited computing. In this work, we pre-process lidar scans to preserve only the relevant information in a low-resolution 2D depth image, and use the latter for pose estimation, greatly reducing input dimensionality.

**Gesture classification** can be achieved using end-to-end models with video signals as inputs [16, 17, 18].
In [18], the authors use two separate 3D CNNs to learn spatio-temporal features of RGB and depth sequences. A Convolutional LSTM is used to learn the long-term spatio-temporal features for each separate input stream. Finally, the two streams are fused by simple averaging to obtain the final result. Others also leverage body pose estimation systems to build gesture recognition modules from skeleton data [19]. Fully connected layers together with temporal matching mechanisms are used to detect gestures from skeleton data [20]. Graph Convolutional Networks can also recognize gestures [21], and predict future body poses from the input sequences [22]. In [23], the authors use a CNN to detect a simple body pose (hands and face) in UAV video frames. The relative position of the hands to the face is used to detect the gestures and execute a robot task. The authors extend this work in [24] to also compute a 3D pointing vector from an RGB-D camera and use its intersection with the floor as a waypoint for the UGV to manoeuvre to. Other work focuses on the use of Recurrent Neural Networks (RNNs) for the purpose of gesture recognition (e.g. DeepGRU is a network based on multiple stacked GRU units that performs gesture recognition from raw skeleton, pose or vector data [25]). We opt for this approach of using only pose data to predict gestures with an RNN because of the low dimensionality of this type of data and the RNN's capacity to process temporal information. Finally, RGB frames, 3D skeleton joint information, and body part segmentation can be combined using a set of neural networks, such as a 3D CNN and an LSTM network, to predict human gestures [26].

## III Gesture Recognition System

We propose a robust and modular gesture recognition system using lidar as input. Figure 2 provides an overview of the system's architecture. The system consists of two main components: a body pose estimator and a gesture classifier. On the left, we see the body pose estimation system, which uses a three-step procedure to predict the user's body pose from a lidar scan. On the right, we see that the gesture classification module uses a sequence of the estimated poses to predict user gestures. This modular approach has many advantages: 1) the predicted body poses act as an intermediate explainable representation; 2) the gesture classifier is independent of the modality used to predict keypoints; 3) the pose estimation system significantly reduces the input dimensionality, which makes the gesture classifier lightweight.

### _Pose Estimation Network_

The proposed pose estimation system consists of three main modules: 1) 3D segmentation, 2) 2D projection and 3) pose estimation. The first module pre-processes the lidar point cloud to segment the points that belong to the tracked person. A cluster tracking algorithm can continuously track a person 360 degrees around the sensor to recognise their gestures. The algorithm uses Euclidean clustering to extract clusters from the point cloud and a Kalman filter to track the user given an initial position [27]. Extensions to multi-person tracking to enable multi-user UGV control are currently being investigated. Omnidirectional tracking is essential for vehicle teleoperation, as both the vehicle and the user can move. The user cluster point cloud is then projected onto a plane, and the module generates a one-channel \(128\times 64\) pixel depth image from the sensor's perspective. This reduces the data complexity while keeping all relevant information about the person's pose.
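A minimal sketch of this projection step is shown below, assuming the segmented user cluster is given as an \(N\times 3\) array of points in the sensor frame. The field-of-view and height bounds, as well as the function name, are illustrative choices rather than the implementation running on the vehicle.

```python
import numpy as np

def cluster_to_depth_image(points, width=128, height=64,
                           h_fov=np.deg2rad(60.0), v_span=2.2):
    """Project a segmented user point cloud (N x 3, sensor frame, metres)
    onto a single-channel depth image seen from the lidar's perspective.
    The horizontal field of view and vertical span are assumed values."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    dist = np.sqrt(x**2 + y**2)           # horizontal distance to the sensor
    azimuth = np.arctan2(y, x)            # angle around the sensor

    # Centre the cluster so the person fills the image wherever they stand.
    azimuth = azimuth - azimuth.mean()
    z = z - z.min()

    # Map azimuth to columns and height above ground to rows.
    cols = ((azimuth / h_fov + 0.5) * (width - 1)).astype(int)
    rows = ((1.0 - z / v_span) * (height - 1)).astype(int)
    valid = (cols >= 0) & (cols < width) & (rows >= 0) & (rows < height)

    # Keep the closest return per pixel; background pixels stay at 0.
    img = np.zeros((height, width), dtype=np.float32)
    order = np.argsort(-dist[valid])      # far points first, near points overwrite
    r, c, d = rows[valid][order], cols[valid][order], dist[valid][order]
    img[r, c] = d
    return img
```

Keeping the closest return per pixel preserves the person's silhouette even when several lidar points fall into the same cell.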
Finally, this image is fed to the pose estimation CNN, which estimates 8 body keypoint positions: the hips, the shoulders, the elbows and the wrists. The proposed network is composed of four convolutional layers with \(3\times 3\) kernels, 64 channels and a stride of one, followed by two fully connected layers of size 512 and 256, respectively. Each convolutional layer is followed by a \(2\times 2\) kernel max-pooling layer. All activation functions are ReLUs, except the last one, which is a sigmoid.

### _Gesture Classifier_

We frame the gesture classification problem as a temporal problem and use a network based on an LSTM to classify a sequence of estimated body poses. The sequence contains approximately one second of data, which corresponds to between 10 and 30 frames, depending on the frame rate of the input data (e.g. 10 Hz for a lidar, and 30 Hz for a stereo camera). The network is composed of one LSTM layer with 50 hidden dimensions followed by a fully connected layer and a softmax. When using the system for teleoperation, an extra post-processing filtering step is applied to the system's predicted gestures. For this purpose, a buffer of predicted gestures is created. A certain threshold (count of gestures of the same type within the buffer window) needs to be reached to trigger a gesture prediction, and a lower threshold needs to be maintained to continue predicting the current gesture. This makes the changes in gestures smooth and avoids high-frequency glitches for both static and dynamic gestures.

## IV Methodology

The proposed system is low-complexity and only requires a small dataset with weak labels, both for pose estimation and for gesture classification. In fact, we propose an automated labeling setup for pose estimation from the lidar using the capabilities of an existing pose estimation system from stereo imagery. We also introduce multiple data augmentation strategies to improve the classifier and make it independent of the body pose estimation method.

### _Training for Lidar Pose Estimation_

To train our pose estimation network, we leverage a pose estimation system from stereo imagery. We collected synchronised lidar scans and 3D body pose predictions, and then followed the procedure described in Section III-A to obtain training depth images with no labeling effort required. We also augmented the samples with different transformations such as rotations, translations and resizing in order to generate more training data. In total, we collected 3960 frames for training over a period of 20 minutes from two different individuals, which amounts to 738000 samples after the augmentation process. Our CNN was trained over 20 epochs, with a learning rate of 0.001. A total of 1000 frames were collected for testing, from three individuals different from the two used for training. For our experiments, we used the Ouster OS1-64 lidar and the ZED2 stereo camera along with its Software Development Kit (SDK), which offers a working pose estimation module1.

Footnote 1: [https://www.stereolabs.com/docs/](https://www.stereolabs.com/docs/)

### _Sample-efficient Gesture Classification Training_

To train our gesture classifier, we collected body pose data only from stereo input. We collected gestures from eight individuals, each performing one sample of each of the 18 learned gestures for approximately six seconds, as well as several negative samples containing no gesture of interest. The data from four people was used for training and the rest was used as the test set.
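A compact PyTorch sketch of the two networks described in Sections III-A and III-B is given below. The output dimensionality of the pose network (8 keypoints with 2 coordinates each, squashed to [0, 1] by the final sigmoid) and the number of gesture classes are assumptions made for illustration; the layer sizes otherwise follow the description above.

```python
import torch
import torch.nn as nn

class PoseCNN(nn.Module):
    """Four 3x3/stride-1 conv layers (64 channels), each followed by a 2x2
    max-pool, then fully connected layers of size 512 and 256 (Section III-A).
    The 16-dim output head (8 keypoints x 2 coordinates) is an assumption."""
    def __init__(self, n_keypoints=8, coords=2):
        super().__init__()
        blocks, in_ch = [], 1
        for _ in range(4):
            blocks += [nn.Conv2d(in_ch, 64, 3, stride=1, padding=1),
                       nn.ReLU(),
                       nn.MaxPool2d(2)]
            in_ch = 64
        self.features = nn.Sequential(*blocks)
        # A 1 x 64 x 128 depth image becomes a 64 x 4 x 8 feature map after four pools.
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 4 * 8, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, n_keypoints * coords), nn.Sigmoid())

    def forward(self, depth_img):            # (B, 1, 64, 128)
        return self.head(self.features(depth_img))

class GestureLSTM(nn.Module):
    """One LSTM layer with 50 hidden units followed by a fully connected
    layer and a softmax (Section III-B); the class count is illustrative."""
    def __init__(self, pose_dim=16, n_classes=19):
        super().__init__()
        self.lstm = nn.LSTM(pose_dim, 50, batch_first=True)
        self.fc = nn.Linear(50, n_classes)

    def forward(self, pose_seq):              # (B, T, pose_dim)
        _, (h_n, _) = self.lstm(pose_seq)
        return torch.softmax(self.fc(h_n[-1]), dim=-1)
```

In this sketch the LSTM consumes a batch of pose sequences of shape (batch, frames, 16) and returns one class distribution per sequence.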
To generate the training dataset, we sampled one-second sequences from each six-second recording and applied multiple transformations to the body keypoints, including noise, resizing, rotation and translations. The time scale was also altered by dropping frames or adding intermediate frames using interpolation. The augmentation process is illustrated in Figure 3. This data augmentation process allows us to train a useful gesture classifier with only 80 original gesture recordings. As mentioned, no extra data collection is needed to train a classifier for experiments with the lidar. We use the same data for training, but alter the augmentation process by simulating a 10 Hz frame rate, which is similar to the lidar's frequency of acquisition. Both of our LSTM-based gesture classifiers have the same architecture and are trained using a learning rate of 0.001.

Fig. 2: Our modular gesture recognition system. The body pose estimation module takes lidar data as inputs. The person in each point cloud frame is segmented and projected as a 2D depth image before being used by the CNN to predict a body pose in the form of keypoint coordinates. Then, keypoints are accumulated to form a sequence, which is fed to the gesture classifier at every iteration to predict human gestures.

## V Experimental Results

The performance of each subsystem presented in the previous section is evaluated individually. Then, the full pipeline is tested in terms of maximal range and maximum body rotation with respect to the sensor. We also test the responsiveness of our system to determine the average delay to predict a certain gesture. The ease of use of our teleoperation system is also qualitatively evaluated.

### _Pose Estimation Accuracy_

Table I shows the mean error for each body keypoint from the test set. Since the network is trained by using the keypoints predicted by the stereo pose estimation module as labels, the accuracy is measured by using them as ground truth values even though they are not perfectly accurate. The results show that our system has an average error of 4.3 cm when compared to the stereo pose estimation system used for labeling. We can see that the hip and shoulder features have a lower error, while the elbow and wrist features have a higher error, with an average of 5.6 cm. Compared to stereo imagery, the lidar is appealing as it has a higher range of operation for pose estimation (up to 10 m), is robust to any lighting condition, and works 360 degrees around the sensor. These advantages are critical for the application of the system to vehicle teleoperation, since it gives more freedom to the user. Furthermore, because we leverage an existing stereo pose estimation system, no manual labeling was needed for training.

### _Gesture Classifier Performance_

Table II presents the results of our gesture classifier on the test set with no post-processing applied. The results show that the model has an average precision of 94.6% for stereo input and 82.7% for lidar input. Noticeably, the dynamic gestures are harder to recognize and have a lower recall average compared to the static gestures, both for stereo and lidar. On the other hand, the classifier performs better when using body keypoints derived from stereo imagery. However, for vehicle teleoperation, the use of lidar is crucial for robustness to lighting conditions and omnidirectional usage. As shown in Figure 4, post-processing the network's predictions greatly improves performance.
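A minimal NumPy sketch of the keypoint-sequence augmentation illustrated in Figure 3 follows; the noise scale, rotation range and scaling range are illustrative values rather than the ones used to build the actual training set.

```python
import numpy as np

def augment_pose_sequence(seq, out_len=10, rng=np.random.default_rng()):
    """Augment one recorded keypoint sequence of shape (T, K, 3).
    Steps follow Figure 3: time re-scaling, constant noise on the keypoints,
    then rotation / translation / size scaling. All ranges are assumptions."""
    T = seq.shape[0]
    # 1) Time scaling: resample the recording to `out_len` frames, which
    #    drops frames or adds interpolated intermediate frames.
    src = np.linspace(0, T - 1, out_len)
    lo = np.floor(src).astype(int)
    hi = np.minimum(lo + 1, T - 1)
    frac = (src % 1.0)[:, None, None]
    out = (1 - frac) * seq[lo] + frac * seq[hi]

    # 2) Constant random noise applied to every frame of the sequence.
    out = out + rng.normal(scale=0.02, size=seq.shape[1:])

    # 3) Rotation about the vertical axis, then size scaling and translation.
    a = rng.uniform(-0.3, 0.3)
    rot = np.array([[np.cos(a), -np.sin(a), 0.0],
                    [np.sin(a),  np.cos(a), 0.0],
                    [0.0,        0.0,       1.0]])
    out = out @ rot.T
    out = out * rng.uniform(0.9, 1.1) + rng.uniform(-0.1, 0.1, size=3)
    return out
```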
\begin{table} \begin{tabular}{c|c c|c c|c c|c c|c} \hline \hline & \multicolumn{2}{c|}{**Hips**} & \multicolumn{2}{c|}{**Shoulders**} & \multicolumn{2}{c|}{**Elbows**} & \multicolumn{2}{c|}{**Wrists**} & \multicolumn{1}{c}{**All**} \\ & **R** & **L** & **R** & **L** & **R** & **L** & **R** & **L** & \\ \hline \(\mu\) & 3.1 & 3.0 & 2.9 & 3.0 & 6.3 & 4.4 & 5.5 & 6.3 & 4.3 \\ \(\sigma\) & 0.2 & 0.2 & 0.2 & 0.2 & 0.3 & 0.3 & 0.4 & 0.4 & 0.3 \\ \hline \hline \end{tabular} \end{table} TABLE I: Pose estimation mean (\(\mu\)) and standard deviation (\(\sigma\)) error (in cm) for right (R) and left (L) body keypoints.

Fig. 4: Gesture recognition system performance (Precision (–) and Recall (- -)) with the lidar pose estimation and the gesture classifier for different a) context times for gesture buffering, b) user distances from the lidar and c) body rotations with respect to the lidar. The results represent the mean results of all 18 learned gestures.

Fig. 3: Data augmentation pipeline for gesture classification training. First, a sequence is sampled from the original recording using a varying time scaling factor. Then, constant random noise is added to the body keypoints for all the sequence. Finally, transformations such as rotations, translations and size scaling are applied.

### _Complete System Performance_

The system was tested on an Argo J8 vehicle2 with both the body pose estimation pipeline and gesture recognition running on an Asus Zenbook laptop with an i7-8565U CPU and a GTX 1050 graphics card. The system, implemented using PyTorch3, is lightweight, as it only requires 500 MB of memory, and ran on the CPU. Inference took less than 50 ms with this setup, including lidar scan pre-processing and gesture prediction post-processing.

Footnote 2: [https://www.argo-xtr.com/index.php/xtr-robots/](https://www.argo-xtr.com/index.php/xtr-robots/)$-atlas-xtr/

Footnote 3: [https://pytorch.org/](https://pytorch.org/)

Figure 4 shows the performance of the system, including pose estimation from lidar and gesture classification from the predicted keypoints. For this experiment, we evaluate performance on 10-second recordings of gestures, where the user has no feedback on the system predictions that would allow them to correct their movements. We show that the system reaches a perfect precision and a 90% recall after approximately 1.1 seconds of performing a given gesture. In the same figure, we evaluate the limits of the system in terms of user distance and rotation. Precision and recall are excellent up to eight meters, and the system remains usable up to around ten meters. The main reason explaining the rapid performance drop beyond eight meters is the 2D projection step described in Section III-A. In fact, the user cluster becomes sparser as the person moves further away, and so does the 2D projection. A future improvement for the system could be to develop a more robust filtering procedure to produce a better 2D projection from sparse user point cloud clusters. Figure 4 also shows the performance with respect to user rotation (rotation of the torso with respect to the lidar centroid). We notice that the system can perform reasonably well up to a 45 degree rotation, maintaining a 56.5% precision and a 43.6% recall. This performance is sufficient for the system to be usable, especially when the user has access to feedback from the system and can correct the way of performing each gesture depending on the predictions.
This limitation is due to the fact that the lidar scans are projected in 2D, which makes it difficult to classify gestures when the body rotates, but it can be easily mitigated by the user facing the vehicle when performing gestures.

## VI Gesture-based Vehicle Teleoperation

We propose an intuitive and complete gesture teleoperation system to showcase the robustness of our gesture recognition system. There are 18 gestures learned by our system, illustrated in Fig. 5, and each one of them is mapped to a specific command. These gestures are inspired by the United States Army visual signals catalog [28], but the proposed teleoperation system maps them to different commands. Figure 6 shows the different state transitions triggered by the learned gestures. The system supports two modes: 1) a gesture-based teleoperation mode and 2) a leader-follower mode. When in gesture-based teleoperation mode, the vehicle executes commands based on recognised gestures. When in leader-follower mode, the vehicle simply follows the user until the mode is deactivated. A cluster tracking algorithm and a proportional controller are used for this purpose. This mode eases operation when precise control is unnecessary and is more suitable for traveling longer distances. Specific gestures are chosen to enable and disable the leader-follower mode.

\begin{table} \begin{tabular}{c c|c c c c c c c c c c c c|c c c c c c} \hline \hline & & \multicolumn{12}{c|}{**Static**} & \multicolumn{6}{c}{**Dynamic**} \\ & **Gestures** & **1** & **2** & **3** & **4** & **5** & **6** & **7** & **8** & **9** & **10** & **11** & **12** & **13** & **14** & **15** & **16** & **17** & **18** \\ \hline \multirow{2}{*}{Stereo} & Precision (\%) & 100 & 98 & 100 & 94 & 100 & 100 & 96 & 97 & 97 & 100 & 100 & 78 & 98 & 98 & 98 & 96 & 100 & 100 \\ & Recall (\%) & 100 & 100 & 92 & 100 & 87 & 98 & 94 & 100 & 98 & 98 & 100 & 97 & 93 & 96 & 100 & 93 & 85 & 67 \\ \hline \multirow{2}{*}{Lidar} & Precision (\%) & 94 & 99 & 95 & 91 & 94 & 95 & 90 & 95 & 92 & 87 & 94 & 94 & 84 & 94 & 95 & 84 & 99 & 98 \\ & Recall (\%) & 77 & 91 & 91 & 74 & 91 & 92 & 92 & 92 & 80 & 87 & 92 & 90 & 66 & 76 & 78 & 84 & 71 & 55 \\ \hline \hline \end{tabular} \end{table} TABLE II: Results of the proposed gesture classifier on the test set.

Fig. 5: Static (1–12) and dynamic (13–18) gestures learned by the classifier (Images from [28]).

## VII Conclusion

We propose a novel, robust and low-complexity lidar-based gesture recognition system for vehicle teleoperation. We present a pose estimation system that uses a CNN to track and extract user body keypoints. These keypoints are input to the proposed gesture classification module that uses an LSTM network and a temporal post-processing procedure to classify gestures. Finally, a working teleoperation setup is designed to showcase system teleoperation from lidar input. The system can run in real-time on a robotic vehicle with limited computing capabilities, requires very little data collection and no manual labeling of gestures. The system was able to function in all tested outdoor conditions, regardless of lighting and the user's position around the vehicle. The results show a lot of potential for gesture-based robotic control. Future development will focus on a multi-user extension allowing for control hand-off between users as well as the integration of the system as part of a multi-modal framework for robot teleoperation that also includes speech and a tablet interface.
Finally, this multi-modal system will be used in human factors testing to determine the optimal interface for Canadian Armed Forces operational scenarios.

## References

* M. Asadi-Aghbolaghi, A. Clapes, M. Bellantonio, H. J. Escalante, V. Ponce-Lopez, X. Baro, I. Guyon, S. Kasaei, and S. Escalera (2017) A survey on deep learning based approaches for action and gesture recognition in image sequences. In Proceedings of the IEEE International Conference on Automatic Face & Gesture Recognition, pp. 476-483.
* M. Li, S. Chen, X. Chen, Y. Zhang, Y. Wang, and Q. Tian (2019) Actional-structural graph convolutional networks for skeleton-based action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3595-3603.
We propose a novel low-complexity lidar gesture recognition system for mobile robot control robust to gesture variation. Our system uses a modular approach, consisting of a pose estimation module and a gesture classifier. Pose estimates are predicted from lidar scans using a Convolutional Neural Network trained using an existing stereo-based pose estimation system. Gesture classification is accomplished using a Long Short-Term Memory network and uses a sequence of estimated body poses as input to predict a gesture. Breaking down the pipeline into two modules reduces the dimensionality of the input, which could be lidar scans, stereo imagery, or any other modality from which body keypoints can be extracted, making our system lightweight and suitable for mobile robot control with limited computing power. The use of lidar contributes to the robustness of the system, allowing it to operate in most outdoor conditions, to be independent of lighting conditions, and for input to be detected 360 degrees around the robot. The lidar-based pose estimator and gesture classifier use data augmentation and automated labeling techniques, requiring a minimal amount of data collection and avoiding the need for manual labeling. We report experimental results for each module of our system and demonstrate its effectiveness by testing it in a real-world robot teleoperation setting. Gesture Recognition, Pose Estimation, Lidar, Teleoperation
arxiv-format/2404_04520v1.md
# IITK at SemEval-2024 Task 4: Hierarchical Embeddings for Detection of Persuasion Techniques in Memes

Shreenaga Chikoti Shrey Mehta Ashutosh Modi Indian Institute of Technology Kanpur (IIT Kanpur) [email protected] [email protected]

## 1 Introduction

Memes are popular among people of all age groups today through different social media platforms (Keswani et al., 2020; Singh et al., 2020). These memes help people know about the trends around them and can influence their decisions. Memes are one of the popular modes for spreading disinformation among people (examples in Figure 1), as studies have suggested that people tend to believe what they see frequently in such memes spread over the internet (Moravec et al., 2018). As evidenced by research (Shu et al., 2017) on the 2016 US Presidential campaign, nefarious actors, including bots, cyborgs, and trolls, leveraged memes to evoke emotional reactions and propagate misleading narratives (Guo et al., 2020). In this respect, SemEval-2024 Task 4 (Dimitrov et al., 2024) focuses on predicting the persuasion technique (from the visual and textual content) used in a meme across four different languages: English, Arabic, North Macedonian and Bulgarian. The task is divided into three sub-tasks: (**1**) hierarchical multi-label classification using only the textual content of the meme, (**2**) hierarchical multi-label classification using both the textual and visual content of the meme, and (**3**) binary classification of whether the meme contains a persuasion technique or not using its textual and visual content. The training data is provided for each sub-task, but only in English. A taxonomy of the various persuasion techniques (Figure 2) and their respective definitions are provided.

To address sub-task **1**, we employed a dual approach involving definition-based modeling for each class and hierarchical classification using hyperbolic embeddings, as proposed in Chen et al. (2023). Based on hyperbolic embeddings, the method facilitates a nuanced classification of persuasion techniques by leveraging hierarchical structures. The incorporation of definition-based modeling allows for a dataset-agnostic approach, enhancing the precision of classification without reliance on hierarchical structures. For sub-task **2**, we augmented our methodology by integrating CLIP embeddings (Radford et al., 2021) to capture essential features from memes' textual and visual components. This fusion of textual and visual information enables a more comprehensive analysis of meme content. In addressing sub-task **3**, we adopted an ensemble approach, leveraging transfer learning from both DistilBERT (Sanh et al., 2019) and CLIP embeddings (Radford et al., 2021). This ensemble technique enhances the robustness and effectiveness of our classification system by amalgamating insights from both pre-trained models. We release the code via GitHub.1

Figure 1: Sample set of memes showing the multi-modal setting

Footnote 1: [https://github.com/Exploration-Lab/IITK-SemEval-2024-Task-4-Pursuasion-Techniques](https://github.com/Exploration-Lab/IITK-SemEval-2024-Task-4-Pursuasion-Techniques)

## 2 Background

The goal of propaganda is to influence people's mindsets (Singh et al., 2020), especially at the time of elections, where the trends in the media influence the votes of the people (Shu et al., 2017). Propaganda uses psychological and rhetorical techniques to serve its purpose. Such methods include using logical fallacies and appealing to the audience's emotions.
Logical fallacies are usually hard to spot since the argumentation, at first sight, might seem correct and objective. However, a careful analysis shows that the conclusion cannot be drawn from the premise without misusing logical rules (Gupta and Sharma, 2021). Another set of techniques uses emotional language to induce the audience to agree with the speaker only based on the emotional bond that is being created, provoking the suspension of any rational analysis of the argumentation (Szabo, 2020).

Corpora development has been instrumental in advancing deception detection methodologies. Rashkin et al. (2017) introduced the TSHP-17 corpus, providing document-level annotation across four classes: trusted, satire, hoax, and propaganda. However, their study on the classification task revealed limitations in the generalizability of n-gram-based approaches. Building on this, Barron-Cedeno et al. (2019) contributed the QProp corpus, which specifically targeted propaganda detection, employing a binary classification scheme of propaganda versus non-propaganda. Similarly, Habernal et al. (2018) developed a corpus annotated with fallacies, including _ad hominem_ and _red herring_, directly relevant to propaganda techniques.

Figure 2: Taxonomy of persuasion techniques for sub-task **2**

BERT-based variants have emerged as promising methodologies for classification tasks in tandem with corpus development. Yoosuf and Yang (2019) proposed a fine-tuning approach based on word-level classification using BERT, while Fadel et al. (2019) presented a pre-trained ensemble model integrating BiLSTM, BERT, and RNN components. Further extending the capabilities of BERT, Costa et al. (2023) advocated for a multilingual setup, employing translation to English before utilizing RoBERTa. Additionally, Teimas and Saias (2023) proposed a hybrid technique combining CNN with DistilBERT for improved detection accuracy. Exploring multimodal content, Glenski et al. (2019) delved into multilingual multimodal deception detection, mainly focusing on hateful memes. Leveraging visual and textual content, they utilized fine-tuning techniques with state-of-the-art models like ViLBERT and VisualBERT, as well as transfer learning-based approaches (Gupta et al., 2021).

## 3 Data Description

The competition consisted of two phases; in the development phase, we were provided training and validation sets for benchmarking our models, together with a development set. All three sub-tasks have different sets of memes split across the training, validation and development sets, as shown in Table 2. We have also plotted the distribution of the labels across the training data (Figure 3) and the validation data (Figure 4). Our analysis used a dictionary to map the various rhetorical techniques to numerical values for plotting. This dictionary is shown in Table 1.

## 4 System overview

The proposed system for all the sub-tasks involves task-specific modifications made to the BERT model and earlier proposed works, including the CLIP model (Radford et al., 2021), Class Definition based Emotion Prediction (Singh et al., 2021, 2023) and the HypEmo model (Chen et al., 2023), described below.

### Data Pre-processing

To ensure consistency and standardization, we begin by pre-processing the text. This involves removing newline characters, commas, numerical values, and other special characters. Additionally, the entire text is converted to lowercase.
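A minimal sketch of this cleaning step is given below; the exact character classes treated as "special characters" are an assumption rather than the released implementation.

```python
import re

def preprocess_text(text: str) -> str:
    """Normalise a meme caption: drop newlines, commas, numerical values and
    other special characters, then lowercase, as described above."""
    text = text.replace("\n", " ").replace(",", " ")
    text = re.sub(r"\d+", " ", text)            # numerical values
    text = re.sub(r"[^A-Za-z\s]", " ", text)     # remaining special characters
    return re.sub(r"\s+", " ", text).strip().lower()
```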
In our approach, we leverage the Development (Dev) and Training sets, focusing solely on samples containing non-zero classes.

\begin{table} \begin{tabular}{l c} \hline \hline **Persuasion Technique** & **Number mapped to** \\ \hline Presenting Irrelevant Data (Red Herring) & 0 \\ Bandwagon & 1 \\ Smears & 2 \\ Glittering generalities (Virtue) & 3 \\ Causal Oversimplification & 4 \\ Whataboutism & 5 \\ Loaded Language & 6 \\ Exaggeration/Minimisation & 7 \\ Repetition & 8 \\ Thought-terminating cliché & 9 \\ Name calling/Labeling & 10 \\ Appeal to authority & 11 \\ Black-and-white Fallacy/Dictatorship & 12 \\ Obfuscation, Intentional vagueness, Confusion & 13 \\ Misrepresentation of Someone's Position (Straw Man) & 14 \\ Reductio ad hitlerum & 15 \\ Appeal to fear/prejudice & 16 \\ Flag-waving & 17 \\ Slogans & 18 \\ Doubt & 19 \\ \hline \hline \end{tabular} \end{table} Table 1: Dictionary mapping for the different persuasion techniques for Sub-task 1

Figure 4: Frequency distribution of labels in the validation dataset

Figure 3: Frequency distribution of labels in the training dataset

### Sub-task 1: Hierarchical Multi-label Text Classification

We present a novel approach to meme classification, drawing upon the methodologies of two key frameworks: HypEmo and a multi-task learning model focused on emotion definition modeling. HypEmo (Chen et al., 2023) utilizes pre-trained label hyperbolic embeddings to capture hierarchical structures effectively, particularly in tree-like formations. Initially, the hidden state of the [CLS] token from the RoBERTa backbone model is projected using a Multi-Layer Perceptron (MLP). Subsequently, an exponential map is applied to project it into hyperbolic space. The distance from the pre-trained label embeddings is used as the weight for the cross-entropy loss function, enhancing the model's sensitivity to label relationships. To implement the HypEmo architecture, we transform the Directed Acyclic Graph (DAG) (Figure 2) into a tree structure. This involves duplicating children with multiple parents, resulting in distinct embeddings for each label. For example, a sentence with various labels is converted into separate samples, each assigned one label. Utilizing the Poincaré hyperbolic entailment cones model (Ganea et al., 2018) with 100 dimensions, the constructed tree undergoes training, with predictions generated via softmax. Peaks are identified through a Z-score analysis associated with each class, with thresholds set accordingly.

Singh et al. (2021, 2023) have introduced a complementary approach focusing on emotion prediction through a multi-task learning framework. This model incorporates auxiliary tasks, including masked language modeling (MLM) and class definition prediction, to enhance the understanding of emotional concepts. In our setup, class definitions are merged using a [SEP] token, with the model trained to predict whether the conjoined definition matches the actual definition. Binary cross-entropy loss is employed for this task, along with MLM for fine-tuning the model. Additionally, binary cross-entropy loss is used for each class during training. We utilize the class definitions provided by the meme classification competition for the auxiliary task of class-definition prediction. Finally, we merge the predictions generated by both models (HypEmo and the fine-grained class-definition based model) to compute the final predictions.
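A small sketch of how the two sets of predictions can be combined is shown below, assuming the HypEmo branch yields a softmax vector over the leaf classes and the class-definition model yields a set of predicted labels; the Z-score threshold is an illustrative value, not the tuned one.

```python
import numpy as np

def zscore_labels(probs, threshold=1.5):
    """Pick 'peak' classes from a softmax vector via a Z-score test;
    the threshold is an assumed value."""
    z = (probs - probs.mean()) / (probs.std() + 1e-8)
    return {i for i, s in enumerate(z) if s > threshold}

def ensemble_predictions(hypemo_probs, cdp_labels, threshold=1.5):
    """Merge the HypEmo peak classes with the class-definition model's
    labels by taking their union to form the final multi-label prediction."""
    return zscore_labels(np.asarray(hypemo_probs), threshold) | set(cdp_labels)
```

Taking the union keeps any class that either branch is confident about.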
This integrated approach aims to leverage the strengths of each framework, enhancing the accuracy and comprehensiveness of meme classification outcomes.

\begin{table} \begin{tabular}{l c c c} \hline \hline **Sub-task** & **Train Data** & **Validation Data** & **Development Data** \\ \hline Sub-task 1 & 7000 & 500 & 1000 \\ Sub-task 2 & 7000 & 500 & 1000 \\ Sub-task 3 & 1200 & 300 & 500 \\ \hline \hline \end{tabular} \end{table} Table 2: Distribution of data across sub-tasks

Figure 5: The meme sarcastically suggests that individuals who oppose Trump are being unfairly equated with terrorists, highlighting the absurdity of such comparisons. Two persuasion techniques are used: (i) _Loaded Language_, and (ii) _Name calling_, both of which can be inferred from the text and the visual content.

Figure 6: Proposed architecture for sub-task **2**

### Sub-task 2: Hierarchical Multi-label Text and Image Classification

We model this sub-task by experimenting with an ensemble of HypEmo (Chen et al., 2023) and the class definition-based multi-task learning model (Singh et al., 2021, 2023) for the textual content of the meme, and using CLIP model (Radford et al., 2021) embeddings for extracting the relevant features from the visual content of the meme. We construct a DAG structure similar to that of sub-task 1 and generate the hyperbolic embeddings. The image embeddings obtained from the CLIP model are concatenated with the embeddings generated for the textual content before sending the combined feature vector for training. Then, the model is trained, and the predictions are generated using the softmax activation function. The Z-score analysis is done on the resulting predictions to make the classification, similar to sub-task 1. An overview of the architecture of the modified HypEmo model is shown in Figure 7.

### Sub-task 3: Binary Text and Image Classification

In this task, we must classify whether a meme contains a persuasion technique based on its textual and visual content. We use the pre-trained BERT\({}_{\text{BASE}}\) model (Devlin et al., 2019) and Convolutional Neural Network (CNN) (O'Shea and Nash, 2015) layers to extract the features from the text and image, respectively. We attach a feed-forward network to the \([CLS]\) token embedding, consisting of two linear layers connected by a \(sigmoid\) activation function, which generates the sentence embeddings corresponding to the textual content of the meme. We use a network of four CNN layers connected through the ReLU activation function, which progressively extracts features from the input image. Max-pooling layers are used to down-sample the feature maps, increasing robustness to minor variations. The resultant image embeddings are concatenated with the sentence embeddings, and a linear classifier is applied to the combined feature vector with the \(sigmoid\) activation function. We use the binary cross-entropy loss function to train the model and tune the hyperparameters on the validation set. An overview of the model architecture is shown with an example in Figure 8.
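A hedged PyTorch sketch of this text-image fusion classifier is given below; the hidden sizes, the CNN channel widths, and the assumption that the BERT \([CLS]\) embedding is pre-computed are illustrative choices rather than the exact configuration used for the submission.

```python
import torch
import torch.nn as nn

class MemePersuasionClassifier(nn.Module):
    """Sketch of the sub-task 3 model: a small feed-forward head on the BERT
    [CLS] embedding, a four-layer CNN on the meme image, and a linear
    classifier with a sigmoid on the concatenated features."""
    def __init__(self, cls_dim=768, txt_dim=256):
        super().__init__()
        self.text_head = nn.Sequential(
            nn.Linear(cls_dim, 512), nn.Sigmoid(), nn.Linear(512, txt_dim))
        convs, in_ch = [], 3
        for out_ch in (16, 32, 64, 64):          # channel widths are assumptions
            convs += [nn.Conv2d(in_ch, out_ch, 3, padding=1),
                      nn.ReLU(), nn.MaxPool2d(2)]
            in_ch = out_ch
        self.image_head = nn.Sequential(*convs,
                                        nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.classifier = nn.Sequential(nn.Linear(txt_dim + 64, 1), nn.Sigmoid())

    def forward(self, cls_embedding, image):
        txt = self.text_head(cls_embedding)       # (B, txt_dim)
        img = self.image_head(image)               # (B, 64)
        return self.classifier(torch.cat([txt, img], dim=1)).squeeze(1)
```

The model takes a (batch, 768) \([CLS]\) embedding and a (batch, 3, H, W) image tensor, and returns one "persuasion technique present" probability per meme; the adaptive pooling makes the image resolution flexible.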
Since the training data is in a 2:1 ratio for the "persuasive" (positive, labeled as 1) and "not-persuasive" (negative, labeled as 0) classes, which leads to an imbalance in the dataset, we use the weighted binary cross-entropy loss function shown below:

\[L(\mathbf{x},\mathbf{y})=-\frac{1}{N}\sum_{i=1}^{N}\Big(w\,y_{i}\log(x_{i})+(1-w)\,(1-y_{i})\log(1-x_{i})\Big),\qquad w=\frac{K-f}{f},\]

where \(N\) is the batch size, \(i\) is the index of the \(i^{th}\) batch element, \(f\) is the frequency of the positive class, \(\mathbf{x}\) is the output of the last \(sigmoid\) layer, \(\mathbf{y}\) is the vector of ground truth labels, and \(K\) is the total size of the training dataset. Finally, we use the output probability of the final \(sigmoid\) layer and choose the class with the higher probability to predict whether a persuasion technique is present in the meme.

## 5 Experimental setup

### Implementation Details

We have used the official PyTorch implementation (Paszke et al., 2019) for implementing all the models across sub-tasks. We have used the HypEmo2 model and the Class Definition Prediction (CDP)3 model for generating the hyperbolic embeddings and the class-definition based features of the textual content, respectively, and the CLIP4 model (specifically, 'clip-ViT-B-32') for generating embeddings for the visual features of the meme. Some portions of the test set are in languages other than English for testing purposes. Since the models described earlier were trained in English, we translated the non-English data into English using the OPUS-MT model (Tiedemann and Thottingal, 2020) from the HuggingFace5 library, and inference was done on the translated text. We created an ensemble of the classes predicted by all the models and took a union of the predicted labels to produce the final predicted set of labels to which the meme belonged.

Figure 7: Proposed architecture for sub-task **3**

Footnote 5: OPUS-MT, [https://huggingface.co/Helsinki-NLP/opus-mt-bg-en](https://huggingface.co/Helsinki-NLP/opus-mt-bg-en)

We have used the data in the same ratio provided in the task to train the models. For each sub-task, we combine the training and validation datasets for training and test on the four languages.

### Evaluation Metrics

Sub-tasks 1 and 2 depend on a hierarchy, as shown in Figure 2. Hierarchical-F1 (Kiritchenko et al., 2006) is used as the evaluation metric for these two sub-tasks. In these two sub-tasks, the gold label is always a leaf node of the DAG, considering the hierarchy in Figure 2 as a reference. However, any node of the DAG can be a predicted label, with rewards assigned as follows:

* If the prediction is a leaf node and it is the correct label, then a full reward is given. For example, _Red Herring_ is predicted and is the gold label.
* If the prediction is NOT a leaf node and is an ancestor of the correct gold label, then a partial reward is given (the reward depends on the distance between the two nodes). For example, the gold label is _Red Herring_ and the predicted label is _Distraction_ or _Appeal to Logic_.
* If the prediction is not an ancestor node of the correct label, then a null reward is given. For example, the gold label is _Red Herring_ and the predicted label is _Black and White Fallacy_ or _Appeal to Emotions_.

Sub-task 3 uses macro-F1 as the evaluation metric for the binary classification task. This ensures equal importance to the "persuasion technique present" and "no persuasion technique" classes, regardless of potential data imbalance.
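As a concrete reference for the class-weighted loss defined in the sub-task 3 section above, a minimal PyTorch transcription is sketched below; the epsilon clamp is an addition for numerical stability and the function name is illustrative.

```python
import torch

def weighted_bce(x, y, pos_freq, dataset_size):
    """Weighted binary cross-entropy: x are sigmoid outputs, y the 0/1 labels,
    pos_freq the number of positive ('persuasive') training samples, and
    dataset_size the total number of training samples."""
    w = (dataset_size - pos_freq) / pos_freq        # w = (K - f) / f
    eps = 1e-7
    x = x.clamp(eps, 1 - eps)                        # avoid log(0)
    loss = -(w * y * torch.log(x) + (1 - w) * (1 - y) * torch.log(1 - x))
    return loss.mean()
```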
## 6 Results

We conducted several experiments across all the sub-tasks, and the detailed information can be seen in Tables 3, 4, 5, 8 and 9. For Task 1, we started experimenting with the BERT and RoBERTa models, achieving hierarchical F1 scores of \(0.55\) and \(0.60\), respectively, on the English test set. However, this approach did not take the hierarchy and the definitions of the classes into consideration. We tried to accommodate these using the combination of the HypEmo and CDP models. For the HypEmo model, the model was trained to prioritize higher-level labels in the Directed Acyclic Graph (DAG). During this process, we explored two options: eliminating children when the model predicted the parent label, and retaining the children. We observed a significant impact on the hierarchical F1 score, with the first formulation yielding \(0.45\) F1 and the second approach resulting in \(0.59\) on the test set. We also tried to predict the labels utilizing only the definitions of the classes, using the CDP model, which yielded hierarchical F1 scores of \(0.57\) and \(0.59\) on the dev set and the test set, respectively. For constructing an ensemble, one approach considered concatenating embeddings or softmax predictions from both models for further classification using a neural network. However, this approach was not viable due to limited samples for generalization. The most effective model emerged from utilizing the ensemble with fine-tuning of hyperparameters. Combining predictions from both models yielded a hierarchical F1 score of \(0.60\). Table 8 shows that the best generalizability across all tasks is achieved via HypEmo + CDP (Union) for sub-task 1.

Table 4: Macro F1 scores for different persuasion classes for the given languages for Sub-task 1

Table 3: Macro F1 scores for different persuasion classes for the given languages for Sub-task 2
For sub-task 2, we trained the model from scratch after including the two additional labels, used the same ensemble as in sub-task 1, and changed the feature embeddings being trained by incorporating features from the visual content. However, there is very little to no difference between the results with and without the CLIP embeddings. We can also see that, unlike the first sub-task, the models perform better due to the larger amount of data. The per-class F1-score analysis for sub-tasks 1 and 2 is given in Table 4 and Table 3, respectively. For sub-task 3, our model achieved a macro-F1 score of \(0.63\) on the dev set.

## 7 Conclusion

Detection of persuasion techniques in memes is seen in a multi-modal setting in this task, but the significant features are drawn from the textual cues in the memes, as can be seen in the results of sub-tasks 1 and 2. CLIP and other visual language models still need considerable development, and visual cues are helpful for only specific input-output pairs. Identifying whether a persuasion technique is present in a meme can be beneficial, even though it does not directly solve the multi-label classification task. Also, we have used a basic ensemble of the latest works in this area and modified them for task-specific requirements; other, more complex architectures can still be explored to get better results.

## References

* M. Glenski, E. Ayton, J. Mendoza, and S. Volkova (2019) Multilingual multimodal digital deception detection and disinformation spread across social platforms. arXiv preprint arXiv:1909.05838.
* S. Gupta, D. Gautam, and R. Mamidi (2021) Volta at SemEval-2021 Task 6: towards detecting persuasive texts and images using textual and multimodal ensemble.
* V. Gupta and R. Sharma (2021) NLPIITR at SemEval-2021 Task 6: RoBERTa model with data augmentation for persuasion techniques detection. In Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021), Online, pp. 1061-1067.
* I. Habernal, P. Pauli, and I. Gurevych (2018) Adapting serious game for fallacious argumentation to German: pitfalls, insights, and best practices. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).
* V. Keswani, S. Singh, S. Agarwal, and A. Modi (2020) IITK at SemEval-2020 Task 8: unimodal and bimodal sentiment analysis of Internet memes. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, Barcelona (online), pp. 1135-1140.
* S. Kiritchenko, R. Nock, and F. Famili (2006) Learning and evaluation in the presence of class hierarchies: application to text categorization. Vol. 4013, pp. 395-406.
* P. Moravec, R. Minas, and A. Dennis (2018) Fake news on social media: people believe what they want to believe when it makes no sense at all. SSRN Electronic Journal.
* K. O'Shea and R. Nash (2015) An introduction to convolutional neural networks.
* A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala (2019) PyTorch: an imperative style, high-performance deep learning library.
* B. Guo, Y. Ding, L. Yao, Y. Liang, and Z. Yu (2020) The future of false information detection on social media: new perspectives and trends. ACM Computing Surveys (CSUR) 53(4), pp. 1-36.
* A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, et al. (2021). Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pp. 8748-8763. PMLR.
* H. Rashkin, E. Choi, J. Y. Jang, S. Volkova, and Y. Choi (2017). Truth of varying shades: Analyzing language in fake news and political fact-checking. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 2931-2937.
* V. Sanh, L. Debut, J. Chaumond, and T. Wolf (2019). DistilBERT, a distilled version of BERT: Smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
* K. Shu, A. Sliva, S. Wang, J. Tang, and H. Liu (2017). Fake news detection on social media: A data mining perspective. ACM SIGKDD Explorations Newsletter, 19(1):22-36.
* G. Singh, D. Brahma, P. Rai, and A. Modi (2021). Fine-grained emotion prediction by modeling emotion definitions. In 2021 9th International Conference on Affective Computing and Intelligent Interaction (ACII), pp. 1-8, Los Alamitos, CA, USA. IEEE Computer Society.
* G. Singh, D. Brahma, P. Rai, and A. Modi (2023). Text-based fine-grained emotion prediction. IEEE Transactions on Affective Computing.
* P. Singh, S. Sandhu, S. Kumar, and A. Modi (2020). newsSweeper at SemEval-2020 Task 11: Context-aware rich feature representations for propaganda classification. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, Barcelona (online), pp. 1764-1770. International Committee for Computational Linguistics.
* G. Szabo (2020). Emotional communication and participation in politics. Intersections, 6:5-21.
* R. Teimas and J. Saias (2023). Detecting persuasion attempts on social networks: Unearthing the potential of loss functions and text pre-processing in imbalanced data settings. Electronics, 12(21):4447.
* J. Tiedemann and S. Thottingal (2020). OPUS-MT - Building open translation services for the world. In Proceedings of the 22nd Annual Conference of the European Association for Machine Translation, pp. 479-480, Lisboa, Portugal. European Association for Machine Translation.
* S. Yoosuf and Y. Yang (2019). Fine-grained propaganda detection with fine-tuned BERT. In Proceedings of the Second Workshop on Natural Language Processing for Internet Freedom: Censorship, Disinformation, and Propaganda, pp. 87-91.
Memes are one of the most popular types of content used in online disinformation campaigns. They are primarily effective on social media platforms, since there they can easily reach many users. Memes in a disinformation campaign achieve their goal of influencing users through several rhetorical and psychological techniques, such as causal oversimplification, name-calling, and smears. SemEval 2024 Task 4, _Multilingual Detection of Persuasion Techniques in Memes_, on identifying such techniques in memes, is divided into three sub-tasks: (**1**) hierarchical multi-label classification using only the textual content of the meme, (**2**) hierarchical multi-label classification using both the textual and the visual content of the meme, and (**3**) binary classification of whether the meme contains a persuasion technique or not, using its textual and visual content. This paper proposes an ensemble of Class Definition Prediction (CDP) and hyperbolic embeddings-based approaches for this task. We enhance meme classification accuracy and comprehensiveness by integrating HypEmo's hierarchical label embeddings (Chen et al., 2023) and a multi-task learning framework for emotion prediction. We achieve hierarchical F1-scores of 0.60, 0.67, and 0.48 on the respective sub-tasks.
# Pixel-Based Classification Analysis of Land Use Land Cover Using Sentinel-2 and Landsat-8 Data

A. Sekertekin, A. M. Marangoz, H. Akin - BEU, Engineering Faculty, Geomatics Engineering Department, 67100 Zonguldak, Turkey - (aliihsan_sekertekin, aycammaroz, hakanakcin)@hotmail.com

## 1 Introduction

Generating LULC images has gained importance in recent years for sustainable land management, landscape ecology and climate-related research (Turner et al., 2001; Pielke et al., 2011). Besides, temporal changes in LULC give us information about the proper planning and use of natural resources and their management (Mejia and Hochschild, 2012). Thus, accurate and up-to-date LULC information is always crucial for a sustainable environment. Furthermore, it is important to monitor LULC changes periodically in fast-growing cities, since the urban climate can change with uncontrolled and irregular expansion of the cities. Remote sensing technology is an effective way to monitor changes on Earth, and satellite images have been widely used to retrieve LULC images. In particular, various algorithms have been developed, and improved accuracies have been obtained with the advances in remote sensing technologies and sensor types. Sentinel-2 MSI and Landsat-8 OLI are recently operational new-generation Earth observation satellites, and thus these satellites were selected as data sources in this case study. Many studies have been conducted using only Sentinel-2 data, only Landsat-8 data, or both together, and many methods have been applied to investigate which method gives better accuracy results (Elhag & Boteva, 2016; Liu et al., 2015; Jia et al., 2014; Pirotti et al., 2016; Topaloglu et al., 2016; Marangoz et al., 2017). The aim of this study is to generate LULC images from Sentinel-2 MSI and Landsat-8 OLI data using the pixel-based MLC supervised classification method, and to reveal which LULC image presents better accuracy results.

## 2 Study area

The study area, Zonguldak, is located on the coast of the Western Black Sea region of Turkey (Figure 1). Zonguldak has rugged terrain and is one of the main coal mining areas in the world. Furthermore, it is an important industrial region including four thermal power plants and one of the biggest iron and steel plants in Europe. Thus, it is important to monitor LULC changes in this region.

## 3 Material and method

Sentinel-2 MSI and Landsat-8 OLI data, acquired on 6 April 2016 and 3 April 2016 respectively, were used as satellite imagery in the study. Common bands of these two datasets, namely Red (R), Green (G), Blue (B) and Near Infrared (NIR), were used in the process of classification. The spectral bands and Ground Sampling Distance (GSD) values of both satellites are presented in Table 1.

Before the image classification process, pre-processing steps for the satellite images were implemented. The RGB and NIR bands of the two datasets are common, and thus these four bands were considered for layer stacking. For Landsat-8 data, band 2, band 3, band 4 and band 5 were layer stacked and then clipped so as to include the study area.
After clipping, a pan-sharpened Landsat-8 image was created using the High Pass Filtering (HPF) pan-sharpening algorithm in the ERDAS software package. The pan-sharpening process was used to make the GSDs of the two datasets comparable. For Sentinel-2 data, the same pre-processing steps, except for pan-sharpening, were implemented using the SNAP software developed by the European Space Agency (ESA) and its partners.

Five general LULC classes, namely Water Body, Settlement Area, Bare Land, Forest and Vegetation, were utilized in this case study. For each LULC class, at least 15 samples were collected and used for the classification of both images in ERDAS. The same training samples were used for both datasets. MLC is the most common classification method in the literature (Benediktsson et al., 1990); it models the statistics for each class in each band with a normal distribution and computes the likelihood that a given pixel belongs to a specific category based on the following equation (Elhag & Boteva, 2016):

\\[g_{i}(x)=\\ln p(\\omega_{i})-\\frac{1}{2}\\ln|\\Sigma_{i}|-\\frac{1}{2}(x-m_{i})^{T}\\Sigma_{i}^{-1}(x-m_{i})\\]

where:
i = class,
x = n-dimensional data (where n is the number of bands),
p(\\(\\omega_{i}\\)) = probability that class \\(\\omega_{i}\\) occurs in the image, assumed to be the same for all classes,
\\(|\\Sigma_{i}|\\) = determinant of the covariance matrix of the data in class \\(\\omega_{i}\\),
\\(\\Sigma_{i}^{-1}\\) = its inverse matrix,
\\(m_{i}\\) = mean vector.

## 4 Results

Classified Landsat-8 and Sentinel-2 images are presented in Figure 2. Due to the spatial resolution of the datasets, the general classes Water Body, Settlement Area, Bare Land, Forest and Vegetation were considered as LULC classes. The overall accuracy of the Sentinel-2-derived LULC is better than that of the Landsat-8-derived LULC. When individual parts of the LULC images are considered, the pan-sharpened Landsat-8 image can offer better results than Sentinel-2 for some areas of the Sea class, as is clear from Figure 2.

LULC images are crucial for fast-growing cities in order to understand the dynamics of urban growth. Satellite imagery is one of the main resources for monitoring changes on Earth; in particular, data from new-generation Earth observation satellites such as Landsat-8 and Sentinel-2 can be obtained freely, and LULC images can be produced at a good temporal resolution. Temporal analyses of LULC help city planners and decision makers to improve the standards of the cities.

## References

* Benediktsson, J. A., Swain, P. H., and Ersoy, O. K., 1990. Neural network approaches versus statistical methods in classification of multisource remote sensing data. _IEEE Transactions on Geoscience and Remote Sensing_, 28, 540-551.
* Elhag, M., Boteva, S., 2016. Mediterranean land use and land cover classification assessment using high spatial resolution data. In IOP Conference Series: _Earth and Environmental Science_, Vol. 44, No. 4, p. 042032.
* Hale Topaloglu, R., Sertel, E., & Musaoglu, N., 2016. Assessment of classification accuracies of Sentinel-2 and Landsat-8 data for land cover/use mapping. In: _International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, Prague, Czech Republic, Vol. XLI-B8, pp. 1055-1059.
* Jia, K., Wei, X., Gu, X., Yao, Y., Xie, X., & Li, B., 2014. Land cover classification using Landsat 8 Operational Land Imager data in Beijing, China. _Geocarto International_, 29(8), 941-951.
* Liu, J., Heiskanen, J., Aynekulu, E., & Pellikka, P. K. E., 2015. Seasonal variation of land cover classification accuracy of Landsat 8 images in Burkina Faso. In: _The International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences_, Berlin, Germany, Vol. XL-7/W3, pp. 455-460.
* Marangoz, A. M., Sekertekin, A. & Akin, H., 2017. Analysis of land use land cover classification results derived from Sentinel-2 image. _17th International Multidisciplinary Scientific GeoConference SGEM 2017_, Albena, Varna, Bulgaria, pp. 25-32.
* Mejia, J. F., Hochschild, V., 2012. Land Use and Land Cover (LULC) change in the Bocono River Basin, North Venezuela Andes, and its implications for the natural resources management. In: _Environmental Land Use Planning_. InTech, 244 p.
* Pielke, R. A., Pitman, A., Niyogi, D., Mahmood, R., McAlpine, C., Hossain, F., Goldewijk, K. K., Nair, U., Betts, R., Fall, S., Reichstein, M., Kabat, P. and de Noblet, N., 2011. Land use/land cover changes and climate: modeling analysis and observational evidence. _WIREs Clim Change_, 2: 828-850.
* Pirotti, F., Sunar, F., & Piragnolo, M., 2016. Benchmark of machine learning methods for classification of a Sentinel-2 image. In: _International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences_, Prague, Czech Republic, Vol. XLI-B7, pp. 335-340.
* Turner, M. G., Gardner, R. H., & O'Neill, R. V., 2001. _Landscape ecology in theory and practice_ (Vol. 401). New York: Springer.
The aim of this study is to conduct accuracy analyses of Land Use Land Cover (LULC) classifications derived from Sentinel-2 and Landsat-8 data, and to reveal which dataset presents better accuracy results. Zonguldak city and its near surroundings were selected as the study area for this case study. Sentinel-2 Multispectral Instrument (MSI) and Landsat-8 Operational Land Imager (OLI) data, acquired on 6 April 2016 and 3 April 2016 respectively, were utilized as satellite imagery in the study. The RGB and NIR bands of Sentinel-2 and Landsat-8 were used for classification and comparison. A pan-sharpening process was carried out for the Landsat-8 data before classification, because the spatial resolution of Landsat-8 (30 m) is much coarser than that of the Sentinel-2 RGB and NIR bands (10 m). LULC images were generated using the pixel-based Maximum Likelihood Classification (MLC) supervised method. As a result of the accuracy assessment, kappa statistics for Sentinel-2 and Landsat-8 data were 0.78 and 0.85, respectively. The obtained results showed that Sentinel-2 MSI presents more satisfying LULC images than Landsat-8 OLI data. However, in some areas of the Sea class, Landsat-8 presented better results than Sentinel-2.

Land Use Land Cover, Pixel-Based Image Classification, Supervised Classification, Landsat-8 OLI, Sentinel-2 MSI

4th International GeoAdvances Workshop, 14-15 October 2017, Safranbolu, Karabuk, Turkey
# Flexible and efficient spatial extremes emulation via variational autoencoders Likun Zhang, Xiaoyu Ma, Christopher K. Wikle Department of Statistics, University of Missouri and Raphael Huser Statistics Program, Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division, King Abdullah University of Science and Technology (KAUST) ###### _Keywords:_ Variational Bayes, Deep learning, Spatial extremes, Tail dependence, Climate emulation Emulating large spatial datasets is very valuable in producing a large ensemble of realistic replicates, quantifying uncertainty of various inference targets, and assessing risks associated with rare events such as joint high-threshold exceedances (Arpat and Caers, 2007). The efficacy of an emulator hinges greatly on its ability to characterize complex patterns of spatial variability in the studied phenomenon, especially the dependence structure in the tail. However, handling statistical computations for spatial models accommodating extremes remains a significant challenge for large datasets (Huser and Wadsworth, 2022). Spatial emulation is particularly useful in climate science, where an abundance of simulations with high regional resolutions is often needed for detailed field-level insights and for uncertainty quantification (UQ). For example, watershed modeling (e.g., Sharma et al., 2021) requires a great number of simulations of heavy precipitation to assess regional riverine flood inundation hazards. In addition, numerous high-resolution sea surface temperature (SST) field replicates can aid in the identification of marine heatwaves (MHW) and the detection of coral reef zones prone to bleaching (Hughes et al., 2017). Current MHW detection methods, as described in Genevier et al. (2019), involve spatial averaging over predefined regions with extensive coral bleaching records, followed by calculating a daily percentile threshold using an 11-day window around the date of interest. To avoid relying on predefined subdomains or spatial averaging, we need a refined methodology to enhance the accuracy of spatial threshold exceedance estimations with better UQ. Furthermore, emulators can be used as surrogate models to speed up and simplify mechanic-based deterministic models such as Earth system models (ESMs). These models typically rely on deterministic partial differential equations, featuring numerous parameters and requiring significant computational resources. In recent years, surrogate models have proven to produce realistic synthetic data with much less computational expense (see Gramacy, 2020, for an overview). They are also useful for inverse problems, model calibration and the specification of realistic model parameterizations for coupled processes. However, traditional methods for emulation such as those based on Gaussian processes (e.g., Gu et al., 2018), polynomial chaos expansions (e.g., Sargsyan, 2017), proper orthogonal decomposition (e.g., Iuliano and Quagliarella, 2013) and more recently, deep neural networks used in generative models such as generative adversarial networks (Goodfellow et al., 2014, GANs) and variational autoencoders (Kingma and Welling, 2013, VAEs), do not naturally accommodate extreme values, and certainly not dependent extreme values. Classical asymptotic max-stable processes exhibit dependence structures that are too rigid for environmental datasets and thus lead to overestimation of the spatial extent and intensity of extreme events (Huser et al., 2024). 
Recent spatial extremes models have addressed some of these limitations and offer more realistic tail dependence structures. Examples of such models include Gaussian scale mixtures (e.g., Opitz, 2016; Huser and Wadsworth, 2019) or \\(r\\)-Pareto processes (e.g., Thibaud and Opitz, 2015; de Fondeville and Davison, 2018), applied in the peaks-over-threshold framework, and max-infinitely divisible (max-id) models (e.g., Reich and Shaby, 2012; Padoan, 2013; Huser et al., 2021; Bopp et al., 2021; Zhong et al., 2022), applied in the block-maxima framework. However, these models often assume a stationary dependence structure across space and time, and the computational demands are significant even for moderately-sized datasets (up to about 500locations; Zhang et al., 2022), limiting the applicability to high-resolution climate datasets. In this work, we adopt a statistics-of-extremes perspective (Davison and Huser, 2015) and introduce a novel max-id model for spatial extremes with nonstationary dependence properties over space and time to explicitly accommodate concurrent and locally dependent extremes. We provide a detailed proof of these properties and propose embedding this flexible spatial extremes model within a variational autoencoder engine, referred to as the XVAE. This surrogate modeling framework facilitates rapid and efficient spatial modeling and data emulation in dimensions never explored before for dependent extremes. Moreover, we present a validation framework with wide applicability to assess emulation quality across low, moderate, and high values. A novel metric with theoretical guarantees is proposed, specifically tailored to evaluate the skill of a spatial model in fitting joint tail behavior. The paper proceeds as follows. First, in Section 1, we detail how our novel max-id process can be embeded into the encoder-decoder VAE construction and study its highly flexible extremal dependence properties. Next, in Section 2, we introduce several model evaluation approaches to assess the emulation of spatial fields with dependent extremes. In Section 3, we validate the emulating power of our XVAE by simulation and demonstrate its advantages over a commonly used Gaussian process emulator. Then, in Section 4, we apply our XVAE to high-resolution Red Sea surface temperature extremes. Lastly, in Section 5, we conclude the paper and discuss perspectives for future work. ## 1 Methodology This section provides background on VAEs, a presentation of our novel max-id model for spatial extremes, and a description of the construction of our XVAE. A detailed discussion on the tail properties of the model is also included. Hereafter, we shall denote random variables with capital letters and fixed or observed quantities with lowercase letters. ### Variational autoencoder background Bayesian hierarchical models characterized by a lower-dimensional latent process are amenable to use of the variational autoencoder for inference and generative simulation. Specifically, consider the joint distribution \\[p_{\\mathbf{\\theta}}(\\mathbf{x},\\mathbf{z})=p_{\\mathbf{\\theta}}(\\mathbf{x}\\mid\\mathbf{z})p_{\\mathbf{\\theta}} (\\mathbf{z}),\\] where \\(\\mathbf{x}\\) and \\(\\mathbf{z}\\) are observations of a (e.g., physical) process \\(\\mathbf{X}\\in\\mathbb{R}^{n_{s}}\\) and realizations of a latent process \\(\\mathbf{Z}\\in\\mathbb{R}^{K}\\), respectively, and \\(K\\ll n_{s}\\). 
The data model \\(p_{\\mathbf{\\theta}}(\\mathbf{x}\\mid\\mathbf{z})\\) and the latent process model \\(p_{\\mathbf{\\theta}}(\\mathbf{z})\\) may depend on different subsets of parameters \\(\\mathbf{\\theta}\\). In the case of spatial data, the vector \\(\\mathbf{X}\\) may be observations of a spatial process \\(\\{X(\\mathbf{s}):\\mathbf{s}\\in\\mathcal{S}\\}\\) at \\(n_{s}\\) locations, and \\(\\mathbf{Z}\\) may be random coefficients from a low-rank basis expansion representation of \\(\\mathbf{X}\\). To emulate an observed high-dimensional data \\(\\mathbf{x}\\) (i.e., generate new realizations of \\(\\mathbf{X}\\)), an ideal probabilistic framework would be to: (1) obtain a good estimate \\(\\hat{\\mathbf{\\theta}}\\) of the parameters\\(\\mathbf{\\theta}\\) given the observations \\(\\mathbf{x}\\) and generate independent (latent) variables \\(\\mathbf{Z}^{1},\\ldots,\\mathbf{Z}^{L}\\) from the posterior density \\(p_{\\mathbf{\\theta}}(\\mathbf{z}\\mid\\mathbf{x})\\); (2) generate \\(\\mathbf{X}^{l}\\) from the data model (posterior predictive distributions) \\(p_{\\hat{\\mathbf{\\theta}}}(\\mathbf{x}\\mid\\mathbf{Z}^{l})\\), \\(l=1,\\ldots,L\\). If the characterization of the distributions is reasonable, the resulting emulated replicates \\(\\{\\mathbf{X}^{1},\\ldots,\\mathbf{X}^{L}\\}\\) should resemble the original input \\(\\mathbf{x}\\), with meaningful variations among the emulated samples. Unfortunately, the posterior density \\(p_{\\mathbf{\\theta}}(\\mathbf{z}\\mid\\mathbf{x})\\) is often intractable because the marginal likelihood \\(p_{\\mathbf{\\theta}}(\\mathbf{x})=\\int p_{\\mathbf{\\theta}}(\\mathbf{x},\\mathbf{z})\\mathrm{d}\\mathbf{z}\\), a \\(K\\)-dimensional integral, typically does not have an analytical form. This makes the maximum likelihood estimation of the parameters \\(\\mathbf{\\theta}\\) difficult even for a moderately complicated data model \\(p_{\\mathbf{\\theta}}(\\mathbf{x}\\mid\\mathbf{z})\\). Imposing another prior model on \\(\\mathbf{\\theta}\\) and implementing a Markov chain Monte Carlo (MCMC) algorithm to draw samples of \\((\\mathbf{Z},\\mathbf{\\theta})\\) simultaneously can be a possible solution; the draws of \\(\\mathbf{Z}\\) from the Markov chain are treated as the samples from \\(p_{\\mathbf{\\theta}}(\\mathbf{z}\\mid\\mathbf{x})\\) which can then be fed in \\(p_{\\mathbf{\\theta}}(\\mathbf{x}\\mid\\mathbf{z})\\) to generate new \\(\\mathbf{X}\\). But MCMC can be computationally intensive when the likelihood \\(p_{\\mathbf{\\theta}}(\\mathbf{x},\\mathbf{z})\\) is expensive to evaluate, and algorithm tuning can be challenging when the dimension of \\((\\mathbf{Z},\\mathbf{\\theta})\\) is large, particularly in the case of complex spatial and/or temporal dependence in the data. In variational Bayesian inference, the true posterior \\(p_{\\mathbf{\\theta}}(\\mathbf{z}\\mid\\mathbf{x})\\) is approximated by a so-called variational distribution that is relatively easy to evaluate. The variational autoencoder (VAE) proposed by Kingma and Welling (2013) formulates the variational distribution (which they call a probabilistic _encoder_) as a multilayer perceptron (MLP) neural network denoted by \\(q_{\\mathbf{\\phi}_{e}}(\\mathbf{z}\\mid\\mathbf{x})\\), in which \\(\\mathbf{x}\\) is the input, \\(\\mathbf{z}\\) is the output and \\(\\mathbf{\\phi}_{e}\\) are the so-called weights and biases of the neural network. 
This construction greatly facilitates fast non-linear encoding of a high-dimensional process \\(\\mathbf{X}\\) to lower-dimensional \\(\\mathbf{Z}\\) in the latent space. Next, to generate emulated data replicates from \\(p_{\\mathbf{\\theta}}(\\mathbf{x}\\mid\\mathbf{Z})\\), Kingma and Welling (2013) use another MLP neural network (which they call a probabilistic _decoder_) acting as an estimator for the model parameters \\(\\mathbf{\\theta}\\). Denoting the weights and biases of the decoder by \\(\\mathbf{\\phi}_{d}\\), we write \\(\\hat{\\mathbf{\\theta}}_{\\mathrm{NN}}=\\mathrm{DecoderNeuralNet}_{\\mathbf{\\phi}_{d}}(\\mathbf{z})\\) and denote through an abuse of notation \\[p_{\\mathbf{\\phi}_{d}}(\\mathbf{x},\\mathbf{z})\\equiv p_{\\hat{\\mathbf{\\theta}}_{\\mathrm{NN}}}( \\mathbf{x},\\mathbf{z}),\\quad p_{\\mathbf{\\phi}_{d}}(\\mathbf{x})\\equiv\\int p_{\\hat{\\mathbf{\\theta}} _{\\mathrm{NN}}}(\\mathbf{x},\\mathbf{z})\\mathrm{d}\\mathbf{z},\\] and \\(p_{\\mathbf{\\phi}_{d}}(\\mathbf{z}\\mid\\mathbf{x})=p_{\\mathbf{\\phi}_{d}}(\\mathbf{x},\\mathbf{z})/p_{\\mathbf{ \\phi}_{d}}(\\mathbf{x})\\). The encoding parameters \\(\\mathbf{\\phi}_{e}\\) and decoding parameters \\(\\mathbf{\\phi}_{d}\\) can be estimated by maximizing the evidence lower bound (ELBO), which is defined by \\[\\mathcal{L}_{\\mathbf{\\phi}_{e},\\mathbf{\\phi}_{d}}(\\mathbf{x})=\\log p_{\\mathbf{\\phi}_{d}}(\\mathbf{x })-D_{KL}\\left\\{q_{\\mathbf{\\phi}_{e}}(\\mathbf{z}\\mid\\mathbf{x})\\mid\\mid p_{\\mathbf{\\phi}_{d}} (\\mathbf{z}\\mid\\mathbf{x})\\right\\}, \\tag{1}\\] in which \\(\\log p_{\\mathbf{\\phi}_{d}}(\\mathbf{x})\\) is called the _evidence_ for \\(\\mathbf{x}\\) under \\(\\hat{\\mathbf{\\theta}}_{\\mathrm{NN}}\\), and the second term in (1) is the Kullback-Leibler (KL) divergence between \\(q_{\\mathbf{\\phi}_{e}}(\\mathbf{z}\\mid\\mathbf{x})\\) and \\(p_{\\mathbf{\\phi}_{d}}(\\mathbf{z}\\mid\\mathbf{x})\\), which is non-negative. Therefore, maximizing the ELBO with respect to \\(\\mathbf{\\phi}_{e}\\) and \\(\\mathbf{\\phi}_{d}\\) is equivalent to maximizing the log-likelihood \\(\\log p_{\\mathbf{\\phi}_{d}}(\\mathbf{x})\\) while minimizing the difference between the approximate posterior density \\(q_{\\mathbf{\\phi}_{e}}(\\mathbf{z}\\mid\\mathbf{x})\\) and the true posterior density \\(p_{\\mathbf{\\phi}_{d}}(\\mathbf{z}\\mid\\mathbf{x})\\) (under \\(\\hat{\\mathbf{\\theta}}_{\\mathrm{NN}}\\)). Since \\(p_{\\mathbf{\\phi}_{d}}(\\mathbf{z}\\mid\\mathbf{x})\\) is unknown, we rewrite the marginal likelihood \\(p_{\\mathbf{\\phi}_{d}}(\\mathbf{x})\\) as follows \\[\\log p_{\\mathbf{\\phi}_{d}}(\\mathbf{x})=\\mathbb{E}_{\\mathbf{Z}\\sim q_{\\mathbf{\\phi}_{e}}(\\mathbf{z} \\mid\\mathbf{x})}\\left\\{\\log\\frac{p_{\\mathbf{\\phi}_{d}}(\\mathbf{x},\\mathbf{Z})}{q_{\\mathbf{\\phi}_{e} }(\\mathbf{Z}\\mid\\mathbf{x})}\\right\\}+D_{KL}\\left\\{q_{\\mathbf{\\phi}_{e}}(\\mathbf{z}\\mid\\mathbf{x}) \\mid\\mid p_{\\mathbf{\\phi}_{d}}(\\mathbf{z}\\mid\\mathbf{x})\\right\\}.\\] Therefore, the ELBO can be approximated by Monte Carlo as \\[\\mathcal{L}_{\\mathbf{\\phi}_{e},\\mathbf{\\phi}_{d}}(\\mathbf{x})\\approx\\frac{1}{L}\\sum_{l=1} ^{L}\\log\\frac{p_{\\mathbf{\\phi}_{d}}(\\mathbf{x},\\mathbf{Z}^{l})}{q_{\\mathbf{\\phi}_{e}}(\\mathbf{Z}^ {l}\\mid\\mathbf{x})}, \\tag{2}\\] where \\(\\mathbf{Z}^{1},\\ldots,\\mathbf{Z}^{L}\\) are independent draws from \\(q_{\\mathbf{\\phi}_{e}}(\\cdot\\mid\\mathbf{x})\\). 
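As a concrete illustration of Eq. (2), the following sketch estimates the ELBO by Monte Carlo for a toy one-dimensional model in which the joint log-density \\(\log p_{\mathbf{\phi}_{d}}(x,z)\\) and the variational log-density \\(\log q_{\mathbf{\phi}_{e}}(z\mid x)\\) are available in closed form; the Gaussian densities used here are placeholders and are not part of the XVAE itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_joint(x, z):
    # Toy log p(x, z): z ~ N(0, 1) and x | z ~ N(z, 0.5^2); placeholder densities.
    log_prior = -0.5 * (z**2 + np.log(2 * np.pi))
    log_lik = -0.5 * (((x - z) / 0.5) ** 2 + np.log(2 * np.pi * 0.5**2))
    return log_prior + log_lik

def log_q(z, mu, zeta):
    # Log-density of the variational Gaussian q(z | x) = N(mu, zeta^2).
    return -0.5 * (((z - mu) / zeta) ** 2 + np.log(2 * np.pi * zeta**2))

def elbo_estimate(x, mu, zeta, L=1000):
    # Monte Carlo estimate of Eq. (2): average of log p(x, Z^l) - log q(Z^l | x)
    # over L draws Z^l = mu + zeta * eta^l (reparameterization trick).
    eta = rng.standard_normal(L)
    z = mu + zeta * eta
    return np.mean(log_joint(x, z) - log_q(z, mu, zeta))

print(elbo_estimate(x=1.2, mu=0.9, zeta=0.4))
```

Increasing \\(L\\) reduces the Monte Carlo error of the estimate at the cost of more samples per evaluation.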
If there are replicates of the process, \\(\\mathbf{x}_{1},\\ldots,\\mathbf{x}_{n_{t}}\\), then \\(\\sum_{t=1}^{n_{t}}\\mathcal{L}_{\\mathbf{\\phi}_{e},\\mathbf{\\phi}_{d}}(\\mathbf{x}_{t})\\) is considered. In traditional VAEs (e.g., Kingma et al., 2019; Cartwright et al., 2023), Gaussianity is assumed for both the data model \\(p_{\\mathbf{\\phi}_{d}}(\\mathbf{x}_{t}\\mid\\mathbf{z}_{t})\\) and the encoder \\(q_{\\phi_{e}}(\\mathbf{z}_{t}\\mid\\mathbf{x}_{t})\\), with the prior \\(p_{\\mathbf{\\phi}_{d}}(\\mathbf{z}_{t})\\) often set as a simple multivariate normal distribution \\(N(\\mathbf{0},\\mathbf{I}_{K})\\). However, the Gaussian assumptions limit the VAE's ability to capture heavy-tailed distributions and intricate dependence structures.

### A nonstationary max-id model for VAE-based emulation

To better emulate spatial data with extremes, we define \\(p_{\\mathbf{\\theta}}(\\mathbf{x}\\mid\\mathbf{z})\\) indirectly through the construction of a novel flexible nonstationary spatial extremes model that can easily be incorporated into a VAE. The flexible spatial extremes model is described in Section 1.2.1, followed by a detailed description of the XVAE in Section 1.2.2. Note that implementation details related to the XVAE neural networks are provided in the Supplementary Material.

#### 1.2.1 Flexible nonstationary max-id spatial extremes model

Our model builds upon the max-id process introduced by Reich and Shaby (2012) and extended by Bopp et al. (2021). Importantly, a novel extension of our model is its ability to realistically capture both short-range asymptotic dependence (AD), mid-range asymptotic independence (AI), and long-range exact independence, as explained in more detail in Section 1.3, and it can accommodate nonstationarity in space and time. As in these earlier works, we start by defining the spatial observation model as \\[X(\\mathbf{s})=\\epsilon(\\mathbf{s})Y(\\mathbf{s}),\\;\\mathbf{s}\\in\\mathcal{S}, \\tag{3}\\] where \\(\\mathcal{S}\\subset\\mathbb{R}^{2}\\) is the domain of interest and \\(\\epsilon(\\mathbf{s})\\) is a noise process with independent \\(\\text{Frechet}(0,\\tau,1/\\alpha_{0})\\) marginal distributions; that is, \\(\\Pr\\{\\epsilon(\\mathbf{s})\\leq x\\}=\\exp\\{-(x/\\tau)^{-1/\\alpha_{0}}\\}\\), where \\(x>0\\), \\(\\tau>0\\) and \\(\\alpha_{0}>0\\). Then, \\(Y(\\mathbf{s})\\) is constructed using a low-rank representation: \\[Y(\\mathbf{s})=\\left\\{\\sum_{k=1}^{K}\\omega_{k}(\\mathbf{s})^{\\frac{1}{\\alpha}}Z_{k}\\right\\}^{\\alpha_{0}}, \\tag{4}\\] where \\(\\alpha\\in(0,1)\\), \\(\\{\\omega_{k}(\\mathbf{s}):k=1,\\ldots,K\\}\\) are fixed compactly-supported radial basis functions centered at \\(K\\) pre-specified knots such that \\(\\sum_{k=1}^{K}\\omega_{k}(\\mathbf{s})=1\\) for any \\(\\mathbf{s}\\in\\mathcal{S}\\), and \\(\\{Z_{k}:k=1,\\ldots,K\\}\\) are latent variables independently distributed as exponentially-tilted positive-stable (PS) random variables (Hougaard, 1986). We write \\(Z_{k}\\stackrel{{\\mathrm{ind}}}{{\\sim}}\\exp\\)PS\\((\\alpha,\\gamma_{k}),\\ k=1,\\ldots,K\\), in which \\(\\alpha\\in(0,1)\\) determines the rate at which the power-law tail of \\(\\exp\\)PS\\((\\alpha,0)\\) tapers off, and the tilting parameters \\(\\gamma_{k}\\geq 0\\) determine the extent of tilting, with larger values of \\(\\gamma_{k}\\) leading to lighter-tailed \\(Z_{k}\\) (see Section B.1 of the Supplementary Material). Our model, inspired by Reich and Shaby (2012) and Bopp et al. (2021), presents a novel class of spatial extremes models.
In Reich and Shaby (2012) and Bopp et al. (2021), the basis functions lack compact support and all tilting parameters are fixed at either \\(\\gamma_{k}\\equiv 0\\) or \\(\\gamma_{k}\\equiv\\gamma>0\\), resulting in only AD or AI for all pairs of locations, respectively. In contrast, the use of compactly-supported basis functions and spatially-varying tilting parameters creates a spatial-scale-aware extremal dependence model, enabling local asymptotic dependence (AD) or asymptotic independence (AI) for nearby locations while ensuring long-range AI for distant locations, which is a significant advancement in spatial extremes. Moreover, while both previous works use a noise process with Frechet\\((0,1,1/\\alpha)\\) marginals (i.e., \\(\\alpha_{0}=\\alpha\\)), our approach decouples noise variance from tail heaviness, providing better noise control for each time point, while keeping the appealing property of being max-id as shown below in Section 1.3. Additionally, we embed our model in a VAE, leveraging variational-Bayes-based parameter estimation for rapid computation, modeling, and emulation of high-dimensional spatial extremes (on the order of more than 10,000 locations) without distributing the model parameter estimation over a number of subdomains. Furthermore, we allow different concentration parameters \\(\\alpha_{t}\\) and tilting parameters \\(\\mathbf{\\gamma}_{t}=\\{\\gamma_{kt}:k=1,\\ldots,K\\}\\) at different time points \\(t=1,\\ldots,n_{t}\\), which automatically ensures a temporally non-stationary spatial extremes model. To the best of our knowledge, our XVAE is the first attempt to capture both spatially and temporally varying extremal dependence structures simultaneously in one model at the scale that we consider here. Zhong et al. (2022) achieved it at a much smaller scale and using a quite rigid covariate-based approach. In Section A of the Supplementary Material, we compare our max-id model with other existing models more in depth.

#### 1.2.2 XVAE: A VAE incorporating the proposed max-id spatial model

For notational simplicity hereafter, we denote by \\(\\mathbf{X}_{t}=\\{X_{t}(\\mathbf{s}_{j}):j=1,\\ldots,n_{s}\\}\\) the observations of process (3) at time \\(t=1,\\ldots,n_{t}\\), and by \\(\\mathbf{Z}_{t}=\\{Z_{kt}:k=1,\\ldots,K\\}\\) the corresponding latent variables. Inference for our flexible extremes model on large spatial datasets poses challenges. A streamlined Metropolis-Hastings MCMC algorithm would be time-consuming and hard to monitor when confronted with the scale of our spatial data in Section 4, where a considerable number of local basis functions \\(K\\) is necessary to capture intricate local extremes accurately. Additionally, when there are many time replicates, inferring the parameters \\((\\alpha_{t},\\mathbf{\\gamma}_{t},\\mathbf{Z}_{t})\\) for all time points becomes extremely challenging. Moreover, with a large sample size \\(n_{s}\\), evaluating the likelihood \\(p_{\\mathbf{\\theta}}(\\mathbf{x}\\mid\\mathbf{z})\\) for a single iteration becomes computationally expensive, and updating different parameters iteratively further exacerbates this issue. To overcome these challenges, we employ the encoding-decoding VAE paradigm described in Section 1.1 and modify it to account for our extremes framework.
For \\(t=1,\\ldots,n_{t}\\), our encoder \\(q_{\\mathbf{\\phi}_{e}}(\\mathbf{z}_{t}\\mid\\mathbf{x}_{t})\\) maps each observed replicate \\(\\mathbf{x}_{t}\\) to the latent space and allows fast random sampling of \\(\\{\\mathbf{Z}_{t}^{1},\\ldots,\\mathbf{Z}_{t}^{L}\\}\\) that will be approximately distributed according to the true posterior \\(p_{\\mathbf{\\theta}_{t}}(\\cdot\\mid\\mathbf{x}_{t})\\) because of the ELBO regularization, in which \\(\\mathbf{\\theta}_{t}=(\\alpha_{0},\\tau,\\alpha_{t},\\mathbf{\\gamma}_{t}^{T})^{T}\\); recall Eqs. (1) and (2). The details of this approach are provided below. Approximate Posterior/Encoder (\\(q_{\\mathbf{\\phi}_{e}}(\\mathbf{z}_{t}\\mid\\mathbf{x}_{t})\\)):The encoder is defined through \\[\\begin{split}\\mathbf{z}_{t}&=\\mathbf{\\mu}_{t}+\\mathbf{\\zeta}_{t }\\odot\\mathbf{\\eta}_{t},\\\\ \\eta_{kt}&\\overset{\\text{i.i.d.}}{\\sim}\\text{Normal }(0,1),\\\\ (\\mathbf{\\mu}_{t}^{T},\\log\\mathbf{\\zeta}_{t}^{T})^{T}&= \\text{EncoderNeuralNet}_{\\mathbf{\\phi}_{e}}(\\mathbf{x}_{t}),\\end{split} \\tag{5}\\] where we use a standard reparameterization trick with an auxiliary variable \\(\\mathbf{\\eta}_{t}=\\{\\eta_{kt}:k=1,\\ldots,K\\}\\) and \\(\\odot\\) is the elementwise product (Kingma and Welling, 2013). This trick enables fast computation of Monte Carlo estimates of \\(\ abla_{\\mathbf{\\phi}_{e}}\\mathbf{\\mathcal{L}}_{\\mathbf{\\phi}_{e},\\mathbf{\\phi}_{d}}\\), the gradient of the ELBO with respect to \\(\\mathbf{\\phi}_{e}\\) (see Section D.1 for details). Also, by controlling the mean \\(\\mathbf{\\mu}_{t}\\) and variance \\(\\mathbf{\\zeta}_{t}^{2}\\), the distributions \\(q_{\\phi_{e}}(\\mathbf{z}_{t}\\mid\\mathbf{x}_{t})\\) are enforced to be close to \\(p_{\\mathbf{\\phi}_{d}}(\\mathbf{z}_{t}\\mid\\mathbf{x}_{t})\\) for each \\(t\\). This is the primary role of the deep neural network in (5) -- i.e., to learn the complex relationship between the inputs \\(\\mathbf{x}_{t}\\) and the latent process, \\(\\mathbf{z}_{t}\\). The specific neural network architecture and implementation details are given in Section D of the Supplementary Material. Prior on Latent Process (\\(p_{\\mathbf{\\phi}_{d}}(\\mathbf{z}_{t})\\)):This is determined by our model construction. Specifically, denoting the density function of the \\(\\text{expPS}(\\alpha,\\gamma_{k})\\) distribution by \\(h(z;\\alpha,\\gamma_{k})\\), the prior on \\(\\mathbf{z}_{t}\\) can be written as \\[p_{\\mathbf{\\phi}_{d}}(\\mathbf{z}_{t})=\\prod_{k=1}^{K}h(z_{kt};\\alpha_{t},\\gamma_{kt}), \\tag{6}\\] which depends on parameters \\((\\alpha_{t},\\gamma_{kt}:t=1,\\ldots,T;k=1,\\ldots,K)\\). Data Model/Decoder (\\(p_{\\phi_{d}}(\\mathbf{x}_{t}\\mid\\mathbf{z}_{t})\\)):Our decoder is based on the flexible max-id spatial extremes model described in Section 1.2.1. Specifically, recall from Eq. (4) that \\(\\Pr(\\mathbf{X}_{t}\\leq\\mathbf{x}_{t}\\mid\\mathbf{Z}_{t}=\\mathbf{z}_{t})=\\exp\\{-\\sum_{j=1}^{n_{s }}{(\\tau/x_{jt})}^{1/\\alpha_{0}}\\sum_{k=1}^{K}\\omega_{kj}^{1/\\alpha_{t}}z_{kt}\\}\\). 
Differentiating this conditional distribution function gives the exact form of the decoder: \\[p_{\\mathbf{\\phi}_{d}}(\\mathbf{x}_{t}\\mid\\mathbf{z}_{t})=\\left(\\frac{1}{\\alpha_{0}}\\right) ^{n_{s}}\\left\\{\\prod_{j=1}^{n_{s}}\\frac{1}{x_{jt}}\\left(\\frac{x_{jt}}{\\tau y_{ jt}}\\right)^{-1/\\alpha_{0}}\\right\\}\\exp\\left\\{-\\sum_{j=1}^{n_{s}}\\left(\\frac{x_{ jt}}{\\tau y_{jt}}\\right)^{-1/\\alpha_{0}}\\right\\}, \\tag{7}\\] where \\(y_{jt}=\\sum_{k=1}^{K}\\omega_{kj}^{1/\\alpha_{t}}z_{kt}\\). This distribution depends on the Frechet parameters \\((\\alpha_{0},\\tau)^{T}\\) and the dependence parameters \\((\\alpha_{t},\\mathbf{\\gamma}_{t}^{T})^{T}\\) inherited from the prior distribution of \\(\\mathbf{z}_{t}\\). The decoder neural network estimates these dependence parameters as \\[(\\hat{\\alpha}_{t},\\hat{\\mathbf{\\gamma}}_{t}^{T})^{T}=\\text{DecoderNeuralNet}_{\\mathbf{ \\phi}_{d,0}}(\\mathbf{Z}_{t}) \\tag{8}\\] where \\(\\mathbf{\\phi}_{d,0}\\) are the bias and weight parameters of this neural network (see Eqs. (D.1) and (D.3) of the Supplementary Material for more details). Combining \\(\\mathbf{\\phi}_{d,0}\\) with the Frechet parameters \\((\\alpha_{0},\\tau)^{T}\\), we write \\(\\mathbf{\\phi}_{d}=(\\alpha_{0},\\tau,\\mathbf{\\phi}_{d,0}^{T})^{T}\\). We must use the variational procedure to find estimates of parameters \\(\\mathbf{\\phi}_{d}\\) and the encoder neural network parameters, \\(\\mathbf{\\phi}_{e}\\). Encoder/Decoder Estimation:By drawing \\(L\\) independent samples \\(\\mathbf{Z}_{t}^{1},\\ldots,\\mathbf{Z}_{t}^{L}\\) using Eq. (5), we can derive the Monte Carlo estimate of the ELBO, \\(\\mathcal{L}_{\\mathbf{\\phi}_{e},\\mathbf{\\phi}_{d}}(\\mathbf{x}_{t})\\), as in Eq. (2). In Section D.3 of the Supplementary Material, we detail the stochastic gradient search used to find the \\(\\mathbf{\\phi}_{e}\\) and \\(\\mathbf{\\phi}_{d}\\) that maximize \\(\\sum_{t=1}^{n_{t}}\\mathcal{L}_{\\mathbf{\\phi}_{e},\\mathbf{\\phi}_{d}}(\\mathbf{x}_{t})\\). We stress that our XVAE is an example of \"amortized inference\" (Zammit-Mangion et al., 2024): there is a substantial training cost up front, but once the XVAE is trained, posterior simulation of new latent variables \\(\\mathbf{Z}_{t}\\) can be performed very efficiently following Eq. (5) and synthetic data can be generated extremely quickly by passing them through the decoder (8) and sampling from the model \\(p_{\\hat{\\mathbf{\\theta}}_{t}}(\\mathbf{x}\\mid\\mathbf{Z}_{t})\\) specified by Eqs. (3) and (4), in which \\(\\hat{\\mathbf{\\theta}}_{t}=(\\hat{\\alpha}_{0},\\hat{\\tau},\\hat{\\alpha}_{t},\\hat{\\mathbf{ \\gamma}}_{t}^{T})^{T}\\). The data reconstruction process relies on compactly supported local basis functions at pre-determined knot points, which are not updated with \\(\\mathbf{\\phi}_{d}\\) of the decoder. Although one could choose the knots using a certain space-filling design, we propose a data-driven way to determine the number of knots, their locations, and the radius of basis functions as described in Section D.2 of the Supplementary Material, and show by simulation that this compares favorably to the XVAE initialized with the true knots/radii. Our XVAE implementation in R is publicly accessible on GitHub at [https://github.com/likun-stat/XVAE](https://github.com/likun-stat/XVAE). 
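To make the decoder concrete, the following sketch draws latent variables via the reparameterization in Eq. (5) and evaluates the conditional log-density implied by Eq. (7) for a single time replicate. The encoder outputs \\((\mathbf{\mu}_{t},\log\mathbf{\zeta}_{t})\\) are fixed placeholder values standing in for EncoderNeuralNet, the basis weights are assumed precomputed, and the clipping used to keep the latent draws positive is a simplification of the treatment in the Supplementary Material.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_latent(mu, log_zeta, L=1):
    # Reparameterization trick of Eq. (5): z = mu + zeta * eta, eta ~ N(0, 1).
    eta = rng.standard_normal((L, mu.size))
    return mu + np.exp(log_zeta) * eta

def decoder_log_density(x, z, omega, alpha0, tau, alpha_t):
    # Log of Eq. (7): x has shape (n_s,), z has shape (K,),
    # omega has shape (K, n_s) with columns summing to one.
    y = (omega ** (1.0 / alpha_t)).T @ z        # y_j = sum_k w_kj^{1/alpha_t} z_k
    ratio = x / (tau * y)
    return (-x.size * np.log(alpha0)
            - np.sum(np.log(x))
            - np.sum(np.log(ratio)) / alpha0
            - np.sum(ratio ** (-1.0 / alpha0)))

# Toy dimensions: n_s = 5 sites, K = 2 knots (all values illustrative only).
omega = np.array([[0.7, 0.6, 0.5, 0.3, 0.1],
                  [0.3, 0.4, 0.5, 0.7, 0.9]])
mu, log_zeta = np.array([1.0, 2.0]), np.array([-1.0, -1.0])
z = np.clip(sample_latent(mu, log_zeta)[0], 1e-6, None)  # keep latent draws positive
x = rng.uniform(0.5, 3.0, size=5)
print(decoder_log_density(x, z, omega, alpha0=0.5, tau=1.0, alpha_t=0.7))
```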
### Marginal and dependence properties One major advantage of our approach over typical machine learning generative models is that we can explicitly derive the marginal and dependence properties of our spatial extremes model integrated within the VAE. In this section, we temporarily omit the subscript \\(t\\) and examine the marginal and joint tail behavior of the model (3) at one specific time point. For notational simplicity, we write \\(X_{j}=X(\\mathbf{s}_{j})\\), \\(\\omega_{kj}=\\omega_{k}(\\mathbf{s}_{j})\\), \\(k=1,\\ldots,K\\), \\(j=1,\\ldots,n_{s}\\), and define \\(\\mathcal{C}_{j}=\\{k:\\omega_{kj}\ eq 0,k=1,\\ldots,K\\}\\). We require that any location \\(\\mathbf{s}\\in\\mathcal{S}\\) be covered by at least one basis function, thereby \\(\\mathcal{C}_{j}\\) cannot be empty for any \\(j\\). In spatial extremes, extremal dependence is commonly described by the measure \\[\\chi_{ij}(u)=\\Pr\\{F_{j}(X_{j})>u\\mid F_{i}(X_{i})>u\\}=\\frac{\\Pr\\{F_{j}(X_{j})> u,F_{i}(X_{i})>u\\}}{\\Pr\\{F_{i}(X_{i})>u\\}}\\in[0,1], \\tag{9}\\] in which \\(u\\in(0,1)\\) is a (high) threshold and \\(F_{i}\\) and \\(F_{j}\\) are the continuous marginal distribution functions for \\(X_{i}\\) and \\(X_{j}\\), respectively. When \\(u\\) is close to one, \\(\\chi_{ij}(u)\\) quantifies the probability that one variable is extreme given that the other variable is similarly extreme. If \\(\\chi_{ij}=\\lim_{u\\to 1}\\chi_{ij}(u)=0\\), \\(X_{i}\\) and \\(X_{j}\\) are said to be _asymptotically independent_ (AI), and if \\(\\chi_{ij}=\\lim_{u\\to 1}\\chi_{ij}(u)>0\\), \\(X_{i}\\) and \\(X_{j}\\) are _asymptotically dependent_ (AD). #### 1.3.1 Marginal distributions To examine \\(\\chi_{ij}=\\lim_{u\\to 1}\\chi_{ij}(u)\\), we first study the marginal distributions of the process (3). **Proposition 1.1**.: _Let \\(\\mathcal{D}=\\{k:\\gamma_{k}=0,\\ k=1,\\ldots,K\\}\\) and \\(\\bar{\\mathcal{D}}\\) be the complement of \\(\\mathcal{D}\\). For process (3),the marginal distribution function of \\(X_{j}=X(\\boldsymbol{s}_{j})\\) can be written as_ \\[F_{j}(x)=\\exp\\left\\{\\sum_{k\\in\\bar{\\mathcal{D}}}\\gamma_{k}^{\\alpha}-\\sum_{k=1} ^{K}\\left(\\gamma_{k}+\\tau^{\\frac{1}{\\alpha_{0}}}\\omega_{kj}^{\\frac{1}{\\alpha}} x^{-\\frac{1}{\\alpha_{0}}}\\right)^{\\alpha}\\right\\}. \\tag{10}\\] _As \\(x\\to\\infty\\), the survival function \\(\\bar{F}_{j}(x)=1-F_{j}(x)\\) satisfies_ \\[\\bar{F}_{j}(x)=c_{j}^{\\prime}x^{-\\frac{\\alpha}{\\alpha_{0}}}+c_{j}x^{-\\frac{1 }{\\alpha_{0}}}+\\left(d_{j}-\\frac{c_{j}^{2}}{2}\\right)x^{-\\frac{2}{\\alpha_{0}} }-\\frac{{c_{j}^{\\prime}}^{2}}{2}x^{-\\frac{2\\alpha}{\\alpha_{0}}}-c_{j}^{\\prime }c_{j}x^{-\\frac{\\alpha+1}{\\alpha_{0}}}+o\\left(x^{-\\frac{2}{\\alpha_{0}}} \\right), \\tag{11}\\] _where \\(c_{j}=\\alpha\\tau^{1/\\alpha_{0}}\\sum_{k\\in\\bar{\\mathcal{D}}}\\gamma_{k}^{\\alpha -1}\\omega_{kj}^{1/\\alpha}\\), \\(c_{j}^{\\prime}=\\tau^{\\alpha/\\alpha_{0}}\\sum_{k\\in\\mathcal{D}}\\omega_{kj}\\), and \\(d_{j}=\\frac{\\alpha(\\alpha-1)}{2}\\tau^{2/\\alpha_{0}}\\sum_{k\\in\\bar{\\mathcal{D} }}\\gamma_{k}^{\\alpha-2}\\omega_{kj}^{2/\\alpha}\\)._ The proof of this result can be found in Section B.2 of the Supplementary Material. It indicates that the process (3) has Pareto-like marginal tails at any location in the domain \\(\\mathcal{S}\\). 
If \\(\\mathcal{C}_{j}\\cap\\mathcal{D}\ eq\\emptyset\\), that is, if the \\(j\\)th location is impacted by an \"un-tilted knot\" (i.e., a knot with \\(\\gamma_{k}=0\\) in the \\(\\mathrm{expPS}(\\alpha,\\gamma_{k})\\) distribution of the corresponding latent variable \\(Z_{k}\\)), then \\(\\bar{F}_{j}(x)\\sim c_{j}^{\\prime}x^{-\\frac{\\alpha}{\\alpha_{0}}}\\) as \\(x\\to\\infty\\) since \\(\\alpha\\in(0,1)\\). If, however, the location is not within the reach of an un-tilted knot, then instead \\(\\bar{F}_{j}(x)\\sim c_{j}x^{-\\frac{1}{\\alpha_{0}}}\\) as \\(x\\to\\infty\\), which is less heavy-tailed. The following result directly delineates how the quantile level changes as \\(u\\to 1\\). **Corollary 1.1.1**.: _As \\(t\\to\\infty\\), the marginal quantile function \\(q_{j}(t)=F_{j}^{-1}(1-1/t)\\) can be approximated as follows under the assumptions of Proposition 1.1:_ \\[q_{j}(t)=\\begin{cases}c_{j}^{\\prime\\,\\alpha_{0}/\\alpha}t^{\\alpha_{0}/\\alpha} \\left\\{1+\\frac{\\alpha_{0}c_{j}t^{1-1/\\alpha}}{\\alpha c_{j}^{\\prime}{}^{1/ \\alpha}}-\\frac{\\alpha_{0}t^{-1}}{2\\alpha}+O\\left(t^{-1/\\alpha}\\right)\\right\\},&\\text{if }\\mathcal{C}_{j}\\cap\\mathcal{D}\ eq\\emptyset,\\\\ c_{j}^{\\alpha_{0}}t^{\\alpha_{0}}\\{1+\\alpha_{0}(\\frac{d_{j}}{c_{j}^{2}}-\\frac{ 1}{2})t^{-1}+o(t^{-1})\\},&\\text{if }\\mathcal{C}_{j}\\cap\\mathcal{D}=\\emptyset.\\end{cases}\\] The proof of this result can also be found in Section B.2 of the Supplementary Material. It will be used to derive the tail dependence structure for two arbitrary spatial locations. #### 1.3.2 Joint distribution To derive the extremal dependence structure, we first calculate the joint distribution function of a \\(n_{s}\\)-variate random vector \\((X_{1},\\ldots,X_{n_{s}})^{T}\\) drawn from the process (3). **Proposition 1.2**.: _Under the definitions and notation as established in the previous sections, for locations \\(\\boldsymbol{s}_{1},\\ldots,\\boldsymbol{s}_{n_{s}}\\in\\mathcal{S}\\), the exact form of the joint distribution function of the random vector \\((X_{1},\\ldots,X_{n_{s}})^{T}\\) can be written as_ \\[F(x_{1},\\ldots,x_{n_{s}})=\\exp\\left\\{\\sum_{k\\in\\tilde{\\mathcal{D}}}\\gamma_{k}^{ \\alpha}-\\sum_{k=1}^{K}\\left(\\gamma_{k}+\\tau^{\\frac{1}{\\alpha_{0}}}\\sum_{j=1}^{ n_{s}}\\omega_{kj}^{\\frac{1}{\\alpha}}x_{j}^{-\\frac{1}{\\alpha_{0}}}\\right)^{\\alpha} \\right\\}. \\tag{12}\\] The proof of Proposition 1.2 is given in Section B.3 of the Supplementary Material. Eq. (12) ensures that \\(F^{1/r}(x_{1},\\ldots,x_{n_{s}})\\) is a valid distribution function on \\(\\mathbb{R}^{n_{s}}\\) for any real \\(r>0\\), of the same form as (12) but with tilting indices \\(\\{\\gamma_{1}/r^{1/\\alpha},\\ldots,\\gamma_{K}/r^{1/\\alpha}\\}\\) and scale parameter \\(\\tau/r^{\\alpha_{0}/\\alpha}\\). By definition, the process \\(\\{X_{t}(\\mathbf{s}):\\mathbf{s}\\in\\mathcal{D}\\}\\) is thus max-infinitely divisible. It becomes max-stable only when it remains within the same location-scale family, i.e., when \\(\\gamma_{1}=\\cdots=\\gamma_{K}=0\\). #### 1.3.3 Tail dependence properties We now characterize the tail dependence structure of process (3) using both \\(\\chi_{ij}\\) defined in Eq. (9) and the complementary measure \\(\\eta_{ij}\\) defined by \\(\\Pr\\{X_{i}>F_{i}^{-1}(u),X_{j}>F_{j}^{-1}(u)\\}=\\mathcal{L}\\{(1-u)^{-1}\\}(1-u)^ {1/\\eta_{ij}}\\), where \\(\\mathcal{L}\\) is slowly varying at infinity, i.e., \\(\\mathcal{L}(tx)/\\mathcal{L}(t)\\to 1\\) as \\(t\\to\\infty\\) for all \\(x>0\\). 
The value of \\(\\eta_{ij}\\in(0,1]\\) is used to differentiate between the different levels of dependence exhibited by an AI pair \\((X_{i},X_{j})^{T}\\). When \\(\\eta_{ij}=1\\) and \\(\\mathcal{L}(t)\ ot\\to 0\\) as \\(t\\to\\infty\\), \\((X_{i},X_{j})^{T}\\) is AD (\\(\\chi_{ij}>0\\)), and the remaining cases are all AI (\\(\\chi_{ij}=0\\); see Ledford and Tawn, 1996), with stronger tail dependence for larger values of \\(\\eta_{ij}\\). **Theorem 1.3**.: _Under the assumptions of Propositions 1.1 and 1.2, the process \\(\\{X(\\mathbf{s})\\}\\) defined in (3) has a tail dependence structure characterized as follows:_ 1. _If_ \\(\\mathcal{C}_{i}\\cap\\mathcal{D}=\\emptyset\\) _and_ \\(\\mathcal{C}_{j}\\cap\\mathcal{D}=\\emptyset\\)_, we have_ \\(\\chi_{ij}=0\\) _and_ \\(\\eta_{ij}=1/2\\)_._ 2. _If_ \\(\\mathcal{C}_{i}\\cap\\mathcal{D}=\\emptyset\\) _and_ \\(\\mathcal{C}_{j}\\cap\\mathcal{D}\ eq\\emptyset\\)_, we have_ \\[t\\Pr\\{X_{i}>q_{i}(t),X_{j}>q_{j}(t)\\}=\\begin{cases}c_{ij}t^{-\\frac{1}{\\alpha}}/ (c_{i}c_{j}^{\\prime\\,\\frac{1}{\\alpha}})+o(t^{-\\frac{1}{\\alpha}}),&\\text{if } \\mathcal{C}_{i}\\cap\\mathcal{C}_{j}\ eq\\emptyset,\\\\ t^{-1},&\\text{if }\\mathcal{C}_{i}\\cap\\mathcal{C}_{j}=\\emptyset,\\end{cases}\\] _where_ \\(c_{ij}=\\alpha(\\alpha-1)\\tau^{2/\\alpha_{0}}\\sum_{k\\in\\mathcal{C}_{i}\\cap \\mathcal{C}_{j}}\\theta_{k}^{\\alpha-2}\\omega_{ki}^{1/\\alpha}\\omega_{kj}^{1/\\alpha}\\)_. This leads to_ \\(\\chi_{ij}=0\\)_,_ \\(\\eta_{ij}=\\frac{\\alpha}{\\alpha+1}\\) _when_ \\(\\mathcal{C}_{i}\\cap\\mathcal{C}_{j}\ eq\\emptyset\\) _and_ \\(1/2\\) _when_ \\(\\mathcal{C}_{i}\\cap\\mathcal{C}_{j}=\\emptyset\\)_._ 3. _If_ \\(\\mathcal{C}_{i}\\cap\\mathcal{D}\ eq\\emptyset\\) _and_ \\(\\mathcal{C}_{j}\\cap\\mathcal{D}\ eq\\emptyset\\)_, we have_ \\[t\\Pr\\{X_{i}>q_{i}(t),X_{j}>q_{j}(t)\\}=2-d_{ij}-(c_{i}/c_{i}^{\\prime\\,1/\\alpha}+ c_{j}/c_{j}^{\\prime\\,1/\\alpha})t^{1-\\frac{1}{\\alpha}}-O(t^{2-\\frac{2}{ \\alpha}}),\\] _where_ \\(d_{ij}=\\tau^{\\alpha/\\alpha_{0}}\\sum_{k\\in\\mathcal{D}}\\{(\\omega_{ki}/c_{i}^{ \\prime})^{1/\\alpha}+(\\omega_{kj}/c_{j}^{\\prime})^{1/\\alpha}\\}^{\\alpha}\\in(1,2)\\)_. Thus,_ \\((X_{i},X_{j})^{T}\\) _is AD with_ \\(\\chi_{ij}=2-d_{ij}\\) _when_ \\(\\mathcal{C}_{i}\\cap\\mathcal{C}_{j}\ eq\\emptyset\\)_, and AI with_ \\(\\eta_{ij}=\\alpha\\) _and_ \\(\\chi_{ij}=0\\) _when_ \\(\\mathcal{C}_{i}\\cap\\mathcal{C}_{j}=\\emptyset\\)_._ The proof of this result can be found in Section B.4 of the Supplementary Material. The local dependence strength is proportional to the tail-heaviness of the latent variable of the closest knot. There is local AD if \\(\\gamma_{k}=0\\) and there is local AI if \\(\\gamma_{k}>0\\), as expected. Similar to Eq. (10), the sets \\(\\mathcal{C}_{j}\\cap\\mathcal{D}\\), \\(j=1,\\ldots,n_{s}\\), are crucial to the behavior of the so-calledexponent function which occurs in the limiting distribution for normalized maxima (Huser and Wadsworth, 2019); see Remark 5 in Section B.4 for more discussion. The compactness of the basis functions' support yields long-range exact independence (thus, also AI) for two far-apart stations that are impacted by disjoint sets of basis functions; this is similar in spirit to the Cauchy convolution process of Krupskii and Huser (2022), though their model construction is different and less computationally tractable than ours. 
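Since Eqs. (10) and (12) are available in closed form, the bivariate tail dependence described by Theorem 1.3 can be checked numerically. The sketch below evaluates the marginal and pairwise joint distribution functions for a small illustrative configuration of knots and approximates \\(\chi_{ij}(u)\\) by inverting the marginals with bisection; all parameter values are illustrative and are not taken from any fitted model.

```python
import numpy as np

def marginal_cdf(x, w_j, gamma, tau, alpha, alpha0):
    # Eq. (10): F_j(x) for one location with basis weights w_j (length K).
    tilted = np.sum(gamma[gamma > 0] ** alpha)
    inner = (gamma + tau ** (1 / alpha0) * w_j ** (1 / alpha) * x ** (-1 / alpha0)) ** alpha
    return np.exp(tilted - np.sum(inner))

def joint_cdf(x_i, x_j, w_i, w_j, gamma, tau, alpha, alpha0):
    # Eq. (12) specialized to two locations.
    tilted = np.sum(gamma[gamma > 0] ** alpha)
    s = w_i ** (1 / alpha) * x_i ** (-1 / alpha0) + w_j ** (1 / alpha) * x_j ** (-1 / alpha0)
    inner = (gamma + tau ** (1 / alpha0) * s) ** alpha
    return np.exp(tilted - np.sum(inner))

def quantile(u, w_j, *pars, lo=1e-8, hi=1e12):
    # Invert the increasing function F_j by bisection on a log scale.
    for _ in range(200):
        mid = np.sqrt(lo * hi)
        if marginal_cdf(mid, w_j, *pars) < u:
            lo = mid
        else:
            hi = mid
    return mid

# Two locations sharing one of K = 3 knots; illustrative parameters.
gamma = np.array([0.0, 0.5, 1.0])     # one un-tilted knot in both supports
w_i = np.array([0.6, 0.4, 0.0])
w_j = np.array([0.5, 0.0, 0.5])
pars = (gamma, 1.0, 0.5, 1.0)         # (gamma, tau, alpha, alpha0)

for u in (0.95, 0.99, 0.999):
    qi, qj = quantile(u, w_i, *pars), quantile(u, w_j, *pars)
    joint_exceed = 1 - marginal_cdf(qi, w_i, *pars) - marginal_cdf(qj, w_j, *pars) \
                   + joint_cdf(qi, qj, w_i, w_j, *pars)
    print(u, joint_exceed / (1 - u))   # model-implied chi_ij(u)
```

Because the two sites in this configuration share an un-tilted knot, the printed ratios should approach a positive limit as \\(u\to 1\\), consistent with the AD case of Theorem 1.3.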
## 2 Validation framework for extremes emulation We propose a new validation framework that is tailored to assess skill in fitting both the full range of data and joint tail behavior in model outputs. This comprehensive framework can be used to evaluate the quality of emulations from any generative spatial extremes model and represents one of the contributions of this paper to the extremes literature. ### Full range evaluation To examine the quality of the emulation from the XVAE, we will predict at \\(n_{h}\\) locations \\(\\{\\mathbf{h}_{i}:i=1,\\ldots,n_{h}\\}\\) held out from the analyses. To perform these predictions, we calculate the basis function values at these locations, with which we can mix the encoded variables from Eq. (5) to get predicted values. For each time \\(t\\) and holdout location \\(\\mathbf{h}_{i}\\), denote the true observation of \\(X_{t}(\\mathbf{h}_{i})\\) by \\(x_{it}\\) and the emulated prediction by \\(x_{it}^{*}\\). Then the mean squared prediction error (MSPE) for time \\(t\\) is \\[\\text{MSPE}_{t}=\\frac{1}{n_{h}}\\sum_{i=1}^{n_{h}}(x_{it}-x_{it}^{*})^{2},\\] where \\(t=1,\\ldots,n_{t}\\). All MSPEs from different time replicates can be summarized in a boxplot; see Section 3 for example. Similarly, we can calculate the continuously ranked probability score (CRPS; Matheson and Winkler, 1976; Gneiting and Raftery, 2007) across time for each location, i.e., \\[\\text{CRPS}_{i}=\\frac{1}{n_{t}}\\sum_{t=1}^{n_{t}}\\int_{-\\infty}^{\\infty}(F_{i }(z)-1(x_{it}^{*}\\leq z))^{2}\\text{d}z,\\] where \\(F_{i}\\) is the marginal distribution estimated using parameters at the holdout location \\(\\mathbf{h}_{i}\\), \\(i=1,\\ldots,n_{h}\\), and again \\(x_{it}^{*}\\) is the emulated value. Smaller CRPS indicates that the distribution \\(F_{i}\\) is concentrated around \\(x_{it}^{*}\\), and thus can be used to measure how well the distribution fits all emulated values. Section 3 also shows how we present the CRPS values from all holdout locations for each emulation. In addition, we will examine the quantile-quantile (QQ)-plots obtained by pooling the spatial data into the same plot to check if the spatial input and the emulation have similar ranges and quantiles. ### Empirical tail dependence measures To assess the tail dependence structure of the emulated fields, we will estimate \\(\\chi_{ij}(u)\\) defined in Eq. (9) empirically in two ways. First, to examine the overall dependence strength, we treat \\(\\{X(\\mathbf{s})\\}\\) as if it had a stationary and isotropic dependence structure so that \\(\\chi_{ij}(u)\\equiv\\chi_{h}(u)\\), with \\(h=||\\mathbf{s}_{i}-\\mathbf{s}_{j}||\\) being the distance between locations. Then for a fixed \\(h\\), we find all pairs of locations with similar distances (within a small tolerance, say \\(\\epsilon=0.001\\)), and compute the empirical conditional probabilities \\(\\widehat{\\chi}_{h}(u)\\) at a grid of \\(u\\) values. Confidence envelopes can be calculated by regarding the outcome (i.e., simultaneously exceed \\(u\\) or not) of each pair as a Bernoulli variable and computing pointwise binomial confidence intervals, assuming that all pairs of points are independent from each other. Examples in Section 3 demonstrate how this empirical measure can be used to compare the extremal dependence structures between the spatial data input and realizations from the emulator. 
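The first of these two empirical estimators can be implemented directly. The sketch below collects all site pairs whose separation matches a chosen distance \\(h\\) within the tolerance \\(\epsilon\\), rank-transforms each site's series to the uniform scale, and estimates \\(\chi_{h}(u)\\) as a ratio of joint to marginal exceedance counts; the synthetic data at the end are placeholders for the spatial input or for emulator output.

```python
import numpy as np

def empirical_chi_h(fields, coords, h, u, tol=1e-3):
    """Empirical chi_h(u) from replicates `fields` (n_t, n_s) at `coords` (n_s, 2)."""
    n_t, n_s = fields.shape
    # Rank-transform each site's series to pseudo-uniform margins.
    ranks = np.argsort(np.argsort(fields, axis=0), axis=0) + 1
    unif = ranks / (n_t + 1)
    # All site pairs whose distance is within `tol` of h.
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    ii, jj = np.where(np.abs(dists - h) <= tol)
    keep = ii < jj
    ii, jj = ii[keep], jj[keep]
    joint = np.sum((unif[:, ii] > u) & (unif[:, jj] > u))
    marg = np.sum(unif[:, ii] > u)
    return joint / marg if marg > 0 else np.nan

# Toy illustration on a small grid with synthetic (placeholder) data.
rng = np.random.default_rng(2)
grid = np.array([(i, j) for i in range(5) for j in range(5)], dtype=float)
data = rng.gamma(shape=2.0, scale=1.0, size=(500, grid.shape[0]))
print(empirical_chi_h(data, grid, h=1.0, u=0.9))
```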
While this metric does not completely characterize the non-stationarity in the process, it is still well-defined as a summary statistic and carries important information about the average decay of dependence with distance irrespective of the direction. Second, to avoid the stationary assumption, we can choose a reference point denoted by \\(\\mathbf{s}_{0}\\) and estimate the pairwise \\(\\chi_{0j}(u)\\) empirically between \\(\\mathbf{s}_{0}\\) and all observed locations \\(\\mathbf{s}_{j}\\) in the spatial domain \\(\\mathcal{S}\\). These pairwise estimates can then be presented using a raster plot (if gridded) or a heat plot. Section 4 shows examples of the empirical \\(\\chi_{0j}(u)\\), \\(u=0.85\\), estimated from the real and emulated datasets, where \\(\\mathbf{s}_{0}\\) is the center of \\(\\mathcal{S}\\). ### Areal radius of exceedance We further propose a tail dependence coefficient that formally summarizes the overall dependence strength over the entire spatial domain. This metric characterizes the spatial extent of extreme events conditional on an arbitrary reference point in the domain (e.g., the center of \\(\\mathcal{S}\\)) exceeding a particular quantile \\(u\\). Zhang et al. (2023) formulated the metric on an empirical basis and named it the averaged radius of exceedances (ARE). Given a large number of independent replicates (say \\(n_{r}\\)) from \\(\\{X(\\mathbf{s})\\}\\) on a dense regular grid \\(\\mathcal{G}=\\{\\mathbf{g}_{i}\\in\\mathcal{S}:i=1,\\ldots,n_{g}\\}\\) over the domain \\(\\mathcal{S}\\) with side length \\(\\psi>0\\), denote the replicates by \\(\\mathbf{X}_{r}=\\{X_{r}(\\mathbf{g}_{i}):i=1,\\ldots,n_{g}\\}\\), \\(r=1,\\ldots,n_{r}\\). The empirical marginal distribution functions at \\(\\mathbf{g}_{i}\\) can then be obtained as \\(\\hat{F}_{i}(x)=n_{r}^{-1}\\sum_{r=1}^{n_{r}}\\mathbb{1}\\left(X_{ir}\\leq x\\right)\\), where \\(X_{ir}=X_{r}(\\mathbf{g}_{i})\\) and \\(\\mathbb{1}\\{\\cdot\\}\\) is the indicator function. We then transform \\((X_{i1},\\cdots,X_{in_{r}})^{T}\\) to the uniform scale via \\(U_{ir}=\\hat{F}_{i}(X_{ir})\\), \\(r=1,\\ldots,n_{r}\\). Let \\(\\mathbf{U}_{r}=\\{U_{ir}:i=1,\\ldots,n_{g}\\}\\) and \\(U_{0r}=\\hat{F}_{0}\\{X_{r}(\\mathbf{s}_{0})\\}\\). The ARE metric at the threshold \\(u\\) is defined by \\[\\widehat{\\text{ARE}}_{\\psi}(u)=\\left\\{\\frac{\\psi^{2}\\sum_{r=1}^{n_{r}}\\sum_{i =1}^{n_{g}}\\mathbb{1}(U_{ir}>u,U_{0r}>u)}{\\pi\\sum_{r=1}^{n_{r}}\\mathbb{1}(U_{ 0r}>u)}\\right\\}^{1/2}. \\tag{13}\\] The summation \\(\\psi^{2}\\sum_{i=1}^{n_{g}}\\mathbb{1}\\left(U_{ir}>u,U_{0r}>u\\right)\\) in Eq. (13) calculates the area of all grid cells exceeding the extremeness level \\(u\\) at the same replicate \\(r\\) as the reference location \\(\\mathbf{s}_{0}\\); dividing it by \\(\\pi\\) and taking the square root thus yields the \"radius\" of a circular exceedanceregion that has the same spatial extent. Additionally, Eq. (13) averages over all replicates with the reference location exceeding the extremeness level \\(u\\). Therefore, \\(\\widehat{\\mathrm{ARE}}_{\\psi}(u)\\) has the same units as \\(\\psi\\), or the distance metric used on the domain \\(\\mathcal{S}\\), which makes it a more straightforward metric for domain scientists because it reflects the average length scale of the extreme events (e.g., warm pool size in SST data). 
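A direct transcription of Eq. (13) is short; the sketch below assumes the replicates have already been transformed to the uniform scale as described above, with `U` holding the grid cells and `U0` the reference location.

```r
# Empirical areal radius of exceedance, Eq. (13).
# U   : n_g x n_r matrix of uniform-scale values at the grid cells
# U0  : length-n_r vector of uniform-scale values at the reference location s_0
# psi : side length of the grid cells; u: extremeness level
are_hat <- function(U, U0, psi, u) {
  cond <- U0 > u                                        # replicates with s_0 extreme
  if (!any(cond)) return(NA_real_)
  joint_cells <- colSums(U[, cond, drop = FALSE] > u)   # exceeding cells per such replicate
  sqrt(psi^2 * sum(joint_cells) / (pi * sum(cond)))
}
```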
As the number of independent replicates tends to infinity, the following result ensures that \\(\\widehat{\\mathrm{ARE}}_{\\psi}(u)\\) almost surely converges to \\[\\mathrm{ARE}_{\\psi}(u)=\\left(\\psi^{2}\\sum_{i=1}^{n_{g}}\\chi_{0i}(u)/\\pi\\right)^ {1/2}, \\tag{14}\\] where \\(\\chi_{0i}(u)\\) is here the \\(\\chi\\)-measure between locations \\(\\mathbf{s}_{0}\\) and \\(\\mathbf{g}_{i}\\) defined in Eq. (9).

**Theorem 2.1**.: _For a fixed regular grid \\(\\mathcal{G}\\) with side length \\(\\psi\\), a reference location \\(\\mathbf{s}_{0}\\) and \\(u\\in(0,1)\\), we have \\(\\widehat{\\mathrm{ARE}}_{\\psi}(u)\\to\\mathrm{ARE}_{\\psi}(u)\\) almost surely as \\(n_{r}\\to\\infty\\)._

From this result, we see that \\(\\widehat{\\mathrm{ARE}}_{\\psi}(u)\\) and its limit, which do not require stationarity or isotropy, quantify the square root of the spatial average of \\(\\chi_{0i}(u)\\). Due to the presence of the white noise term \\(\\epsilon(\\mathbf{s})\\), there is no version of the process \\(\\{X(\\mathbf{s})\\}\\) that has measurable paths, which means that \\(X(\\mathbf{s})\\not\\to X(\\mathbf{s}_{0})\\) (in probability) as \\(\\mathbf{s}\\to\\mathbf{s}_{0}\\). However, from Theorem 1.3, we know that there is continuity in the dependence measure \\(\\chi_{0i}\\) because \\(\\epsilon(\\mathbf{s})\\) has little impact on the dependence structure of the mixture \\(Y(\\mathbf{s})\\). That is, \\(\\chi_{\\mathbf{s}_{0},\\mathbf{s}}\\), denoting the \\(\\chi\\)-measure between location \\(\\mathbf{s}_{0}\\) and \\(\\mathbf{s}\\in\\mathcal{S}\\), is a continuous function of \\(\\mathbf{s}\\in\\mathcal{S}\\) when fixing the reference location \\(\\mathbf{s}_{0}\\); we define this property to be _tail-continuous_ for \\(\\mathbf{s}_{0}\\). The following result further confirms that, under tail-continuity, \\(\\widehat{\\mathrm{ARE}}_{\\psi}(u)\\) also converges to the square root of the spatial integral of \\(\\chi_{\\mathbf{s}_{0},\\mathbf{s}}\\) as \\(u\\to 1\\) and as \\(\\mathcal{G}\\) becomes infinitely dense.

**Theorem 2.2**.: _Let the domain \\(\\mathcal{S}\\) be bounded (i.e., its area \\(|\\mathcal{S}|<\\infty\\)) and the process \\(\\{X(\\mathbf{s}):\\mathbf{s}\\in\\mathcal{S}\\}\\) be tail-continuous for \\(\\mathbf{s}_{0}\\) (i.e., \\(\\chi_{\\mathbf{s}_{0},\\mathbf{s}}\\) is a continuous function of \\(\\mathbf{s}\\) in \\(\\mathcal{S}\\)). Then,_ \\[\\lim_{\\psi\\to 0,u\\to 1}\\psi\\left(\\sum_{i=1}^{n_{g}}\\chi_{0i}(u)\\right)^{1/2}= \\left\\{\\int_{\\mathcal{S}}\\chi_{\\mathbf{s}_{0},\\mathbf{s}}\\mathrm{d}\\mathbf{s}\\right\\}^{1/2}. \\tag{15}\\]

**Remark 1**.: _Tail-continuity is met by many spatial extremes models, such as max-stable, inverted-max-stable, and others (e.g., Opitz, 2016; Huser and Wadsworth, 2019; Wadsworth and Tawn, 2022). Our model (3) also adheres to tail-continuity, as indicated by Theorem 1.3._

**Remark 2**.: _Together, Theorems 2.1 and 2.2 ensure that \\(\\widehat{\\mathrm{ARE}}_{\\psi}(u)\\approx\\left\\{\\int_{\\mathcal{S}}\\chi_{\\mathbf{s}_ {0},\\mathbf{s}}\\mathrm{d}\\mathbf{s}\\right\\}^{1/2}/\\pi^{1/2}\\) if there are a large number of replicates from the process \\(\\{X(\\mathbf{s})\\}\\) on a very dense grid \\(\\mathcal{G}\\)._

Similarly, we can estimate \\(\\mathrm{ARE}_{\\psi}(u)\\) for the emulator by running the decoder repeatedly to obtain emulated replicates of \\(\\{X(\\mathbf{s})\\}\\) on the same grid. 
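As a quick numerical check of Theorem 2.1, one can compare the empirical `are_hat` above with the plug-in version of Eq. (14); the sketch below is illustrative only, and `chi0` is assumed to hold (estimated) conditional exceedance probabilities \\(\\chi_{0i}(u)\\) for each grid cell.

```r
# Plug-in version of Eq. (14): the almost-sure limit of the empirical ARE.
# chi0: length-n_g vector of chi-measures chi_{0i}(u) between s_0 and each grid cell
are_limit <- function(chi0, psi) sqrt(psi^2 * sum(chi0) / pi)

# Monte Carlo check: with many replicates, are_hat(U, U0, psi, u) should be
# close to are_limit(chi0_hat, psi), where
#   chi0_hat[i] = mean(U[i, ] > u & U0 > u) / mean(U0 > u).
```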
By comparing the \\(\\mathrm{ARE}_{\\psi}(u)\\) estimates at a series of \\(u\\) levels, we can evaluate whether the spatially-aggregated exceedance is consistent between the spatial data inputs and their XVAE emulation counterparts.

### Uncertainty quantification

The decoder (8) functions as a neural estimator for \\((\\alpha_{t},\\mathbf{\\gamma}_{t}^{T})^{T}\\). Examining its inferential power is crucial, as accurate emulation heavily relies on precise characterization of the spatial inputs. Drawing a substantial number of samples from the variational distribution \\(q_{\\mathbf{\\phi}_{e}}(\\cdot\\mid\\mathbf{x}_{t})\\) (which is close to \\(p_{\\mathbf{\\phi}_{d}}(\\cdot\\mid\\mathbf{x}_{t})\\); recall Section 1.1) allows us to obtain Monte Carlo estimates of the dependence parameters \\((\\alpha_{t},\\mathbf{\\gamma}_{t}^{T})^{T}\\) using the decoder (8). Aggregating these estimates provides approximate samples of the posterior \\((\\alpha_{t},\\mathbf{\\gamma}_{t}^{T})^{T}\\mid\\{\\mathbf{x}_{t}:t=1,\\ldots,n_{t}\\}\\), which enables the calculation of point estimates (posterior mean or maximum _a posteriori_) and the construction of approximate confidence regions for uncertainty quantification.

## 3 Simulation study

In this section, we simulate data from five different parametric models that have varying levels of extremal dependence across space. By examining the diagnostics introduced in Section 2, we validate the efficacy of our XVAE in analyzing and emulating data from both model (3) and misspecified models.

### General setting

To assess the performance of our XVAE, we conduct a simulation study in which data are generated at \\(n_{s}=2,000\\) random locations uniformly sampled over the square \\([0,10]\\times[0,10]\\). We simulate \\(n_{t}=100\\) replicates of the process from each of the following models:

1. Gaussian process with zero mean, unit variance, and Matern correlation \\(C(\\mathbf{s}_{i},\\mathbf{s}_{j};\\phi,\\nu)\\), in which \\(\\phi=3\\) and \\(\\nu=5/2\\) are the range and smoothness parameters;
2. Max-id process (3) with \\(K=25\\) basis functions and \\(|\\mathcal{D}|=0\\) un-tilted knots;
3. Max-id process (3) with \\(K=25\\) basis functions and \\(0<|\\mathcal{D}|<K\\) un-tilted knots;
4. Max-id process (3) with \\(K=25\\) basis functions and \\(|\\mathcal{D}|=K\\) un-tilted knots;
5. Max-stable model of Reich and Shaby (2012) with \\(K=25\\) basis functions.

When simulating from Models II-IV, we use time-invariant dependence parameters \\(\\alpha_{t}\\equiv\\alpha\\) and \\(\\mathbf{\\gamma}_{t}\\equiv\\mathbf{\\gamma}\\); see Figure 1 for the knot locations and \\(\\mathbf{\\gamma}\\) values. Recall that \\(K\\) is the number of basis functions and \\(\\mathcal{D}=\\{k:\\gamma_{k}=0\\}\\). Models I-V exhibit increasingly strong extremal dependence, and they help us test whether the XVAE can capture spatially-varying dependence structures that exhibit local AD and/or local AI. Since the proposed process (3) allows \\(\\gamma_{k}\\) to change at different knots (\\(k=1,\\ldots,K\\)), a well-trained XVAE should be able to differentiate between local AD (\\(\\gamma_{k}=0\\)) and local AI (\\(\\gamma_{k}>0\\)). Model I is a stationary and isotropic Gaussian process with a Matern covariance function. It is known that the joint distribution of the Gaussian process at any two locations \\(\\mathbf{s}_{i}\\) and \\(\\mathbf{s}_{j}\\) is light-tailed and leads to AI unless the correlation equals one. 
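Models II-IV are generated from the max-id construction detailed in the next paragraph. The following minimal R sketch shows one way to simulate a single replicate of that process; it assumes the representation \\(X(\\mathbf{s})=Y(\\mathbf{s})\\,\\epsilon(\\mathbf{s})\\) with \\(Y(\\mathbf{s})=\\{\\sum_{k}\\omega_{k}(\\mathbf{s},r)^{1/\\alpha}Z_{k}\\}^{\\alpha_{0}}\\), which is implicit in the calculations of Appendix B, draws the \\(\\alpha=1/2\\) positive-stable base variables via the Levy/inverse-gamma identity of Remark 3 with a simple rejection step for the exponential tilting, and uses illustrative argument names throughout.

```r
# Simulate one replicate of the max-id process (3) at the rows of `coords`.
# knots: K x 2 matrix of knot locations; r: Wendland radius; gamma: length-K tilting
# parameters; alpha = 1/2 here so that the base PS draw is Levy(0, 1/2) = InvGamma(1/2, 1/4).
simulate_maxid <- function(coords, knots, r, gamma, alpha = 1/2,
                           alpha0 = 1/4, tau = 1) {
  n_s <- nrow(coords); K <- nrow(knots)
  # Normalized Wendland weights omega_k(s, r) proportional to {1 - d(s, knot_k)/r}_+^2
  # (assumes every location lies within radius r of at least one knot).
  d <- as.matrix(dist(rbind(coords, knots)))[seq_len(n_s), n_s + seq_len(K)]
  omega <- pmax(1 - d / r, 0)^2
  omega <- omega / rowSums(omega)

  # Z_k ~ expPS(alpha, gamma_k): draw from the un-tilted PS(1/2) law and accept
  # a proposal z with probability exp(-gamma_k * z) (rejection sampling).
  r_exp_ps <- function(g) {
    repeat {
      z <- 1 / rgamma(1, shape = 1/2, rate = 1/4)   # PS(1/2) = InvGamma(1/2, 1/4)
      if (runif(1) < exp(-g * z)) return(z)
    }
  }
  Z <- vapply(gamma, r_exp_ps, numeric(1))

  Y   <- (omega^(1 / alpha) %*% Z)^alpha0                # latent smooth field Y(s)
  eps <- tau * (-log(runif(n_s)))^(-alpha0)              # Frechet(0, tau, 1/alpha0) noise
  drop(Y) * eps
}
```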
For Models II, III and IV, we simulate data from the max-id model (3) with \\(K=25\\) evenly-spread knots across the grid, denoted by \\(\\{\\tilde{\\mathbf{s}}_{1},\\ldots,\\tilde{\\mathbf{s}}_{K}\\}\\). Setting the range parameter to \\(r=3\\), we use compactly supported Wendland basis functions \\(\\omega_{k}(\\mathbf{s},r)\\propto\\{1-d(\\mathbf{s},\\tilde{\\mathbf{s}}_{k})/r\\}_{+}^{2}\\) centered at each knot (Wendland, 1995), \\(k=1,\\ldots,K\\); see Figure 1. The basis function values are standardized so that for each \\(\\mathbf{s}\\), \\(\\sum_{k=1}^{K}\\omega_{k}(\\mathbf{s},r)=1\\). The main difference between Models II, III and IV lies in the \\(\\gamma_{k}\\) values: Model II has no zero \\(\\gamma_{k}\\)'s (i.e., \\(|\\mathcal{D}|=0\\)), whereas Model III has a mix of positive and zero \\(\\gamma_{k}\\)'s, and Model IV has only zero \\(\\gamma_{k}\\)'s (i.e., \\(|\\mathcal{D}|=K\\)). By Theorem 1.3, we know Model II gives only local AI and Model IV gives only local AD. In contrast, Model III gives both local AD and local AI. Model V, on the other hand, adopts the same set of knots but uses Gaussian radial basis functions, which are not compactly supported. Therefore, Model V is the Reich and Shaby (2012) max-stable model, and has stronger extremal dependence than Models I-IV. When simulating from Models II-V, we choose the parameter \\(\\alpha=1/2\\) and then sample the latent variables \\(\\mathbf{Z}_{t}\\) from the exponentially-tilted PS distribution for each time replicate. Additionally, for Models II-IV, the white noise process \\(\\epsilon_{t}(\\mathbf{s})\\) follows the same independent \\(\\text{Frechet}(0,\\tau,1/\\alpha_{0})\\) distribution with \\(\\tau=1\\) and \\(\\alpha_{0}=1/4\\). For each space-time simulated dataset, we randomly set aside 100 locations as a validation set. Subsequently, we analyze the dependence structure of the remaining 1,900 locations using both the proposed XVAE (initialized with data-driven knots unless specified otherwise) and a Gaussian process regression with heteroskedastic noise implemented in the R package hetGP (Binois and Gramacy, 2021). We then perform predictions at the 100 holdout locations as outlined in Section 2.1. In the following result sections, we show that both emulators perform well when emulating datasets from Models I and II, but only the XVAE appropriately captures AD in Models III-V.

Figure 1: The left panel presents knot locations used for simulating data under Models II–IV, and we only show the support of the one Wendland basis function centered at the knot in the middle of the domain. Model V uses the same set of knots but the basis functions are not compactly supported. The middle and right panels display the \\(\\gamma_{k}\\) values, \\(k=1,\\ldots,K\\), used in the exponentially-tilted PS variables at each knot for Models II and III respectively. The circled knots signify \\(\\gamma_{k}=0\\), which induces local AD.

### Emulation results

Figure 2 and Figure E.1 of the Supplementary Material compare emulated replicates from XVAE and hetGP with data replicates from Models I-V, while Figure E.2 displays QQ-plots that align well with the 1-1 line in all cases for XVAE but not for hetGP. Since the Gaussian process has much weaker extremal dependence, the resulting \\(\\mathbf{\\gamma}_{t}\\) estimates in (8) after convergence are consistently far greater than 0.1, indicating light tails in the exponentially-tilted PS variables and thus local AI at all knots. 
Similarly, for Model II, there is AI everywhere in the domain. However, the \\(\\mathbf{\\gamma}_{t}\\) values we used for Model II are much smaller than 0.1 (see Figure 1) and thus the fitted exponentially-tilted PS variables are heavier-tailed than the ones from Model I. Therefore, hetGP has difficulty in capturing the extremal dependence and the QQ-plot shows that large values in the tail tend to be underestimated, even though Model II still exhibits AI only. For Models III-V, there is local AD, and we see that hetGP completely fails at emulating the co-occurrence of extreme values. Because hetGP focuses on the bulk of the distribution, it ignores spatial extremal dependence. This validates the need to incorporate a flexible spatial model in the emulator to capture tail dependence accurately. Figure 3 compares the performance of spatial predictions at the 100 holdout locations. For Model I, hetGP has lower CRPS and MSPE scores, indicating higher predictive power, as expected since the true process is Gaussian. However, the XVAE model still performs quite well in this case. For Models II-V, XVAE uniformly outperforms hetGP. Also, the CRPS and MSPE for hetGP are significantly higher for time replicates with extreme events. The left three panels of Figure 4 and Figure E.3 of the Supplementary Material compare nonparametric estimates of the upper tail dependence \\(\\chi_{h}(u)\\) from the data replicates and emulations at three different distances \\(h\\in\\{0.5,2,5\\}\\) under the working assumption of stationarity. In general, we see that the dependence strength decays as \\(h\\) and \\(u\\) increase, with varying levels of positive limits as \\(u\\to 1\\) for Models III-V. The results in Figure 4 Figure 2: Data replicate (left) and its corresponding emulated fields (XVAE, middle; hetGP, right) from Model III. See Figure E.1 of the Supplementary Material for comparisons for the other models. In all cases, we use data-driven knots for emulation using XVAE. demonstrate that our XVAE manages to accurately emulate the dependence behavior at both low and high quantiles and the empirical confidence envelopes of \\(\\chi_{h}(u)\\) are essentially indistinguishable between the simulated and emulated data. Choosing \\((5,5)\\) as the reference point, the rightmost panel of Figure 4 displays estimates of \\(\\text{ARE}_{\\psi}(u)\\), \\(\\psi=0.05\\), for both data replicates and emulated data under Models I-V. We see that the empirical AREs from the XVAE are consistent with the ones estimated from the data except for Model V, where \\(\\text{ARE}(u)\\) is slightly underestimated at low thresholds \\(u\\) but overestimated at high \\(u\\). As expected, the limit of \\(\\text{ARE}(u)\\) as \\(u\\to 1\\) is non-negative for Models III-V when there is local AD, and the limit increases from Model III to V. To showcase the inferential capabilities of our approach, we initialize the XVAE with true knots and rerun it on datasets simulated from Models II and III. Figure 5 displays \\(\\boldsymbol{\\gamma}_{t}\\) estimates obtained by running the decoder (i.e., Eq. (8)) 1000 times at \\(t=1\\). 
The results highlight the XVAE's ability to produce accurate estimates of \\(\\boldsymbol{\\gamma}_{t}=\\{\\gamma_{kt}:k=1,\\ldots,K\\}\\), and correctly identify the extremal dependence class, with satisfactory agreement between Figure 4: From left to right, we show the empirically-estimated \\(\\chi_{h}(u)\\) at \\(h=0.5,2,5\\), and \\(\\text{ARE}_{\\psi}(u)\\) with \\(\\psi=0.05\\) for Model III based on data replicates (black) and XVAE emulated data (red). The \\(\\chi_{h}(u)\\) and \\(\\text{ARE}_{\\psi}(u)\\) estimates for the other models are shown in Figures E.3 and E.4 of the Supplementary Material, respectively. Figure 3: The CRPS (left) and MSPE (right) values from two emulation approaches on the datasets simulated from Models I–V. For both metrics, lower values indicate better emulation results. Also, for Models IV and V, we plot the CRPS values on the log scale since the AD in the data generating process causes the margins to be very heavy-tailed. true and estimated values, accounting for uncertainty. Additionally, we perform a coverage analysis by simulating 99 more datasets with \\(n_{s}=2000\\) and \\(n_{t}=100\\) from Models II and III, running the XVAE on each to generate empirical credible intervals for \\(\\boldsymbol{\\gamma}_{t}\\). Figure E.5 of the Supplementary Material shows the coverage probabilities of \\(\\{\\gamma_{kt}:k=1,\\ldots,K\\}\\) for \\(t=1\\). Most estimated probabilities align closely with the nominal 95% level, except when \\(\\gamma_{kt}=0\\), where coverage is poorer due to the true value residing on the parameter space boundary. Nevertheless, these promising results endorse the XVAE as a fast and robust inference tool for estimating parameters in the max-id process (3) and for Bayesian UQ. ## 4 Application to Red Sea surface temperature data The Red Sea, a biodiversity hot spot, is susceptible to coral bleaching due to climate change and rising SST anomalies (Furby et al., 2013). Corals are unlikely to survive once the temperature exceeds a bleaching threshold annually, which in turn causes disruption in fish migration and slow decline in fish abundance. Here, we analyze and emulate a Red Sea surface temperature dataset, which consists of satellite-derived daily SST estimates at 16,703 locations on a \\(1/20^{\\circ}\\) grid from 1985/01/01 to 2015/12/31 (11,315 days in total); see Donlon et al. (2012). This yields about 189 million correlated spatio-temporal data points. Through our analysis, we demonstrate the importance of generating realistic realizations Figure 5: Initializing the XVAE using the true knots from Models II (top) and III (bottom), we show the estimates of \\((\\gamma_{1t},\\gamma_{3t})^{T}\\) (left) and \\((\\gamma_{5t},\\gamma_{12t})^{T}\\) (middle) from 1,000 samples generated with the trained decoder (\\(t=1\\)). On the right, we also show the medians, 2.5% and 97.5% quantiles of the \\(n_{t}\\) estimates of \\(\\{\\gamma_{kt}:k=1,\\ldots,K\\}\\) for \\(t=1\\), from the decoder (8), in which the 1-1 line is displayed in black for reference. and quantifying uncertainty associated with threshold exceedances--they could for example be used to accurately assess marine heatwave (MHW) risks and identify regions susceptible to coral bleaching by refining the threshold for MHW detection with improved UQ. This dataset has previously been analyzed (sometimes partially) by Hazra and Huser (2021), Simpson and Wadsworth (2021), Simpson et al. (2023), Oesting and Huser (2022), and Sainsbury-Dale et al. (2024). 
The latter three studies focused on a small portion of the Red Sea using the summer months only to eliminate the effects of seasonality. For example, Sainsbury-Dale et al. (2024) retained a dataset with only 678 spatial locations and 141 replicates. By contrast, Hazra and Huser (2021) extensively studied weekly data over the entire spatial domain using a Dirichlet process mixture of low-rank spatial Student's \\(t\\) processes to account for spatial dependence. However, their model is AD across the entire domain (i.e., for any pair of locations), limiting its flexibility in capturing extreme behavior. Since the daily SST at each location exhibits a clear trend and seasonality across seasons and years and the temporal dependence strength varies at different locations, we first de-trend and remove the seasonality before applying our model (see Section F.1 of the Supplementary Material for details). Despite the successful performance of our XVAE in Section 3 when the data input is not generated from the process (3), we opt to extract monthly maxima from the renormalized data to better comply with the assumed max-infinite divisibility and to enhance the modeling accuracy of marginal distributions for station records. In Section F.2, we perform various goodness-of-fit tests using the generalized extreme value (GEV) distribution and the general non-central \\(t\\) distribution, and find that the former performs better than the latter and it fits the monthly maxima very well at almost all grid points (99.99% of the 16,703 locations). Therefore, we fit a GEV distribution at each location and transform the sitewise records to the Pareto scale on which we then apply the XVAE; see Section F.3 for specifics. Figure 6: In the left two panels, we show empirically-estimated pairwise tail dependence measure \\(\\chi_{0j}(u)\\), \\(u=0.85\\), between \\(\\mathbf{s}_{0}=(38.104,21.427)\\), marked using a red cross, and all \\(\\mathbf{s}_{j}\\in\\mathcal{S}\\), from observations and emulated data. In the right two panels, we show the estimated tilting parameters at \\(K=243\\) data-driven knots averaged over time (i.e., \\(n_{t}^{-1}\\sum_{t=1}^{n_{t}}\\gamma_{kt}\\), \\(k=1,\\ldots,K\\)), and the estimated tilting parameters averaged over space (i.e., \\(K^{-1}\\sum_{k=1}^{K}\\gamma_{kt}\\), \\(t=1,\\ldots,n_{t}\\)) with the best linear regression fit (red line). The third panel of Figure 6 displays the locations of the data-driven knots chosen by our algorithm (\\(K=243\\)), and the initial radius shared by the Wendland basis functions is \\(1.2^{\\circ}\\). Figure 7 shows emulated replicates of the original monthly maxima field for the first and last months (1985/01 and 2015/12, respectively). Here, we convert the emulated values back to the original data scale using the estimated GEV parameters fitted from the previous step. Figure 7 demonstrates that the XVAE is able to capture the detailed features of the temperature fields and to accurately characterize spatial dependence, while the QQ-plot shows an almost perfect alignment with the 1-1 line. Similar to Figure 4, we then estimate \\(\\chi_{h}(u)\\) empirically for the original monthly maxima and emulated fields, under the working assumption of stationarity and isotropy. Figure F.2 of the Supplementary Material attests once more that our XVAE characterizes the extremal dependence structure accurately from low to high quantiles. 
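A self-contained sketch of the sitewise marginal step described above is given below: the GEV parameters are fitted by maximum likelihood with `optim` and the records are mapped to a Pareto scale through the probability integral transform. The exact Pareto scale adopted in Section F.3 may differ from the standard Pareto transform used here, the starting values are ad hoc, and the code assumes a nonzero shape parameter.

```r
# Negative GEV log-likelihood at one site (assumes shape xi != 0; sigma is log-parameterized).
gev_nll <- function(par, x) {
  mu <- par[1]; sigma <- exp(par[2]); xi <- par[3]
  z <- 1 + xi * (x - mu) / sigma
  if (any(z <= 0)) return(1e10)                      # outside the GEV support
  sum(log(sigma) + (1 + 1/xi) * log(z) + z^(-1/xi))
}

fit_site <- function(x) {
  fit <- optim(c(mean(x), log(sd(x)), 0.1), gev_nll, x = x)
  c(mu = fit$par[1], sigma = exp(fit$par[2]), xi = fit$par[3])
}

# GEV distribution function (handles points beyond the support endpoints).
gev_cdf <- function(x, mu, sigma, xi) exp(-pmax(1 + xi * (x - mu) / sigma, 0)^(-1/xi))

# Transform one site's monthly maxima to the standard Pareto scale.
to_pareto <- function(x) {
  p <- fit_site(x)
  u <- gev_cdf(x, p["mu"], p["sigma"], p["xi"])
  1 / (1 - u)
}
```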
As introduced in Section 2, we Figure 7: Observed (left) and emulated (middle) Red Sea SST monthly maxima, for the 1985/01 (top) and 2015/12 (bottom) months. From the emulation maps and QQ plots (right), we see that the emulated fields from the XVAE match the observations very well. also examine the pairwise \\(\\chi\\) measure without assuming spatial stationarity. Specifically, we choose the center of the Red Sea (\\(38.104^{\\circ}\\)E, \\(21.427^{\\circ}\\)N) as a reference point denoted by \\(\\mathbf{s}_{0}\\). For all observed locations \\(\\mathbf{s}_{j}\\in\\mathcal{S}\\) in the Red Sea, we estimate the pairwise \\(\\chi_{0j}(u)\\) empirically only using the values from \\(\\mathbf{s}_{j}\\) and \\(\\mathbf{s}_{0}\\). The left two panels of Figure 6 include raster plots of the pairwise measure evaluated at the level \\(u=0.85\\), in which the \\(\\chi_{0j}(u)\\) values estimated from the observed and emulated data are very similar to each other. One other major advantage of our XVAE framework is that it allows the dependence parameters \\(\\{\\gamma_{k}:k=1,\\ldots,K\\}\\) and \\(\\alpha\\) to change over time, which makes the extremal dependence structure non-stationary. The right two panels of Figure 6 show the estimated tilting parameters \\(\\{\\gamma_{kt}:k=1,\\ldots,K,\\ t=1,\\ldots,n_{t}\\}\\) averaged over time/space. We see that the \\(\\gamma_{kt}\\) values are generally lower near the coast compared to the interior of the Red Sea, indicating that SST tends to be more heavy-tailed on the coast. Also, \\(\\gamma_{kt}\\) tends to increase over time, indicating that extreme events are becoming more localized. This is consistent with the findings in Genevier et al. (2019). To examine what the SST fields look like without the noise, the left panels of Figure 8 display realizations of the latent field \\(\\{Y_{t}(\\mathbf{s})\\}\\) using the fitted XVAE at two time points, \\(1985/9\\) and \\(2015/9\\). To focus on the extreme values, we transform \\(Y_{t}(\\mathbf{s})\\) to the original SST scale using the estimated GEV parameters and censor the simulations with a fixed thermal threshold of \\(31^{\\circ}\\)C, resulting in threshold exceedances primarily in the southern region. This is expected: the southern Red Sea experiences higher SSTs compared to the northern area. However, coral reefs in different parts of the Red Sea have developed varying levels of thermal tolerance (Hazra and Huser, 2021). To explore regional variation in marine heatwave (MHW) and coral bleaching risk, we divide the Red Sea into four regions based on Raitsos et al. (2013) and Genevier et al. (2019): North (\\(25.5\\)-\\(30^{\\circ}\\)N), North central (\\(22\\)-\\(25.5^{\\circ}\\)N), South central (\\(17.5\\)-\\(22^{\\circ}\\)N) and South (\\(12.5\\)-\\(17.5^{\\circ}\\)N). A useful quantity is the areal exceedance probability, which represents the spatial extent of a region being simultaneously at extreme risk of MHW. To accurately estimate these joint probabilities (and uncertainties thereof) based on the trained XVAE, we generate 30,000 independent SST emulations for each time point and calculate the total area exceeding the designated threshold of \\(31^{\\circ}\\)C. One could easily define a different, potentially spatially-varying, thermal threshold (e.g., with fixed marginal probability of exceedances). The middle panels of Figure 8 report the results, namely the density of the total area at risk of MHW within each region. 
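The areal summary just described reduces to a simple computation once the emulated fields are on the original SST scale; the sketch below is illustrative, with the grid-cell areas, the subregion mask, and all object names assumed rather than taken from our implementation.

```r
# Distribution of the total area exceeding a thermal threshold within one subregion.
# emu      : n_cells x n_emulations matrix of emulated SST on the original scale
# cell_area: length-n_cells vector of grid-cell areas (km^2)
# region   : logical vector of length n_cells selecting the subregion
areal_exceedance <- function(emu, cell_area, region, threshold = 31) {
  exceed <- emu[region, , drop = FALSE] > threshold
  colSums(exceed * cell_area[region])        # exceeded area for each emulation
}

# Example summary of the area at risk of MHW in a (hypothetical) southern mask:
# area_samples <- areal_exceedance(emu, cell_area, region_south)
# quantile(area_samples, c(0.025, 0.5, 0.975))
```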
The curves for the south central and south regions concentrate around larger areas, while the north central and north regions show little or no exceedance area, confirming that surface temperatures decrease with increasing latitude. Additionally, the middle panels illustrate that under rising SST conditions, simultaneous exceedances of \\(31^{\\circ}\\)C across larger areas may become more likely over time for all subregions except the north, where \\(31^{\\circ}\\)C still surpasses the highest possible temperature in September 2015. In order to provide a more detailed description of the spatial extent of areas of joint threshold exceedances at different extreme levels, we then estimate the SST threshold required for an array of fixed spatial extents of exceedances. For each fixed spatial extent, we calculate the minimal threshold needed to reach that area of joint exceedances from each emulated replicate, and we then group all 30,000 estimated thresholds together to derive 95% empirical confidence intervals. We repeat this process for all spatial extents of exceedances between 100 km\\({}^{2}\\) and \\(1.4\\times 10^{5}\\) km\\({}^{2}\\). Note that this can be computed quite fast thanks to the amortized nature of our XVAE.

Figure 8: The left panels show realizations of Red Sea SST monthly maxima emulated with fitted parameters from the XVAE for 1985/9 (top) and 2015/9 (bottom) months. The emulations are censored with a threshold of 31\\({}^{\\circ}\\)C. From 30,000 such emulations, we estimate the distribution of total area exceeding 31\\({}^{\\circ}\\)C within each region. On the right, we estimate the threshold it takes to have a fixed area of exceedance. The 95% confidence intervals are also shown. The vertical dashed lines are total areas of each subregion, and the horizontal slices at 31\\({}^{\\circ}\\)C yield results that align with the middle panels.

The right panels of Figure 8 illustrate a consistent rise in SST thresholds across all extreme levels from 1985 to 2015. This trend is further confirmed in Figure 9, reporting the same results for a specific area of \\(5\\times 10^{4}\\) km\\({}^{2}\\). Slicing the confidence bands at the 31\\({}^{\\circ}\\)C threshold aligns with the middle panel results, demonstrating that we can provide sensible uncertainty quantification when evaluating the spatial extent of exceedances at any extreme level. This also shows that the joint tail of SSTs exhibits a weakening dependence structure within each subregion, with the spatial extent of exceedances decreasing as the threshold increases and extreme events becoming more localized as they get more extreme. Furthermore, as the spatial extent approaches zero, the threshold estimates represent the highest possible SST for a specific month in a subregion, a valuable metric for studying phytoplankton bloom. To directly assess the impact of climate change, we determine the threshold necessary for one specific spatial extent of exceedances (i.e., \\(5\\times 10^{4}\\) km\\({}^{2}\\)) across all September months from 1985 to 2015. Here, we also emulate 30,000 SST fields for each September month in this period. Figure 9 reveals that the fixed-area SST threshold increased steadily by \\(0.7^{\\circ}\\)C in all four subregions on average over the studied time period, corroborating the warming trend in the Red Sea and the localized nature of extremes shown in Figures 6 and 8. 
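The fixed-area threshold estimation described above can be sketched as follows: for a target exceedance area, each emulated field contributes the smallest threshold whose exceedance set covers at least that area, and the 30,000 resulting values are summarized by empirical 95% intervals. The code below assumes, for simplicity, a common grid-cell area, and the numeric values in the usage comment are illustrative only.

```r
# Minimal SST threshold yielding at least `target_area` (km^2) of joint exceedance
# within a region, computed separately for each emulated field.
fixed_area_threshold <- function(emu, region, target_area, cell_area) {
  n_cells_needed <- ceiling(target_area / cell_area)
  apply(emu[region, , drop = FALSE], 2, function(x) {
    sort(x, decreasing = TRUE)[n_cells_needed]   # n-th largest value in the region
  })
}

# Hypothetical usage for a 5 x 10^4 km^2 extent, assuming ~25 km^2 grid cells:
# thr <- fixed_area_threshold(emu, region_south, target_area = 5e4, cell_area = 25)
# quantile(thr, c(0.025, 0.975))   # 95% empirical confidence interval
```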
## 5 Concluding remarks In this paper, we propose XVAE, a new variational autoencoder, which integrates a novel max-id model for spatial extremes that exhibits flexible extremal dependence properties. It greatly advances the ability to model extremes in high-dimensional spatial problems and expands the frontier on computation and modeling of complex extremal processes. The encoder and decoder construction and the trained distributions of the latent variables allow for parameter estimation and uncertainty quantification within a variational Bayesian framework. We also provide general guidance on evaluating emulator performance when Figure 9: Similar to Figure 8, we emulate 30,000 independent SST fields for each September from 1985 to 2015. For a fixed areal exceedance of \\(5\\times 10^{4}\\) km\\({}^{2}\\), we estimate its associated required threshold along with the 95% Monte Carlo confidence intervals. applied to spatial data with dependent extremes. We note that our emulator extends beyond emulating large datasets for UQ. As highlighted in the introduction, the XVAE can serve as a surrogate model for mechanic-based computer models. It can also be applied to areas other than climate-related problems. For example, turbulent buoyant plume can be simulated from a system of compressible Euler conservative equations in flux formulation, but the computational cost is prohibitively expensive with increasing Reynolds number (Bhimireddy and Bhaganagar, 2021). Our XVAE can provide a promising avenue for efficiently emulating the chaotic and irregular turbulence observations in high resolutions. One possible model improvement is that the latent exponentially-tilted PS variables are independent over space and time, which is unrealistic for physical processes that exhibit diffusive dynamics at short-time scales. In future work, we are planning to include a time component with data-driven dynamic learning based on a stochastic dynamic spatio-temporal model. Hence, the latent variables in the encoded space will evolve smoothly over time while retaining heavy tails and thus simultaneously ensuring local extremal dependence. Also, it is possible to improve the XVAE by allowing spatially-varying radii \\(r_{k}\\), \\(k=1,\\ldots,K\\), and estimate them by optimizing the ELBO together with other parameters. Another promising direction for future work is to implement a _conditional_ VAE (CVAE; Sohn et al., 2015) with a similar underlying max-id model; in such a model, we can allow the parameters of both the encoder and decoder to change conditioning on different climate scenarios (e.g., radiative forcings, seasons, soil conditions, etc.). This will allow us to simulate new data under different conditions. We will need to ensure that the CVAE emulates \\(\\mathbf{x}_{t}\\) differently according to different input states (e.g., tuning parameters and/or forcing variables). In doing so, we will allow changes to the parameters for both the encoder and decoder conditioning on different scenarios (e.g., different climate states). ## References * Arpat and Caers (2007) Arpat, G. B. and Caers, J. (2007), 'Conditional simulation with patterns', _Mathematical Geology_**39**, 177-203. * Bhimireddy and Bhaganagar (2021) Bhimireddy, S. R. and Bhaganagar, K. (2021), 'Implementing a new formulation in WRF-LES for Buoyant Plume Simulations: bPlume-WRF-LES model', _Monthly Weather Review_**149**(7), 2299-2319. * Binois and Gramacy (2021) Binois, M. and Gramacy, R. B. 
(2021), 'hetGP: Heteroskedastic Gaussian process modeling and sequential design in R', _Journal of Statistical Software_**98**(13), 1-44. * Bopp et al. (2021) Bopp, G. P., Shaby, B. A. and Huser, R. (2021), 'A hierarchical max-infinitely divisible spatial model for extreme precipitation', _Journal of the American Statistical Association_**116**(533), 93-106. * Cartwright et al. (2023) Cartwright, L., Zammit-Mangion, A. and Deutscher, N. M. (2023), 'Emulation of greenhouse-gas sensitivities using variational autoencoders', _Environmetrics_**34**(2). * Cotsakis et al. (2022) Cotsakis, R., Di Bernardino, E. and Opitz, T. (2022), 'On the perimeter estimation of pixelated excursion sets of 2D anisotropic random fields', _Hal Science preprint: hal-03582844v2_. * Davison and Huser (2015) Davison, A. C. and Huser, R. (2015), 'Statistics of extremes', _Annual Review of Statistics and its Application_**2**, 203-235. * Davison et al. (2019) Davison, A. C., Huser, R. and Thibaud, E. (2019), Spatial extremes, _in_ 'Handbook of Environmental and Ecological Statistics', editors A. E. Gelfand, M. Fuentes, J. A. Hoeting and R. L. Smith, CRC Press, pp. 711-744. * Davison et al. (2012) Davison, A. C., Padoan, S. A. and Ribatet, M. (2012), 'Statistical Modeling of Spatial Extremes', _Statistical Science_**27**(2), 161-186. * de Fondeville and Davison (2018) de Fondeville, R. and Davison, A. C. (2018), 'High-dimensional peaks-over-threshold inference', _Biometrika_**105**(3), 575-592. * Donlon et al. (2012) Donlon, C. J., Martin, M., Stark, J., Roberts-Jones, J., Fiedler, E. and Wimmer, W. (2012), 'The operational sea surface temperature and sea ice analysis (OSTIA) system', _Remote Sensing of Environment_**116**, 140-158. * Falbel and Luraschi (2023) Falbel, D. and Luraschi, J. (2023), _torch: Tensors and Neural Networks with 'GPU' Acceleration_. **URL:**[https://github.com/mlverse/torch](https://github.com/mlverse/torch). * Ferreira and de Haan (2014) Ferreira, A. and de Haan, L. (2014), 'The generalized Pareto process; with a view towards application and simulation', _Bernoulli_**20**(4), 1717-1737. * Furby et al. (2013) Furby, K. A., Bouwmeester, J. and Berumen, M. L. (2013), 'Susceptibility of central Red Sea corals during a major bleaching event', _Coral Reefs_**32**, 505-513. * Genevier et al. (2019) Genevier, L. G., Jamil, T., Raitsos, D. E., Krokos, G. and Hoteit, I. (2019), 'Marine heatwaves reveal coral reef zones susceptible to bleaching in the Red Sea', _Global Change Biology_**25**(7), 2338-2351. * Gneiting and Raftery (2007) Gneiting, T. and Raftery, A. E. (2007), 'Strictly proper scoring rules, prediction, and estimation', _Journal of the American statistical Association_**102**(477), 359-378. * Goodfellow et al. (2014) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A. and Bengio, Y. (2014), 'Generative adversarial nets', _Advances in Neural Information Processing Systems_**27**. * Gramacy (2020) Gramacy, R. B. (2020), _Surrogates: Gaussian Process Modeling, Design, and Optimization for the Applied Sciences_, Chapman and Hall/CRC. * Gu et al. (2018) Gu, M., Wang, X. and Berger, J. O. (2018), 'Robust Gaussian stochastic process emulation', _The Annals of Statistics_**46**(6A), 3038-3066. * Hartigan and Wong (1979) Hartigan, J. A. and Wong, M. A. (1979), 'Algorithm AS 136: A k-means clustering algorithm', _Journal of the Royal Statistical Society: Series C_**28**(1), 100-108. * Hazra and Huser (2021) Hazra, A. and Huser, R. 
(2021), 'Estimating high-resolution Red Sea surface temperature hotspots, using a low-rank semiparametric spatial model', _The Annals of Applied Statistics_**15**(2), 572-596. * Hougaard (1986) Hougaard, P. (1986), 'Survival models for heterogeneous populations derived from stable distributions', _Biometrika_**73**(2), 387-396. * Hughes et al. (2017) Hughes, T. P., Kerry, J. T., Alvarez-Noriega, M., Alvarez-Romero, J. G., Anderson, K. D., Baird, A. H., Babcock, R. C., Beger, M., Bellwood, D. R., Berkelmans, R. et al. (2017),'Global warming and recurrent mass bleaching of corals', _Nature_**543**(7645), 373-377. * Huser (2021) Huser, R. (2021), 'EVA 2019 data competition on spatio-temporal prediction of Red Sea surface temperature extremes', _Extremes_**24**, 91-104. * Huser et al. (2017) Huser, R., Opitz, T. and Thibaud, E. (2017), 'Bridging asymptotic independence and dependence in spatial extremes using Gaussian scale mixtures', _Spatial Statistics_**21**, 166-186. * Huser et al. (2021) Huser, R., Opitz, T. and Thibaud, E. (2021), 'Max-infinitely divisible models and inference for spatial extremes', _Scandinavian Journal of Statistics_**48**(1), 321-348. * Huser et al. (2024) Huser, R., Opitz, T. and Wadsworth, J. L. (2024), 'Modeling of spatial extremes in environmental data science: Time to move away from max-stable processes', _arXiv preprint arXiv:2401.17430_. * Huser et al. (2023) Huser, R., Stein, M. L. and Zhong, P. (2023), 'Vecchia likelihood approximation for accurate and fast inference with intractable spatial max-stable models', _Journal of Computational and Graphical Statistics_. To appear. * Huser and Wadsworth (2019) Huser, R. and Wadsworth, J. L. (2019), 'Modeling spatial processes with unknown extremal dependence class', _Journal of the American Statistical Association_**114**(525), 434-444. * Huser and Wadsworth (2022) Huser, R. and Wadsworth, J. L. (2022), 'Advances in statistical modeling of spatial extremes', _Wiley Interdisciplinary Reviews: Computational Statistics_**14**(1), e1537. * Iuliano and Quagliarella (2013) Iuliano, E. and Quagliarella, D. (2013), 'Proper orthogonal decomposition, surrogate modelling and evolutionary optimization in aerodynamic design', _Computers & Fluids_**84**, 327-350. * Keydana (2023) Keydana, S. (2023), _Deep Learning and Scientific Computing with R torch_, CRC Press. * Kingma and Welling (2013) Kingma, D. P. and Welling, M. (2013), 'Auto-encoding variational Bayes', _arXiv preprint arXiv:1312.6114_. * Kingma et al. (2019) Kingma, D. P., Welling, M. et al. (2019), 'An introduction to variational autoencoders', _Foundations and Trends(r) in Machine Learning_**12**(4), 307-392. * Krupskii and Huser (2022) Krupskii, P. and Huser, R. (2022), 'Modeling spatial tail dependence with cauchy convolution processes', _Electronic Journal of Statistics_**16**(2), 6135-6174. * Ledford and Tawn (1996) Ledford, A. W. and Tawn, J. A. (1996), 'Statistics for near independence in multivariate extreme values', _Biometrika_**83**(1), 169-187. * Matheson and Winkler (1976) Matheson, J. E. and Winkler, R. L. (1976), 'Scoring rules for continuous probability distributions', _Management Science_**22**(10), 1087-1096. * Nolan (2020) Nolan, J. P. (2020), 'Univariate stable distributions', _Springer Series in Operations Research and Financial Engineering, DOI_**10**, 978-3. * Oesting and Huser (2022) Oesting, M. and Huser, R. (2022), 'Patterns in spatio-temporal extremes', _arXiv preprint arXiv:2212.11001_. * Opitz (2016) Opitz, T. 
(2016), 'Modeling asymptotically independent spatial extremes based on Laplace random fields', _Spatial Statistics_**16**, 1-18. * Padoan (2013) Padoan, S. A. (2013), 'Extreme dependence models based on event magnitude', _Journal of Multivariate Analysis_**122**, 1-19. * Padoan (2014)Polyak, B. T. (1964), 'Some methods of speeding up the convergence of iteration methods', _USSR Computational Mathematics and Mathematical Physics_**4**(5), 1-17. * Raitsos et al. (2013) Raitsos, D. E., Pradhan, Y., Brewin, R. J., Stenchikov, G. and Hoteit, I. (2013), 'Remote sensing the phytoplankton seasonal succession of the Red Sea', _PLoS One_**8**(6), e64909. * Reich and Shaby (2012) Reich, B. J. and Shaby, B. A. (2012), 'A hierarchical max-stable spatial model for extreme precipitation', _The Annals of Applied Statistics_**6**(4), 1430-1451. * Resnick (2008) Resnick, S. I. (2008), _Extreme Values, Regular Variation, and Point Processes_, Vol. 4, Springer Science & Business Media. * Richards et al. (2023) Richards, J., Sainsbury-Dale, M., Zammit-Mangion, A. and Huser, R. (2023), 'Likelihood-free neural Bayes estimators for censored peaks-over-threshold models', _arXiv preprint arXiv:2306.15642_. * Ruymgaart (1974) Ruymgaart, F. H. (1974), 'Asymptotic normality of nonparametric tests for independence', _The Annals of Statistics_ pp. 892-910. * Ruymgaart and van Zuijlen (1978) Ruymgaart, F. H. and van Zuijlen, M. (1978), 'Asymptotic normality of multivariate linear rank statistics in the non-iid case', _The Annals of Statistics_**6**(3), 588-602. * Sainsbury-Dale et al. (2024) Sainsbury-Dale, M., Zammit-Mangion, A. and Huser, R. (2024), 'Likelihood-free parameter estimation with neural Bayes estimators', _The American Statistician_**78**, 1-14. * Sargsyan (2017) Sargsyan, K. (2017), Surrogate models for uncertainty propagation and sensitivity analysis, _in_ 'Handbook of uncertainty quantification', Springer, pp. 673-698. * Sen and Puri (1967) Sen, P. K. and Puri, M. L. (1967), 'On the theory of rank order tests for location in the multivariate one sample problem', _The Annals of Mathematical Statistics_**38**(4), 1216-1228. * Sharma et al. (2021) Sharma, S., Gomez, M., Keller, K., Nicholas, R. E. and Mejia, A. (2021), 'Regional flood risk projections under climate change', _Journal of Hydrometeorology_**22**(9), 2259-2274. * Simpson et al. (2023) Simpson, E. S., Opitz, T. and Wadsworth, J. L. (2023), 'High-dimensional modeling of spatial and spatio-temporal conditional extremes using INLA and Gaussian Markov random fields', _Extremes_ pp. 1-45. * Simpson and Wadsworth (2021) Simpson, E. S. and Wadsworth, J. L. (2021), 'Conditional modelling of spatio-temporal extremes for Red Sea surface temperatures', _Spatial Statistics_**41**, 100482. * Sohn et al. (2015) Sohn, K., Lee, H. and Yan, X. (2015), 'Learning structured output representation using deep conditional generative models', _Advances in Neural Information Processing Systems_**28**. * Thibaud and Opitz (2015) Thibaud, E. and Opitz, T. (2015), 'Efficient inference and simulation for elliptical Pareto processes', _Biometrika_**102**(4), 855-870. * Wadsworth and Tawn (2022) Wadsworth, J. L. and Tawn, J. (2022), 'Higher-dimensional spatial extremes via single-site conditioning', _Spatial Statistics_**51**, 100677. * Wendland (1995) Wendland, H. (1995), 'Piecewise polynomial, positive definite and compactly supported radial functions of minimal degree', _Advances in Computational Mathematics_**4**, 389. * Zammit-Mangion et al. 
(2024) Zammit-Mangion, A., Sainsbury-Dale, M. and Huser, R. (2024), 'Neural methods for amortised parameter inference', _arXiv preprint arXiv:2404.12484_. * Zammit-Mangion et al. (2020)Zhang, L., Risser, M. D., Molter, E. M., Wehner, M. F. and O'Brien, T. A. (2023), 'Accounting for the spatial structure of weather systems in detected changes in precipitation extremes', _Weather and Climate Extremes_**38**, 100499. * Zhang et al. (2022) Zhang, L., Shaby, B. A. and Wadsworth, J. L. (2022), 'Hierarchical transformed scale mixtures for flexible modeling of spatial extremes on datasets with many locations', _Journal of the American Statistical Association_**117**(539), 1357-1369. * Zhong et al. (2022) Zhong, P., Huser, R. and Opitz, T. (2022), 'Modeling nonstationary temperature maxima based on extremal dependence changing with event magnitude', _Annals of Applied Statistics_**16**, 272-299. Comparison to existing spatial extremes models Max-stable (Davison et al., 2012, 2019) or generalized Pareto processes (Ferreira and de Haan, 2014; Thibaud and Opitz, 2015) have the property that \\(\\chi_{ij}\\) is always positive (Huser and Wadsworth, 2022). Conversely, Gaussian processes (or multivariate Gaussian distributions) have the property that \\(\\chi_{ij}(u)\\) always converges to 0 as \\(u\\to 1\\), unless \\(X_{i}\\) and \\(X_{j}\\) are perfectly dependent. By contrast, tail dependence in observed environmental processes often seems to decay as events get more extreme and rare events often tend to be more spatially localized as the intensity increases (Huser and Wadsworth, 2019). This was observed in numerous studies, including Dutch wind gust maxima (Huser et al., 2021), threshold exceedances of the daily Fosberg fire index (Zhang et al., 2022), and winter maximum precipitation data over the Midwest of the U.S. (Zhang et al., 2023), just to name a few examples. The stability property of max-stable and generalized Pareto models is thus often a physically inappropriate restriction in the joint tail. However, a weakening \\(\\chi_{ij}(u)\\) as \\(u\\) increases does not necessarily lead to AI. As we extrapolate into the joint tail beyond the observed data, mis-classifying the tail dependence regime inevitably leads to inaccurate risk assessments. Therefore, we seek models that exhibit much more flexible tail characteristics and do not assume an extremal a dependence class _a priori_. Bopp et al. (2021) proposed a max-id model that exhibits AI (\\(\\gamma_{k}\\equiv\\gamma>0\\)) or AD (\\(\\gamma_{k}\\equiv 0\\)) for all pairs of locations, depending on the tail dependence strength in the data. This is an improvement over Reich and Shaby (2012) in which the tilting parameters are set to \\(\\gamma_{k}\\equiv 0\\), \\(k=1,\\ldots,K\\). In both cases, they used the radial basis functions which are not compactly supported and the tilting parameters are fixed as a single constant at all knots. As a result, these processes only exhibit one dependence class for all pairs of locations and induce a form of long-range dependence; that is, they cannot capture a change of asymptotic dependence class as a function of distance. In addition, both Reich and Shaby (2012) and Bopp et al. (2021) set \\(\\alpha_{0}=\\alpha\\), and their noise process \\(\\{\\epsilon(\\mathbf{s})\\}\\) has independent \\((1/\\alpha)\\)-Frechet marginals, i.e., \\(\\text{Frechet}(0,1,1/\\alpha)\\). 
While \\(\\alpha\\) also determines the dependence properties of the process through the exponentially-tilted PS variables in \\(\\{Y(\\mathbf{s})\\}\\), the \\(\\text{Frechet}(0,1,1/\\alpha)\\) variables usually end up being too noisy compared to the \\(\\{Y(\\mathbf{s})\\}\\) process. In our modification, we decouple the noise variance and the tail heaviness of latent variables. Specifically, the \\(\\text{Frechet}(0,\\tau,1/\\alpha_{0})\\) distribution concentrates around \\(\\tau\\) when the shape \\(1/\\alpha_{0}>1\\) and scale \\(\\tau>1\\), so that \\(\\epsilon(\\mathbf{s})\\) truly acts as a scaling factor that accounts for measurement errors for each time \\(t\\). Moreover, the spatially-varying tilting parameters introduce tail dependence for two close-by locations, when they are covered by the same basis function. The local dependence strength is proportional to the tail-heaviness of the latent variable at the closest knot. As we show in Section 1.3, there is local AD if \\(\\gamma_{k}=0\\), and there is local AI if \\(\\gamma_{k}>0\\), as expected. In addition, the compactness of the basis function support automatically introduces long-range exact independence (thus, also asymptotic independence) for two far-apart stations that are impacted by disjoint sets of basis functions.Overall, the dependence structure of our model is thus both non-stationaryand highly flexible. More importantly, the combination of the nonstationary max-id model with the VAE technique allows us to fit complicated spatial extremes process in exceptionally high dimensions. Beyond certain specific isolated exceptions, it is currently not possible to fit existing max-stable, inverted-max-stable, and other spatial extremes models using a full likelihood or Bayesian approach on a dataset of more than approximately \\(1,000\\) locations. Recent successes include Huser et al. (2023), who used the Vecchia approximation to make efficient inference on a 1000-dimensional max-stable process, Simpson et al. (2023) who used INLA to fit the spatial conditional extremes model in high dimensions, and Sainsbury-Dale et al. (2024) and Richards et al. (2023), who performed likelihood-free inference with neural Bayes estimators to fit various spatial extremes models on big data sets. However, Sainsbury-Dale et al. (2024) illustrated their method with a stationary process observed at about \\(700\\) locations, while Richards et al. (2023) fitted local stationary models to data sets of size up to about \\(1,000\\). By contrast, our approach considers fitting and simulating a globally non-stationary spatial extremes process, with parameters evolving over time, to a data set of unprecedented spatial dimension of more than \\(16,000\\) locations. ## Appendix B Technical details ### Properties of exponentially-tilted positive-stable variables Before we proceed to prove Proposition 1.1, we first recall some useful results in Hougaard (1986) about positive-stable (PS) distributions and their exponentially-tilted variation. If \\(Z\\sim\\exp\\)PS\\((\\alpha,0)\\), we denote the density function by \\(f_{\\alpha}(z)\\), \\(z>0\\). 
Then for \\(\\alpha\\in(0,1]\\), it has Laplace transform \\[L(s)=\\mathbb{E}e^{-sZ}=\\exp(-s^{\\alpha}),\\;s\\geq 0.\\] For an exponentially-tilted variable \\(Z\\sim\\exp\\)PS\\((\\alpha,\\gamma)\\), the Laplace transform becomes \\[L(s)=\\mathbb{E}e^{-sZ}=\\exp\\left[-\\{(\\gamma+s)^{\\alpha}-\\gamma^{\\alpha}\\} \\right],\\;s\\geq 0,\\;\\gamma\\geq 0\\] (B.1) and its density is \\[h(x;\\alpha,\\gamma)=\\frac{f_{\\alpha}(x)\\exp(-\\gamma x)}{\\exp(-\\gamma^{\\alpha}) },\\;x>0.\\] **Lemma B.1**.: _If \\(Z\\sim\\exp\\)PS\\((\\alpha,0)\\) and \\(\\alpha\\in(0,1)\\), then \\(Z\\sim\\text{Stable}\\left\\{\\alpha,1,\\cos^{1/\\alpha}(\\pi\\alpha/2),0\\right\\}\\) in the 1-parameterization (Nolan, 2020)._ Proof.: From Proposition 3.2 of Nolan (2020), we know that the Laplace transform of \\(Z\\sim\\text{Stable}(\\alpha,1,\\xi,0;1)\\), \\(\\alpha\\in(0,2]\\), is \\[\\mathbb{E}e^{-sZ}=\\begin{cases}\\exp\\{-\\xi^{\\alpha}(\\sec\\tfrac{\\pi\\alpha}{2}) s^{\\alpha}\\},&\\alpha\\in(0,1)\\cup(1,2],\\\\ \\exp\\{-\\xi\\tfrac{2}{\\pi}s\\log s\\},&\\alpha=1.\\end{cases}\\]When \\(\\xi=|\\cos\\frac{\\pi\\alpha}{2}|^{1/\\alpha}\\), the Laplace transform becomes \\[\\mathbb{E}e^{-sZ}=\\begin{cases}\\exp(-s^{\\alpha}),&\\alpha\\in(0,1),\\\\ \\exp(s^{\\alpha}),&\\alpha\\in(1,2].\\end{cases}\\] That is, \\(Z\\sim\\exp\\)PS\\((\\alpha,0)\\) when \\(\\alpha\\in(0,1)\\). **Remark 3**.: _If \\(\\alpha=1/2\\), then \\(|\\cos\\frac{\\pi\\alpha}{2}|^{1/\\alpha}=1/2\\) and \\(Z\\sim\\text{Stable}(1/2,1,1/2,0;1)\\), which is equivalent to \\(Z\\sim\\text{Levy}(0,1/2)\\) or \\(Z\\sim\\text{InvGamma}(1/2,1/4)\\)._ **Remark 4**.: _To facilitate the computation of the prior in Eq. (6), we follow the Monte Carlo integration steps in Section 4 of the Supplementary Material of Bopp et al. 
(2021) to calculate the density \\(h(\\cdot;\\alpha,\\gamma)\\)._ ### Proof of Proposition 1.1 Proof of Proposition 1.1.: Since at the location \\(\\mathbf{s}_{j}\\), \\[\\Pr(X(\\mathbf{s}_{j})\\leq x) =\\mathbb{E}\\left\\{\\Pr\\left(\\epsilon(\\mathbf{s}_{j})\\leq\\frac{x}{Y(\\bm {s}_{j})}\\middle|Z_{1},\\ldots,Z_{K}\\right)\\right\\}=\\mathbb{E}\\left[\\exp\\left\\{- \\left(\\frac{\\tau Y(\\mathbf{s}_{j})}{x}\\right)^{\\frac{1}{\\alpha_{0}}}\\right\\} \\middle|Z_{1},\\ldots,Z_{K}\\right]\\] \\[=\\mathbb{E}\\exp\\left\\{-\\left(\\frac{\\tau}{x}\\right)^{\\frac{1}{ \\alpha_{0}}}\\sum_{k=1}^{K}\\omega_{k}(\\mathbf{s}_{j},r_{k})^{\\frac{1}{\\alpha}}Z_{k} \\right\\}=\\exp\\left[\\sum_{k\\in\\mathcal{D}}\\gamma_{k}^{\\alpha}-\\sum_{k=1}^{K} \\left\\{\\gamma_{k}+\\left(\\frac{\\tau}{x}\\right)^{\\frac{1}{\\alpha_{0}}}\\omega_{ kj}^{\\frac{1}{\\alpha}}\\right\\}^{\\alpha}\\right].\\] To study the tail decay of the survival function, we apply Taylor's expansion with the Peano remainder: \\[(1+t)^{\\alpha}=1+\\alpha t+\\frac{\\alpha(\\alpha-1)}{2}t^{2}+o(t^{2}),\\text{ as }t\\to 0.\\] (B.2) Then, as \\(x\\to\\infty\\), we have \\[\\sum_{k\\in\\bar{\\mathcal{D}}}\\left\\{\\gamma_{k}+\\left(\\frac{\\tau}{x }\\right)^{\\frac{1}{\\alpha_{0}}}\\omega_{kj}^{\\frac{1}{\\alpha}}\\right\\}^{\\alpha }=\\sum_{k\\in\\bar{\\mathcal{D}}}\\gamma_{k}^{\\alpha}\\left\\{1+\\left(\\frac{\\tau}{x }\\right)^{\\frac{1}{\\alpha_{0}}}\\frac{\\omega_{kj}^{1/\\alpha}}{\\gamma_{k}} \\right\\}^{\\alpha}\\] \\[=\\sum_{k\\in\\bar{\\mathcal{D}}}\\gamma_{k}^{\\alpha}+\\alpha\\left( \\frac{\\tau}{x}\\right)^{\\frac{1}{\\alpha_{0}}}\\sum_{k\\in\\bar{\\mathcal{D}}}\\frac {\\omega_{kj}^{1/\\alpha}}{\\gamma_{k}^{1-\\alpha}}+\\frac{\\alpha(\\alpha-1)}{2} \\left(\\frac{\\tau}{x}\\right)^{\\frac{2}{\\alpha_{0}}}\\sum_{k\\in\\bar{\\mathcal{D}}} \\frac{\\omega_{kj}^{2/\\alpha}}{\\gamma_{k}^{2-\\alpha}}+o\\left(x^{-\\frac{2}{ \\alpha_{0}}}\\right),\\] which leads to \\[\\sum_{k\\in\\bar{\\mathcal{D}}}\\gamma_{k}^{\\alpha}-\\sum_{k=1}^{K} \\left\\{\\gamma_{k}+\\left(\\frac{\\tau}{x}\\right)^{\\frac{1}{\\alpha_{0}}}\\omega_{ kj}^{\\frac{1}{\\alpha}}\\right\\}^{\\alpha}=-\\left(\\frac{\\tau}{x}\\right)^{\\frac{ \\alpha}{\\alpha_{0}}}\\sum_ where the constants \\(c^{\\prime}_{j}\\), \\(c_{j}\\) and \\(d_{j}\\) are defined in Proposition 1.1. Next we apply the following Taylor expansion: \\[1-\\exp(-t)=t-\\frac{t^{2}}{2}+o(t^{2}),\\text{ as }t\\to 0.\\] (B.4) Combining (10) and (B.3) gives \\[\\bar{F}_{j}(x)=c^{\\prime}_{j}x^{-\\frac{\\alpha}{\\alpha_{0}}}+c_{j}x^{-\\frac{1} {\\alpha_{0}}}+d_{j}x^{-\\frac{2}{\\alpha_{0}}}-\\frac{(c^{\\prime}_{j}x^{-\\frac{ \\alpha}{\\alpha_{0}}}+c_{j}x^{-\\frac{1}{\\alpha_{0}}}+d_{j}x^{-\\frac{2}{\\alpha_{0 }}})^{2}}{2}+o\\left(x^{-\\frac{2}{\\alpha_{0}}}\\right),\\] from which we can expand the squared term and discard the terms with higher decaying rates than \\(o(x^{-2/\\alpha_{0}})\\) to establish (11). Proof of Corollary 1.1.1.: By definition, \\(t^{-1}=\\bar{F}_{j}\\{q_{j}(t)\\}\\). 
When \\(\\mathcal{C}_{j}\\cap\\mathcal{D}\ eq\\emptyset\\), (11) leads to \\[\\begin{split} t^{-1}=c^{\\prime}_{j}q_{j}^{-\\frac{\\alpha}{\\alpha_ {0}}}(t)\\left[1+\\frac{c_{j}}{c^{\\prime}_{j}}q_{j}^{-\\frac{1-\\alpha}{\\alpha_{0} }}(t)+\\frac{1}{c^{\\prime}_{j}}\\left(d_{j}-\\frac{c^{2}_{j}}{2}\\right)q_{j}^{- \\frac{2-\\alpha}{\\alpha_{0}}}(t)-\\right.\\\\ \\left.\\frac{c^{\\prime}_{j}}{2}q_{j}^{-\\frac{\\alpha}{\\alpha_{0}}}( t)-c_{j}q_{j}^{-\\frac{1}{\\alpha_{0}}}(t)+o\\left\\{q_{j}^{-\\frac{2-\\alpha}{ \\alpha_{0}}}(t)\\right\\}\\right]\\text{ as }t\\to\\infty.\\end{split}\\] (B.5) Since \\(q_{j}(t)\\to\\infty\\) as \\(t\\to\\infty\\), the term in the square bracket of the previous display can simply be approximated by \\(1+o(1)\\). Thus, we have \\[q_{j}(t)=c^{\\prime}_{j}{}^{\\frac{\\alpha_{0}}{\\alpha}}t^{\\frac{\\alpha_{0}}{ \\alpha}}\\{1+o(1)\\}.\\] (B.6) Since \\(\\alpha\\in(0,1)\\), we can also re-organize (B.5) to obtain \\[\\begin{split} q_{j}(t)-c^{\\prime}_{j}{}^{\\frac{\\alpha_{0}}{ \\alpha}}t^{\\frac{\\alpha_{0}}{\\alpha}}&=q_{j}(t)\\left(1-\\left[1+ \\frac{c_{j}}{c^{\\prime}_{j}}q_{j}^{-\\frac{1-\\alpha}{\\alpha_{0}}}(t)-\\frac{c^{ \\prime}_{j}}{2}q_{j}^{-\\frac{\\alpha}{\\alpha_{0}}}(t)+O\\left\\{q_{j}^{-\\frac{1} {\\alpha_{0}}}(t)\\right\\}\\right]^{-\\frac{\\alpha_{0}}{\\alpha}}\\right)\\\\ &=q_{j}(t)\\left[\\frac{\\alpha_{0}c_{j}}{\\alpha c^{\\prime}_{j}}q_{j} ^{-\\frac{1-\\alpha}{\\alpha_{0}}}(t)-\\frac{\\alpha_{0}c^{\\prime}_{j}}{2\\alpha}q_{ j}^{-\\frac{\\alpha}{\\alpha_{0}}}(t)+O\\left\\{q_{j}^{-\\frac{1}{\\alpha_{0}}}(t) \\right\\}\\right].\\end{split}\\] (B.7) On the last line, we applied the Taylor expansion in (B.2) again. Then we combine (B.6) and (B.7) to get \\[\\begin{split} q_{j}(t)-c^{\\prime}_{j}{}^{\\frac{\\alpha_{0}}{ \\alpha}}t^{\\frac{\\alpha_{0}}{\\alpha}}&=c^{\\prime}_{j}{}^{\\frac{ \\alpha_{0}}{\\alpha}}t^{\\frac{\\alpha_{0}}{\\alpha}}\\left\\{1+o(1)\\right\\}\\left\\{ \\frac{\\alpha_{0}c_{j}}{\\alpha c^{\\prime}_{j}{}^{1/\\alpha}}t^{1-\\frac{1}{\\alpha }}-\\frac{\\alpha_{0}}{2\\alpha}t^{-1}+O\\left(t^{-\\frac{1}{\\alpha}}\\right)\\right\\} \\\\ &=c^{\\prime}_{j}{}^{\\frac{\\alpha_{0}}{\\alpha}}t^{\\frac{\\alpha_{0}} {\\alpha}}\\left\\{\\frac{\\alpha_{0}c_{j}}{\\alpha c^{\\prime}_{j}{}^{1/\\alpha}}t^ {1-\\frac{1}{\\alpha}}-\\frac{\\alpha_{0}}{2\\alpha}t^{-1}+O\\left(t^{-\\frac{1}{ \\alpha}}\\right)\\right\\},\\end{split}\\] which concludes the proof for the first case. 
Similarly, when \\(\\mathcal{C}_{j}\\cap\\mathcal{D}=\\emptyset\\), we have \\[c_{j}^{\\alpha_{0}}t^{\\alpha_{0}}=q_{j}(t)\\left[1+\\left(\\frac{d_{j}}{c_{j}}-\\frac {c_{j}}{2}\\right)q_{j}^{-\\frac{1}{\\alpha_{0}}}(t)+o\\left\\{q_{j}^{-\\frac{1}{ \\alpha_{0}}}(t)\\right\\}\\right]^{-\\alpha_{0}}\\text{ as }t\\to\\infty,\\] which ensures \\(q_{j}(t)=c_{j}^{\\alpha_{0}}t^{\\alpha_{0}}\\{1+o(1)\\}\\), and \\[q_{j}(t)-c_{j}^{\\alpha_{0}}t^{\\alpha_{0}} =q_{j}(t)\\left(1-\\left[1+\\left(\\frac{d_{j}}{c_{j}}-\\frac{c_{j}}{2} \\right)q_{j}^{-\\frac{1}{\\alpha_{0}}}(t)+o\\left\\{q_{j}^{-\\frac{1}{\\alpha_{0}}} (t)\\right\\}\\right]^{-\\alpha_{0}}\\right)\\] \\[=c_{j}^{\\alpha_{0}}t^{\\alpha_{0}}\\{1+o(1)\\}\\left[\\alpha_{0}\\left( \\frac{d_{j}}{c_{j}}-\\frac{c_{j}}{2}\\right)q_{j}^{-\\frac{1}{\\alpha_{0}}}(t)+o \\left\\{q_{j}^{-\\frac{1}{\\alpha_{0}}}(t)\\right\\}\\right]\\] \\[=c_{j}^{\\alpha_{0}}t^{\\alpha_{0}}\\left\\{\\alpha_{0}\\left(\\frac{d_ {j}}{c_{j}^{2}}-\\frac{1}{2}\\right)t^{-1}+o(t^{-1})\\right\\}.\\] ### Proof of Proposition 1.2 Proof of Proposition 1.2.: The joint distribution for the discretization of \\(\\{X(\\mathbf{s})\\}\\) is \\[F(x_{1},\\ldots,x_{n}) =\\Pr(X(\\mathbf{s}_{1})\\leq x_{1},\\ldots,X(\\mathbf{s}_{n_{s}})\\leq x_{n})\\] \\[=\\mathbb{E}\\left\\{\\Pr\\left(\\left.\\epsilon(\\mathbf{s}_{1})\\leq\\frac{x_ {1}}{Y(\\mathbf{s}_{1})},\\ldots,\\epsilon(\\mathbf{s}_{n_{s}})\\leq\\frac{x_{n}}{Y(\\mathbf{s}_{ n_{s}})}\\right|Z_{1},\\ldots,Z_{K}\\right)\\right\\}\\] \\[=\\mathbb{E}\\left[\\prod_{j=1}^{n_{s}}\\exp\\left\\{-\\left(\\left. \\frac{\\tau Y(\\mathbf{s}_{j})}{x_{j}}\\right)^{\\frac{1}{\\alpha_{0}}}\\right\\}\\right|Z _{1},\\ldots,Z_{K}\\right]\\] \\[=\\prod_{k=1}^{K}\\mathbb{E}\\exp\\left\\{-\\sum_{j=1}^{n_{s}}\\omega_{ kj}^{\\frac{1}{\\alpha}}\\left(\\frac{\\tau}{x_{j}}\\right)^{\\frac{1}{\\alpha_{0}}}Z_{k} \\right\\}=\\exp\\left[\\sum_{k\\in\\mathcal{D}}\\gamma_{k}^{\\alpha}-\\sum_{k=1}^{K} \\left\\{\\gamma_{k}+\\tau^{\\frac{1}{\\alpha_{0}}}\\sum_{j=1}^{n_{s}}\\frac{\\omega_{ kj}^{1/\\alpha}}{x_{j}^{1/\\alpha_{0}}}\\right\\}^{\\alpha}\\right],\\] in which we utilized the Laplace transform of the exponentially-tilted PS variables displayed in Eq. (B.1). ### Proof of Theorem 1.3 Proof of Theorem 1.3.: By definitions of the tail dependence measures \\(\\chi_{ij}\\) and \\(\\eta_{ij}\\), \\[\\begin{split}\\chi_{ij}&=\\lim_{u\\to 1}\\frac{ \\Pr\\{X(\\mathbf{s}_{i})>F_{i}^{-1}(u),X(\\mathbf{s}_{j})>F_{j}^{-1}(u)\\}}{1-u}\\\\ &=\\lim_{t\\to\\infty}t\\Pr\\{X(\\mathbf{s}_{i})>q_{i}(t),X(\\mathbf{s}_{j})>q_{ j}(t)\\}\\\\ &=\\lim_{t\\to\\infty}t\\left[1-2\\left(1-\\frac{1}{t}\\right)+\\Pr\\{X( \\mathbf{s}_{i})\\leq q_{i}(t),X(\\mathbf{s}_{j})\\leq q_{j}(t))\\}\\right]\\\\ &=\\lim_{t\\to\\infty}2-t\\left[1-F_{ij}\\{q_{i}(t),q_{j}(t)\\}\\right], \\end{split}\\] (B.8) and \\[\\Pr\\{X(\\mathbf{s}_{i})>q_{i}(t),X(\\mathbf{s}_{j})>q_{j}(t)\\}=\\mathcal{L}(t)t^{-1/\\eta _{ij}},\\;t\\to\\infty.\\] Further, \\[\\lim_{t\\to\\infty}\\frac{\\log\\Pr\\{X(\\mathbf{s}_{i})>q_{i}(t),X(\\mathbf{s}_{j})>q_{j}(t) \\}}{\\log t}=-\\frac{1}{\\eta_{ij}},\\] (B.9) provided that \\[\\lim_{t\\to\\infty}\\frac{\\log\\mathcal{L}(t)}{\\log t}=0\\] for the slowly varying function \\(\\mathcal{L}\\). This can be easily shown using the Karamata Representation theorem (Resnick, 2008). 1. 
If \\(\\mathcal{C}_{i}\\cap\\mathcal{D}=\\emptyset\\) and \\(\\mathcal{C}_{j}\\cap\\mathcal{D}=\\emptyset\\), we know from Corollary 1.1.1 that \\(q_{i}(t)=c_{i}^{\\alpha_{0}}t^{\\alpha_{0}}\\{1+R_{i}(t)+o(t^{-1})\\}\\) and \\(q_{j}(t)=c_{j}^{\\alpha_{0}}t^{\\alpha_{0}}\\{1+R_{j}(t)+o(t^{-1})\\}\\), in which \\(R_{i}(t)=\\alpha_{0}(d_{i}/c_{i}^{2}-1/2)t^{-1}\\) and \\(R_{j}(t)=\\alpha_{0}(d_{j}/c_{j}^{2}-1/2)t^{-1}\\). Similarly to Proposition 1.1, we first deduce \\[\\log F_{ij}\\{q_{i}(t),q_{j}(t)\\}=\\sum_{k\\in\\bar{\\mathcal{D}}} \\gamma_{k}^{\\alpha}-\\sum_{k=1}^{K}\\left[\\gamma_{k}+\\frac{\\tau^{1/\\alpha_{0}} \\omega_{ki}^{1/\\alpha}}{c_{i}t\\{1+R_{i}(t)+o(t^{-1})\\}}+\\frac{\\tau^{1/\\alpha_ {0}}\\omega_{kj}^{1/\\alpha}}{c_{j}t\\{1+R_{j}(t)+o(t^{-1})\\}}\\right]^{\\alpha}\\] \\[\\qquad=\\sum_{k\\in\\bar{\\mathcal{D}}}\\gamma_{k}^{\\alpha}-\\sum_{k\\in \\bar{\\mathcal{D}}}\\left[\\gamma_{k}+\\frac{\\tau^{1/\\alpha_{0}}\\omega_{ki}^{1/ \\alpha}}{c_{i}t}\\{1-R_{i}(t)\\}+\\frac{\\tau^{1/\\alpha_{0}}\\omega_{kj}^{1/\\alpha }}{c_{j}t}\\{1-R_{j}(t)\\}+o\\left(\\frac{1}{t^{2}}\\right)\\right]^{\\alpha}\\] \\[\\qquad=\\sum_{k\\in\\bar{\\mathcal{D}}}\\gamma_{k}^{\\alpha}-\\sum_{k\\in \\bar{\\mathcal{D}}}\\gamma_{k}^{\\alpha}\\left[1+\\frac{\\alpha\\tau^{1/\\alpha_{0}} \\omega_{ki}^{1/\\alpha}/\\gamma_{k}}{c_{i}t}\\{1-R_{i}(t)\\}+\\frac{\\alpha\\tau^{1/ \\alpha_{0}}\\omega_{kj}^{1/\\alpha}/\\gamma_{k}}{c_{j}t}\\{1-R_{j}(t)\\}+o\\left( \\frac{1}{t^{2}}\\right)\\right],\\] in which the Taylor expansion in Eq. (B.2) is applied. Recall the definitions of \\(c_{i}\\)and \\(c_{j}\\) in Proposition 1.1, and we find \\[\\log F_{ij}\\{q_{i}(t),q_{j}(t)\\}=-\\frac{2}{t}+\\frac{R_{i}(t)+R_{j}(t)}{t}-o\\left( \\frac{1}{t^{2}}\\right)\\text{ as }t\\to\\infty.\\] Then it follows from Eq. (B.4) that \\[1-F_{ij}\\{q_{i}(t),q_{j}(t)\\} =1-\\exp\\left\\{-\\frac{2}{t}+\\frac{R_{i}(t)+R_{j}(t)}{t}-o\\left( \\frac{1}{t^{2}}\\right)\\right\\}\\] \\[=\\frac{2}{t}-\\frac{R_{i}(t)+R_{j}(t)}{t}+o\\left(\\frac{1}{t^{2}} \\right).\\] Plugging this result into (B.8), we have \\(\\chi_{ij}=\\lim_{t\\to\\infty}R_{i}(t)+R_{j}(t)+o(t^{-1})=0\\). In the meantime, \\[\\log\\Pr\\{X(\\mathbf{s}_{i})>q_{i}(t),X(\\mathbf{s}_{j})>q_{j}(t)\\}\\sim\\log\\frac{R_{i}(t) +R_{j}(t)}{t}=\\log\\alpha_{0}\\left(\\frac{d_{i}}{c_{i}^{2}}+\\frac{d_{j}}{c_{j}^{ 2}}-1\\right)-2\\log t\\] as \\(t\\to\\infty\\). By Eq. (B.9), \\(\\eta_{ij}=1/2\\). 2. If \\(\\mathcal{C}_{i}\\cap\\mathcal{D}=\\emptyset\\) and \\(\\mathcal{C}_{j}\\cap\\mathcal{D}\ eq\\emptyset\\), we know from Corollary 1.1.1 that \\(q_{i}(t)\\sim c_{i}^{\\alpha_{0}}t^{\\alpha_{0}}\\{1+R_{i}(t)+o(t^{-1})\\}\\) and \\(q_{j}(t)\\sim c_{j}^{\\prime\\,\\alpha_{0}/\\alpha}t^{\\alpha_{0}/\\alpha}\\{1+R_{j} ^{*}(t)+O(t^{-1/\\alpha})\\}\\) as \\(t\\to\\infty\\), in which \\(R_{i}(t)=\\alpha_{0}(d_{i}/c_{i}^{2}-1/2)t^{-1}\\) and \\(R_{j}^{*}(t)=\\alpha_{0}c_{j}t^{1-1/\\alpha}/(\\alpha c_{j}^{\\prime\\,1/\\alpha})- \\alpha_{0}t^{-1}/(2\\alpha)\\). 
Then \\[\\log F_{ij}\\{q_{i}(t),q_{j}(t)\\}=\\sum_{k\\in\\bar{\\mathcal{D}}}\\gamma_{k}^{ \\alpha}-\\sum_{k=1}^{K}\\left\\{\\gamma_{k}+\\frac{\\tau^{1/\\alpha_{0}}\\omega_{ki}^ {1/\\alpha}}{q_{i}^{1/\\alpha_{0}}(t)}+\\frac{\\tau^{1/\\alpha_{0}}\\omega_{kj}^{1/ \\alpha}}{q_{j}^{1/\\alpha_{0}}(t)}\\right\\}^{\\alpha}\\] \\[= \\sum_{k\\in\\bar{\\mathcal{D}}}\\gamma_{k}^{\\alpha}-\\sum_{k\\in\\bar{ \\mathcal{D}}}\\left\\{\\gamma_{k}+\\frac{\\tau^{1/\\alpha_{0}}\\omega_{ki}^{1/\\alpha }}{q_{i}^{1/\\alpha_{0}}(t)}+\\frac{\\tau^{1/\\alpha_{0}}\\omega_{kj}^{1/\\alpha}}{q _{j}^{1/\\alpha_{0}}(t)}\\right\\}^{\\alpha}-\\sum_{k\\in\\mathcal{D}}\\frac{\\tau^{ \\alpha/\\alpha_{0}}\\omega_{kj}}{q_{j}^{\\alpha/\\alpha_{0}}(t)}\\] \\[= \\log F_{j}\\{q_{j}(t)\\}+\\sum_{k\\in\\bar{\\mathcal{D}}}\\left\\{\\gamma _{k}+\\frac{\\tau^{1/\\alpha_{0}}\\omega_{kj}^{1/\\alpha}}{q_{j}^{1/\\alpha_{0}}(t)} \\right\\}^{\\alpha}-\\sum_{k\\in\\bar{\\mathcal{D}}}\\left\\{\\gamma_{k}+\\frac{\\tau^{1/ \\alpha_{0}}\\omega_{ki}^{1/\\alpha}}{q_{i}^{1/\\alpha_{0}}(t)}+\\frac{\\tau^{1/ \\alpha_{0}}\\omega_{kj}^{1/\\alpha}}{q_{j}^{1/\\alpha_{0}}(t)}\\right\\}^{\\alpha}.\\] (B.10) For the two summations on the right, we split \\(\\bar{\\mathcal{D}}\\) into \\(\\mathcal{C}_{j}\\cap\\bar{\\mathcal{D}}\\) and \\(\\bar{\\mathcal{D}}\\setminus(\\mathcal{C}_{j}\\cap\\bar{\\mathcal{D}})\\): \\[\\sum_{k\\in\\mathcal{C}_{j}\\cap\\bar{\\mathcal{D}}}\\left\\{\\gamma_{k}+\\frac{ \\tau^{1/\\alpha_{0}}\\omega_{kj}^{1/\\alpha}}{q_{j}^{1/\\alpha_{0}}(t)}\\right\\}^{ \\alpha}-\\sum_{k\\in\\mathcal{C}_{j}\\cap\\bar{\\mathcal{D}}}\\left\\{\\gamma_{k}+ \\frac{\\tau^{1/\\alpha_{0}}\\omega_{ki}^{1/\\alpha}}{q_{i}^{1/\\alpha_{0}}(t)}+ \\frac{\\tau^{1/\\alpha_{0}}\\omega_{kj}^{1/\\alpha}}{q_{j}^{1/\\alpha_{0}}(t)} \\right\\}^{\\alpha}=\\] \\[-\\sum_{k\\in\\mathcal{C}_{j}\\cap\\bar{\\mathcal{D}}}\\frac{\\alpha\\tau^{ 1/\\alpha_{0}}\\omega_{ki}^{1/\\alpha}}{\\gamma_{k}^{1-\\alpha}q_{i}^{1/\\alpha_{0}}( t)}-\\frac{\\alpha(\\alpha-1)}{2}\\sum_{k\\in\\mathcal{C}_{j}\\cap\\bar{\\mathcal{D}}} \\frac{\\tau^{2/\\alpha_{0}}}{\\gamma_{k}^{2-\\alpha}}\\left\\{\\frac{\\omega_{ki}^{2/ \\alpha}}{q_{i}^{2/\\alpha_{0}}(t)}+\\frac{2\\omega_{ki}^{1/\\alpha}\\omega_{kj}^{1 /\\alpha}}{q_{i}^{1/\\alpha_{0}}(t)q_{j}^{1/\\alpha_{0}}(t)}\\right\\}+o\\left(t^{-1 -\\frac{1}{\\alpha}}\\right)\\] (B.11)and \\[\\begin{split}\\sum_{k\\in\\bar{\\mathcal{D}}\\setminus(\\mathcal{C}_{j} \\cap\\bar{\\mathcal{D}})}\\left\\{\\gamma_{k}+\\frac{\\tau^{1/\\alpha_{0}}\\omega_{kj} ^{1/\\alpha}}{q_{j}^{1/\\alpha_{0}}(t)}\\right\\}^{\\alpha}-\\sum_{k\\in\\bar{ \\mathcal{D}}\\setminus(\\mathcal{C}_{j}\\cap\\bar{\\mathcal{D}})}\\left\\{\\gamma_{k}+ \\frac{\\tau^{1/\\alpha_{0}}\\omega_{ki}^{1/\\alpha}}{q_{i}^{1/\\alpha_{0}}(t)}+ \\frac{\\tau^{1/\\alpha_{0}}\\omega_{kj}^{1/\\alpha}}{q_{j}^{1/\\alpha_{0}}(t)} \\right\\}^{\\alpha}\\\\ \\sum_{k\\in\\bar{\\mathcal{D}}\\setminus(\\mathcal{C}_{j}\\cap\\bar{ \\mathcal{D}})}\\gamma_{k}^{\\alpha}-\\sum_{k\\in\\bar{\\mathcal{D}}\\setminus( \\mathcal{C}_{j}\\cap\\bar{\\mathcal{D}})}\\left\\{\\gamma_{k}+\\frac{\\tau^{1/\\alpha_{ 0}}\\omega_{ki}^{1/\\alpha}}{q_{i}^{1/\\alpha_{0}}(t)}\\right\\}^{\\alpha}.\\end{split}\\] (B.12) Feeding Eqs. 
(B.11) and (B.12) back into (B.10), we have \\[\\log F_{ij}\\{q_{i}(t),q_{j}(t)\\}=\\log F_{j}\\{q_{j}(t)\\}+\\log F_{i}\\{q_{i}(t)\\} +\\frac{c_{ij}}{q_{i}^{1/\\alpha_{0}}(t)q_{j}^{1/\\alpha_{0}}(t)}+o\\left(t^{-1- \\frac{1}{\\alpha}}\\right)\\text{ as }t\\to\\infty,\\] in which \\(c_{ij}=\\alpha(\\alpha-1)\\tau^{2/\\alpha_{0}}\\sum_{k\\in\\mathcal{C}_{i}\\cap \\mathcal{C}_{j}}\\gamma_{k}^{\\alpha-2}\\omega_{ki}^{1/\\alpha}\\omega_{kj}^{1/ \\alpha}\\). Then it follows from Eqs. (B.4) and (B.8) that \\[\\begin{split} 1-F_{ij}\\{q_{i}(t),q_{j}(t)\\}=& 1-\\exp\\left[\\log F_{j}\\{q_{j}(t)\\}+\\log F _{i}\\{q_{i}(t)\\}+\\frac{c_{ij}}{q_{i}^{1/\\alpha_{0}}(t)q_{j}^{1/\\alpha_{0}}(t)} +o\\left(t^{-1-\\frac{1}{\\alpha}}\\right)\\right]\\\\ =&-\\log F_{j}\\{q_{j}(t)\\}-\\log F_{i}\\{q_{i}(t)\\}- \\frac{c_{ij}}{q_{i}^{1/\\alpha_{0}}(t)q_{j}^{1/\\alpha_{0}}(t)}+o\\left(t^{-1- \\frac{1}{\\alpha}}\\right)\\\\ =&-\\log\\left(1-\\frac{1}{t}\\right)-\\log\\left(1-\\frac {1}{t}\\right)-\\frac{c_{ij}}{q_{i}^{1/\\alpha_{0}}(t)q_{j}^{1/\\alpha_{0}}(t)}+o \\left(t^{-1-\\frac{1}{\\alpha}}\\right)\\\\ =&\\frac{2}{t}-\\frac{c_{ij}}{c_{i}c_{j}^{\\prime\\,1/ \\alpha}t^{1+1/\\alpha}}+o\\left(t^{-1-\\frac{1}{\\alpha}}\\right)\\end{split}\\] Therefore, \\(\\Pr\\{X(\\mathbf{s}_{i})>q_{i}(t),X(\\mathbf{s}_{j})>q_{j}(t)\\}=\\frac{c_{ij}}{c_{i}c_{j}^ {\\prime\\,1/\\alpha}}t^{-1-1/\\alpha}+o\\left(t^{-1-1/\\alpha}\\right)\\). Then it follows from Eqs. (B.8) and (B.9) that \\(\\xi_{ij}=0\\) and \\(\\eta_{ij}=1/2\\). 3. When \\(\\mathcal{C}_{i}\\cap\\mathcal{D}\ eq\\emptyset\\) and \\(\\mathcal{C}_{j}\\cap\\mathcal{D}\ eq\\emptyset\\), we have \\(q_{i}(t)\\sim c_{i}^{\\prime\\,\\alpha_{0}/\\alpha}t^{\\alpha_{0}/\\alpha}\\left\\{1+O (t^{1-1/\\alpha})\\right\\}\\) and \\[q_{j}(t)\\sim c_{j}^{\\prime\\,\\alpha_{0}/\\alpha}t^{\\alpha_{0}/\\alpha}\\left\\{1+O(t^{1 -1/\\alpha})\\right\\}\\text{ as }t\\rightarrow\\infty.\\] It follows from (B.8) that \\[1-F_{ij}\\{q_{i}(t),q_{j}(t)\\}= 1-\\exp\\left\\{-d_{ij}t^{-1}-\\left(\\frac{c_{i}}{c_{i}^{\\prime\\,1/ \\alpha}}+\\frac{c_{j}}{c_{j}^{\\prime\\,1/\\alpha}}\\right)t^{-\\frac{1}{\\alpha}}-O (t^{1-\\frac{2}{\\alpha}})\\right\\}\\] \\[= d_{ij}t^{-1}+\\left(\\frac{c_{i}}{c_{i}^{\\prime\\,1/\\alpha}}+\\frac {c_{j}}{c_{j}^{\\prime\\,1/\\alpha}}\\right)t^{-\\frac{1}{\\alpha}}+O(t^{1-\\frac{2} {\\alpha}}),\\] and \\[t\\Pr\\{X(\\mathbf{s}_{i})>q_{i}(t),X(\\mathbf{s}_{j})>q_{j}(t)\\}=2-d_{ij}-\\left(\\frac{c_{i }}{c_{i}^{\\prime\\,1/\\alpha}}+\\frac{c_{j}}{c_{j}^{\\prime\\,1/\\alpha}}\\right)t^ {1-\\frac{1}{\\alpha}}-O(t^{2-\\frac{2}{\\alpha}}),\\] as \\(t\\rightarrow\\infty\\), in which \\(d_{ij}=\\tau^{\\alpha/\\alpha_{0}}\\sum_{k\\in\\mathcal{D}}\\{(\\omega_{ki}/c_{i}^{ \\prime})^{1/\\alpha}+(\\omega_{kj}/c_{j}^{\\prime})^{1/\\alpha}\\}^{\\alpha}\\in(1,2)\\). If \\(\\mathcal{C}_{i}\\cap\\mathcal{C}_{j}\ eq\\emptyset\\), we know from (B.8) that \\(\\chi_{ij}=2-d_{ij}\\in(0,1)\\) and \\[\\chi_{ij}(u)-\\chi_{ij}=\\left(\\frac{c_{i}}{c_{i}^{\\prime\\,1/\\alpha}}+\\frac{c _{j}}{c_{j}^{\\prime\\,1/\\alpha}}\\right)(1-u)^{\\frac{1}{\\alpha}-1}+O\\left\\{(1- u)^{\\frac{2}{\\alpha}-2}\\right\\}.\\] **Remark 5**.: _The exponent function, defined by_ \\[V(x_{1},\\ldots,x_{n_{s}})=\\lim_{t\\rightarrow\\infty}t(1-F[F_{1}^{-1}\\{1-(tx_{ 1})^{-1}\\},\\ldots,F_{n_{s}}^{-1}\\{1-(tx_{n_{s}})^{-1}\\}]),\\]_is a limiting measure that occurs in the limiting distribution for normalized maxima. 
It is used to describe the multivariate extremal dependence of a spatial process, and the \\(n_{s}\\)-dimensional extremal coefficient \\(V(1,\\ldots,1)\\) is of particular interest. This extremal coefficient has a range of \\([1,n_{s}]\\), with the lower and upper ends indicating, respectively, perfect dependence and independence. As a polarized case, if \\(\\gamma_{k}>0\\), for all \\(k=1,\\ldots,K\\), then \\(\\mathcal{C}_{j}\\cap\\mathcal{D}=\\emptyset\\) for all \\(j\\)'s, and thus we have_ \\[\\gamma_{k}^{\\alpha}-\\left\\{\\gamma_{k}+\\tau^{\\frac{1}{\\alpha_{0}}}\\sum_{j=1}^{ n_{s}}\\frac{\\omega_{kj}^{1/\\alpha}}{q_{j}^{1/\\alpha_{0}}(t)}\\right\\}^{\\alpha} \\sim\\alpha\\tau^{\\frac{1}{\\alpha_{0}}}\\gamma_{k}^{\\alpha-1}\\sum_{j=1}^{n_{s}} \\frac{\\omega_{kj}^{1/\\alpha}}{q_{j}^{1/\\alpha_{0}}(t)},\\;t\\to\\infty.\\] _Here, we can approximate \\(q_{j}(t)\\) using the results from Corollary 1.1.1. From Proposition 1.1, we can deduce that \\(V(1,\\ldots,1)=n_{s}\\), which corresponds to joint extremal independence. By contrast, if all \\(\\gamma_{k}=0\\) and one knot covers the entire spatial domain, we have \\(V(1,\\ldots,1)\\in(1,n_{s})\\), which corresponds to joint extremal dependence._ ## Appendix C Areal radius of exceedance ### Monte Carlo estimates of \\(\\mathrm{ARE}_{\\psi}(u)\\) Proof of Theorem 2.1.: It suffices to prove that \\[\\lim_{n\\to\\infty}\\frac{\\sum_{r=1}^{n_{r}}\\mathbbm{1}\\left(U_{ir}>u,U_{0r}>u \\right)}{\\sum_{r=1}^{n_{r}}\\mathbbm{1}\\left(U_{0r}>u\\right)}=\\chi_{\\mathbf{s}_{0},\\mathbf{g}_{i}}(u),\\qquad\\text{a.s.}\\] (C.1) for all \\(i=1,\\ldots,n_{g}\\). First, since \\(U_{0r^{\\prime}}=\\hat{F}_{0}(X_{0r^{\\prime}})\\), it is clear that \\[n_{r}U_{0r^{\\prime}}=\\sum_{r=1}^{n_{r}}\\mathbbm{1}\\left\\{X_{0r}\\leq X_{0r^{ \\prime}}\\right\\}\\] is the rank of \\(X_{0r^{\\prime}}\\) in \\(\\mathbf{X}_{0}\\), \\(r^{\\prime}=1,\\ldots,n_{r}\\). Thus, \\[\\frac{1}{n_{r}}\\sum_{r=1}^{n_{r}}\\mathbbm{1}\\left(U_{0r}>u\\right)=\\frac{ \\left\\lfloor n_{r}(1-u)\\right\\rfloor}{n_{r}}\\to 1-u,\\text{ as }n_{r}\\to\\infty,\\] (C.2) in which \\(\\left\\lfloor\\cdot\\right\\rfloor\\) is the floor function. Second, denote the rank of \\(X_{ir^{\\prime}}\\) in \\(\\mathbf{X}_{i}\\) by \\(R_{ir^{\\prime}}\\), \\(r^{\\prime}=1,\\ldots,n_{r}\\), \\(i=1,\\ldots,n_{g}\\). Then we know \\(R_{ir^{\\prime}}=n_{r}U_{ir^{\\prime}}\\) and \\[S_{i0}:=\\frac{1}{n_{r}}\\sum_{r=1}^{n_{r}}\\mathbbm{1}\\left(U_{ir}>u,U_{0r}>u \\right)=\\frac{1}{n_{r}}\\sum_{r=1}^{n_{r}}\\mathbbm{1}\\left\\{\\frac{R_{ir}}{n_{r} }>u\\right\\}\\mathbbm{1}\\left\\{\\frac{R_{0r}}{n_{r}}>u\\right\\},\\]This is thus a bivariate linear rank statistics of \\(\\mathbf{X}_{i}\\) and \\(\\mathbf{X}_{0}\\), for which the regression constants as defined in Sen and Puri (1967) all have a value of \\(1\\) and the scores have a product structure with each term being generated by \\(\\phi(x)=1\\,\\{x>u\\}\\), \\(x\\in(0,1)\\). Sen and Puri (1967) and Ruymgaart (1974) established the asymptotic normality of the multivariate linear rank statistics under weak restrictions that asymptotically no individual regression constant is much larger than the other constants and that \\(\\phi\\) is square integrable on \\((0,1)^{2}\\); that is, \\[0<\\int_{(0,1)^{2}}\\{\\phi(u_{1},u_{2})-\\bar{\\phi}\\}^{2}\\mathrm{d}u_{1}\\mathrm{d }u_{2}<\\infty\\text{ with }\\bar{\\phi}=\\int_{0}^{1}\\phi(u)\\mathrm{d}u,\\] in which \\(\\phi(u_{1},u_{2})=\\phi(u_{1})\\phi(u_{2})\\). 
Since our regression constants are all \\(1\\)'s, the restriction on the regression constants is easily satisfied. Also, for \\(\\phi(u_{1},u_{2})=1\\,\\{u_{1}>u,u_{2}>u\\}\\), \\(\\int_{0}^{1}\\int_{0}^{1}\\{\\phi(u_{1},u_{2})-\\bar{\\phi}\\}^{2}\\mathrm{d}u_{1} \\mathrm{d}u_{2}=\\bar{\\phi}-\\bar{\\phi}^{2}\\) with \\(\\bar{\\phi}=(1-u)^{2}\\). Therefore, \\[n^{1/2}\\{S_{i0}-\\mu_{i0}\\}\\to_{d}N(0,\\sigma_{i0}^{2})\\] (C.3) as \\(n_{r}\\to\\infty\\), in which \\(\\mu_{i0}\\) and \\(\\sigma_{i0}^{2}\\) can be derived using Eq. (1.3) and (3.5) in Ruymgaart (1974) as \\[\\mu_{i0} =\\int\\int\\phi(F_{i}(x))\\phi(F_{0}(y))\\mathrm{d}F_{i0}(x,y)=\\Pr\\{F _{i}(X_{i})>u,F_{0}(X_{0})>u\\},\\] \\[\\sigma_{i0}^{2} =\\mathrm{Var}\\big{(}1\\{F_{i}(X_{i})>u,F_{0}(X_{0})>u\\}+[1\\{F_{i}( X_{i})\\leq u\\}-u]\\Pr\\{F_{0}(X_{0})>u\\mid F_{i}(X_{i})=u\\}\\] \\[+[1\\{F_{0}(X_{0})\\leq u\\}-u]\\Pr\\{F_{i}(X_{i})>u\\mid F_{0}(X_{0})= u\\}\\big{)}.\\] (C.4) Since \\(\\mu_{i0}/(1-u)=\\chi_{0i}(u)\\), we know from Expressions (C.2) and (C.3) that as \\(n_{r}\\to\\infty\\), \\[n^{\\frac{1}{2}}\\left\\{\\frac{\\sum_{r=1}^{n_{r}}1(U_{ir}>u,U_{0r}>u)}{\\sum_{r=1 }^{n_{r}}1(U_{0r}>u)}-\\chi_{\\mathbf{s}_{0},\\mathbf{g}_{i}}(u)\\right\\}\\to_{d}N\\left\\{0, \\frac{\\sigma_{i0}^{2}}{(1-u)^{2}}\\right\\},\\] (C.5) which ensures Expression (C.1). **Remark 6**.: _From Expression (C.5), we see that the asymptotic normality of \\(n^{1/2}\\{\\widehat{\\mathrm{ARE}}_{\\psi}(u)-\\mathrm{ARE}_{\\psi}(u)\\}\\) is also guaranteed. However, the exact expression of its asymptotic variance requires a much more careful examination of the correlations among the ranks of \\(\\mathbf{X}_{i}\\), \\(i=0,1,\\ldots,n_{g}\\); that is, we need to device a multivariate linear rank statistics of \\(\\mathbf{X}_{i}\\), \\(i=0,1,\\ldots,n_{g}\\); see Ruymgaart and van Zwijlen (1978)._ ### Convergence of \\(\\mathrm{ARE}_{\\psi}(u)\\) Proof of Theorem 2.2.: By the definition of the tail dependence measure in Eq. (9), \\[\\lim_{u\\to 1}\\sum_{i=1}^{n_{g}}\\chi_{0i}(u)=\\sum_{i=1}^{n_{g}}\\chi_{0i}.\\]It is clear that the right-hand side is the Riemann sum of \\(\\chi_{\\mathbf{s}_{0},\\mathbf{s}}\\) as a function of \\(\\mathbf{s}\\) with respect to the grid. Since \\(\\chi_{\\mathbf{s}_{0},\\mathbf{s}}\\) is a continuous function of \\(\\mathbf{s}\\) (i.e., Riemann-integrable), we have \\[\\lim_{\\psi\\to 0}\\psi^{2}\\sum_{i=1}^{n_{g}}\\chi_{0i}=\\int_{\\mathcal{S}}\\chi_{\\mathbf{s }_{0},\\mathbf{s}}\\mathrm{d}\\mathbf{s}.\\] Therefore, we have \\[\\lim_{\\psi\\to 0,u\\to 1}\\psi\\left(\\sum_{i=1}^{n_{g}}\\chi_{0i}(u)\\right)^{1/2}= \\left\\{\\int_{\\mathcal{S}}\\chi_{\\mathbf{s}_{0},\\mathbf{s}}\\mathrm{d}\\mathbf{s}\\right\\}^{1/2}.\\] **Remark 7**.: _In the spatial extremes literature, many models that have a spatially-invariant set of dependence parameter \\(\\mathbf{\\phi}_{d}\\) and they satisfy_ \\[\\chi_{\\mathbf{s}_{0},\\mathbf{s}}(u)-\\chi_{\\mathbf{s}_{0},\\mathbf{s}}=c(\\mathbf{s}_{0},\\mathbf{s},\\mathbf{ \\phi}_{d})(1-u)^{d(\\mathbf{\\phi}_{d})}\\{1+o(1)\\},\\] _where \\(c(\\mathbf{s}_{0},\\mathbf{s},\\mathbf{\\phi}_{d})\\) is multiplicative constant defined by \\(\\mathbf{s}\\), \\(\\mathbf{s}_{0}\\) and \\(\\mathbf{\\phi}_{d}\\). Also, the rate of decay \\(d(\\mathbf{\\phi}_{d})\\) is independent of \\(\\mathbf{s}\\) and \\(\\mathbf{s}_{0}\\). Such examples include the models proposed by Huser et al. (2017), Huser and Wadsworth (2019) and Bopp et al. (2021). 
In this case,_ \\[\\pi\\widehat{\\mathrm{ARE}}_{\\psi}^{2}(u)-\\psi^{2}\\sum_{i=1}^{n_{g}}\\chi_{\\mathbf{s} _{0},\\mathbf{g}_{i}}\\approx\\left\\{\\psi^{2}\\sum_{i=1}^{n_{g}}c(\\mathbf{s}_{0},\\mathbf{g}_{i },\\mathbf{\\phi}_{d})\\right\\}(1-u)^{d(\\mathbf{\\phi}_{d})}\\{1+o(1)\\}.\\] _That is, \\(\\widehat{\\mathrm{ARE}}_{\\psi}(u)\\) has similar decaying behaviors as \\(\\chi_{\\mathbf{s}_{0},\\mathbf{s}}(u)\\), which was observed empirically in Figure 3(b) and 4(b) in Zhang et al. (2023)._ **Remark 8**.: _We note that Cotsakis et al. (2022) proposed a similar metric which measures the length of the perimeter of excursion sets of anisotropic random fields on \\(\\mathbb{R}^{2}\\) under some smoothness assumptions. This estimator acts on the empirically accessible binary digital images of the excursion regions and computes the length of a piecewise linear approximation of the excursion boundary. In their work, the main focus is to prove strong consistency of the perimeter estimator as the image pixel size tends to zero. In comparison, we show that our estimator of \\(\\mathrm{ARE}_{\\psi}(u)\\) is strongly consistent as the number of replicates drawn from the process \\(\\{X(\\mathbf{s})\\}\\) approaches infinity. Furthermore, the length scale \\(\\mathrm{ARE}_{\\psi}(u)\\) is, in our view, more interpretable than the perimeter of excursion sets. Also, \\(\\mathrm{ARE}_{\\psi}(u)\\) is closely tied to the bivariate \\(\\chi\\) measure, which further bridges spatial extremes to applications in other fields._ ## Appendix D XVAE details In this section, we will illustrate the details of Eqs. (5) and (8). Recall the encoder in the XVAE encodes the information in \\(\\mathbf{x}_{t},\\,t=1,\\ldots,n_{t}\\), using a three-layer perceptron neural network. The three-layer perceptron neural network has the form of: \\[\\mathbf{h}_{1,t} =\\text{relu}(\\mathbf{W}_{1}\\mathbf{x}_{t}+\\mathbf{b}_{1}),\\] (D.1) \\[\\mathbf{h}_{2,t} =\\text{relu}(\\mathbf{W}_{2}\\mathbf{h}_{1,t}+\\mathbf{b}_{2}),\\] \\[\\log\\mathbf{\\zeta}_{t}^{2} =\\mathbf{W}_{3}\\mathbf{h}_{2,t}+\\mathbf{b}_{3},\\] \\[\\mathbf{\\mu}_{t} =\\text{relu}(\\mathbf{W}_{4}\\mathbf{h}_{2,t}+\\mathbf{b}_{4}).\\] The weights \\(\\{\\mathbf{W}_{1},\\ldots,\\mathbf{W}_{4}\\}\\) and biases \\(\\{\\mathbf{b}_{1},\\ldots,\\mathbf{b}_{4}\\}\\) combined are denoted by \\(\\mathbf{\\phi}_{e}\\) and are shared across time replicates. Here, \\(\\mathbf{W}_{1}\\) is a \\(K\\times n_{s}\\) weight matrix and \\(\\mathbf{W}_{2},\\ldots,\\mathbf{W}_{4}\\) are all \\(K\\times K\\) matrices, and \\(\\mathbf{b}_{1},\\ldots,\\mathbf{b}_{4}\\) are all \\(K\\times 1\\) vectors. 
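To make this architecture concrete, the following is a minimal sketch of such an encoder written with the R torch package, which the paper uses for its implementation. The module name, the returned list, and the example dimensions are our own illustrative choices rather than the exact XVAE code.

```
library(torch)

# Minimal sketch of the three-layer encoder in Eq. (D.1); names are illustrative.
xvae_encoder <- nn_module(
  "xvae_encoder",
  initialize = function(n_s, K) {
    self$fc1        <- nn_linear(n_s, K)  # W_1, b_1
    self$fc2        <- nn_linear(K, K)    # W_2, b_2
    self$fc_logvar  <- nn_linear(K, K)    # W_3, b_3: outputs log zeta_t^2
    self$fc_mu      <- nn_linear(K, K)    # W_4, b_4: outputs mu_t
  },
  forward = function(x_t) {
    h1 <- nnf_relu(self$fc1(x_t))
    h2 <- nnf_relu(self$fc2(h1))
    log_zeta2 <- self$fc_logvar(h2)   # no activation, as in Eq. (D.1)
    mu        <- nnf_relu(self$fc_mu(h2))  # ReLU keeps mu_t nonnegative
    list(mu = mu, log_zeta2 = log_zeta2)
  }
)

# Example usage with hypothetical dimensions:
# enc <- xvae_encoder(n_s = 2000, K = 25)
# out <- enc(torch_randn(1, 2000))
```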
Then we use a Gaussian encoder \\(\\mathbf{z}_{t}\\sim N\\{\\mathbf{\\mu}_{t},\\text{diag}(\\mathbf{\\zeta}_{t}^{2})\\}\\) and we have \\[q_{\\mathbf{\\phi}_{e}}(\\mathbf{z}_{t}\\mid\\mathbf{x}_{t})=\\frac{1}{(2\\pi)^{n/2}\\prod_{k=1}^ {K}\\zeta_{kt}}\\exp\\left\\{-\\sum_{k=1}^{K}\\frac{(z_{kt}-\\mu_{kt})^{2}}{2\\zeta_{ kt}^{2}}\\right\\}.\\] (D.2) For the decoder, we also use a three-layer perceptron neural network: \\[\\mathbf{l}_{1,t} =\\text{relu}(\\mathbf{W}_{5}\\mathbf{z}_{t}+\\mathbf{b}_{5}),\\] (D.3) \\[\\mathbf{l}_{2,t} =\\text{relu}(\\mathbf{W}_{6}\\mathbf{l}_{1,t}+\\mathbf{b}_{6}),\\] \\[(\\alpha_{t},\\mathbf{\\gamma}_{t}^{T})^{T} =\\text{relu}(\\mathbf{W}_{7}\\mathbf{l}_{2,t}+\\mathbf{b}_{7}),\\] \\[\\mathbf{y}_{t} =(\\mathbf{\\Omega}^{1/\\alpha_{t}}\\mathbf{z}_{t})^{\\alpha_{0}},\\] in which \\(\\mathbf{\\Omega}=(\\mathbf{w}_{1},\\cdots,\\mathbf{w}_{n_{s}})^{T}\\) is a \\(n_{s}\\times K\\) matrix with its \\(j\\)th row being \\(\\mathbf{w}_{j}^{T}=(\\omega_{1j},\\ldots,\\omega_{Kj})\\). The weights \\(\\{\\mathbf{W}_{5},\\ldots,\\mathbf{W}_{7}\\}\\) and biases \\(\\{\\mathbf{b}_{5},\\ldots,\\mathbf{b}_{7}\\}\\) combined are denoted by \\(\\mathbf{\\phi}_{d}\\), in which \\(\\mathbf{W}_{5}\\) and \\(\\mathbf{W}_{6}\\) are both \\(K\\times K\\) matrices while \\(\\mathbf{W}_{7}\\) is a \\((K+1)\\times K\\) matrix, and \\(\\mathbf{b}_{5}\\) and \\(\\mathbf{b}_{6}\\) are \\(K\\times 1\\) vectors while \\(\\mathbf{b}_{7}\\) is a \\((K+1)\\times 1\\) vector. ### Reparameterization trick Recall that the ELBO is defined as \\[\\mathcal{L}_{\\mathbf{\\phi}_{e},\\mathbf{\\phi}_{d}}(\\mathbf{x}_{t})=\\mathbb{E}_{q_{\\mathbf{\\phi }_{e}}(\\mathbf{z}_{t}|\\mathbf{x}_{t})}\\left\\{\\log\\frac{p_{\\mathbf{\\phi}_{d}}(\\mathbf{x}_{t}, \\mathbf{Z}_{t})}{q_{\\mathbf{\\phi}_{e}}(\\mathbf{Z}_{t}\\mid\\mathbf{x}_{t})}\\right\\},\\] which can be approximated using Monte Carlo as shown in Eq. (2). However, it is not straightforward to approximate the partial derivative of the ELBO with respect to \\(\\mathbf{\\phi}_{e}\\) (denoted by \\(\ abla_{\\mathbf{\\phi}_{e}}\\mathcal{L}_{\\mathbf{\\phi}_{e},\\mathbf{\\phi}_{d}}\\)), which is needed in the stochastic gradient descent algorithm. Since the expectation in ELBO is taken under the distribution \\(q_{\\mathbf{\\phi}_{e}}(\\mathbf{z}_{t}\\mid\\mathbf{x}_{t})\\). 
\\[\ abla_{\\mathbf{\\phi}_{e}}\\mathcal{L}_{\\mathbf{\\phi}_{e},\\mathbf{\\phi}_{d}}(\\mathbf{x}_{t})\ eq \\mathbb{E}_{q_{\\mathbf{\\phi}_{e}}(\\mathbf{Z}_{t}|\\mathbf{x}_{t})}\\left\\{\ abla_{\\mathbf{\\phi}_{ e}}\\log\\frac{p_{\\mathbf{\\phi}_{d}}(\\mathbf{x}_{t},\\mathbf{Z}_{t})}{q_{\\mathbf{\\phi}_{e}}(\\mathbf{Z}_{t }\\mid\\mathbf{x}_{t})}\\right\\},\\] To simplify the gradient of the ELBO with respect to \\(\\mathbf{\\phi}_{e}\\), we express \\(\\mathbf{Z}_{t}\\) in terms of a ``` 0:\\(\\kappa\\): number of possible clusters from each time replicate \\(\\{\\mathbf{x}_{t}:t=1,\\ldots,n_{t}\\}\\): observed \\(n_{t}\\) spatial replicates \\(\\{\\mathbf{s}_{j}:j=1,\\ldots,n_{s}\\}\\): coordinates of the observed sites in the domain \\(\\mathcal{S}\\forall\\ddagger\\) \\(u\\): a high quantile level between 0 and 1 \\(\\lambda\\): minimum distance between knots Result: \\(K\\): number of data-driven knots \\(\\{\\tilde{\\mathbf{s}}_{1},\\ldots,\\tilde{\\mathbf{s}}_{K}\\}\\): the coordinates of data-driven knots \\(r\\): basis function radius shared by all knots \\(x^{*}\\gets u\\)th quantile of the concatenated vector \\((\\mathbf{x}_{1}^{T},\\cdots,\\mathbf{x}_{n_{t}}^{T})^{T}\\); // A high threshold \\(Knots\\leftarrow\\) list(); // Empty list for the chosen knot locations for\\(t\\gets 1,n_{t}\\)do \\(\\mathcal{E}_{t}\\leftarrow\\) where(\\(\\mathbf{x}_{t}>x^{*}\\)); // Indices of the locations exceeding the threshold \\(wss\\_vec\\leftarrow\\) repeat(NA, \\(\\kappa\\)); // Vector for the total within-cluster sums of squares fornclust\\(\\gets 1,\\kappa\\)do \\(init\\_centers\\leftarrow\\) sample(\\(\\{\\mathbf{s}_{j}:j\\in\\mathcal{E}_{t}\\},\\)\\(nclust\\)) ; // \\(nclust\\) initial centers \\(res\\_tmp\\leftarrow\\) kmeans(\\(\\{\\mathbf{s}_{j}:j\\in\\mathcal{E}_{t}\\},\\)\\(init\\_centers\\)); // Hartigan and Wong (1979) \\(wss\\_vec\\) [\\(nclust\\)] \\(res\\_tmp\\) [\"tot.withinss\"]; end for \\(best\\_nclust\\leftarrow\\) which.max(\\(wss\\_vec\\)); // Determine the best number of clusters \\(init\\_centers\\leftarrow\\) sample(\\(\\{\\mathbf{s}_{j}:j\\in\\mathcal{E}_{t}\\},\\)\\(best\\_nclust\\)); \\(res\\leftarrow\\) kmeans(\\(\\{\\mathbf{s}_{j}:j\\in\\mathcal{E}_{t}\\},\\)\\(init\\_centers\\)); \\(Knots\\leftarrow\\) append(\\(Knots,\\)\\(res\\) [\"centers\"]); // Cluster centers as knots end for\\(Knots\\leftarrow\\) remove points from \\(Knots\\) so that all knots are no closer than \\(\\lambda\\); \\(K\\leftarrow\\) length(\\(Knots\\)); \\(\\{\\tilde{\\mathbf{s}}_{1},\\ldots,\\tilde{\\mathbf{s}}_{K}\\}\\gets Knots\\); \\(r\\leftarrow\\) the minimum radius such that any \\(\\mathbf{s}\\in\\mathcal{S}\\) is covered by at least one basis function. ``` **Algorithm 1** Derive data-driven knots ### Effect of knot locations Algorithm 1 outlines how we derive the data-driven knots. First, we perform \\(k\\)-means clustering on each time replicate of the data input to determine how many clusters of high values (\\(u>0.95\\)) there are, and we then train XVAE with \\(K\\) being the number of clusters combined for all time replicates. Second, the cluster centroids are used as knot locations \\(\\{\\tilde{\\mathbf{s}}_{1},\\ldots,\\tilde{\\mathbf{s}}_{K}\\}\\). To initialize \\(\\mathbf{\\Omega}\\) (defined in Eq. 
(D.3)) using the Wendland basis functions \\(\\omega_{k}(\\mathbf{s},r)=\\{1-d(\\mathbf{s},\\tilde{\\mathbf{s}}_{k})/r\\}_{+}^{2}\\), \\(k=1,\\ldots,K\\), we pick \\(r\\) by looping over clusters and calculating the Euclidean distance of each point within one cluster from its centroid, and we set the maximum of all distances as the initial \\(r\\). If \\(r\\) is not large enough for all \\(\\omega_{k}(\\mathbf{s},r)\\) to cover the entire spatial domain, we gradually increase \\(r\\) until the full coverage is met. Figure D.1 displays the results from emulating the data set simulated from Model III while initializing the weights differently using the true knots and the data-driven knots. Figures D.1(b) and D.1(c) show one emulation replicate from the decoder for the 50th time replicate. We see that both figures exhibit a striking resemblance to the original simulation, Figure D.1: Comparing the emulation results from initializing the XVAE with the true knots and data-driven knots for data simulated from Model III. and from visual examination, we can see little difference in the quality of the emulations. Figure D.1(d) compares the spatial predictions on the 100 holdout locations from the two emulations. The CRPS and MSPE values are again very similar for emulations based on the true knots and data-driven knots. Figures D.1(e) and D.1(f) compare the simulated and emulated spatial fields of the 50th replicate by plotting their quantiles against each other (when pooling the spatial data into the same plot). We see that both emulations align very well with the simulated data set. Although this might not be the most appropriate way of evaluating the quality of the emulations because there is spatial dependence and non-stationarity within each spatial replicate, QQ-plots still provide value in determining whether the spatial distribution is similar at all quantile levels, which is complementary to the empirical \\(\\chi_{ij}(u)\\) described in Section 2.2. Overall, Figure D.1 demonstrates that emulation based on data-driven knots performs similarly to using the true knots. This justifies applying the XVAE on a data set stemming from a misspecified model (i.e., Models I or V, for which the data-generating process does not involve any Wendland basis functions). Thus, we will use the XVAE with data-driven knots in all remaining simulation experiments and the real data application. ### Stochastic gradient descent optimization A major advantage of approximating the ELBO as presented in Eq. (2) lies in the ability to perform joint optimization over all parameters (\\(\\boldsymbol{\\phi}_{e}\\) and \\(\\boldsymbol{\\phi}_{d}\\)) using stochastic gradient descent (SGD). This optimization is efficiently implemented using a tape-based automatic differentiation module called autograd within the R package torch(Falbel and Luraschi, 2023). Built on PyTorch, this package offers rapid array computation, leveraging robust GPU acceleration for enhanced computational efficiency. It stores all the data inputs and VAE parameters in the form of torch tensors, which are similar to R multi-dimensional arrays but are designated for fast and scalable matrix calculations and differentiation. Algorithm 2 outlines the pseudo-code for the ELBO optimization of our XVAE. 
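Before detailing that optimization, we give a stripped-down sketch of the momentum update used in Algorithm 2, again with the R torch package. Here `xvae` and `neg_elbo()` are hypothetical stand-ins for the full model and the negative of the ELBO, and the learning rate and momentum values are placeholders rather than the settings used in the paper.

```
library(torch)

# Sketch of one SGD-with-momentum step on the negative ELBO.
xvae      <- nn_linear(10, 1)                            # placeholder module
neg_elbo  <- function(model, x) torch_mean(model(x)^2)   # placeholder loss
optimizer <- optim_sgd(xvae$parameters, lr = 1e-3, momentum = 0.9)

sgd_step <- function(x_batch) {
  optimizer$zero_grad()
  loss <- neg_elbo(xvae, x_batch)   # in the XVAE this is -sum_t L_{phi_e,phi_d}(x_t)
  loss$backward()                   # reverse-mode autodiff fills in the gradients
  optimizer$step()                  # v <- zeta_m * v + nu * grad; theta <- theta - v
  loss$item()
}

# sgd_step(torch_randn(32, 10))     # one update on a minibatch
```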
As the ELBO is constructed within each iteration of the SGD algorithm, the autograd module of torch tracks the computations (i.e., linear operations and ReLU activation on the tensors) in all layers of the encoding/decoding neural networks, and then performs the reverse-mode automatic differentiation via a backward pass through the graph of tensor operations to obtain the partial derivatives or the gradients with respect to each weight and bias parameter (Keydana, 2023). The iterative steps of Algorithm 2 involve advancing in the direction of the gradients on the ELBO \\(\\sum_{t=1}^{n_{t}}\\mathcal{L}_{\\boldsymbol{\\phi}_{e},\\boldsymbol{\\phi}_{d}}( \\boldsymbol{x}_{t})\\) (or a minibatch version \\(\\sum_{t\\in\\mathcal{M}}\\mathcal{L}_{\\boldsymbol{\\phi}_{e},\\boldsymbol{\\phi}_{d} }(\\boldsymbol{x}_{t})\\), \\(\\mathcal{M}\\subset\\{1,\\ldots,n_{t}\\}\\)). This is guided by a user-defined learning rate \\(\ u>0\\). To enhance stability, a convex combination of the prior update and the current gradient incorporates a momentum parameter \\(\\zeta_{m}\\) into the optimization process (Polyak, 1964). Notably, our experiments indicate that setting the number of Monte Carlo samples \\(L\\) to 1 suffices, provided the minibatch size \\(|\\mathcal{M}|\\) is adequately large, aligning with the recommendation by Kingma and Welling (2013). Upon successful training of \\(\\boldsymbol{\\phi}_{e}\\) and \\(\\boldsymbol{\\phi}_{d}\\), the encoder and decoder can be efficiently executed as needed. Leveraging the amortized nature of our estimation approach, these processes generate an ensemble of numerous samples, all originating from the same (approximate) distribution as the spatial inputs. Importantly, our XVAE algorithm can scale efficiently to massive spatial data sets. The existing max-stable, inverted-max-stable, and other spatial extremes models are limited to applications with less than approximately \\(1,000\\) locations using a full likelihood or Bayesian approach; see Section A for more details on these alternative approaches. By contrast, our approach can fit a globally non-stationary spatial extremes process, with parameters evolving over time, to a data set of unprecedented spatial dimension of more than \\(16,000\\) locations, and also facilitates data emulation in such dimensions. See Section 4 for details. ### Finding starting values In finding a reasonable starting values of parameters in XVAE, we choose \\(\\alpha_{0}=1/4\\) and \\(\\tau=1\\) for the white noise process, and \\(\\alpha=1/2\\) for the latent exponentially-tilted PS variables. From Eq. (4), \\((\\mathbf{y}_{1}^{1/\\alpha_{0}},\\cdots,\\mathbf{y}_{n_{t}}^{1/\\alpha_{0}})=\\mathbf{\\Omega}^{1 /\\alpha}(\\mathbf{z}_{1},\\cdots,\\mathbf{z}_{n_{t}})\\), in which \\(\\mathbf{\\Omega}\\) is defined in Eq. (D.3). 
Since \\(\\{\\epsilon_{t}(\\mathbf{s}):t=1,\\ldots,n_{t}\\}\\) are treated as error processes, we have \\(\\mathbf{x}_{t}\\approx\\mathbf{y}_{t}\\) and thus a good approximation for \\(\\mathbf{z}_{t}\\) can be obtained via projection: \\[\\hat{\\mathbf{z}}_{t}\\approx\\{(\\mathbf{\\Omega}^{\\frac{1}{\\alpha}})^{T}\\mathbf{\\Omega}^{ \\frac{1}{\\alpha}}\\}^{-1}(\\mathbf{\\Omega}^{\\frac{1}{\\alpha}})^{T}\\mathbf{x}_{t}^{\\frac {1}{\\alpha_{0}}},\\;t=1,\\ldots,n_{t}.\\] We use QR decomposition to solve the following linear system to get the initial value \\(\\mathbf{W}_{1}^{(0)}\\): \\((\\hat{\\mathbf{z}}_{1},\\cdots,\\hat{\\mathbf{z}}_{n_{t}})^{T}=(\\mathbf{x}_{1},\\cdots,\\mathbf{x}_ {n_{t}})^{T}\\mathbf{W}_{1}^{T}\\). Also, set \\(\\mathbf{b}_{1}^{(0)}=(0,\\ldots,0)^{T}\\). The initial values of \\(\\mathbf{h}_{1,t}\\) in Eq. (D.1) satisfy \\(\\mathbf{h}_{1,t}\\approx\\hat{\\mathbf{z}}_{t}\\), \\(t=1,\\ldots,n_{t}\\). Furthermore, we set \\(\\mathbf{W}_{2}^{(0)}\\) and \\(\\mathbf{W}_{4}^{(0)}\\) to be identity matrices. All remaining parameters, both variational and generative, were initialized by random sampling from \\(N(0,0.01)\\). To optimize the ELBO following the steps outlined in Algorithm 2, we monitor the convergence of the ELBO via calculating the difference in the average ELBO values in the latest 100 iterations (or epochs) and the 100 iterations before that. Once the difference is less than \\(\\delta=10^{-6}\\), we stop the stochastic gradient search. ## Appendix E Additional figures from the simulation study We show additional figures that are complementary to those included in Section 3. Figure E.1 displays the simulated data sets from Models I, II, IV and V and their emulated fields using both XVAE and hetGP. See Figure 2 for comparison for Model III. Figure E.2 displays QQ-plots from the spatial data to compare the overall distributions of the simulated and emulated data sets. Figure E.3 compares the empirically estimated \\(\\chi_{h}(u)\\) as described in Section 2.2 from the data replicates simulated from Models I, II, IV and V and their emulations at three different distances \\(h=0.5,2,5\\) under the working assumption of stationarity. Figure E.4 shows the estimates of \\(\\text{ARE}_{\\psi}(u)\\) defined in Eq. (13), \\(\\psi=0.05\\), for both simulations and XVAE emulations under Models I, II, IV and V. See Figure 4 for **Algorithm 2** Stochastic Gradient Descent with momentum to maximize the ELBO defined in Eq. (2). We set \\(|\\mathcal{M}|=n_{t}\\) and \\(L=1\\) in our experiments. 
``` 0: Learning rate \\(\ u>0\\), momentum parameter \\(\\zeta_{m}\\in(0,1)\\), convergence tolerance \\(\\delta\\) \\(\\{\\mathbf{x}_{t}:t=1,\\ldots,n_{t}\\}\\): observed \\(n_{t}\\) spatial replicates \\(q_{\\mathbf{\\phi}_{e}}(\\mathbf{z}_{t}\\mid\\mathbf{x}_{t})\\): inference model \\(p_{\\mathbf{\\phi}_{d}}(\\mathbf{x}_{t},\\mathbf{z}_{t})\\): generative data model Result: Optimized parameters \\(\\mathbf{\\phi}_{e},\\ \\mathbf{\\phi}_{d}\\) \\(j\\gets 0\\); \\(K\\leftarrow\\) Number of data-driven knots; \\(\\{\\tilde{\\mathbf{s}}_{1},\\ldots,\\tilde{\\mathbf{s}}_{K}\\}\\leftarrow\\) Specify knot locations; // See Section D.2 for details \\(r\\leftarrow\\) Basis function radius shared by all knots; \\((\\mathbf{\\phi}_{e}^{(j)},\\mathbf{\\phi}_{d}^{(j)})^{T}\\leftarrow\\) Initialized parameters; // See Section D.4 for details \\(\\mathbf{v}\\leftarrow\\mathbf{0}\\); // Velocity \\(\\mathbf{L}\\leftarrow\\) repeat(-Inf, 200); // A vector of 200 negative infinite values while\\(|\\)mean\\(\\{\\mathbf{L}[(j-200):(j-101)]\\}-\\)mean\\(\\{\\mathbf{L}[(j-100):j]\\}|>\\delta\\)do \\(\\mathcal{M}\\sim\\{1,\\ldots,n_{t}\\}\\); // Indices for the random minibatch \\(\\eta_{kt}\\stackrel{{\\text{i.i.d.}}}{{\\sim}}\\text{Normal}(0,1)\\), \\(k=1,\\ldots,K\\), \\(t\\in\\mathcal{M}\\); // Reparameterization trick for\\(t\\in\\mathcal{M}\\)do \\((\\mathbf{\\mu}_{t}^{T},\\log\\mathbf{\\zeta}_{t}^{T})^{T}\\leftarrow\\) EncoderNeuralNet\\({}_{\\mathbf{\\phi}_{e}^{(j)}}(\\mathbf{x}_{t})\\); \\(\\mathbf{z}_{t}\\leftarrow\\mathbf{\\mu}_{t}+\\mathbf{\\zeta}_{t}\\odot\\mathbf{\\eta}_{t}\\); \\((\\alpha_{t},\\mathbf{\\gamma}_{t}^{T})^{T}\\leftarrow\\)DecoderNeuralNet\\({}_{\\mathbf{\\phi}_{d}^{(j)}}(\\mathbf{z}_{t})\\); Calculate \\(q_{\\mathbf{\\phi}_{e}^{(j)}}(\\mathbf{z}_{t}\\mid\\mathbf{x}_{t})\\), \\(p_{\\mathbf{\\phi}_{d}^{(j)}}(\\mathbf{x}_{t}\\mid\\mathbf{z}_{t})\\) and \\(p_{\\mathbf{\\phi}_{d}^{(j)}}(\\mathbf{z}_{t})\\); // See Eq. (5)-(6) end Obtain the ELBO \\(\\mathcal{L}_{\\mathbf{\\phi}_{e}^{(j)},\\mathbf{\\phi}_{d}^{(j)}}(\\mathcal{M})=\\sum_{t\\in \\mathcal{M}}\\mathcal{L}_{\\mathbf{\\phi}_{e}^{(j)},\\mathbf{\\phi}_{d}^{(j)}}(\\mathbf{x}_{t})\\) and its gradients \\(\\mathbf{J}_{\\mathcal{L}}=\\{\ abla_{\\mathbf{\\phi}_{e},\\mathbf{\\phi}_{d}}\\mathcal{L}_{\\mathbf{ \\phi}_{e},\\mathbf{\\phi}_{d}}(\\mathcal{M})\\}(\\mathbf{\\phi}_{e}^{(j)},\\mathbf{\\phi}_{d}^{(j)})\\); Compute velocity update: \\(\\mathbf{v}\\leftarrow\\zeta_{m}\\mathbf{v}+\ u\\mathbf{J}_{\\mathcal{L}}\\); Apply update: \\((\\mathbf{\\phi}_{e}^{(j+1)},\\mathbf{\\phi}_{d}^{(j+1)})^{T}\\leftarrow(\\mathbf{\\phi}_{e}^{(j) },\\mathbf{\\phi}_{d}^{(j)})^{T}+\\mathbf{v}\\); \\(\\mathbf{L}\\leftarrow(\\mathbf{L}^{T},\\mathcal{L}_{\\mathbf{\\phi}_{e}^{(j)},\\mathbf{\\phi}_{d}^{(j )}}(\\mathcal{M}))^{T}\\) ; // Add the latest ELBO value to the vector \\(\\mathbf{L}\\) \\(j\\gets j+1\\); ``` **Algorithm 3** Stochastic Gradient Descent with momentum to maximize the ELBO defined in Eq. (2). \\(\\chi_{h}(u)\\) and \\(\\text{ARE}_{\\psi}(u)\\) estimates for Model III. Lastly, Figure E.5 shows coverage probabilities of \\(\\{\\gamma_{kt}:k=1,\\ldots,K\\}\\) for \\(t=1\\) from fitting Model III. Coverage probabilities when \\(\\gamma_{k}=0\\) are poor, though upper bounds of credible intervals are consistently less than \\(10^{-6}\\). 
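As a reference point for the basis functions that appear throughout Appendices D and E, the Wendland basis matrix \(\mathbf{\Omega}\) of Section D.2 can be assembled from the data-driven knots in a few lines of R. The function below is a sketch under our own naming conventions (site coordinates, knot locations, and a common radius \(r\) are assumed to come from Algorithm 1); it is not the exact code used in the paper.

```
# Sketch: build the n_s x K basis matrix Omega from knot locations and radius r,
# using the Wendland form omega_k(s, r) = {1 - d(s, s_k)/r}_+^2 from Section D.2.
wendland_basis <- function(sites, knots, r) {
  # sites: n_s x 2 coordinate matrix; knots: K x 2 matrix; r: common radius
  n_s <- nrow(sites); K <- nrow(knots)
  Omega <- matrix(0, n_s, K)
  for (k in seq_len(K)) {
    d <- sqrt(rowSums(sweep(sites, 2, knots[k, ])^2))  # Euclidean distances to knot k
    Omega[, k] <- pmax(1 - d / r, 0)^2
  }
  Omega
}

# The radius can then be enlarged until every site is covered by at least one basis:
# while (any(rowSums(wendland_basis(sites, knots, r)) == 0)) r <- r * 1.1
```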
## Appendix F Red Sea Dataset ### Removing seasonality For any site \\(\\mathbf{s}_{j}\\), we combine daily observations across all days as a vector and denote it by \\(\\mathbf{v}_{j}=(v_{j1},\\cdots,v_{jN})^{T}\\) where \\(N=11,315\\) is the number of days between \\(1985/01/01\\) and \\(2015/12/31\\). Following Huser (2021), we remove the seasonality from the Red Sea SST daily records at a fixed \\(\\mathbf{s}_{j}\\) via subtracting the overall trend averaged within its neighborhood of radius \\(r=30\\) km, and then we repeat the same procedure for every other location. More specifically, denote the index set of all location with the neighborhood of \\(\\mathbf{s}_{j}\\) by \\(\\mathcal{N}_{j}=\\{i:||\\mathbf{s}_{i}-\\mathbf{s}_{j}||<r,\\ i=1,\\ldots,n_{s}\\}\\). To get rid of the seasonality in \\(\\mathbf{v}_{j}\\), we first concatenate all records in the neighborhood \\(\\{\\mathbf{v}_{i}:i\\in\\mathcal{N}_{j}\\}\\) to get a flattened response vector \\(\\mathbf{V}_{j}\\); that is, \\(\\mathbf{V}_{j}=(\\mathbf{v}_{i_{1}}^{T},\\mathbf{v}_{i_{2}}^{T},\\cdots,\\mathbf{v}_{i_{|\\mathcal{ N}_{j}|}}^{T})^{T}\\) where \\(\\{i_{1},\\ldots,i_{|\\mathcal{N}_{j}|}\\}\\) include all elements of \\(\\mathcal{N}_{j}\\). Thus, the length of the vector \\(\\mathbf{V}_{j}\\) is \\(|\\mathcal{N}_{j}|\\times N\\). Second, we construct the matrix \\(\\mathbf{M}=(\\mathbf{1}_{N},\\mathbf{t},\\mathbf{B}_{N\\times 12})\\), where \\(\\mathbf{t}=(1,\\ldots,N)^{T}\\) is used to capture linear time trend and the columns of \\(\\mathbf{B}\\) are 12 cyclic cubic spline bases defined over the continuous interval \\([0,366]\\) evaluated at \\(1,\\ldots,N\\) modulo 365 or 366 (i.e., the day in the corresponding year). These basis functions use equidistant knots over of \\([0,366]\\) that help capturing the monthly-varying features. Then, we vertically stack the matrix \\(\\mathbf{M}\\) for \\(|\\mathcal{N}_{j}|\\) times to build the design matrix \\(\\mathbf{M}_{j}\\). Through simple linear regression of \\(\\mathbf{V}_{j}\\) on \\(\\mathbf{M}_{j}\\), we get the fitted values \\(\\hat{\\mathbf{V}}_{j}=(\\hat{\\mathbf{v}}_{i_{1}}^{T},\\hat{\\mathbf{v}}_{i_{2}}^{T},\\cdots, \\hat{\\mathbf{v}}_{i_{|\\mathcal{N}_{j}|}}^{T})^{T}\\). To model the residuals \\(\\mathbf{V}_{j}-\\hat{\\mathbf{V}}_{j}\\), we only use an intercept and a time trend which are the first two columns of \\(\\mathbf{M}_{j}\\) (denote as \\(\\mathbf{M}_{j}^{\\sigma}\\)). The model for the residuals is \\[\\mathbf{V}_{j}-\\hat{\\mathbf{V}}_{j} \\sim N(\\mathbf{0},\\text{diag}(\\mathbf{\\epsilon}_{j}^{2})),\\] \\[\\log\\mathbf{\\epsilon}_{j} =\\mathbf{M}_{j}^{\\sigma}\\times(\\beta_{1},\\beta_{2})^{T}.\\] Hence we can estimate parameters \\((\\beta_{1},\\beta_{2})^{T}\\) via optimizing the multivariate normal density function, i.e., \\[(\\hat{\\beta}_{1},\\hat{\\beta}_{2})^{T}=\\operatorname*{argmin}_{( \\beta_{1},\\beta_{2})^{T}}\\left\\{-\\frac{1}{2}\\log\\mathbf{1}^{T}\\mathbf{\\epsilon}_{j}^{2 }-\\frac{1}{2}(\\mathbf{V}_{j}-\\hat{\\mathbf{V}}_{j})^{T}\\text{diag}(\\mathbf{\\epsilon}_{j}^ {-2})(\\mathbf{V}_{j}-\\hat{\\mathbf{V}}_{j})\\right\\}.\\] Let \\(\\hat{\\mathbf{\\epsilon}}_{j}=\\exp\\{\\mathbf{M}_{j}^{\\sigma}\\times(\\hat{\\beta}_{1},\\hat{ \\beta}_{2})^{T}\\}\\equiv(\\hat{\\mathbf{e}}_{i_{1}}^{T},\\hat{\\mathbf{e}}_{i_{2}}^{T}, \\cdots,\\hat{\\mathbf{e}}_{i_{|\\mathcal{N}_{j}|}}^{T})^{T}\\). Note that in defining the neighborhood of site \\(\\mathbf{s}_{j}\\), we also include the \\(j\\)th site. 
By an abuse of notation, we denote the fitted values corresponding to the \(j\)th site by \(\hat{\mathbf{v}}_{j}\) and \(\hat{\mathbf{e}}_{j}\), which correspond to the mean trend and residual standard deviations at site \(\mathbf{s}_{j}\), respectively. Finally, the daily records at \(\mathbf{s}_{j}\) can be de-trended by calculating
\[\mathbf{v}_{j}^{*}=\frac{\mathbf{v}_{j}-\hat{\mathbf{v}}_{j}}{\hat{\mathbf{e}}_{j}},\] (F.1)
in which the subtraction and division are done on an elementwise basis. We repeat the procedure described above to remove the seasonal variability from all other locations.

Figure E.1: Simulated data sets (left column) and emulated fields (XVAE, middle column; hetGP, right column) from Models I, II, IV and V (top to bottom). In all cases, we use data-driven knots for emulation using XVAE. See Figure 2 for comparison for Model III.

Figure E.2: QQ-plots comparing simulated data sets and emulated fields from XVAE (left) and hetGP (right) based on Models I–V (top to bottom).

Figure E.3: The empirically-estimated tail dependence measure \(\chi_{h}(u)\) at \(h=0.5\) (left), 2 (middle), 5 (right) for Models I, II, IV and V (top to bottom), based on simulated (black) and XVAE emulated (red) data. See Figure 4 for \(\chi_{h}(u)\) estimates for Model III.

Figure E.4: Estimates of \(\text{ARE}_{\psi}(u)\), \(\psi=0.05\), for both simulations (black) and XVAE emulations (red) under Models I, II, IV and V (left to right). See the right panel of Figure 4 for \(\text{ARE}_{\psi}(u)\) estimates for Model III.

Figure E.5: Coverage probabilities for each of the parameters \(\gamma_{k}\), \(k=1,\ldots,K=25\), from emulating 100 simulated data sets of Model III, in which \(n_{s}=2,000\) and \(n_{t}=100\). The nominal levels of the credible intervals are 0.95 (red dashed line). Zero probabilities correspond to \(\gamma_{k}=0\), \(k=5,12,17\).

### Marginal distributions of the monthly maxima

After removing seasonality by normalization (see Eq. F.1), we extract monthly maxima from \(\mathbf{v}_{j}^{*}\) at site \(\mathbf{s}_{j}\) and denote them as \(\mathbf{m}_{j}=(m_{j1},\ldots,m_{jn_{t}})\), in which \(n_{t}=372\) is the number of months between 1985/01/01 and 2015/12/31 and \(j=1,\ldots,n_{s}\). Before applying our proposed model, we need to find a distribution that fits the monthly maxima well so that we can transform the data to the Pareto-like distribution shown in Eq. (10). Given prior experience in analyzing monthly maxima, we propose two candidate distributions: the generalized extreme value (GEV) distribution and the general non-central \(t\) distribution. To choose between them, we perform \(\chi^{2}\) goodness-of-fit tests because of their flexibility in the choice of the degrees of freedom as well as the size of the intervals. The \(\chi^{2}\) goodness-of-fit test at a site \(\mathbf{s}_{j}\in\mathcal{S}\) proceeds as follows. First, we calculate the equidistant cut points within the range of all monthly maxima at \(\mathbf{s}_{j}\) to get \(n_{I}\) intervals. Second, we count the number of monthly maxima falling within each interval and denote them by \(O_{i}\) (\(i=1,\ldots,n_{I}\)). Third, we fit the GEV and \(t\) distributions to the block maxima series at \(\mathbf{s}_{j}\) to get the parameter estimates.
Then the expected frequencies \\(E_{i}\\) (\\(i=1,\\ldots,n_{I}\\)) is calculated by multiplying the number of monthly maxima at each site (i.e., \\(n_{t}\\)) by the probability increment of the fitted GEV or \\(t\\) distribution in each interval (denoted by \\(p_{i}\\)). Treating the frequencies as a multinomial distribution with \\(n_{t}\\) trials and \\(n_{I}\\) categories, we can derive the generalized likelihood-ratio test statistic for the null hypothesis \\(H_{0}\\) that \\((p_{1},\\cdots,p_{n_{I}})^{T}\\) are the true event probabilities. Specifically, under the null hypothesis \\(H_{0}\\), Wilk's Theorem guarantees \\[\\sum_{i=1}^{n_{I}}O_{i}\\log(O_{i}/E_{i})\\overset{d}{\\rightarrow}\\chi_{\ u}^ {2}\\text{ as }n_{t}\\rightarrow\\infty,\\] in which \\(\ u=n_{I}-4\\) when \\(H_{0}\\) corresponds to the GEV model which has three parameters (i.e., location, scale, and shape) and \\(\ u=n_{I}-3\\) when \\(H_{0}\\) corresponds to the \\(t\\) model which has two parameters (i.e., non-centrality parameter and degrees of freedom). Since \\(n_{t}=372\\) in the Red Sea SST data, we can safely assume that the asymptotic distribution is a good approximation of the true distribution under \\(H_{0}\\), which is then used to calculate the \\(p\\)-value to evaluate the goodness-of-fit. We repeat the procedure and obtain a \\(p\\)-value for each location. Figure F.1 shows the spatial maps for \\(p\\)-values along with the binary maps signifying whether the null hypothesis is accepted or not with significance level 0.05. In Figure F.1(c), the goodness-of-fit testsresult in \\(p\\)-values greater than \\(0.05\\) at all locations, indicating GEV distribution is a good fit for all monthly maxima time series. For the shaded locations in Figure F.1(b) and F.1(d), the fitdistr(., \"t\") function from the MASS package in R failed to converge when optimizing joint \\(t\\) likelihood, and we were not able to obtain parameter estimates of the \\(t\\) distribution at these locations which were needed for the subsequent \\(\\chi^{2}\\) tests. For the locations that have valid fitted \\(t\\) distributions in Figure F.1(b), the \\(p\\) values are mostly less than those in Figure F.1(a). This indicates that the GEV distribution, the asymptotic distribution for univariate block maxima, is a better choice to describe the marginal distribution of the monthly maxima, as expected. ### Marginal transformation Before applying our model to monthly maxima, certain transformations need to be done to match our marginals in Section 1.3.1. When performing the goodness-of-fit tests, we already obtained the sitewise GEV parameters: \\(\\mu_{j}\\), \\(\\sigma_{j}\\), and \\(\\xi_{j}\\) for \\(j=1,\\ldots,n_{s}\\). Since monotonic transformations of the marginal distributions do not alter the dependence structure of the data input, we define \\(x_{jt}=F_{jt}^{-1}\\{F_{\\text{GEV}}(m_{jt};\\mu_{j},\\sigma_{j},\\xi_{j})\\}\\), \\(t=1,\\ldots,n_{t},\\ j=1,\\ldots,n_{s}\\), in which \\(F_{jt}\\) is the marginal distribution function of \\(X_{t}(\\mathbf{s}_{j})\\) displayed in Eq. (10), the function \\(F_{\\text{GEV}}(\\cdot;\\mu_{j},\\sigma_{j},\\xi_{j})\\) is the distribution function of \\(\\text{GEV}(\\mu_{j},\\sigma_{j},\\xi_{j})\\), and \\(m_{jt}\\) is the monthly maximum at site \\(\\mathbf{s}_{j}\\) from \\(t^{\\text{th}}\\) month. Further, we have \\(\\mathbf{x}_{t}=(x_{1t},x_{2t},\\cdots,x_{n_{s}t})^{T}\\), \\(t=1,\\ldots,n_{t}\\), which will be treated as the response in Algorithm 2. 
It should be noted that \\(F_{jt}\\) is defined with the parameters \\(\\alpha_{t}\\), \\(\\mathbf{\\gamma}_{t}^{T}\\) and \\(\\mathbf{\\Omega}\\). Recall that the matrix \\(\\mathbf{\\Omega}\\), defined in Eq. (D.3), contains the basis function evaluations at all locations. After updating these parameters in each iteration of the stochastic gradient descent algorithm, we need to update the values of \\(\\{x_{jt}:t=1,\\ldots,n_{t},\\ j=1,\\ldots,n_{s}\\}\\) before continuing the next iteration. Figure F.1: In the left two panels, we show heatmaps of \\(p\\)-values from \\(\\chi^{2}\\) goodness-of-fit tests under the GEV model in (a) and the \\(t\\) model in (b). In the right two panels, we show binary \\(p\\)-values maps from \\(\\chi^{2}\\) goodness-of-fit tests under the GEV model in (c) and the \\(t\\) model in (d). ### Empirical \\(\\chi_{h}(u)\\) estimates Figure F.2: Empirically-estimated \\(\\chi_{h}(u)\\) for \\(h=0.5,2,5\\) (\\(\\approx 50\\)km, \\(200\\)km, \\(500\\)km) for the Red Sea SST monthly maxima (black) and the XVAE emulations (red).
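The estimates shown in Figure F.2 follow the rank-based construction of Section C.1: pairs of sites whose separation is close to \(h\) are pooled under a working assumption of stationarity, and \(\chi_{h}(u)\) is estimated by the proportion of joint exceedances among the replicates. The R sketch below illustrates this estimator; the pairing tolerance and variable names are our own choices rather than the exact code behind the figure.

```
# Sketch: empirical chi_h(u) from an n_s x n_t data matrix X and site coordinates.
empirical_chi_h <- function(X, sites, h, u, tol = 0.1) {
  U <- apply(X, 1, rank) / ncol(X)      # n_t x n_s matrix of ranks scaled to (0, 1]
  D <- as.matrix(dist(sites))           # pairwise distances between sites
  pairs <- which(abs(D - h) < tol & upper.tri(D), arr.ind = TRUE)
  num <- 0; den <- 0
  for (p in seq_len(nrow(pairs))) {
    i <- pairs[p, 1]; j <- pairs[p, 2]
    num <- num + sum(U[, i] > u & U[, j] > u)  # joint exceedances of level u
    den <- den + sum(U[, j] > u)               # conditioning exceedances
  }
  num / den                             # estimate of Pr(U_i > u | U_j > u)
}
```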
Many real-world processes have complex tail dependence structures that cannot be characterized using classical Gaussian processes. More flexible spatial extremes models exhibit appealing extremal dependence properties but are often exceedingly prohibitive to fit and simulate from in high dimensions. In this paper, we aim to push the boundaries on computation and modeling of high-dimensional spatial extremes via integrating a new spatial extremes model that has flexible and non-stationary dependence properties in the encoding-decoding structure of a variational autoencoder called the XVAE. The XVAE can emulate spatial observations and produce outputs that have the same statistical properties as the inputs, especially in the tail. Our approach also provides a novel way of making fast inference with complex extreme-value processes. Through extensive simulation studies, we show that our XVAE is substantially more time-efficient than traditional Bayesian inference while outperforming many spatial extremes models with a stationary dependence structure. Lastly, we analyze a high-resolution satellite-derived dataset of sea surface temperature in the Red Sea, which includes 30 years of daily measurements at 16703 grid cells. We demonstrate how to use XVAE to identify regions susceptible to marine heatwaves under climate change and examine the spatial and temporal variability of the extremal dependence structure.
**Transiting Exoplanet Atmospheres in the Era of JWST** Eliza M.-R. Kempton\\({}^{1}\\) and Heather A. Knutson\\({}^{2}\\) \\({}^{1}\\)_Department of Astronomy, University of Maryland, College Park, MD 20742, USA_ \\({}^{2}\\)_Division of Geological and Planetary Sciences, California Institute of Technology, Pasadena, CA 91125, USA_ ## 1 Introduction ### A Historical Perspective As soon as the first exoplanets were discovered in the 1990s, the quest to characterize these unique objects in more detail began in earnest. Images from science fiction movies come to mind when we picture alien planets, but these discoveries provided us with an exciting new opportunity to actually _measure_ the atmospheric properties of extrasolar worlds. The first transiting exoplanet was discovered in 2000 (Charbonneau et al., 2000). These are planets with orbits that track directly in front of their host stars as viewed by an Earthbound observer, producing a small dip in the amount of light received, known as a transit. We focus on transiting planets in this review article because they present special opportunities for measuring atmospheric properties. The clever techniques for transiting exoplanet atmospheric characterization that have been developed by the astronomical community (described in detail in Section 1.3) are all premised on using the known orbital geometry of the system to extract the planetary signal from the combined light of the planet and host star. These techniques have been applied with great success over the last two decades to measure a host of atmospheric properties. The first detection of an exoplanet atmosphere occurred in 2002 (Charbonneau et al., 2002). The planet, HD 209458b, was the only known transiting planet at the time (although that was not the case for long), and it belongs to a broader class of exoplanets referred to as 'hot Jupiters'. Such planets are aptly named for their large sizes and small orbital separations -- HD 209458b orbits its Sun-like host star every 3.5 days at an orbital distance of 0.05 AU, and it has a radius of 1.35 Jupiter radii (\\(R_{J}\\)). By measuring a small amount of excess absorption during transit at the wavelength of the sodium resonance doublet (589.3 nm) with the STIS instrument on the Hubble Space Telescope (HST), Charbonneau et al. (2002) inferred the presence of gaseous sodium in the planet's atmosphere. Although subsequent studies of HD 209458b's sodium absorption signal with ground-based high resolution spectrographs have revealed that this measurement may be biased by deformations in the stellar line shape due to the planetary transit (Casasayas-Barris et al., 2020, 2021), this firstatmospheric measurement unquestionably marked the birth of a new field of exoplanet atmospheric characterization studies. Not long after, came the first measurements of exoplanetary thermal emission via _secondary eclipse_(Deming et al., 2005; Charbonneau et al., 2005), which occurs when a planet passes _behind_ its host star. Then, in 2007, the first phase curve observations of thermal emission versus orbital phase were obtained for the hot Jupiter HD 189733b (Knutson et al., 2007). The thermal emission measurements were all made with NASA's Spitzer Space Telescope, which became a workhorse for infrared (IR) characterization of exoplanet atmospheres before it was decommissioned in late 2020. 
Other important firsts include the measurement of escaping gas from an exoplanet atmosphere (VidalMadjar et al., 2003), and the first robust detections of molecules and (more tentatively) high-altitude winds using a novel cross-correlation spectroscopy technique with high-resolution spectrographs on ground-based telescopes (Snellen et al., 2010). More details on all of these observational techniques can be found in Section 1.3. All of the aforementioned observations were of hot Jupiter targets. The first atmospheric spectrum of an object smaller than Neptune was obtained in 2010 for the planet GJ 1214b (Bean et al., 2010), ultimately indicating the presence of a thick layer of clouds or haze (Kreidberg et al., 2014). In 2018 the first thermal emission measurement was made for a rocky, terrestrial exoplanet, LHS 3844b, disappointingly indicating the lack of any atmosphere at all (Kreidberg et al., 2019). Today, exoplanet atmospheric characterization has become its own _bona fide_ sub-field of astronomy. Detections of several dozen atomic and molecular species along with clouds and hazes have been claimed in the literature for over 100 individual exoplanets1(see e.g. Burrows, 2014; Madhusudhan et al., 2016; Deming & Seager, 2017; Madhusudhan, 2019). We note that some of these detections have been made at high statistical significance, whereas others are more tentative or ambiguous, so we encourage the casual reader of the exoplanet atmospheres literature to do so with a critical eye. All of these atmospheric characterization studies have been helped along by the discovery of thousands of transiting exoplanets 2 with ground-based (e.g. HAT, WASP, MEarth, Speculoos) and space-based (e.g. CoRoT, Kepler, TESS) surveys. On the population level, tantalizing hints of planetary diversity have been uncovered, and well-founded attempts are being made to tie statistical trends in atmospheric properties to underlying theories of planet formation and evolution (e.g. Sing et al., 2016; Tsiaras et al., 2018; Welbanks et al., 2019; Mansfield et al., 2021; Goyal et al., 2021; Changeat et al., 2022; Deming et al., 2023; Brande et al., 2023; Gandhi et al., 2023). Footnote 2: A database that maintains a list of all known exoplanets: [https://exoplanetarchive.ipac.caltech.edu/](https://exoplanetarchive.ipac.caltech.edu/). In late 2021, the James Webb Space Telescope (JWST) launched successfully, and scientific operations began in the summer of 2022. The telescope's large aperture and IR observing capabilities have opened the door to studies of smaller and colder planets than had previously been possible (e.g. Kempton et al., 2023; Greene et al., 2023; Zieba et al., 2023). The high signal-to-noise (S/N) spectra delivered by JWST for larger and hotter planets additionally enable analyses of processes that had remained hidden in earlier datasets such as inhomogeneous cloud formation (Feinstein et al., 2023) and photochemistry (Tsai et al., 2023). This new era of exoplanet characterization with JWST is accompanied by a windfall of ground-based exoplanet data using recently commissioned highresolution spectrographs (e.g. CARMENES, ESPRESSO, CRIRES+, IGRINS, GIANO, MAROONX, etc.) that are providing detailed compositional measurements for hot and ultra-hot giant planets (e.g. Birkby, 2018; Giacobbe et al., 2021; Pelletier et al., 2023; Gandhi et al., 2023). 
The first JWST exoplanet observations have already been transformative, as have studies that have detected a slew of atomic, ionic, and molecular species in hot Jupiter atmospheres from the ground. These recent results will be summarized in the following sections along with the pre-existing context from two decades of exoplanet atmospheric characterization studies. ### Exoplanet Demographics We currently know of more than 10,000 extrasolar planets and planet candidates3, most of which orbit stars with masses ranging from 0.5 - 1.5\\(\\times\\) the mass of the Sun (for astronomers, this corresponds to F through early M spectral types). If we exclude planets detected using the microlensing technique (a small fraction of this total), nearly all of these planets orbit stars in our local neighborhood4 of the Milky Way galaxy. This means that when we discuss the properties of extrasolar planets in subsequent sections, we are implicitly focusing on planets orbiting relatively nearby and (unless specified otherwise) Sun-like stars. In this section, we provide a brief overview of this exoplanet population for the non-expert reader. We begin by briefly summarizing the two detection techniques most commonly used to find exoplanets and their corresponding sensitivities to different kinds of planets. For readers interested in learning more about complementary microlensing and direct imaging techniques, we recommend reviews by Gaudi (2022) and Currie et al. (2023). For a more comprehensive overview of exoplanet demographics, we recommend the review by Gaudi et al. (2021). Footnote 3: For the latest numbers see Footnote 2 above. Most unconfirmed candidates were detected using transit surveys and it is likely that this sample contains some false positives, which are typically multiple star systems where one stellar component eclipses another. Transiting planet candidates can be validated statistically using the transit light curve shapes and other complementary information, such as adaptive optics imaging to resolve nearby stars (e.g., Morton et al. 2016; Giacalone et al. 2021), or they can be confirmed directly by radial velocity measurements of the planet masses. Footnote 4: The distance from Earth to the center of the Milky Way is approximately 8.2 kpc (Bland-Hawthorn & Gerhard 2016), while most known exoplanets are located within a few hundred pc of the Earth’s location (see https://exoplanetarchive. ipac.caltech.edu/). #### 1.2.1 Detection Techniques The first planet orbiting a Sun-like star was detected using the radial velocity technique (Mayor & Queloz 1995). This technique relies on the fact that a star and planet will orbit around their mutual center of mass. This causes the star's spectrum to be Doppler shifted as it moves towards and then away from the observer. The semi-amplitude of this Doppler shift is largest for massive planets with short orbital periods (e.g. Fischer et al. 2014); smaller planets on more distant orbits have smallerradial velocity semi-amplitudes and are correspondingly harder to detect5. By measuring a planet's radial velocity semi-amplitude, we can place constraints on its mass (technically \\(M_{p}\\)sin(\\(i\\)), where \\(M_{p}\\) is the planet mass and \\(i\\) is the orbital inclination), orbital period, and orbital eccentricity. We can then convert this orbital period to an orbital semi-major axis using Kepler's third law. 
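To make these scalings concrete, the short sketch below (our own illustration, not part of the original article) evaluates the standard Keplerian expressions for the radial velocity semi-amplitude and the period to semi-major-axis conversion; all masses, periods, and the circular-orbit assumption are illustrative choices, and the outputs are consistent with the Earth (\\(\\sim\\)9 cm s\\({}^{-1}\\)) and hot Jupiter (\\(\\sim\\)0.05 AU separation, \\(\\sim\\)10\\({}^{3}\\) times larger signal) values quoted in the text.

```python
# Minimal sketch (illustrative, not from the review): radial velocity
# semi-amplitude and Kepler's third law for two example planets.
import math

G       = 6.674e-11   # gravitational constant [m^3 kg^-1 s^-2]
M_SUN   = 1.989e30    # solar mass [kg]
M_JUP   = 1.898e27    # Jupiter mass [kg]
M_EARTH = 5.972e24    # Earth mass [kg]
AU      = 1.496e11    # astronomical unit [m]
DAY     = 86400.0     # seconds per day

def rv_semi_amplitude(m_planet, m_star, period, inc=math.pi / 2, ecc=0.0):
    """Stellar reflex semi-amplitude K [m/s] for a Keplerian orbit of period [s]."""
    return ((2.0 * math.pi * G / period) ** (1.0 / 3.0)
            * m_planet * math.sin(inc)
            / (m_star + m_planet) ** (2.0 / 3.0)
            / math.sqrt(1.0 - ecc ** 2))

def semi_major_axis(m_star, period):
    """Kepler's third law: semi-major axis [m], neglecting the planet's mass."""
    return (G * m_star * period ** 2 / (4.0 * math.pi ** 2)) ** (1.0 / 3.0)

# Earth around the Sun: ~0.09 m/s, i.e. the ~9 cm/s value quoted in the text.
print(rv_semi_amplitude(M_EARTH, M_SUN, 365.25 * DAY))
# A Jupiter-mass planet on a 3.5-day orbit: ~130 m/s, roughly 10^3 times larger.
print(rv_semi_amplitude(M_JUP, M_SUN, 3.5 * DAY))
# The same 3.5-day orbit corresponds to a separation of ~0.05 AU.
print(semi_major_axis(M_SUN, 3.5 * DAY) / AU)
```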
Footnote 5: The orbital motion of the Earth around the Sun produces a sinusoidal radial velocity signal with a semi-amplitude of 8.95 cm s\\({}^{-1}\\). We can use Equation 1 in Fischer et al. (2014) to calculate that a Jovian planet orbiting a Sun-like star with an orbital period of a few days would have a radial velocity semi-amplitude that is a factor of \\(\\sim\\)10\\({}^{3}\\) larger.

Over the past decade the radial velocity technique has been overtaken by the transit technique, which is responsible for identifying most of the exoplanets known today. This technique focuses on planetary systems where the planet passes in front of its host star as seen from the Earth. During a transit, the planet will block part of the star's light. The amount of light blocked tells us the radius of the planet relative to that of the star, and the intervals between transits tell us the planet's orbital period. If we assume that the planet orbits are randomly oriented, the probability of seeing a transit \\(P\\) is given by \\(P=R_{*}/a\\), where \\(R_{*}\\) is the stellar radius and \\(a\\) is the planet's orbital semi-major axis (Winn, 2010). This means that transit surveys are biased towards close-in planets; this bias is even stronger than that of radial velocity surveys. Transit surveys also detect large planets more easily than small planets, as they block more of the star's light. It is very challenging to detect Earth analogues orbiting Sun-like stars in current radial velocity and transit surveys. Fortunately, the size of both the transit and radial velocity signals increases with decreasing stellar mass. As a result, it is significantly easier to detect small planets orbiting small stars ('M dwarfs'). Small stars are also significantly less luminous than the Sun, and the orbits corresponding to Earth-like insolations are much closer in. This means that most small (approximately 1-2 R\\({}_{\\oplus}\\)) transiting planets that are amenable to atmospheric characterization orbit low-mass stars. This has important implications for our understanding of the population-level properties of small rocky exoplanets.

#### 1.2.2 Planet Types and Order-of-Magnitude Occurrence Rates

As noted in Section 1.1, the close-in gas giant exoplanets known as 'hot Jupiters' were the first type of exoplanet detected in orbit around nearby Sun-like stars. These planets are relatively rare, with an order-of-magnitude occurrence rate of approximately 1% for Sun-like stars (e.g., Petigura et al., 2018; Dattilo et al., 2023). Gas giant planets at intermediate orbital distances (orbital periods of \\(\\sim\\) 10\\(-\\)100 days) are often referred to as 'warm Jupiters', and have a moderately enhanced occurrence rate relative to hot Jupiters (e.g., Fernandes et al., 2019; Fulton et al., 2021). Gas giant planets at larger separations (orbital periods greater than several hundred days) are typically referred to as 'cold Jupiters'. The most precise estimates of the occurrence rates of cold Jupiters currently come from radial velocity surveys, as there are very few transiting gas giant planets at these separations (e.g. Foreman-Mackey et al., 2016). These surveys indicate that the occurrence rate of gas giant planets rises dramatically as we move farther away from the star (e.g., Fernandes et al., 2019; Fulton et al., 2021), with 14 \\(\\pm\\) 2% of Sun-like stars hosting a gas giant planet between 2 - 8 AU (Fulton et al., 2021).

Figure 1: Distribution of confirmed exoplanets in mass-period space.
Planets with spectroscopic measurements that constrain their atmospheric properties are shown as dark points, those without are shown as light points. This review article focuses on atmospheric characterization of _transiting_ exoplanets, i.e. the dark purple pentagon symbols. Solar system planets are shown as yellow circles for context. Both the radial velocity and transit techniques are most sensitive to detecting massive planets on close-in orbits, while the direct imaging technique is most sensitive to young, self-luminous planets on relatively wide orbits. Figure adapted from Currie et al. (2023). Current radial velocity surveys of bright nearby stars have baselines as long as \\(\\sim\\) 30 years; this means that our knowledge of the occurrence rates of gas giant planets in these data sets is limited to planets with orbital semi-major axes comparable to or less than that of Saturn in our own solar system. Exoplanets smaller than Neptune (typically defined as \\(<\\) 4 R\\({}_{\\oplus}\\) or \\(\\lesssim\\) 10 M\\({}_{\\oplus}\\)) are often found on close-in orbits around Sun-like stars. Such planets have an overall much higher occurrence than the gas giant planets: \\(\\sim\\) 50% for orbital periods of less than 100 days (just outside the orbit of Mercury in the solar system; e.g., Fulton & Petigura, 2018; Hsu et al., 2019). This population is observed to have a bimodal radius distribution, with peaks at 1.3 and 2.4 R\\({}_{\\oplus}\\)(e.g., Fulton et al., 2017; Van Eylen et al., 2018; Fulton & Petigura, 2018; Hardegree-Ullman et al., 2020; Petigura et al., 2022). The smaller planets (radii between 1.0-1.7 R\\({}_{\\oplus}\\)) have bulk densities consistent with Earth-like compositions (e.g. Lozovsky et al., 2018; Dai et al., 2019), and are therefore termed as'super-Earths'. The larger planets (radii between 1.7-3.5 R\\({}_{\\oplus}\\)) have lower bulk densities, consistent with the presence of modest (a few percent of the total planet mass) hydrogen- and helium-rich gas envelopes (e.g., Lozovsky et al., 2018; Lee, 2019; Neil et al., 2022). These planets are therefore termed as'sub-Neptunes', although some may also have water-rich envelopes (see discussion below). The location of the bimodal radius 'gap' moves towards smaller radii at larger orbital separations (Fulton et al., 2017; Van Eylen et al., 2018; Fulton & Petigura, 2018; Hardegree-Ullman et al., 2020; Petigura et al., 2022). This suggests that the gap was carved out by either photoevaporative (e.g., Owen & Wu, 2017) or core-powered (e.g., Ginzburg et al., 2018; Gupta & Schlichting, 2019) mass loss6, although Lee et al. (2022) proposed that the division between the two populations might instead be largely primordial. Footnote 6: In photoevaporative mass loss models the atmospheric outflow is driven by heating from high-energy (extreme ultraviolet and X-ray) stellar irradiation. In core-powered mass loss models, the heat source driving the outflow is cooling of the planetary core. However, the predicted mass loss rates in core-powered mass loss models still depend on the total irradiation received by the planet, which determines the temperature of the atmosphere and the corresponding sound speed. The order-of-magnitude occurrence rates stated above apply to planets orbiting Sun-like stars (meaning F/G/K main-sequence stars, for astronomers). These values change with decreasing stellar mass; gas giant planets are a factor of 2-3 less common around low-mass stars (e.g., Montet et al., 2014;Bryant et al. 
2023), while small planets on close-in orbits appear to be a factor of a few more common (e.g., Dressing & Charbonneau 2015; Mulders et al. 2015; Hardegree-Ullman et al. 2019; Hsu et al. 2020).

#### 1.2.3 Composition Constraints from Bulk Density Measurements

For planets with measured masses and radii, we can obtain a constraint on their bulk densities and corresponding bulk compositions. The bulk densities of gas giant exoplanets are relatively low, indicating that they possess thick, hydrogen-dominated gas envelopes (e.g. Thorngren et al. 2016). These bulk densities can be used to place an upper limit on the abundance of hydrogen and helium relative to heavier elements in the planet's atmosphere (often referred to as the planet's 'atmospheric metallicity' by astronomers; Thorngren et al. 2019).

Figure 2: Measured masses and radii for small planets orbiting small (M dwarf) stars from Luque & Pallé (2022). This sample only includes the subset of planets with small fractional uncertainties in mass and radius. Theoretical mass-radius relations for planets with an Earth-like bulk composition (dashed brown line), half water and half Earth-like (solid blue line), and Earth-like with a few percent hydrogen-rich atmosphere (dashed orange line) are overplotted for comparison. The light orange shading denotes the range of possible hydrogen-rich atmospheres that can be retained by these planets under a range of possible starting assumptions. _Figure adapted from Rogers et al. (2023)_.

Exoplanets smaller than Neptune exhibit widely varying bulk densities, which reflect their varying bulk compositions. Rocky super-Earths have relatively high bulk densities, while sub-Neptunes with puffy hydrogen-rich atmospheres have much lower bulk densities (e.g. Lozovsky et al. 2018; Neil et al. 2022). Small planets with intermediate densities are more ambiguous, as their masses and radii can be equally well fit with either water-rich or hydrogen-rich envelopes (e.g., Mousis et al. 2020; Turbet et al. 2020; Aguichine et al. 2021, see Fig. 2). For lower-mass stars, which are less luminous, the water ice line is located much closer to the star. This means that even relatively close-in planets forming around low-mass stars may still be able to accrete significant quantities of ice-rich solids (e.g. Kimura & Ikoma 2022). Although some of these 'water worlds' may subsequently accrete hydrogen-rich envelopes, such envelopes are more difficult to retain when they orbit low-mass stars. These stars are more magnetically active than their solar counterparts, which means that they emit more high-energy photons, and also have more frequent flares and coronal mass ejections, all of which can drive atmospheric outflows (e.g., Harbach et al. 2021; Atri & Mogan 2021). It is therefore thought that planets with water-dominated envelopes may be more common around low-mass stars. Luque & Pallé (2022) plotted all of the currently known planets orbiting low-mass stars with precisely measured masses and radii in mass-radius space and identified a sub-population of low-density planets whose densities appear to be well-matched by water-rich compositions (for alternative hydrogen-rich models, see Rogers et al. 2023). Upcoming observations of candidate water worlds using JWST will soon provide the first direct constraints on their atmospheric water content (see Section 2). These atmospheric characterization studies should provide a much clearer picture of the relative frequency of water worlds around low-mass stars.
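The mass-radius comparisons described in this section ultimately reduce to a bulk-density estimate. As a minimal illustration (the example planets below are hypothetical, and the comments follow the qualitative picture above rather than any specific interior model):

```python
# Minimal sketch (illustrative, not from the review): mean density from a
# measured mass and radius, the first-order quantity used to separate rocky
# super-Earths from volatile-rich sub-Neptunes.
import math

M_EARTH = 5.972e24   # kg
R_EARTH = 6.371e6    # m

def bulk_density(mass_mearth, radius_rearth):
    """Mean density in g/cm^3 for a planet given in Earth masses and radii."""
    mass = mass_mearth * M_EARTH
    radius = radius_rearth * R_EARTH
    return mass / (4.0 / 3.0 * math.pi * radius ** 3) / 1000.0  # kg/m^3 -> g/cm^3

print(bulk_density(1.0, 1.0))   # Earth itself: ~5.5 g/cm^3
print(bulk_density(5.0, 1.5))   # hypothetical 5 M_Earth, 1.5 R_Earth super-Earth: ~8 g/cm^3
print(bulk_density(5.0, 2.4))   # hypothetical 5 M_Earth, 2.4 R_Earth sub-Neptune: ~2 g/cm^3
```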
### Observational Techniques for Atmospheric Characterization There are multiple complementary techniques that can be used to detect and characterize the atmospheric compositions of transiting extrasolar planets, as detailed below. All of these techniques leverage knowledge of the transiting planet's orbit to disentangle the combined (unresolved) light from the planet and its much brighter host star. This is distinct from the approach used to characterize directly imaged planets and brown dwarfs, whose thermal emission can be spatially resolved from that of their host stars (for a recent review, see Currie et al. 2023). When a transiting planet passes in front of its host star, it will block more of the star's light and therefore appear larger in wavelengths at which the planet's atmosphere is strongly absorbing. Conversely, the planet will appear smaller and block less of the star's light in wavelengths at which its atmosphere is relatively transparent. This wavelength-dependent transit depth is called a 'transmission spectrum', and is the most widely used method for characterizing the atmospheric compositions of transiting extrasolar planets. We can calculate the relative size of this wavelength-dependent change in transit depth \\(\\delta D_{tr}\\) using the following expression: \\[\\delta D_{tr}\\simeq\\frac{2R_{P}\\delta R_{P}}{R_{*}^{2}} \\tag{1}\\] where \\(R_{P}\\) is the planet radius, \\(R_{*}\\) is the stellar radius, and \\(\\delta R_{P}\\) is the wavelength-dependent change in the planet radius. The approximation holds so long as \\(\\delta R_{P}\\ll R_{P}\\), which is true even for hot giants with very low densities. This change can be approximated as a multiple of the atmospheric scale height \\(H\\): \\[\\delta R_{P}\\simeq sH=s\\left(\\frac{kT_{eq}}{\\mu g}\\right) \\tag{2}\\] where \\(k\\) is the Boltzmann constant, \\(T_{eq}\\) is the predicted atmospheric equilibrium temperature, \\(\\mu\\) is the atmospheric mean molecular weight, and \\(g\\) is the planet's surface gravity. The scaling factor \\(s\\) typically ranges between 1 - 5, with lower values more representative of weak absorption and/or atmospheres with significant aerosol opacity, and higher values more representative of strong absorption in a cloud-free atmosphere (e.g., Seager & Sasselov 2000; Miller-Ricci et al. 2009; Benneke & Seager 2012, 2013). For atmospheres with very high aerosol opacity, this signature may be completely obscured (see SS3). If we assume that the star and planet both radiate as blackbodies, we can calculate the planet's predicted equilibrium temperature as: \\[T_{eq}=T_{*}\\sqrt{\\frac{R_{*}}{a}}\\left[\\frac{1}{4}(1-A_{B})\\right]^{1/4} \\tag{3}\\] where \\(T_{*}\\) is the effective temperature of the host star, \\(a\\) is the planet's semi-major axis, and \\(A_{B}\\) is its Bond albedo (defined as the fraction of incident radiation that is reflected back to space; Seager 2010). This equation assumes that the planet's atmosphere efficiently redistributes heat from the day side to the night side. In the limit of no heat redistribution (i.e., instantaneous radiative equilibrium at each longitude and latitude point), the effective hemisphere-integrated dayside equilibrium temperature can be calculated by replacing the factor of \\(\\frac{1}{4}\\) with a factor of \\(\\frac{2}{3}\\)(Hansen, 2008), and intermediate values are possible between these two extremes7. 
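For concreteness, the following sketch (our own illustration, not part of the original text) evaluates Equations 1-3 for a generic hot Jupiter around a Sun-like star; the stellar and planetary parameters, the scale factor \\(s\\), the Bond albedo, \\(\\mu\\), and \\(g\\) are all assumed values. Replacing the hydrogen-dominated \\(\\mu\\approx 2.3\\) with a water- or CO\\({}_{2}\\)-dominated value (\\(\\mu\\approx\\) 18-44) shrinks the predicted signal by roughly an order of magnitude, as described in the text.

```python
# Minimal sketch (illustrative, not from the review): evaluate Equations 1-3
# for a generic hot Jupiter. All input parameters are assumed values.
import math

K_B   = 1.381e-23    # Boltzmann constant [J/K]
M_H   = 1.673e-27    # mass of a hydrogen atom [kg]
R_SUN = 6.957e8      # m
R_JUP = 7.149e7      # m
AU    = 1.496e11     # m

def t_eq(t_star, r_star, a, bond_albedo=0.0, f=0.25):
    """Equation 3: equilibrium temperature [K]; f = 1/4 (full redistribution) to 2/3 (dayside only)."""
    return t_star * math.sqrt(r_star / a) * (f * (1.0 - bond_albedo)) ** 0.25

def scale_height(temp, mu, g):
    """Equation 2: H = k T / (mu m_H g), with mu the dimensionless mean molecular weight."""
    return K_B * temp / (mu * M_H * g)

def transit_depth_change(r_planet, r_star, h, s=3.0):
    """Equations 1-2: wavelength-dependent change in transit depth (dimensionless)."""
    return 2.0 * r_planet * (s * h) / r_star ** 2

# Hot Jupiter around a Sun-like star at 0.05 AU (roughly matching the numbers in the text):
T  = t_eq(5780.0, R_SUN, 0.05 * AU)                 # ~1250 K
H  = scale_height(T, mu=2.3, g=10.0)                # ~450 km for an H2-dominated atmosphere
dD = transit_depth_change(1.35 * R_JUP, R_SUN, H)   # ~5e-4, i.e. a few hundred ppm
print(T, H, dD)
```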
As demonstrated by these expressions, the overall strength of absorption during the transit can vary by more than an order of magnitude when comparing hydrogen-dominated (low mean molecular weight) atmospheres to those with higher mean molecular weights (e.g., water; carbon dioxide, methane). The presence of high-altitude aerosols from photochemical hazes or condensate clouds can also attenuate the amplitude of gas absorption features by scattering the stellar photons as they pass through the atmosphere. The geometry of transmission spectroscopy means that we are primarily sensitive to the properties of the planet's atmosphere near the day-night terminator. This is particularly relevant when determining cloud Figure 3: Schematic diagram illustrating three techniques that can be used to characterize the atmospheric properties of transiting exoplanets. _Adapted from a figure originally created by Sara Seager (private commun.)_. properties, which can vary significantly with longitude (see Section 6). Even for clear atmospheres without significant cloud opacity, the relatively long path length of starlight passing through the planet's atmosphere means that this technique is primarily sensitive to atmospheric pressures between 0.001 - 0.1 bars for typical hot Jupiter atmospheres observed at near-infrared wavelengths (e.g., Fortney, 2005; Sing et al., 2016). If we wait approximately half an orbit we can also observe the planet passing behind its host star (the'secondary eclipse'). By measuring the relative decrease in light during this eclipse, we can determine the amount of light reflected (at optical wavelengths) or emitted (at IR wavelengths) by the planet. If we assume that the star and the planet both radiate as blackbodies and take the longwavelength (Rayleigh-Jeans) limit, we can write a simple expression for the depth of the secondary eclipse \\(D_{sec}\\): \\[D_{sec}\\simeq\\left(\\frac{R_{P}}{R_{*}}\\right)^{2}\\frac{T_{eq}}{T_{*}} \\tag{4}\\] The planet's emission spectrum also contains information about its atmospheric composition, as well as the average temperature as a function of pressure in its dayside atmosphere. For cloud-free hot Jupiter atmospheres observed at near-infrared wavelengths, the shorter path length of light emitted from the deeper layers of the atmosphere means that we can also potentially probe somewhat higher (a factor of a few) pressures as compared to transmission spectroscopy (e.g., Fortney, 2005; Showman et al., 2009). Close-in exoplanets are expected to be tidally locked, and as a result can exhibit large daynight temperature gradients. This means that the temperature, chemistry, and cloud properties on the daysides of these planets can differ from those measured at the terminator via transmission spectroscopy. We can obtain a global view of these atmospheres by measuring changes in the planet's brightness as a function of orbital phase (the planet's 'phase curve'). By measuring the planet's phase curve at IR wavelengths where its spectrum is dominated by thermal emission, we can map its emission spectrum as a function of longitude (Cowan & Agol, 2008, 2011; Rauscher et al., 2018; Morris et al., 2022). These phase curves provide invaluable information about the atmospheric circulation patterns of tidally locked exoplanets; see Section 6 for more details. We can also spatially resolve the dayside atmosphere using a second technique called 'eclipse mapping' (Williams et al., 2006; de Wit et al., 2012; Majeau et al., 2012). 
This technique utilizes the measured changes in brightness during secondary eclipse ingress and egress (defined as the periods when the planet is only partially occulted by the star; for definitions of these terms see Winn, 2010) to map the planet's dayside brightness as a function of longitude and latitude. This technique is complementary to phase curve observations, which can characterize the planet's night side but can only measure changes in the planet's atmospheric properties as a function of longitude. To date, most published observations of exoplanet atmospheres have been obtained at low spectral resolution using space telescopes (Spitzer, HST, and/or JWST). Because all of these techniques rely on measurements of very small changes in the star's brightness over multi-hour timescales, it is often difficult to achieve the required stability and precision using ground-based observatories. This is because the properties of Earth's atmosphere also vary on similar timescales. However, recent advances in instrumentation on ground-based telescopes have opened up new avenues for atmospheric characterization at higher spectral resolution (\\(R>\\) 20,000, where \\(R=\\lambda/\\delta\\lambda\\)). At these resolutions, spectral features from the star, planet, and Earth's atmosphere can all be readily differentiated from one another. Crucially, the planet's spectral features are Doppler shifted by its orbital motion, while those of the star and Earth's atmosphere remain approximately constant in wavelength over several-hour timescales. This means that we can use this wavelength-dependent shift to uniquely identify the absorption features from the planet's transmission or emission spectrum. Notably, this technique is not limited to transiting exoplanets and can also be used to detect spectral features in the emission spectra of non-transiting planets. For more details see the review by Birkby (2018).

### Common Model Frameworks for Interpreting Exoplanet Spectra

When fitting transmission and emission spectra, we must necessarily make a range of simplifying assumptions in order to build simple parametric models that can be used in atmospheric retrieval frameworks. Although we are only sensitive to the atmospheric properties in a narrow range of pressures (typically 0.001 - 1 bars for hot Jupiter atmospheres observed at infrared wavelengths at low to moderate spectral resolution), most retrievals typically assume that the inferred elemental abundances are representative of the bulk atmosphere (i.e., there is no net gradient in elemental abundances over the range of pressures, latitudes, or longitudes probed). Similarly, fits to transmission spectra often make the simplifying assumption that the atmosphere is isothermal, while fits to emission spectra typically utilize a simple parametric vertical temperature profile with up to six free parameters (e.g., Madhusudhan & Seager, 2009; Line et al., 2013). Many models assume that the atmospheric chemistry is in local thermal equilibrium, or retrieve for the abundances of individual molecules assuming a single fixed abundance for each molecule as a function of pressure. For an overview of the exoplanet retrieval codes commonly in use and the corresponding assumptions made by each, see MacDonald & Batalha (2023). It is worth noting that the high quality of recent JWST observations of hot Jupiters has forced modelers to revisit many of these assumptions, some of which have proven to be too simple for the sensitivity of these new data sets.
For more background on exoplanet atmosphere modeling, the reader is encouraged to refer to the following review articles: Marley & Robinson (2015), Madhusudhan (2019), and Fortney et al. (2021).

### High-level Scientific Questions

Our ability to characterize transiting exoplanet atmospheres is fundamentally limited by our great distance from these systems and the fact that the planet is viewed as an unresolved object, blended with the light from its host star. Despite immense improvements in remote sensing capabilities, it is safe to say that we will never in any of our lifetimes characterize an individual exoplanet atmosphere to the degree that we have for planets within our solar system. This is a crucial piece of context for the non-astronomer to understand when formulating a realistic vision for the types of questions that exoplanet studies can address. Yet exoplanets also present an immense opportunity -- that of studying a myriad of planetary systems at a _population_ level. Exoplanets also provide access to types of planetary environments that do not exist in our solar system (e.g. hot Jupiters, sub-Neptunes, super-Earths, and perhaps water-worlds). A simple summary is that exoplanets allow for coarse measurements for many objects, whereas solar system studies provide detailed data on the outcome of a single instance of planet formation. Leveraging both types of information together equips us with a more complete view of planetary systems and the processes that give rise to them. Given this context, the types of questions that exoplanet atmosphere studies aim to address are typically those that relate to bulk properties or large-scale atmospheric structure, or those that tie a collection of rough measurements to our understanding of the exoplanet population (or a subpopulation thereof). Below, we provide an illustrative list of major open scientific questions that can be targeted through exoplanet atmospheric studies. These questions span the planet size and temperature range represented by transiting exoplanetary systems. Meaningful movement toward answering any of these questions would represent a major advance for (exo)planetary science.

* Did close-in gas giant planets form _in situ_ or migrate in from farther out in the disk?
* What are the large-scale atmospheric dynamics for hot Jupiters, and how do they differ with respect to solar system giant planets and young, hot, directly-imaged planets on wide orbits?
* What are the aerosols in exoplanet atmospheres made of and how do they form?
* How much hydrogen and helium gas can small planets accrete, and which planets keep (or lose) their primordial hydrogen-rich atmospheres?
* Do water worlds exist, and if so, how common are they?
* How do interactions with magma oceans shape the observed atmospheric compositions of sub-Neptunes and terrestrial exoplanets?
* What kinds of outgassed, high mean molecular weight atmospheres do terrestrial planets have, and what does that mean for their potential habitability?
* Which terrestrial exoplanets lose their outgassed atmospheres? What determines their total atmospheric masses?

## 2 Atmospheric Composition

The atmosphere is the outermost layer of a planet and the _only_ component of an exoplanet that can readily have its composition directly measured using remote sensing techniques8. We therefore rely on observations of an exoplanet's atmosphere as a window into its history and the processes that shape its present-day state.
For example, atmospheric observations can be used to inform our understanding of unseen features and processes such as surface-atmosphere interactions or interior structure. High H\\({}_{2}\\)S or SO\\({}_{2}\\) concentrations in a terrestrial habitable zone planet could be indicative of surface volcanism (Kaltenegger and Sasselov, 2010); atmospheric O\\({}_{2}\\) and O\\({}_{3}\\) could signify the possible presence of surface life, especially when accompanied by disequilibrium biosignature pairs such as CH\\({}_{4}\\)(Lovelock, 1965; Domagal-Goldman et al., 2014); and a water world might be distinguished from a sub-Neptune with a dry rocky interior via an elevated abundance of water in its atmosphere (e.g. Rogers and Seager, 2010). Footnote 8: Spectroscopic characterization of rocky exoplanet surfaces might also be possible with JWST under ideal conditions (Hu et al., 2012; Whittaker et al., 2022). As with solar system planets, the present-day state of an exoplanet atmosphere is the outcome of its entire history of planet formation and evolution. By measuring an exoplanet's atmospheric composition, one can attempt to decode the processes that gave rise to that planet in the first place. On a single-planet basis, such an analysis is nearly impossible due to vast degeneracies in the range of histories that can all produce similar outcomes, in addition to fundamental uncertainties in the planet formation process and the evolution of protoplanetary disks. But on a population level, we can hope to link trends in atmospheric properties to simple theories for how planets form and evolve, and anchor those theories with measurements, analogously to how our understanding of the history of our solar system stems from observations of the many bodies orbiting the Sun. Several examples for how trends in exoplanet atmospheric observations might be tied back to planet formation and evolution theories are listed, below:* _Giant Planet Mass-Metallicity Relation:_ Solar system giant planets exhibit a tight anticorrelation between their mass and atmospheric metallicity9(Figure 4, left panel). A similar relation is predicted to be a general outcome of planet formation via core accretion, although there may also be considerable intrinsic scatter in the trend due to the stochastic nature of planet formation (Fortney et al., 2013; Venturini et al., 2016). Footnote 9: Metallicity here and throughout this review article is defined as \\((N_{\\chi}/N_{H})_{\\rm planet}/(N_{\\chi}/N_{H})_{\\rm Sun}\\), where \\(N_{\\chi}/N_{H}\\) is the ratio of the number of some metal species (X) relative to hydrogen. Species X is selected differently across the literature, depending on what is most readily observable, leading to an inherent inconsistency in how metallicity is measured in different studies. * _Carbon-to-Oxygen Ratios:_ The composition of a planet depends on its formation location relative to various snow lines in the protoplanetary disk. The abundant volatiles oxygen and carbon are expected to be especially critical to forming planets due to their roles in delivering icy materials. Measuring the C/O ratio in exoplanetary atmospheres is therefore useful for linking present-day envelope composition to the planet's birth location and the relative import of accreting solids vs. gas during envelope formation (Oberg et al., 2011; Madhusudhan et al., 2014). 
It has been difficult to measure C/O in solar system giant planets because they are all cold enough for oxygen to be sequestered out of the observable atmosphere via condensation processes (e.g. Helled & Lunine, 2014). Transiting exoplanets, which are typically highly irradiated, provide an excellent opportunity to directly measure atmospheric C/O without relying on model extrapolations (Madhusudhan, 2012). * _Other Elemental Abundance Ratios:_ As with C/O, measuring elemental abundance ratios of various volatile and/or refractory species (e.g. Si/O, Si/C, Fe/O, etc.) provides a tracer for formation location and conditions (Piso et al., 2016; Lothringer et al., 2021; Crossfield, 2023; Chachan et al., 2023). Relative abundance measurements can also be used to constrain physical and chemical processes such as condensation or transport (e.g. Gibson et al., 2022; Pelletier et al., 2023). * _Atmospheric Composition Straddling the Sub-Neptune to Super-Earth Radius 'Gap':_ A strong dip in exoplanetary occurrence for planets with radii \\(\\approx\\)1.6 \\(R_{\\Earth}\\) has been explained as being the dividing line between two populations of low-mass exoplanets: rocky super-Earths and gas-rich sub-Neptunes (Fulton et al. 2017, and see discussion in Section 1.2). Theories of photoevaporative (Owen & Wu 2017) and core-powered (Ginzburg et al. 2018; Gupta & Schlichting 2019) mass loss both posit that the sub-Neptunes are planets that have succeeded in retaining their primordial nebular gas atmospheres, while super-Earths have lost their hydrogen entirely and have secondary high mean molecular weight atmospheres. * _The Presence or Absence of Atmospheres on Terrestrial Exoplanets:_ In the solar system, a \"cosmic shoreline\" in escape velocity and insolation separates bodies with atmospheres from those without (Zahnle & Catling 2017, Figure 4, right panel). Identifying whether a similar dividing line exists for terrestrial exoplanets will help to constrain the processes by which (exo)planets retain or lose their atmospheres. In all of these cases, the trends being sought out are first-order to begin with, and the theories being tested are often highly simplified. As statistical trends in exoplanet atmospheres data are uncovered and as the data warrant it, it is only natural that these simpler ideas will give way to more complex ones, and progress will be made toward understanding the universality of the processes that shape planetary atmospheres throughout their lifetimes. An even more direct way to constrain planet formation and evolution via exoplanet studies would be to observe exoplanets of different ages. In fact, in recent years a considerable number of exoplanets orbiting young stars (i.e. with ages \\(\\lesssim\\) 100 Myr) have been discovered (e.g. David et al. 2016; Benatti et al. 2019; David et al. 2019; Plavchan et al. 2020). Atmospheric observations of younger planets could reveal atmospheric escape or degassing processes while they are still ongoing (Zhang et al. 2022b) and might even show us what true primordial atmospheres look like. Unfortunately, the practical challenges to characterizing atmospheres of young planets are considerable. Young stars tend to be quite active. The resulting stellar variability hinders our ability to detect the minute atmospheric signatures of exoplanets orbiting these stars (Cauley et al. 2018; Hirano et al. 2020; Palle et al. 2020a; Rackham et al. 2023). 
We are also fundamentally limited by the number of nearby young stars that are bright enough to present sufficient SNR for atmospheric characterization studies. Finally, measurements of atmospheric composition provide a direct indication of the chemical processes unfolding in a planet's atmosphere. For example, the measured abundances of molecules, atoms, and ions can be cross-checked against the predictions of thermochemical equilibrium for a given elemental mixture (e.g. Burrows, 2014; Lodders & Fegley, 2002; Schaefer & Fegley, 2010). Departures from equilibrium are then attributed to disequilibrium processes such as vertical or horizontal mixing, or photochemistry (Tsai et al., 2023). Furthermore, the detection of any aerosol species (see Section 3) can be related back to the chemical and physical conditions that gave rise to them in the first place. We therefore turn to spectroscopic measurements of exoplanet atmospheric composition as a powerful tool for probing the physics, chemistry, and history of exoplanetary environments.

### Water, Water Everywhere

The first molecule to be reliably detected in a large number of exoplanet atmospheres was H\\({}_{2}\\)O. Water has many vibration-rotation absorption bands across the near-to-mid IR, and oxygen and hydrogen are cosmically abundant, making this an ideal molecule to search for. Furthermore, water is stable in gas phase from \\(\\sim\\)370 - 2200 K -- at lower temperatures it condenses into clouds, droplets, or ice; and at higher temperatures it thermally dissociates. Fortunately, most transiting exoplanets have temperatures within the range in which gas-phase H\\({}_{2}\\)O is the expectation. The search for H\\({}_{2}\\)O in transiting exoplanet atmospheres from space was enabled by the installation of the Wide Field Camera 3 (WFC3) instrument on board HST during its 2009 servicing mission. WFC3 carries a grism centered on the strong 1.4 \\(\\mu\\)m water absorption band with sufficient spectral resolution to resolve the shape of the band, and a novel spatial scanning procedure was developed to spread the exoplanetary spectrum across many detector pixels so as to minimize concerns about detector systematics (McCullough & MacKenty, 2012). The first detection of the 1.4 \\(\\mu\\)m water feature in a giant planet atmosphere with the WFC3 spatial scanning mode was made by Deming et al. (2013), and many more soon followed (e.g. Wakeford et al., 2013; Kreidberg et al., 2014; Sing et al., 2016; Fu et al., 2017; Tsiaras et al., 2018; Changeat et al., 2022, etc.; Figure 5). The ease with which the 1.4 \\(\\mu\\)m water feature became detectable with HST turned this absorption band into a powerful diagnostic for the chemistry of exoplanet atmospheres. Compared to the baseline expectation of solar composition, a weaker than expected water feature in transmission can be attributed to a low water abundance, a high mean molecular weight atmosphere, or an obscuring cloud deck that mutes the underlying spectral features (Miller-Ricci et al., 2009; Benneke & Seager, 2013). In thermal emission, the strength of the water feature depends on the H\\({}_{2}\\)O abundance and on the temperature gradient in the planet's atmosphere. Subtle differences in the shape of a spectrum resulting from each of these various scenarios can potentially be disentangled with sufficiently high S/N and broad enough wavelength coverage (Benneke & Seager, 2012).

Figure 4: Examples of statistical comparative planetology approaches (Bean et al., 2017) to constrain planet formation and evolution processes via ensemble observations of exoplanet atmospheres. **Left:** Atmospheric metallicity vs. mass for solar system planets (black symbols) and for exoplanets that have detections of carbon- and/or oxygen-bearing species using JWST (red symbols). Overlaid are the predictions from population synthesis models from Fortney et al. (2013) showing a rise and then a plateau in metallicity as planetary mass decreases (gray dots). The solar system giant planets are observed to follow a very tight mass-metallicity correlation (dashed line), with the caveat that oxygen is undetected in the atmospheres of these planets, as it is sequestered in condensates below the photosphere. _Figure adapted from Mansfield et al. (2018)_. **Right:** The "cosmic shoreline" (Zahnle & Catling, 2017) is denoted (yellow diagonal band), which is an observed delineation in escape speed and insolation between solar system bodies that do and do not possess gaseous atmospheres (toward the upper left and toward the lower right of the plot, respectively). Transiting exoplanets that will be observed in Cycles 1 and 2 of JWST are over-plotted in this same parameter space. Symbol size denotes the expected S/N of a single transit or eclipse observation using the methods of Kempton et al. (2018). The letters b-h denote the planets in the TRAPPIST-1 system, which are all slated for JWST observations, and the terrestrial solar system planets are shown for reference. By identifying which terrestrial exoplanets possess atmospheres and whether they are bounded by the same "shoreline" as for the solar system, astronomers can constrain the processes by which planets lose or retain their atmospheres. _Figure courtesy of Jegug Ih_.

By combining WFC3 measurements and longer wavelength observations with Spitzer's IRAC instrument (Figure 5), one can furthermore obtain constraints on a planet's metallicity and C/O ratio, by assuming that the atmosphere resides in a state of thermochemical equilibrium (e.g. Wakeford et al., 2018; Zhang et al., 2020). However, without simultaneous detection of the major carbon- and oxygen-bearing species in an atmosphere, these two properties remain degenerate with one another. To quantify water abundances and their associated uncertainties, novel retrieval techniques have ultimately been brought to bear on WFC3 transmission and emission spectra (e.g. Kreidberg et al., 2014; Madhusudhan et al., 2014; Line et al., 2016). Ultra-hot Jupiters, which have equilibrium temperatures in excess of \\(\\sim\\)2000 K, have presented a particularly interesting case for interpreting H\\({}_{2}\\)O detections. Weak or absent H\\({}_{2}\\)O features in dayside thermal emission spectra were noted for multiple ultra-hot Jupiters, and several hypotheses were posed to explain these observations (Evans et al., 2017; Sheppard et al., 2017; Kreidberg et al., 2018). Initially, retrievals were run that indicated either very low metallicities or very high C/O ratios (Sheppard et al., 2017; Pinhas et al., 2019; Gandhi et al., 2020). The former reduces the abundances of all 'metal'-bearing species, and the latter reduces the H\\({}_{2}\\)O abundance by tying up nearly all atmospheric oxygen in the CO molecule. Either of these abundance patterns would be surprising though, especially since slightly cooler hot Jupiters are not observed to have similarly weak H\\({}_{2}\\)O features (Mansfield et al., 2021). A more natural explanation was posed by Arcangeli et al. (2018) and Parmentier et al.
(2018) who pointed out that thermal dissociation of H\\({}_{2}\\)O at temperatures in excess of \\(\\sim\\)2200 K coupled with the onset of continuum opacity from the hydrogen anion (H-) around the same temperature provided a high-quality fit to available data without resorting to elemental abundance patterns that differed dramatically from the planets' host stars. Still, the precision and wavelength coverage from HST and Spitzer alone were not sufficient to unambiguously resolve the question of why the ultra-hot planets have muted water features. On the other end of the planetary 'spectrum', sub-Neptunes and super-Earths have also been observed to have muted or absent water features in transmission. In this case, the interpretation is different because these smaller planets can have high mean molecular weight atmospheres, and they also tend to be colder planets, which makes them potentially amenable to aerosol formation. (Aerosols are not generally considered to be a major atmospheric constituent for ultra-hot planets because we do not know of any cloud or haze species that can form and persist at such high temperatures.)

Figure 5: Transmission spectra for various hot Jupiters plotted in units of the planet's atmospheric scale height. Data are shown from the HST STIS and WFC3 instruments, and Spitzer IRAC channel 1 and 2 (3.6 and 4.5 \\(\\mu\\)m, respectively), as indicated. The WFC3 data cover a strong water band at 1.4 \\(\\mu\\)m, and STIS covers absorption lines from Na and K. The two Spitzer IRAC photometric channels are sensitive to CH\\({}_{4}\\), CO, and CO\\({}_{2}\\), although the lack of spectroscopic information over this wavelength range makes it difficult to fully constrain the atmospheric carbon chemistry. Muted spectral features and strongly sloping optical and near-IR spectra for the planets plotted toward the bottom of the figure are attributed to aerosol obscuration. _Figure adapted from Sing et al. (2016)_.

The spectra of rocky super-Earths will be discussed in more detail in Section 2.5. A key science question that astronomers aimed to address with the initial atmospheric observations of sub-Neptunes was to break degeneracies between 'mini-Neptune' and water world scenarios by measuring the atmospheric composition and ascertaining whether it was hydrogen- or water-dominated (Miller-Ricci et al., 2009; Miller-Ricci & Fortney, 2010; Rogers & Seager, 2010). Unfortunately, degeneracies between aerosols and high mean molecular weight made such distinctions extremely challenging with available instruments prior to the launch of JWST (e.g. Bean et al., 2010; Berta et al., 2012; Knutson et al., 2014; Guo et al., 2020; Mikal-Evans et al., 2021, 2023a). Perhaps the most famous among sub-Neptunes is the planet GJ 1214b, which was observed to have a staggeringly flat transmission spectrum from 12 stacked transits with the HST+WFC3 instrument (Kreidberg et al., 2014). The spectrum is so featureless that the only viable interpretation is a very thick and high-altitude layer of clouds or haze (see Section 3), which obscures any direct indications of the atmosphere below. One challenge to interpreting any claimed detections of water in low-mass exoplanets is that many such planets that are amenable to atmospheric characterization necessarily orbit low-mass M-dwarf stars. Small host stars are required to produce large transit depths and therefore sufficiently high S/N transmission spectra. But low-mass stars also have water in their _own_ spectra due to their correspondingly low temperatures.
What's worse, the water is not expected to be uniformly distributed throughout the stellar atmosphere and instead to preferentially lie in cooler star-spot regions. The result is that H\\({}_{2}\\)O features can be spuriously imprinted on transmission spectra for planets orbiting M-dwarfs that do not originate in the planetary atmosphere but actually in the star itself (e.g. Deming & Sheppard, 2017; Rackham et al., 2018; Zhang et al., 2018; Lim et al., 2023). Techniques for mitigating stellar contamination in the transmission spectra of these systems is an ongoing area of research. Detection of H\\({}_{2}\\)O from the ground has also been enabled via high resolution spectroscopy. The first such measurement was made by Birkby et al. (2013) using the high-resolution CRIRESspectrograph on the Very Large Telescope (VLT) to capture water absorption lines in the dayside emission spectrum of the hot Jupiter HD189733b. As the observing techniques have matured and more near-IR high resolution spectrographs have come online, the rate of ground-based water detections has accelerated (see Figure 6). One particular advantage of high-resolution water detections over space-based measurements with HST+WFC3 is that the former are often simultaneously sensitive to oxygen- and carbon-bearing molecules, enabling direct constraints on the atmospheric C/O ratio (Pelletier et al., 2021; Line et al., 2021; Brogi et al., 2023). Such measurements have indicated C/O values for various hot Jupiters ranging from near-solar (the solar value is 0.55) to super-solar values near 1. With significant scatter in the results to-date, the implications for hot Jupiter formation are murky, but the picture should solidify in the coming years with many more direct C/O measurements enabled by JWST. Ground-based measurements of H\\({}_{2}\\)O have been attempted for sub-Neptunes and super-Earths as well, typically at lower spectral resolution, but to-date all have resulted in nondetections (e.g. Bean et al., 2010, 2011; Ca'ceres et al., 2014; Diamond-Lowe et al., 2018, 2020, 2020). ### Refractory Species in Hot Jupiter Atmospheres Hot and ultra-hot Jupiters have high enough temperatures that most refractory species are rendered in the gas phase, and some can even be ionized via thermal or non-thermal processes. This is advantageous for exoplanet studies because it means that many elements that would otherwise be sequestered deep within a colder giant planet like Jupiter or Saturn are accessible to direct detection. Measured abundance patterns can then be compared to theories of planet formation or used to identify various chemical processes, as discussed in Section 1.5. Another goal of refractory species detections in hot Jupiter atmospheres has been to identify the optical and UV absorbers that drive thermal inversions in these planets (see Section 5 for more details). TiO and VO were initially proposed as likely species to drive stratospheric inversions in hot Jupiters (Fortney et al., 2008). More recently Lothringer et al. (2018) pointed out that a whole host of atomic metals, metal hydrides, and oxides should be in the gas phase in ultra-hot planets and would serve as even stronger optical and UV opacity sources. Motivated by goals of measuring refractory abundances and identifying key optical absorbers, a large and increasing number of chemical species have been detected in hot Jupiter atmospheres in recent years. 
This has mostly been made possible by ground-based high-resolution spectrographs that observe at optical wavelengths, as well as the STIS instrument and WFC3/UVIS instrument mode aboard HST. Space-based observations with STIS and WFC3/UVIS jointly provide broad near-UV to optical wavelength coverage (\\(\\sim\\) 200-1000 nm) but only at relatively low spectral resolution, which presents a challenge for uniquely identify chemical species. For example, with low-resolution optical spectra, it has often been difficult to disentangle the causes of slopes in transmission spectra, which can be attributed to some combination of aerosol scattering (see Section 3), stellar activity, or optical absorbers (e.g. Pont et al., 2008; McCullough et al., 2014; Evans et al., 2018). At near-UV wavelengths certain ultrahot planets have been observed to have sharply increased transit depths, consistent with the presence of SiO, SH, Mg, and/or Fe, which would serve to drive thermal inversions or act as condensate cloud precursors (Evans et al., 2018; Fu et al., 2021; Lothringer et al., 2022). Individual strong lines due to atomic (e.g. Na, K; Sing et al., 2016) and ionic (e.g. Fe\\({}^{+}\\) and Mg\\({}^{+}\\); Sing et al., 2019) species have been easier to uniquely identify, albeit the line profiles are typically not fully resolved by HST, resulting in degenerate interpretations of abundances vs. broadening mechanisms. Sodium and potassium in particular have been identified in a large number of hot Jupiter spectra with STIS (see Figure 5). In some planets just one of these two species is detected, whereas others produce clear detections of both. Identifying abundance patterns in Na and K vs. fundamental parameters such as equilibrium temperature has so far been elusive. The optical opacity 'bumps' that have been observed with HST can be fully resolved via highresolution spectroscopy in order to uniquely identify the species present. To date, well over a dozen elements and 37 individual molecular, atomic, and ionic species have been identified in hot Jupiter atmospheres with high-resolution techniques, spanning a broad portion of the periodic table (e.g. Wyttenbach et al., 2015; Hoeijmakers et al., 2018; Ehrenreich et al., 2020; Tabernero et al., 2021;Kesseli et al. 2022; Langeveld et al. 2022; Pelletier et al. 2023; Flagg et al. 2023, Figure 6). Of these species, iron and sodium have so far proven the most readily detectable in a large number of hot and ultrahot atmospheres due to their especially strong and unique optical opacity patterns. Much of the focus of high-resolution studies initially was on the _detection_ of individual species. Papers reporting detection significances have recently been giving way to those that quantify relative and/or absolute abundances via high-resolution retrieval techniques (e.g. Gibson et al. 2020; Pelletier et al. 2021; Maguire et al. 2023; Kasper et al. 2023; Gandhi et al. 2023). Such studies have revealed a range of solar and non-solar abundance patterns. For instance, in a retrieval study of six high S/N ultrahot Jupiters, Gandhi et al. (2023) found iron abundances to be well-matched to the planets' host stars. However, other refractories such as Mg, Ni, and Cr presented more variable abundance patterns; and several species such as Na, Ti, and Ca were found to be uniformly under-abundant relative to stellar, implying some sort of depletion process such as condensation or ionization. 
In a detailed study of the ultrahot Jupiter WASP-76b, which measured abundances of 14 individual refractory species, Pelletier et al. (2023) similarly found abundances broadly consistent with solar (and stellar), with some notable exceptions. Elements with high condensation temperatures were found to be depleted, potentially implying condensation cold-trapping on the planet's night side, whereas Ni was over-abundant, perhaps indicating that WASP-76b accreted a differentiated planetary core during its late stages of formation. Studies such as these highlight the power of systematic investigations of gas-phase refractory elements in hot Jupiters to reveal the physics, chemistry, and history of these planets' atmospheres.

### The JWST Landscape

The first JWST spectrum of a transiting exoplanet was released on July 12, 2022 as part of a handful of 'early release observations' (EROs) meant to demonstrate to the public the power of the newly commissioned space telescope10 (Pontoppidan et al. 2022). The \\(\\sim\\)1300 K hot Jupiter WASP-96b was targeted with the NIRISS instrument (Figure 7). The resulting spectrum spanning 0.6 - 2.8 \\(\\mu\\)m had exactly the intended effect. It revealed a full rainbow of water features along with evidence for obscuring aerosols, and beyond that it gave the astronomical community a small taste of what was to come from exoplanet studies in the JWST era. Just a month and a half later, the first peer-reviewed scientific exoplanet result from JWST revealed the striking first-time discovery of CO\\({}_{2}\\) in an exoplanet atmosphere (JWST Transiting Exoplanet Community Early Release Science Team et al., 2023, Figure 7). Carbon dioxide, which had previously been out of reach for spectroscopic studies due to the wavelength coverage of available instruments, was detected at a staggering significance of 26\\(\\sigma\\). Chemically, the CO\\({}_{2}\\) molecule is especially interesting because it serves as a metallicity indicator in hot hydrogen-rich atmospheres (Fortney et al., 2010). The strong CO\\({}_{2}\\) absorption feature identified in the hot Jupiter WASP-39 b indicates that the planet has \\(\\sim\\)10\\(\\times\\) enhanced metallicity relative to its host star. The planet's high metallicity and low mass (relative to Jupiter) intriguingly place it right along the solar system giant planet mass-metallicity relation (Constantinou et al. 2023, Figure 4).

Figure 6: Current state of detections of ions, atoms, and molecules with high-resolution spectroscopy, as of summer 2023. The significance of each claimed detection is indicated by the symbol color. The embedded histogram shows the number of high-resolution spectroscopy atmospheric characterization papers published per year, revealing a steep acceleration. The uptick corresponds to multiple new instruments coming online as well as the maturation of the observing technique. _Figure courtesy of Arjun Savel._

Figure 7: A selection of transmission spectra analyzed to-date from JWST. Wavelength coverage of the various instrument modes used for transmission spectroscopy is indicated above. Atomic and molecular opacity sources that have been identified in the planets shown are indicated below. Several other planets that have been observed with JWST but do not readily reveal any identifiable absorbers are not shown. Comparing against Figure 5, one can see the benefits of the expanded wavelength coverage and improved precision of JWST relative to HST and Spitzer for exoplanet atmospheric characterization.
(_Figure data from Fu et al. (2022)_, _Radica et al. (2023)_, _Carter et al. (submitted)_, _Xue et al. (submitted)_, and Fu et al. (in prep.)_.)

Further studies of WASP-39b by the JWST Transiting Exoplanet Community (JTEC) Early Release Science (ERS) program have since produced a full panchromatic transmission spectrum of the planet from 0.6-5.2 \\(\\mu\\)m (JWST Transiting Exoplanet Community Early Release Science Team et al., 2023; Ahrer et al., 2023; Rustamkulov et al., 2023; Alderson et al., 2023; Feinstein et al., 2023, Figure 7). Spectral features from H\\({}_{2}\\)O, SO\\({}_{2}\\) (Tsai et al., 2023), and CO (Grant et al., 2023; Esparza-Borges et al., 2023) have been identified, in addition to the aforementioned CO\\({}_{2}\\), as well as signatures of patchy aerosol coverage. The discovery of SO\\({}_{2}\\) at 4.05 \\(\\mu\\)m is especially intriguing because this molecule was not predicted in observable amounts by any chemical equilibrium models. Instead, it is believed to be the byproduct of _photochemical_ alteration of the atmosphere (Polman et al., 2023; Tsai et al., 2023). The strength of the observed feature is well-matched by hot Jupiter photochemistry models, and it is now anticipated that SO\\({}_{2}\\) might appear in many JWST hot Jupiter observations. The discovery of photochemically derived species opens the door to a whole host of disequilibrium chemistry studies, which will be an exciting new arena for JWST. Another possible hint of disequilibrium chemistry comes from attempts to detect methane, which is expected to be abundant in hydrogen-rich atmospheres below \\(\\sim\\)1000 K. Exoplanet atmosphere observations prior to the launch of JWST already hinted at a 'missing methane' problem, with cooler planets not showing obvious signs of methane absorption in HST or Spitzer data (e.g. Stevenson et al., 2010; Kreidberg et al., 2018; Benneke et al., 2019), although some ground-based detections had been reported (Guilluy et al., 2019, 2022; Giacobbe et al., 2021; Carleo et al., 2022). Methane should be readily observable with JWST, as it has multiple strong absorption bands over the 1 - 8 \\(\\mu\\)m wavelength range. However, the molecule is notably absent from the transmission spectrum of the \\(\\sim\\)850 K planet HAT-P-18b with JWST's NIRISS instrument (Fu et al., 2022). Recently, methane was finally detected by JWST in yet colder planets: the \\(\\sim\\)825 K 'warm' Jupiter WASP-80b (Bell et al., 2023) and the \\(\\sim\\)360 K sub-Neptune K2-18b (Madhusudhan et al., 2023). In the latter case, the JWST measurement resolves previous ambiguity from HST+WFC3 observations as to which gas had been detected, H\\({}_{2}\\)O or CH\\({}_{4}\\) (Benneke et al., 2019; Tsiaras et al., 2019; Bézard et al., 2022). The accompanying detection of CO\\({}_{2}\\) and non-detection of water vapor in K2-18b also perhaps indicates the presence of a liquid water ocean below the planet's thick atmosphere (Madhusudhan et al., 2023). Further JWST observations will map out the parameter space over which methane exists in hydrogen-rich planetary atmospheres and will hopefully hint at the underlying mechanisms behind the missing methane problem such as hot planetary interiors coupled with efficient vertical mixing (Fortney et al., 2020), horizontal quenching (Cooper & Showman, 2006; Zamyatina et al., 2023), or photochemistry (Line et al., 2011; Miller-Ricci Kempton et al., 2012).
It is still early days for JWST, and the observatory has just begun to reveal its prowess in characterizing exoplanet atmospheric composition. Along with metallicities, the reliable measurement of C/O ratios in exoplanet atmospheres has been highly anticipated, enabled by the broad wavelength coverage of the JWST instruments. For example, the 0.6 - 12 \\(\\mu\\)m wavelength range covered by the JWST exoplanet instrument suite spans spectral features from H\\({}_{2}\\)O, CH\\({}_{4}\\), CO\\({}_{2}\\), and CO, which allows for direct measurement of the atmospheric C/O ratio under the assumption of thermochemical equilibrium (e.g. Batalha & Line, 2017). The first constraints on metallicities and C/O ratios reveal the diverse outcomes of planet formation processes (Figures 4 and 8). Derived metallicities in hot Jupiters range from sub-solar (August et al., 2023) to highly super-solar (Bean et al., 2023). Whereas WASP-39b is found to lie directly along the solar system mass-metallicity relation, several other planets do not, implying either that this is not a universal correlation for giant planets, or that there is considerable scatter in the underlying trend. Measurements of C/O ratios for hot Jupiters have also recovered a range of values from sub-solar (JWST Transiting Exoplanet Community Early Release Science Team et al., 2023; Coulombe et al., 2023; August et al., 2023), to solar (Radica et al., 2023), to super-solar (Bean et al., 2023). Published hot Jupiter studies with JWST are still in the small number statistics regime. However, planned observations of several dozen such planets with JWST in its first two years should go a long way toward establishing whether abundance patterns align with specific theories of giant planet formation.

Another arena in which JWST has already made its mark is to resolve previous ambiguity over the atmospheric composition of ultrahot Jupiters (see Section 2.2). Whereas with HST alone it had been challenging to determine the cause of weakened H\\({}_{2}\\)O features in ultrahot Jupiter emission spectra, the wavelength coverage and precision of JWST data for the planet WASP-18b has allowed for a robust measurement of the planet's underlying atmospheric composition (Coulombe et al., 2023). The NIRISS secondary eclipse spectrum of WASP-18b clearly shows evidence for weakened, but still significant, H\\({}_{2}\\)O features in emission. The detailed shape of the spectrum is best-fit by models with near-solar metallicity, sub-solar C/O, H\\({}^{-}\\) continuum opacity, and water depleted in the observable atmosphere via thermal dissociation. This composition is in line with 'vanilla' predictions of an unaltered nebular gas atmosphere in thermochemical equilibrium and rules out more exotic scenarios.

Figure 8: Metallicity vs. C/O ratio for all planets with measurements of both carbon- and oxygen-bearing species from JWST or high-resolution ground-based spectrographs as of the writing of this article. Formation scenarios consistent with different combinations of metallicity and C/O are a summary of work described in Öberg et al. (2011), Madhusudhan et al. (2017), Booth et al. (2017), and Reggiani et al. (2022). (_Figure data from Bean et al. (2023), Brogi et al. (2023), August et al. (2023), Bell et al. (2023), Xue et al. (submitted), Fu et al. (in prep.), Welbanks et al. (in prep.), Pelletier et al. (in prep.), and Mansfield et al. (in prep.)_.)

Attempts to characterize even smaller exoplanets with JWST are also just beginning in earnest.
Terrestrial planets will be discussed in more detail below, but the first JWST investigations of sub-Neptunes are also taking shape. Following on years of ambiguous characterization of sub-Neptunes that have produced degenerate interpretations of atmospheric composition and aerosols (see Sections 2.2 and 3), the first phase curve observation of a sub-Neptune (the planet GJ 1214 b) has revealed clear evidence that the planet has a high mean molecular weight atmosphere (Kempton et al. 2023). The planet's dayside and nightside thermal emission spectra additionally show spectroscopic signs of H\\({}_{2}\\)O and perhaps CH\\({}_{4}\\) (the two are partially degenerate with one another over the mid-IR wavelength range observed). The derived composition of the planet is consistent with GJ 1214b either being a water world or 'gas dwarf', i.e. an initially hydrogen-rich planet that has experienced considerable loss of lighter elements throughout its lifetime. Approximately 20 additional sub-Neptunes are scheduled for transmission spectrum observations with JWST during Cycles 1 and 2\\({}^{11}\\), opening the door to further compositional characterization of this intriguing class of planets, so long as spectral features are not entirely obscured by aerosols.

Footnote 11: JWST (and HST) observations are scheduled in annual cycles. Cycle 1 can therefore be thought of as the first year of JWST operations, Cycle 2 as the second year, etc.

### The Challenge of Terrestrial Planets

Terrestrial planets are especially challenging targets for atmospheric characterization due to their small sizes and also the expectation that their atmospheres will typically have high mean molecular weight, which reduces the size of spectral features observed in transmission (see Section 1.3 and also a review article by Wordsworth & Kreidberg (2022) for more background on terrestrial exoplanets). The first attempts to measure the atmospheric composition of rocky exoplanets typically resulted in non-detections of spectral features, which in turn could rule out cloud-free hydrogen-dominated atmospheres but left open a wide range of plausible atmospheric compositions and cloud properties (e.g. de Wit et al. 2016, 2018; Diamond-Lowe et al. 2018, 2020b, 2023; Mugnai et al. 2021; Libby-Roberts et al. 2022). To this day, there have not yet been any robust detections of atmospheric species in terrestrial exoplanets. The small number of works that have claimed the detection of atmospheric gases for rocky planets via transmission spectroscopy have been called into question or have not been reproduced (e.g. Southworth et al. 2017; Swain et al. 2021). An alternative approach to characterizing rocky planet atmospheres was first demonstrated by Kreidberg et al. (2019) and Crossfield et al. (2022). The former measured the phase curve of the terrestrial exoplanet LHS 3844b, and the latter observed the secondary eclipse of GJ 1252 b, both with the Spitzer Space Telescope. The goal of both observations was to determine whether the planet in question has a thick atmosphere or is an airless barren rock. The technique is discussed in more detail in Section 5.4, but briefly, the premise is that, for tidally-locked exoplanets (as is expected to be the case for these and most other terrestrial planet atmospheric characterization targets orbiting M-dwarfs), an atmosphere serves to transport heat away from the planet's hot dayside to its colder nightside (Seager & Deming 2009; Koll 2022).
In both cases, the planets' high observed dayside temperatures were found to be consistent with the lack of a substantial atmosphere, although Earth-thickness (1 bar) atmospheres could not be ruled out. The inferred limits on atmospheric thickness are also composition-dependent due to the differing abilities of various gases to absorb light and transport heat, governed by their wavelength-dependent opacities (Whittaker et al. 2022; Ih et al. 2023; Lincowski et al. 2023). With Spitzer, such dayside thermal emission measurements were only possible for the few hottest and most favorable targets, but JWST has much expanded capabilities in this arena (Koll et al. 2019). The large aperture and IR observing capabilities of JWST have long promised to extend the parameter space of observable exoplanet atmospheres to terrestrial planets (e.g. Deming et al. 2009; Beichman et al. 2014; Batalha et al. 2015). Nearly 30 such planets (i.e. rocky super-Earths to sub-Earths) are already approved for observation in Cycles 1 and 2 of JWST operations. The list of planned observations includes all 7 of the planets in the TRAPPIST-1 system, as well as numerous terrestrial planets orbiting earlier (larger, warmer, and more massive) M stars, and a handful of ultrashort-period (USP) rocky planets with periods shorter than 1 day orbiting G, K, and M stars. The TRAPPIST-1 system is of particular interest for habitability studies aiming to identify biosignature gases because the late-type (i.e. small and cool) M-dwarf host star brings the habitable zone to very short orbital periods and produces large transit depths and thus atmospheric signal sizes (e.g. Barstow & Irwin, 2016; Krissansen-Totton et al., 2018; Lustig-Yaeger et al., 2019). TRAPPIST-1 e, f, and g are all potentially habitable environments, and are largely seen as the best prospects for characterizing potentially habitable worlds within the next decade (Gillon et al., 2017). The first year of JWST data has so far been marked by more non-detections of terrestrial atmospheres, but now with the vastly improved capabilities of the new facility, such measurements are more meaningfully constraining. Transmission spectra to-date are consistent with flat lines (Lustig-Yaeger et al., 2023), or with the possibility that spectral features are caused by the host star and not the planet (Moran et al., 2023; Lim et al., 2023). Thermal emission studies of terrestrial planets (so far limited to TRAPPIST-1 b and c) have been more revealing. As with previous Spitzer thermal emission studies, the goal with these observations has been to measure the planets' dayside temperatures and infer the presence or lack of an atmosphere. For both planets, the measured dayside temperatures are again consistent with no atmosphere being present (Greene et al., 2023; Zieba et al., 2023). The mid-IR capabilities of JWST have allowed for these measurements to be made at much longer wavelengths than Spitzer could access. By observing at 15 microns in the center of an expected strong CO\\({}_{2}\\) absorption band, the JWST measurements can rule out even very thin atmospheres (for TRAPPIST-1 b, down to even Mars thickness), under the assumption that CO\\({}_{2}\\) would be a dominant gas in any moderately-irradiated terrestrial environment (Ih et al., 2023; Lincowski et al., 2023). Still, a key promise of JWST is to deliver spectra of smaller and cooler exoplanets than what was previously possible with HST.
In light of the flat-line spectra and dayside thermal emission results, a question that has supplanted the detailed characterization of rocky exoplanet atmospheres is whether such planets possess atmospheres at all. Figure 4 (right panel) shows that Cycle 1 and 2 JWST targets cover the parameter space of planets that would and would not be expected to host atmospheres, based on solar system considerations. If some of the less-irradiated and/or higher surface gravity terrestrial exoplanets are found to have atmospheres, multiple modeling studies have shown JWST's capabilities to spectroscopically characterize such environments under optimal conditions of minimal cloud obscuration, large scale heights, and stacking multiple transits to improve S/N (e.g. Barstow & Irwin, 2016; Morley et al., 2017; Batalha et al., 2018; Krissansen-Totton et al., 2018; Lustig-Yaeger et al., 2019; Fauchez et al., 2019; Suissa et al., 2020; Pidhorodetska et al., 2020). For the subset of rocky planets without atmospheres, mid-IR emission spectroscopy measurements with JWST offer an exciting opportunity to characterize their surface compositions for the first time (e.g. Hu et al., 2012; Whittaker et al., 2022; Ih et al., 2023).

## 3 Aerosols

### Terminology and Background

Aerosols in this work are defined to be any kind of particle suspended in a gaseous atmosphere, regardless of their composition or formation pathway\\({}^{12}\\). Aerosols can be broken up into sub-categories: clouds are defined as solid particles or liquid droplets formed by _condensation_ processes, hazes are involatile particles produced by chemical (and often photochemical) processes, and dust is made up of solid particles suspended in an atmosphere that originated elsewhere (e.g. particles kicked up from the surface or those that originated from a meteorite breaking up as it entered a planet's atmosphere). These are all process-based definitions. If the formation mechanism for the particles in question is unknown, we revert to the blanket term 'aerosol'. We warn the reader that some published papers in the exoplanetary literature employ the term 'haze' when referring to small particles (\\(\\lesssim\\)1 \\(\\mu\\)m) and 'clouds' when referring to larger particles, but we prefer the process-based definitions for the physical insight they bring.

Footnote 12: This definition and those that follow for clouds, haze, and dust are all attributed to an online article written for the Planetary Society by Sarah Hörst: https://www.planetary.org/articles/0324-clouds-and-haze-and-dust-oh-my

All solar system planets and moons with significant atmospheres have some sort of aerosol layer. For example, Earth has water clouds, surface dust, and technology-derived haze (i.e. smog). Venus has sulfuric acid clouds and haze. Titan has clouds and haze formed from organic compounds. It stands to reason that exoplanets too should have aerosol layers and that these will be a fundamental component of their atmospheres, governing energy balance, thermal structures, and observed spectra, as is the case for the solar system planets. As we will see, there is indeed plentiful observational evidence for exoplanet aerosols. For a more detailed review of aerosols in exoplanet atmospheres, we refer the reader to a recent article by Gao et al. (2021).
The observational signatures of clouds and hazes in exoplanet transmission spectra are primarily muted (or absent) spectral features or strong blue-ward spectral slopes at optical wavelengths\\({}^{13}\\). These come about due to the propensity of the aerosol particles to scatter or absorb starlight. Rayleigh-like scattering slopes arise for small particles, whereas flatter spectra result when particle sizes are larger or the clouds are very thick. The aerosol species themselves also have their own spectral signatures, but these are typically weak features with wavelength-dependent shapes that depend on particle size distribution (e.g. Wakeford & Sing, 2015). This results in degeneracies among spectra associated with distinct aerosol species, making the aerosol composition difficult to uniquely constrain spectroscopically. In thermal emission, the signatures of aerosols tend to be even more subtle. Aerosol layers can impact the thermal structure of an atmosphere (for example, causing thermal inversions in the case of very absorptive clouds or hazes, thus altering the shape of the planet's emission spectrum; Arney et al., 2016; Morley et al., 2015; Lavvas & Arfaux, 2021), and optically thick aerosols can mute spectral features; but these effects are not uniquely attributable to clouds and therefore can be challenging to interpret. The result is that one can often tell from an observation that aerosols are present, and inferences can be made about the vertical distribution of the clouds or haze, but concluding anything robustly about the aerosol composition on a planet-by-planet basis is exceedingly difficult. Forward models are useful to motivate which types of aerosols are consistent with a specific observation, providing probabilistic arguments on the aerosol composition. This approach can be especially powerful at the population level.

Footnote 13: Additionally, since many cloud species contain oxygen, the rainout process can alter the C/O of the gas-phase atmosphere, which can impact abundance interpretations if not properly accounted for (e.g. Helling et al., 2019).

Oftentimes in the exoplanet literature, aerosols are treated as a 'nuisance' parameter, due to their impact of hindering the detection of the underlying gaseous atmosphere. Modeling the complex microphysics and chemical processes that lead to aerosol formation is a challenging task, so parameterized studies that reduce the aerosols to as few defining properties as possible are common (e.g. Ackerman & Marley, 2001; Benneke & Seager, 2012). Yet it is only by understanding the aerosols themselves, including their composition, formation, and optical properties, that we can gain a holistic picture of the planetary atmosphere in question. Studies of exoplanet aerosols additionally provide us with unique laboratories for probing cloud and haze formation in conditions that are not accessible within the solar system.

### Hot Jupiter Aerosols

For hydrogen-rich atmospheres hotter than \\(\\sim\\)1000 K, the types of clouds that are able to form due to condensation processes are those that are more commonly thought of as refractory species. For example, based on chemical equilibrium calculations one would expect clouds of Fe, Ni, Al\\({}_{2}\\)O\\({}_{3}\\), Mg\\({}_{2}\\)SiO\\({}_{4}\\), TiO\\({}_{2}\\), and MnS for a solar-composition gas mixture (e.g. Burrows & Sharp, 1999; Mbarek & Kempton, 2016; Woitke et al., 2018; Kitzmann et al., 2023).
Because such clouds are expected to only form at very high temperatures, and they incorporate trace species, it can be tempting to ignore the impacts of aerosols on hot Jupiter studies. Yet it was shown early on that the transmission spectroscopy geometry, specifically the oblique path taken by stellar photons through the exoplanetary atmosphere on their way to the observer, can result in cloud optical depths considerably in excess of unity, even for trace species (Fortney, 2005). Obscuration by clouds was one explanation immediately put forth for the weaker than expected sodium absorption signal seen in the very first exoplanet transmission observation (Charbonneau et al., 2002). These early studies indicated that cloud modeling would need to be an integral component of interpreting hot Jupiter atmospheric observations. The presence of aerosol layers has been inferred from multiple hot Jupiter transmission spectroscopy studies, starting with the benchmark planet HD 189733b (Pont et al., 2008). For that planet, a strong spectral slope over optical wavelengths accompanied by non-detections of sodium and potassium, which should have been present under clear atmosphere conditions, constituted strong evidence for cloud or haze obscuration. However, later work showed that the optical slope could equivalently be the signature of unocculted starspots on the surface of the planet's active host star, leading to an ambiguity in how to interpret the observational result. Since then, many other hot Jupiters have revealed optical spectral slopes and/or muted spectral features over IR wavelengths, indicating that aerosol coverage is a likely culprit across the population (e.g. Sing et al., 2016; Wakeford et al., 2017, and see Figure 5). Other observational indications of clouds in hot Jupiters come from optical and IR phase curve observations, which will be discussed further in Section 6.2.

On a population level, the aerosol composition and formation mechanism can become more apparent. By comparing the strength of transmission spectral features to a suite of aerosol microphysics forward models, Gao et al. (2020) argued that the muted spectral features for transiting gas giant planets are primarily caused by silicate clouds for planets hotter than 950 K and hydrocarbon hazes for cooler planets (Figure 9).

Figure 9: Clear and cloudy atmosphere model tracks compared with transmission spectroscopy measurements of the 1.4 \\(\\mu\\)m water feature amplitude as a function of planetary equilibrium temperature for transiting gas giant planets. Hot Jupiter transmission spectra for planets colder than \\(\\sim\\)2100 K generally have weaker absorption features than what is predicted for clear solar-composition gas mixtures. The cloud microphysics models shown here provide a good overall fit to the observed trend. In addition to cloud condensation, the models include the formation of hydrocarbon hazes, which increasingly dominate at equilibrium temperatures below 950 K. The clouds are primarily formed from Mg\\({}_{2}\\)SiO\\({}_{4}\\), with minor contributions from Al\\({}_{2}\\)O\\({}_{3}\\) and TiO\\({}_{2}\\). Some variation in the degree of aerosol coverage is expected based on surface gravity, metallicity, and C/O ratio, which is likely driving the intrinsic scatter in the observed data points. _Figure from Gao et al. (2020)._

Their argument hinges on the relatively high abundances of Si, Mg, and C, their three main aerosol-forming elements, and on the fact that other species of comparable abundance (e.g.
Fe and Na) don't readily form clouds due to high nucleation energy barriers inhibiting particle formation. JWST is expected to improve our understanding of hot Jupiter aerosols by providing higher-precision spectra and broader wavelength coverage, allowing for degeneracies to be broken between aerosol coverage and high mean molecular weight or non-solar abundance patterns (Batalha and Line, 2017). Already this enhanced precision has led to the finding of _partial_ cloud coverage of the terminator of WASP-39b due to subtle departures in the shape of its transmission spectrum from a fully-clouded planet (Feinstein et al., 2023). The access to longer wavelengths with JWST also presents the opportunity to directly measure spectral features from aerosols (e.g. Wakeford and Sing, 2015), such as the silicate features that have been observed in mid-IR spectra of brown dwarfs and directly-imaged giant planets (e.g. Burningham et al., 2021; Miles et al., 2023). As for high-resolution spectroscopy studies, the signatures of aerosols are difficult to distinguish because high resolution data processing techniques typically remove the spectral continuum, which is where most of the aerosol information is contained (e.g. Snellen et al., 2010; de Kok et al., 2013). An advantage of high-resolution studies though is that the sharply peaked cores of spectral lines tend to extend above cloud decks, resulting in an ability to measure gas-phase composition even for aerosol enshrouded planets (Kempton et al., 2014; Hood et al., 2020; Gandhi et al., 2020). Another particularly promising avenue for further constraining aerosol composition in hot Jupiter atmospheres is by using 3-D diagnostics to determine _where_ on the planet (i.e. as a function of longitude and/or latitude) the aerosols are located. The aerosol spatial distribution and the physical conditions derived at those locations (e.g. temperature, UV irradiation, wind speeds) can then be directly linked to a proposed aerosol formation mechanism and composition. Such analyses can be accomplished on high S/N spectra and phase curves from JWST or high-resolution spectra from ground-based telescopes (e.g. Kempton et al., 2017; Ehrenreich et al., 2020; Espinoza and Jones, 2021; Parmentier et al., 2021; Roman et al., 2021). We discuss 3-D diagnostics for aerosols further in Section 6.2.

### Aerosols in Sub-Neptunes and Super-Earths

Because they tend to orbit smaller stars and thus have cooler temperatures, transiting planets smaller than Neptune are especially likely to host aerosol layers. This was heavily implied by the first investigations of sub-Neptune exoplanets, which returned featureless transmission spectra (e.g. Bean et al. 2010; Berta et al. 2012; Knutson et al. 2014). To date, most flat and muted sub-Neptune and super-Earth transmission spectra are consistent with either aerosols or high mean molecular weight atmospheres, leading to degenerate interpretations. However, in the case of the planet GJ 1214b, the data were obtained at high enough precision by stacking multiple transits with HST that the degeneracy could be broken, and a thick aerosol layer remains as the only viable explanation (Kreidberg et al. 2014a). JWST should similarly provide the precision to break the aerosol vs. mean molecular weight degeneracy in a _single_ transit for many sub-Neptunes, allowing for improved aerosol characterization for smaller planets.
Figure 10: Strength of the 1.4 \\(\\mu\\)m water feature in units of scale heights for transmission spectra of planets 2–6 \\(R_{\\oplus}\\) in size. The blue parabola is a best-fit second order polynomial trend, implying a minimum in the strength of transmission spectral features around an equilibrium temperature of 600 K, perhaps indicating the conditions for maximal aerosol coverage. **Left:** Colored lines indicate clear and cloudy models from Morley et al. (2015). The dotted and dashed contours are for 100\\(\\times\\) and 300\\(\\times\\) solar metallicity, respectively. **Right:** Colored lines indicate clear and hazy models from Morley et al. (2015). The hazy models include soot hazes only. The dotted contours are for 1% haze precursor conversion into soots, while the dashed contours are for 10%. _Figure courtesy of Yoni Brande_.

Even without an unambiguous detection of aerosols on a planet-by-planet basis, the ubiquity of flattened transmission spectra and indications that the flatness correlates with planetary equilibrium temperature (Crossfield & Kreidberg, 2017; Libby-Roberts et al., 2020; Gao et al., 2021) point to aerosol coverage being a defining characteristic of sub-Neptune exoplanets. Indications that the aerosol coverage may clear at high temperatures (\\(\\gtrsim 900\\) K) and lower temperatures (\\(\\lesssim 400\\) K) provide hints as to the dominant particle composition and formation pathway (Figure 10). As for the aerosol formation mechanism, it is hypothesized that hydrocarbon-based hazes readily form in hydrogen-rich sub-Neptune exoplanets below a temperature of \\(\\sim\\)850 K (e.g. Morley et al., 2015; Kawashima & Ikoma, 2019). Under such conditions, methane is expected to be plentiful in chemical equilibrium. Methane is readily photolyzed by the UV radiation from the host star, producing a rich collection of higher-order hydrocarbons that can continue to polymerize and ultimately form large involatile haze particles (e.g. Miller-Ricci Kempton et al., 2012; Morley et al., 2013; Kawashima & Ikoma, 2018; Lavvas et al., 2019). This is analogous to how we believe hydrocarbon 'tholin' haze forms in Titan's atmosphere. The propensity for hazes to form under sub-Neptune conditions is supported by lab work, in which an ensemble of gases is irradiated by a UV or plasma energy source, and the resulting solid particles are collected and analyzed (Hörst et al., 2018; He et al., 2018). Interestingly, lab studies are also able to form hazes in gas mixtures without methane (He et al., 2020), indicating that haze formation in exoplanet atmospheres may come about via diverse chemical pathways that have yet to be characterized. Candidates for condensate clouds in sub-Neptunes include sulfides (ZnS, Na\\({}_{2}\\)S), sulfates (K\\({}_{2}\\)SO\\({}_{4}\\)), salts (KCl), and graphite (Miller-Ricci Kempton et al., 2012; Morley et al., 2013; Mbarek & Kempton, 2016). These are the expected equilibrium condensates over the temperature range of most sub-Neptunes studied to date, although some of these species may not form clouds due to their high nucleation-limited energy barriers (Gao et al., 2020). Arguments for haze being dominant over clouds in sub-Neptune atmospheres also hinge on how thick and high up the aerosol layers must be in order to fully flatten transmission spectra, especially in the case of the well-studied planet GJ 1214b.
It is difficult to build models in which low-abundance species such as ZnS or KCl are able to provide sufficient opacity to match existing observational data (Morley et al., 2013, 2015). Recent JWST phase curve observations of GJ 1214b have thrown another surprise into our evolving understanding of sub-Neptune exoplanets. The very high measured Bond albedo of the planet based on its global thermal emission (\\(A_{B}\\approx 0.5\\)) implies that the planet's aerosol layer is highly reflective (Kempton et al., 2023). This is in tension with our understanding of hydrocarbon hazes (e.g. soots and tholins), which are primarily believed to be dark and absorptive (Khare et al., 1984; Morley et al., 2013, 2015). Additional lab and theoretical work is urgently needed to understand how such reflective and abundant aerosols are formed in sub-Neptune environments. Some possibilities are a more reflective type of hydrocarbon or darker particles coated in high-albedo condensates (e.g. Lavvas et al., 2019). Upcoming JWST observations should shed light on whether sub-Neptune aerosols share a universal set of properties and whether transitions from clear to aerosol-enshrouded conditions occur at expected levels of insolation. ## 4 Atmospheric Mass Loss Transiting exoplanets are particularly vulnerable to atmospheric mass loss as a result of their close-in orbits. The smallest and most highly irradiated exoplanets may lose their entire atmosphere (see Section 1.2), while the population of close-in gas giant planets appears to be minimally altered by atmospheric mass loss (e.g. Vissapragada et al., 2022; Lampo'n et al., 2023). For giant planets, atmospheric outflows are driven by high energy irradiation from the host star, which causes the uppermost layers of the planet's atmosphere to expand until they become unbound; this process is often referred to as 'photoevaporation' (for a comprehensive review of theoretical work on this topic, see Owen, 2019). For smaller planets, core-powered mass loss (Ginzburg et al., 2018) might also play an important role. For a more detailed discussion of current constraints on these processes from the measured radius-period distribution of sub-Neptune-sized exoplanets, see Rogers et al. (2021) and Owen et al. (2023). We can directly observe the atmospheric outflows of transiting planets by measuring the depth of the transit in strong atomic absorption lines. The large planet-star radius ratios of transiting gas giant exoplanets make them particularly favorable targets for these observations. The first atmospheric outflows from close-in gas giant planets were detected by measuring the strength of the absorption in the Lyman \\(\\alpha\\) line of hydrogen (Vidal-Madjar et al., 2003; Lecavelier Des Etangs et al., 2010). This line is located in the UV, and can therefore only be accessed by space telescopes like HST. Because the core of this line is masked by absorption from the interstellar medium, these observations are only sensitive to absorption in the line wings and are limited to relatively nearby (distances of \\(\\sim\\) 100 pc or less) stars. This absorption corresponds to the higher velocity components of the outflow, which are located farther from the planet (e.g., Owen et al., 2023). To date, there are seven planets whose outflows have been measured in this line; see Fig. 11 for a visualization of where these planets are located in mass-period space. 
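For rough orientation, the mass-loss rates summarized in Fig. 11 can be compared against the simple 'energy-limited' estimate commonly used in photoevaporation work of the kind reviewed by Owen (2019). The sketch below is only an illustration, not a calculation from this review: the heating efficiency, XUV flux, and planet parameters are assumed representative values, and the Roche-lobe correction is neglected.

```python
# Hedged illustration (assumed inputs): energy-limited photoevaporative mass loss,
#   Mdot ~ epsilon * pi * F_XUV * Rp^3 / (G * Mp)
G = 6.674e-8                        # gravitational constant [cgs]
R_jup, M_jup = 7.149e9, 1.898e30    # Jupiter radius [cm] and mass [g]

epsilon = 0.1                       # assumed heating efficiency
F_xuv = 1.0e4                       # assumed XUV flux at the planet [erg s^-1 cm^-2]
Rp = 1.4 * R_jup                    # assumed planet radius
Mp = 0.7 * M_jup                    # assumed planet mass

mdot = epsilon * 3.14159 * F_xuv * Rp**3 / (G * Mp)   # [g s^-1]
print(f"energy-limited mass-loss rate ~ {mdot:.1e} g/s")
# ~4e10 g/s for these inputs, i.e. well under 0.1% of the planet's mass per Gyr,
# consistent with the statement above that close-in gas giants are minimally
# altered by atmospheric mass loss.
```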
Figure 11: Orbital period versus mass distribution for the current sample of planets with measured mass loss rates (\\(>3\\sigma\\) significance; for a complete list see review by Dos Santos, 2022) using either Lyman \\(\\alpha\\) (blue filled circles; we also include a detection of AU Mic b by Rockcliffe et al., 2023), metastable helium (red filled circles; we also include detections for HAT-P-67b, TOI-1268b, TOI-1420b, and TOI-2134b from Gully-Santiago et al., 2023; Pérez González et al., 2023; Zhang et al., 2023), or both (blue circles with red fill). The size of the points is scaled according to the host star's brightness in \\(J\\) band (infrared), which depends on the star's distance and mass; brighter stars (smaller \\(J\\) magnitudes) are generally located closer to the Earth and/or have larger masses. The full sample of confirmed planets is shown as grey circles for comparison. There is a deficit of sub-Saturn-sized planets on close-in orbits; this region is called the 'Neptune desert', and its approximate boundaries as defined in Mazeh et al. (2016) are shown as black dashed lines. _Figure courtesy of M. Saidel_.

Recent theoretical (Oklopčić & Hirata, 2018) and observational work (Spake et al., 2018; Nortmann et al., 2018) revealed that atmospheric outflows could also be detected using metastable helium absorption at 1083 nm. Unlike Lyman \\(\\alpha\\), this line can be readily observed using high resolution spectrographs on ground-based telescopes. Because we can measure absorption in the line core, this line provides a complementary tool to probe the lower velocity components of the outflow, which are located closer to the planet. To date, atmospheric outflows have been measured for 20 planets using this line (see Fig. 11). Outflows have also been detected in the optical H\\(\\alpha\\), H\\(\\beta\\), and H\\(\\gamma\\) lines (e.g., Jensen et al., 2012; Yan & Henning, 2018; Casasayas-Barris et al., 2019; Wyttenbach et al., 2020), as well as UV lines of other atomic species (e.g., Vidal-Madjar et al., 2004; Sing et al., 2019; Dos Santos et al., 2023).

Figure 12: Measurement of hot Jupiter HAT-P-32 b's 1083 nm metastable helium absorption signal as a function of orbital phase from Zhang et al. (2023c). The unusually extended nature of this planet's outflow is distinct from that of most other hot Jupiters with published helium detections, which tend to have more narrowly confined outflows. **Left, upper panel:** Helium line equivalent width (EW) for HAT-P-32 b as a function of orbital phase. Solid circles indicate data taken in conjunction with a transit event, while open circles indicate data taken as part of a stellar monitoring program. The phased data are divided into five sections marked by gray shading and colored accordingly. The period where the planet is transiting the star is shown with dark gray shading. Results from a 3D hydrodynamic model are overplotted as a solid gray line. **Left, lower panel:** Equivalent width values after subtracting the average stellar spectrum. **Right:** Slice through the orbital plane of a 3D hydrodynamic simulation of a system with properties similar to that of HAT-P-32. The outflowing gas expands into long tails that lead and trail the planet's orbit, resulting in strong helium absorption before and after the transit. Approximate viewing angles for each colored time segment are shown with colored labels. The logarithmic gas density distribution is indicated using the color bar on the right. _Figure from Zhang et al. (2023c)._

Some of the
refractory atomic species detected in optical high spectral resolution data sets (see Section 2.3) likely also probe unbound regions of the atmosphere, but more detailed models are needed in order to interpret these absorption signals (Linssen & Oklopcic, 2023). By combining the information from multiple lines together, we can obtain a more detailed picture of the overall structure and thermodynamics of the outflow (Lampón et al., 2021; Yan et al., 2022; Huang et al., 2023; Linssen & Oklopcic, 2023). The magnitude of the atmospheric absorption signal during transit can be converted into a mass loss rate by modeling the outflow as a spherically symmetric isothermal Parker wind (Oklopcic & Hirata, 2018; Lampón et al., 2020; Dos Santos et al., 2022; Linssen et al., 2022). If the outflow is not spherical but instead is sculpted into a comet-like tail by the stellar wind, we would expect to see an extended absorption signal after the end of the transit egress (e.g., Ehrenreich et al., 2015; Lavie et al., 2017; Kirk et al., 2020; Spake et al., 2021). If there is outflowing material orbiting just ahead of the planet, we may also see absorption prior to the planet's ingress, or even absorption extending many hours before and/or after the planet transit (Zhang et al., 2023; Gully-Santiago et al., 2023, see Fig. 12). The time-dependent absorption signal, as well as its spectroscopically resolved velocity structure, therefore provide us with important information about the three-dimensional structure of the atmospheric outflow (e.g. Wang & Dai, 2021; MacLeod & Oklopcic, 2022). In addition to stellar winds, these outflow geometries may also be shaped by the planetary and stellar magnetic field geometries (Owen & Adams, 2014; Schreyer et al., 2023; Fossati et al., 2023). It is more challenging to detect atmospheric outflows from sub-Neptune-sized planets. There are currently only three sub-Neptune-sized planets orbiting mature (> 1 Gyr) stars with published detections (Bourrier et al., 2018; Ninan et al., 2020; Palle et al., 2020; Orell-Miquel et al., 2022; Zhang et al., 2023), one of which is disputed (GJ 1214b; see discussion in Spake et al. 2022). Fortunately, these outflows are more easily observable if we focus on the subset of small planets orbiting young stars. Young stars are more active and have enhanced high energy fluxes (e.g., Johnstone et al., 2021; King & Wheatley, 2021), while young planets have radii that are still inflated by leftover heat from their formation. As a result, young sub-Neptunes are expected to have enhanced mass loss rates as compared to their more evolved counterparts. Observations of young transiting sub-Neptunes have revealed the presence of atmospheric outflows in both Lyman \\(\\alpha\\) (Zhang et al., 2022c) and metastable helium (Zhang et al., 2022a, 2023b; Orell-Miquel et al., 2023). These observations can be used to test the predictions of atmospheric mass loss models seeking to explain the origin of the bimodal radius distribution of small close-in planets (see Section 1.2).

## 5 Dayside Temperature Structure

### The Physics of Thermal Inversions

Measuring the thermal structure of exoplanet atmospheres provides critical insight into how energy is transported and deposited in planetary envelopes. In the solar system, for example, we know that Earth has a stratospheric thermal inversion due to UV/optical absorption by the O\\({}_{3}\\) molecule.
Venus has a lower equilibrium temperature than the Earth, despite receiving nearly twice as much energy from the Sun as the Earth does, due to its high Bond albedo. Mercury has a scalding hot dayside and a frigid nightside because it lacks a thick atmosphere to transport heat. All of these types of processes can be assessed by measuring the dayside temperatures and vertical temperature gradients in exoplanet atmospheres. Thermal inversions in particular have been an interesting phenomenon accessed via dayside thermal emission spectra. Planetary atmospheres that are strongly absorbing at the wavelengths at which their host stars put out most of their energy will experience heating at the location where the stellar energy is deposited. To ensure global energy balance, this heating comes at a cost of cooling regions deeper in the atmosphere, thus creating a thermal inversion in which temperature _increases_ with altitude, peaking around the region where the starlight is absorbed (i.e. the \\(\\tau\\sim 1\\) surface, where \\(\\tau\\) is the optical depth). Spectroscopically, thermal inversions are identified by observing spectral lines in emission, as opposed to absorption lines, which are seen when temperature decreases outwardly. The shape of spectral features relative to the surrounding continuum is therefore used to assess the temperature gradient in the observable portion of an exoplanet atmosphere via thermal emission measurements. By detecting a thermal inversion and simultaneously measuring atmospheric composition, astronomers can also attempt to infer which absorber(s) are responsible for the upper-atmosphere heating. As discussed in Section 2.3, TiO, VO, and a variety of refractory species have been proposed as optical and UV absorbers that can generate thermal inversions in hot Jupiters (e.g. Fortney et al., 2008; Lothringer et al., 2018). Other opacity sources such as hazes, clouds, or even water vapor for planets orbiting M-dwarfs have been proposed to similarly drive thermal inversions in cooler planets (e.g. Morley et al., 2015; Arney et al., 2016; Malik et al., 2019; Lavvas & Arfaux, 2021; Roman et al., 2021). Because thermal inversions are generated by absorption of incident stellar energy, they are primarily expected to be a dayside phenomenon, although efficient horizontal heat exchange can cause them to persist away from the sub-stellar point and even around to a planet's nightside (e.g. Komacek et al., 2022).

### The Hot-to-Ultrahot Jupiter Transition

Forward models of hot Jupiter emission spectra have long predicted a transition in dayside thermal structure from planets with inversions to those without, as a function of decreasing planetary temperature. The thermal inversions would be driven by gas-phase optical and UV absorbers that condense out of the atmosphere at lower temperatures, thus rendering the atmosphere more transparent to stellar irradiation (and therefore producing un-inverted temperature profiles) at lower equilibrium temperatures (e.g. Hubeny et al., 2003). Fortney et al. (2008) initially proposed that TiO and VO should be the key drivers of thermal inversions, resulting in a transition to inverted temperature profiles around a planetary equilibrium temperature of 1500 K. Evidence of thermal inversions from secondary eclipse spectra probing planets around this cutoff temperature with Spitzer observations was initially mixed (e.g.
Richardson et al., 2007; Charbonneau et al., 2008; Knutson et al., 2008, 2009; Deming et al., 2011; Todorov et al., 2010, 2012, 2013; Baskin et al., 2013; Diamond-Lowe et al., 2014). Ultimately, improved spectroscopic investigations with the HST+WFC3 instrument and high resolution ground-based spectrographs clearly demonstrated un-inverted temperature profiles in various hot Jupiters around and above the predicted 1500 K cutoff temperature, via spectral features appearing in absorption (Birkby et al., 2013; Kreidberg et al., 2014; Schwarz et al., 2015; Line et al., 2016, 2021). This was accompanied by failures to definitively detect gas-phase TiO and VO in transmission spectra of some of the same planets, implying removal via nightside condensation cold-trapping or some other disequilibrium chemistry process, or perhaps a more mundane explanation such as inaccurate line lists (e.g. Désert et al., 2008; Hoeijmakers et al., 2015). Even hotter planets were ultimately required to produce definitive evidence for thermal inversions. 'Ultrahot' Jupiters, as discussed in Section 2.2, are those that are so hot that water dissociates in their atmospheres, and various refractory elements (not just Ti and V) are predicted to be in the gas phase (Parmentier et al., 2018; Lothringer et al., 2018). In these planets, temperature inversions are predicted to be helped along by gas-phase metals and oxides such as Fe, Mg, SiO, etc. Formally the cutoff between hot and ultrahot Jupiters occurs around \\(T_{eq}\\) = 2200 K. The first ultrahot Jupiter to produce a clear detection of a dayside thermal inversion was WASP-121 b (Evans et al., 2017). The 1.4 \\(\\mu\\)m water feature in this planet's secondary eclipse spectrum appears in emission, although the feature is quite subtle. Other ultrahot Jupiters, as mentioned in Section 2.2, produced nearly featureless secondary eclipse spectra across the WFC3 bandpass, leading to ambiguous interpretation as to whether water was simply absent from these atmospheres or the dayside temperature profiles were isothermal, thus masking any spectral features (Sheppard et al., 2017; Mansfield et al., 2018; Kreidberg et al., 2018; Mansfield et al., 2021). The picture of a transition to ultrahot planets with thermal inversions becomes clearer with population-level studies. When looking at WFC3 thermal emission spectra vs. the planets' measured dayside temperatures, Mansfield et al. (2021) identify a clear trend from un-inverted temperature profiles at lower dayside temperatures, to inverted profiles at dayside temperatures above \\(\\sim\\)2500 K (Figure 13). This is in line with predictions from forward models, although such models still predict the transition to thermal inversions to occur at somewhat lower temperatures. Interestingly, both the models and the data reveal a shift back to featureless spectra with WFC3 at even higher dayside temperatures (\\(T_{day}\\)\\(\\gtrsim\\) 3000 K), corresponding to full removal of atmospheric H\\({}_{2}\\)O via thermal dissociation. Another population-level prediction is that ultrahot planets orbiting earlier-type (i.e. hotter) host stars should produce even larger thermal inversions because the peak of the stellar spectral energy distribution (SED) aligns particularly well with the expected UV/optical opacity sources in the planets' atmospheres. This prediction has played out in secondary eclipse observations of the ultrahot Jupiter KELT-20b, which orbits a hot A-type host star.
For this planet, the 1.4 \\(\\mu\\)m H\\({}_{2}\\)O feature appears strongly in emission, much more so than for comparably irradiated planets orbiting later-type G stars (Fu et al., 2022). Recent JWST and high-resolution emission spectroscopy studies have solidified our understanding of the hot-to-ultrahot Jupiter transition by providing increased precision and wavelength coverage. For example, JWST has the power to resolve the subtle shape of spectral features that were previously hidden in the noise of HST observations. In the case of the ultrahot Jupiter WASP-18 b, the planet's emission spectrum, which was nearly featureless in HST observations (Sheppard et al., 2017; Arcangeli et al., 2018), is now revealed by JWST to contain very subtle water features in emission, thus confirming the presence of a thermal inversion (Coulombe et al., 2023). In contrast, the cooler 'normal' hot Jupiter HD 149026b shows spectral features in absorption, including clear detections of H\\({}_{2}\\)O and a strong CO\\({}_{2}\\) feature implying high metallicity (Bean et al., 2023). With high-resolution observations from the ground, the emission spectra of various ultrahot Jupiters also provide clear indications of emission lines, demonstrating inverted thermal structures (e.g. Kasper et al., 2021; Yan et al., 2022, 2023; Brogi et al., 2023; van Sluijs et al., 2023). In at least one case, the detection of a thermal inversion is (finally) accompanied by a high-confidence detection of TiO in the transmission spectrum of the same planet (Yan et al., 2020; Prinoth et al., 2022). In summary, with newer and better data and population-level studies, astronomers are now finding that the dayside thermal structures of hot and ultrahot Jupiters appear to align with the basic predictions of forward models, albeit with a transition to inverted temperature profiles occurring at somewhat higher equilibrium temperatures than what is predicted for solar composition atmospheres. The forward models assume thermochemical equilibrium and 1-D radiative-convective energy balance, with gas-phase metals and metal oxides serving as strong optical and UV absorbers at high temperatures. Additional new frontiers that will be opened with JWST in the near term include studies of the thermal structures of even colder giant planets. At lower temperatures, dayside clouds or even haze might play a primary role in mediating the deposition of stellar energy in the planets' atmospheres. Hints of such effects already exist in the WFC3 emission spectra of the coolest hot Jupiters investigated to-date (Crouzet et al., 2014; Mansfield et al., 2021). JWST will also enable more detailed studies of the _3-D_ structure of giant planet daysides, which will be discussed in more detail in Section 6.

Figure 13: Brightness temperature vs. wavelength for hot Jupiter secondary eclipse spectra observed by HST with the WFC3 instrument. Brightness temperature is a measure of the approximate temperature of the photosphere at the wavelength being observed. The 1.4 \\(\\mu\\)m water band is indicated by the gray shaded region. For less irradiated planets (toward the bottom right of the plot), the 1.4 \\(\\mu\\)m water band appears in absorption, indicating atmospheres with non-inverted temperature profiles. For hotter planets, the absorption features disappear, and in some cases (e.g. WASP-76b, WASP-121b, WASP-12b), the 1.4 \\(\\mu\\)m water band subtly inverts into emission, indicating a possible thermal inversion.
When compared against forward models of hot Jupiter emission spectra, these observations align well with predictions that thermal inversions occur for planets hotter than \\(\\sim\\)2000 K, and water dissociation reduces the abundance of H\\({}_{2}\\)O in the dayside atmosphere. _Figure adapted from Mansfield et al. (2021)_.

### Hot Jupiter Albedos

Efforts to measure the reflected light from hot Jupiters at optical wavelengths began soon after the discovery of such planets, in order to constrain their albedos. These studies initially resulted in non-detections and upper limits, some of which were quite constraining (e.g. Charbonneau et al., 1999; Rowe et al., 2008; Winn et al., 2008). It was quickly realized that the implied low albedos were in line with the predictions from radiative transfer models for such planets. In the absence of dayside clouds, strong optical absorption lines such as those from Na, K, TiO, etc. absorb out much of the incident stellar radiation, while the only source of reflected light is Rayleigh scattering from the gaseous atmosphere (Seager et al., 2000; Burrows et al., 2008). For cooler planets, in which reflective dayside clouds are expected, geometric albedos should be higher (e.g. Cahoy et al., 2010; Adams et al., 2022), but the general trend of shallower secondary eclipses with lower levels of insolation typically makes it more challenging to detect such signals. Space telescopes such as CoRoT, Kepler, and TESS were ultimately able to detect the optical secondary eclipses of a number of hot and ultrahot Jupiters, although the broad photometric bandpasses of these facilities have meant that it is typically not possible to fully disentangle the relative contributions of thermal emission vs. scattered light, resulting in model-dependent albedo inferences (e.g. Alonso et al., 2009; Christiansen et al., 2010; Demory et al., 2011). A compilation of optical secondary eclipse measurements for 21 planets with CoRoT, Kepler, and TESS reveals that most such detections have been made at less than 3-\\(\\sigma\\) confidence, with inferred geometric albedos ranging between 0 and \\(\\sim\\)0.3 (Wong et al., 2020). One notable exception is the planet Kepler-7b, which has an inferred albedo of \\(\\sim\\) 0.25-0.35, measured at high confidence (Demory et al., 2011; Wong et al., 2020). This is consistent with the planet's (relatively) low dayside temperature of \\(\\sim\\)1000 K and the expectation that such conditions are conducive to the formation of reflective clouds. More recently, the European CHEOPS satellite has demonstrated its ability to produce well-constrained measurements of hot Jupiter geometric albedos (Brandeker et al., 2022; Krenn et al., 2023). The inferred values for the hot Jupiters HD 209458b and HD 189733b from CHEOPS lightcurves are 0.096 \\(\\pm\\) 0.016 and 0.076 \\(\\pm\\) 0.016, respectively. These albedos are far lower than for any solar system planets but in line with models of hot Jupiters having cloud-free dayside atmospheres. In summary, hot Jupiters are dark, but cooler giant planets may be more reflective.

### The Dayside Temperatures of Sub-Neptunes and Super-Earths

Detecting the thermal emission from smaller and typically cooler sub-Neptunes and super-Earths is a much more technically challenging endeavor than for hot Jupiters. Because of this, such studies have mostly been limited to simply detecting a secondary eclipse and measuring an associated brightness temperature, as opposed to full spectroscopic characterization.
Once measured, the dayside temperature of the planet can then be used to obtain a combined constraint on both day-night heat redistribution and albedo. All tidally-locked planets have a maximum dayside temperature that can be achieved if the planet's only energy source is the irradiation from its host star:

\\[T_{max}=T_{\\star}\\sqrt{\\frac{R_{\\star}}{d}}\\left(\\frac{2}{3}\\right)^{1/4}. \\tag{5}\\]

This is simply Equation 3 taken in the limit of no day-night heat redistribution (instantaneous reradiation) and zero albedo. Lower measured dayside temperatures are indicative of either a reflective planet or considerable day-night heat transport (or some combination thereof; Koll et al. 2019; Mansfield et al. 2019; Koll 2022). To date there have only been successful thermal emission detections for two sub-Neptunes: the planets TOI-824b (Roy et al. 2022) and GJ 1214b (Kempton et al. 2023). The former is a hot, dense sub-Neptune, whereas the latter is a cooler planet that was already known to have a thick aerosol layer from transmission spectroscopy measurements (see Section 3.3). The dayside temperature of TOI-824b is consistent with its \\(T_{max}\\), whereas GJ 1214b is significantly colder. For sub-Neptunes, a maximally hot dayside, implying poor day-night heat redistribution, requires a high mean molecular weight atmosphere. This result comes from 3-D general circulation models, which demonstrate that heat transport efficiency decreases as a function of increasing mean molecular weight (e.g. Kataria et al. 2014; Charnay et al. 2015; Zhang & Showman 2017). Hydrogen-rich, solar-composition sub-Neptune atmospheres are predicted to transport heat very efficiently, resulting in cooler daysides and nearly homogeneous global temperatures. Conversely, GJ 1214b's dayside temperature is colder than even its zero-albedo temperature in the limit of fully efficient day-night heat transport, meaning the planet must have a non-zero albedo. This interpretation is confirmed by a full-orbit phase curve with JWST that is best fit by a high mean molecular weight atmosphere coupled with the presence of highly reflective aerosols (see Sections 3.3 and 6). GJ 1214b is also the only planet smaller than Neptune to have spectral features identified in its dayside thermal emission spectrum. Subtle departures from a blackbody shape imply the presence of gaseous water in this planet's atmosphere and a non-inverted temperature profile (Kempton et al., 2023). Interestingly, for planets orbiting M-dwarf host stars, water vapor can actually serve as a source of thermal inversions (Malik et al., 2019; Selsis et al., 2023). This is because its strong near-IR opacity efficiently absorbs stellar light, which in this case peaks at red to near-IR wavelengths. The predicted thermal inversions are fairly weak and high up in the planets' atmospheres though, making their observable consequences negligible for low-resolution spectroscopy with JWST. Rocky planet thermal emission measurements with Spitzer and more recently JWST have focused on measuring dayside temperatures (as well as phase curves in certain cases) to constrain the presence or absence of an atmosphere. Rocky planets without atmospheres have no mechanism by which to transport heat to their nightsides (Seager & Deming, 2009; Koll et al., 2019; Koll, 2022). Furthermore, many kinds of rocks that are known to form planetary surfaces in the solar system are very dark\\({}^{14}\\) (Hu et al., 2012; Mansfield et al., 2019).
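To make Equation 5 concrete, the short sketch below evaluates both \\(T_{max}\\) and the corresponding zero-albedo, full-redistribution dayside temperature for a GJ 1214b-like system. This is only an illustrative calculation: the stellar and orbital parameters are approximate literature values assumed here for the example, not quantities taken from this review.

```python
import math

# Approximate (assumed) parameters for the GJ 1214 system, for illustration only.
T_star = 3250.0          # stellar effective temperature [K]
R_star = 0.215           # stellar radius [solar radii]
d = 0.0149 * 215.032     # orbital distance: 0.0149 AU converted to solar radii

# Equation 5: maximum dayside temperature (no heat redistribution, zero albedo).
T_max = T_star * math.sqrt(R_star / d) * (2.0 / 3.0) ** 0.25

# Zero-albedo limit with *full* day-night redistribution, for comparison:
# the (1/4)^(1/4) factor replaces (2/3)^(1/4) when the absorbed energy is
# re-emitted over the whole sphere.
T_full = T_star * math.sqrt(R_star / d) * (1.0 / 4.0) ** 0.25

print(f"T_max  ~ {T_max:.0f} K")   # ~760 K
print(f"T_full ~ {T_full:.0f} K")  # ~595 K
```

A measured dayside temperature near the upper value would point toward poor heat redistribution (and, for a rocky planet, the likely absence of a thick atmosphere), whereas values at or below the lower one indicate efficient heat transport and/or a reflective atmosphere, which is the regime in which GJ 1214b is observed.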
It therefore can be concluded that a terrestrial planet with a maximally hot dayside temperature is unlikely to have an atmosphere, whereas colder dayside temperatures imply the presence of an atmosphere. Several terrestrial planets to-date have been subjected to this 'secondary eclipse test' to measure their dayside temperatures, with the conclusion in the majority of cases being to rule out the presence of thick atmospheres to varying degrees of confidence (Kreidberg et al., 2019; Crossfield et al., 2022; Whittaker et al., 2022; Greene et al., 2023; Ih et al., 2023, see Section 2.5). The coldest terrestrial planet yet observed in secondary eclipse is TRAPPIST-1c. For that planet, its dayside temperature is only consistent with its \\(T_{max}\\) value at the \\(\\sim\\)2-\\(\\sigma\\) level (Zieba et al., 2023). In this case, the presence of a thick atmosphere is not definitively ruled out, implying that perhaps less irradiated planets are more likely to retain their atmospheres, even if they orbit active M-dwarf stars. Further measurements of rocky planet secondary eclipses with JWST will continue to map out the parameter space of which planets do and do not possess atmospheres, with many such observations already planned for Cycle 2.

## 6 Three-dimensional atmospheric structure

Close-in exoplanets are expected to be tidally locked, with permanent day and night sides. As a result, they can exhibit relatively large day-night temperature gradients, along with corresponding gradients in their atmospheric chemistries and cloud properties. Importantly, tidally locked planets will have relatively slow rotation (periods on the order of days) compared to the gas giant planets in the solar system. This means that the typical length scales for their atmospheric circulation patterns will be much larger (\\(\\sim\\) hemisphere-scale) than those of planets like Earth, Jupiter, or Saturn. For a review of the fundamental principles and relevant dynamical regimes for atmospheric circulation on close-in gas giant planets, we recommend Showman et al. (2010) and Showman et al. (2020).

### Fundamentals of Day-Night Heat Transport on Hot Jupiters

There is a considerable body of observational constraints on the atmospheric circulation patterns of hot Jupiters. During its sixteen years of operation, the Spitzer Space Telescope measured broadband infrared secondary eclipse depths for more than a hundred close-in gas giant exoplanets (e.g., Baxter et al., 2020; Wallack et al., 2021; Deming et al., 2023). It also measured broadband infrared phase curves for several dozen gas giant exoplanets (e.g. Bell et al., 2021; May et al., 2022). There are only a few planets with spectroscopic phase curves measured with HST (Stevenson et al., 2014; Kreidberg et al., 2018; Arcangeli et al., 2019, 2021; Mikal-Evans et al., 2022) and (more recently) JWST (Mikal-Evans et al., 2023; Kempton et al., 2023; Bell et al., 2023). Lastly, there are currently two published secondary eclipse maps of the dayside atmospheres of these planets, one from Spitzer (Majeau et al., 2012; de Wit et al., 2012) and one from JWST (Coulombe et al., 2023). There are several big-picture takeaways that have emerged from the current body of observations.
First, both models (Perez-Becker & Showman, 2013; Komacek & Showman, 2016; Komacek et al., 2017) and observations (e.g., Wallack et al., 2021; Bell et al., 2021; May et al., 2022; Deming et al., 2023) agree that the most highly irradiated gas giant exoplanets have a lower day-night heat redistribution efficiency (defined as the fraction of energy incident on the dayside that is transported to the night side by atmospheric winds) than their more moderately irradiated counterparts. This means that the most highly irradiated gas giant exoplanets have relatively large day-night temperature contrasts, while their cooler counterparts tend to have more uniform temperature distributions (see Figure 14). These same data also indicate that most close-in gas giant exoplanets have a super-rotating (eastward) equatorial band of wind that transports energy from the day side to the night side, in good agreement with predictions from general circulation models (see Figure 15 and review by Showman et al., 2020). This is readily apparent in infrared Spitzer phase curve observations (Bell et al., 2021; May et al., 2022), which show that the hottest region on the day side is shifted eastward of the substellar point for most hot Jupiters (this corresponds to a phase curve that peaks just before the secondary eclipse). There are several notable exceptions to this trend, which we discuss in more detail later in this section.
Figure 14: Dayside brightness temperatures for the sample of hot Jupiters with measured eclipse depths in the 3.6 and 4.5 \\(\\mu\\)m bands with Spitzer as a function of their predicted equilibrium temperatures. Planets with relatively inefficient day-night recirculation will lie closer to the red dashed line (maximum dayside temperature assuming zero recirculation and zero albedo), while planets with relatively efficient day-night circulation will lie closer to the black dashed line (complete day-night recirculation of energy, zero albedo). The subset of hot Jupiters whose dayside albedos are enhanced by reflective silicate clouds (equilibrium temperatures near 1500 K) can also lie below the black line, as indicated by the blue dashed line. Planets with black circles have spectral slopes that are inconsistent with that of a blackbody across the 3.6 to 4.5 \\(\\mu\\)m band, indicating the presence of strong molecular features. _Figure from Wallack et al. (2021)_.
We can also see the effects of atmospheric circulation in high resolution emission and transmission spectroscopy, where we can directly measure the Doppler shift induced by the planet's atmospheric winds. This can manifest as either a net shift in the lines for a single coherent flow direction, or an overall broadening of the lines for observations that integrate over multiple flow directions (e.g. Miller-Ricci Kempton & Rauscher, 2012; Showman et al., 2013; Beltz et al., 2021, 2022). Doppler shifts due to atmospheric winds have been seen in high resolution transmission spectroscopy, which probes the day-night terminator region (e.g. Snellen et al., 2010; Louden & Wheatley 2015; Brogi et al. 2016; Flowers et al. 2019; Seidel et al. 2021; Kesseli et al. 2022; Pai Asnodkar et al. 2022; Gandhi et al. 2022), and in emission spectroscopy, which integrates over the dayside atmosphere (e.g. Yan et al. 2023; Lesjak et al. 2023).
### Complications from Clouds, Chemical Gradients, and Magnetic Fields
The non-uniform temperature distributions in the atmospheres of close-in gas giant planets also have important implications for their condensate cloud properties. Clouds that can condense in one region of the atmosphere may not be able to condense in other regions; this can lead to hemisphere-sized cloudy and clear regions in the atmospheres of these planets (Figure 15, and for model predictions of the irradiation-dependent cloud distributions, see Parmentier et al. 2018, 2021; Roman et al., 2021). We can see empirical evidence for patchy clouds in the optical phase curves of close-in planets, which exhibit localized regions of high albedo due to the presence of reflective silicate clouds (Demory et al., 2013). When viewed in transmission, the properties of clouds and/or hazes are also expected to differ between the dawn and dusk terminators. This effect will cause the shape of the transit ingress (when the planet is entering the disk of the star) to differ from that of the transit egress (when the planet is exiting the disk of the star), and should be detectable with JWST (Kempton et al., 2017; Powell et al., 2019; Espinoza & Jones, 2021; Steinrueck et al., 2021; Carone et al., 2023). Close-in gas giant planets also appear to have surprisingly uniform nightside temperatures, and it has been suggested that this may be due to the presence of nightside clouds (Keating et al., 2019; Gao & Powell, 2021). If confirmed by JWST, this would have important consequences for atmospheric circulation patterns on hot Jupiters, as the presence of these clouds can inhibit radiative cooling on the planet's night side, resulting in a globally hotter atmosphere and a reduced offset for the dayside hot spot (Parmentier et al., 2021; Roman et al., 2021).
These day-night temperature gradients can also lead to chemical gradients between the dayside and nightside atmospheres. For the most highly irradiated hot Jupiters, current observations suggest that refractory species may condense on the night side, even when the dayside atmosphere is hot enough for them to remain in gas phase (Lothringer et al., 2022; Pelletier et al., 2023). High resolution transmission spectroscopy has also been used to argue for gradients in composition between the dawn and dusk terminators (Ehrenreich et al., 2020; Mikal-Evans et al., 2022; Prinoth et al., 2022, 2023; Gandhi et al., 2022). These chemical gradients can complicate efforts to measure wind speeds using high resolution spectroscopy (e.g. Wardenier et al., 2023; Savel et al., 2023). Recent studies have also explored the role that the dissociation of H\\({}_{2}\\) on the day side and its subsequent recombination on the night side might play in day-night energy transport in these highly irradiated atmospheres (Bell & Cowan, 2018; Tan & Komacek, 2019; Mansfield et al., 2020; Roth et al., 2021; Changeat, 2022).
There is also emerging evidence suggesting that atmospheric flow patterns on the most highly irradiated hot Jupiters may be altered by magnetic effects. At these temperatures, the atmosphere consists of a mixture of neutral and ionized species. If the planet has a strong magnetic field, this can lead to magnetically induced drag and correspondingly weakened day-night energy transport (Perna et al., 2010; Menou, 2012). Observationally, this would have the effect of moving the hot spot on the day side closer to the substellar point.
Recent observations of the ultra-hot Jupiter WASP-18 b with JWST indicate that its relatively small dayside hot spot offset is best matched by circulation models with enhanced drag due to MHD effects (Coulombe et al. 2023). Other ultra-hot Jupiters also appear to have similarly small hot spot offsets in their Spitzer phase curves (Bell et al. 2021; May et al. 2022). As the magnetic field increasingly dominates the atmospheric flow patterns, it may cause the location of the dayside hot spot to vary from orbit to orbit, perhaps even shifting it westward of the substellar point (i.e. opposite of the predicted wind direction for a neutral atmosphere; Rogers 2017; Hindle et al. 2021a,b). This may explain the westward and/or time-varying hot spot offsets of several hot Jupiters (e.g. Dang et al. 2018; Bell et al. 2019). Several planets also appear to have time-varying optical phase curves (Armstrong et al. 2016; Jackson et al. 2019a,b), although in some cases these variations may be the result of stellar and/or instrumental variability (Lally & Vanderburg 2022).
Figure 15: Properties of hot Jupiters with three different incident flux levels from general circulation models with and without radiatively active clouds included. The top two rows show the temperature distribution at the top of the atmosphere (1 mbar), and the middle two rows show the temperature distribution slightly deeper down, at the approximate level of the infrared photosphere. The bottom panel shows how the presence of clouds alters the level of the photosphere by increasing the atmospheric opacity; more cloudy regions have lower photospheric pressures, meaning that we do not see as deep into these cloudy regions. The approximate photospheric pressure for the clear atmosphere (60 mbar) is indicated by a red line on the color bar. _Figure from Roman et al. (2021)_.
### Circulation Patterns of Sub-Neptune-Sized Planets
There are currently only a few sub-Neptune-sized planets with phase curve observations. As discussed in Section 5.4, for rocky planets these phase curves can be used to infer the presence or absence of a thick atmosphere based on the observed day-night temperature gradient (Seager & Deming 2009). For planets with thick atmospheres, the shape of the phase curve can also be used to constrain the planet's atmospheric composition. A recent JWST observation of the mid-IR phase curve of the sub-Neptune GJ 1214 b by Kempton et al. (2023) indicates that this planet likely possesses a high mean molecular weight atmosphere with highly reflective clouds or hazes. Spitzer phase curves of the hot rocky super-Earths K2-141 b and LHS 3844 b indicate that these planets have large day-night temperature gradients, suggesting that their atmospheres must be relatively tenuous, if they possess atmospheres at all (Kreidberg et al. 2019; Zieba et al. 2022). Although the Spitzer IR phase curve of the super-Earth 55 Cnc e initially appeared to require the presence of a thick atmosphere (Demory et al. 2016b), a subsequent re-analysis of these data resulted in a larger day-night temperature gradient more in line with those observed for other hot rocky super-Earths (Mercier et al. 2022). Puzzlingly, this planet also appears to have a time-varying infrared flux from its dayside (Demory et al. 2016a), along with a variable optical phase curve (Meier Vald'es et al. 2023). This may indicate the presence of a tenuous, time-varying outgassed high mean molecular weight atmosphere (Heng 2023).
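The atmosphere-versus-bare-rock inference described above rests on a simple flux argument: a bare rock concentrates its thermal emission on the day side, so the phase-curve amplitude is nearly as large as the secondary-eclipse depth, whereas an atmosphere that redistributes heat emits comparably from both hemispheres and the amplitude shrinks. The sketch below illustrates that contrast with blackbody fluxes; the temperatures, radius ratio, and observing wavelength are placeholder values chosen only for illustration, not numbers quoted in this review.

```python
import numpy as np

H = 6.626e-34   # Planck constant [J s]
C = 2.998e8     # speed of light [m/s]
KB = 1.381e-23  # Boltzmann constant [J/K]

def planck(wavelength_m, temp_k):
    """Blackbody spectral radiance B_lambda(T) [W m^-3 sr^-1]."""
    x = H * C / (wavelength_m * KB * temp_k)
    return (2.0 * H * C**2 / wavelength_m**5) / np.expm1(x)

def flux_ratio(wavelength_m, t_planet, t_star, rp_over_rstar):
    """Planet-to-star flux ratio for blackbodies: (Rp/R*)^2 * B(T_planet)/B(T_star)."""
    return rp_over_rstar**2 * planck(wavelength_m, t_planet) / planck(wavelength_m, t_star)

# Placeholder values for a generic warm terrestrial planet around an M dwarf
# (illustrative only; not parameters quoted in the text).
WAVELENGTH = 15e-6    # observing wavelength [m]
T_STAR = 3300.0       # stellar effective temperature [K]
RP_OVER_RSTAR = 0.03  # planet-to-star radius ratio

# Limiting cases: a bare rock (hot day side, essentially no nightside emission)
# versus a thick atmosphere that redistributes heat (similar day and night temperatures).
cases = {"bare rock": (750.0, 50.0), "thick atmosphere": (600.0, 550.0)}

for label, (t_day, t_night) in cases.items():
    depth = flux_ratio(WAVELENGTH, t_day, T_STAR, RP_OVER_RSTAR)    # secondary-eclipse depth
    night = flux_ratio(WAVELENGTH, t_night, T_STAR, RP_OVER_RSTAR)  # nightside flux ratio
    print(f"{label:16s}: eclipse depth = {1e6 * depth:6.1f} ppm, "
          f"phase-curve amplitude ~ {1e6 * (depth - night):6.1f} ppm")
```

In practice the cited studies fit far more detailed spectral and atmospheric models, but the qualitative ordering of these two limiting cases is what the phase curve and secondary eclipse measurements exploit.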
JWST will soon observe phase curves for multiple additional rocky exoplanets and sub-Neptunes, expanding our understanding of their atmospheric properties.
## 7 Conclusions
With the launch of JWST and the advent of high-resolution spectrographs on large ground-based telescopes, the exoplanet atmospheres field has entered a new era. As detailed in the previous sections, we are now reliably measuring chemical abundances and abundance ratios, global temperature fields, wind speeds, and atmospheric escape rates. At the time of writing this article we are just over one year into the JWST mission, and we are already seeing results that are fundamentally shifting our understanding of exoplanet atmospheres. Some early takeaways include the diversity of chemical inventories in giant planet atmospheres and an apparent lack of atmospheres on at least some rocky planets orbiting M-dwarfs. At the same time, a multitude of results from ground-based high-resolution spectroscopy are revealing the richness of (ultra)hot Jupiter chemistry.
With these new observational results come new scientific questions. It is already clear that at this improved level of measurement precision, the 3-D nature of exoplanet atmospheres will need to be carefully taken into account to avoid biasing scientific conclusions. This challenge is accompanied by new opportunities to directly infer properties of 3-D circulation and weather in exoplanet atmospheres. Measurements of individual exoplanets' spectra are also telling a story about how those planets formed and evolved, but backing out the correct narrative is a truly challenging endeavor, which can be helped along somewhat by population-level investigations. The characterization of smaller exoplanets is one of the key promises of the JWST mission, but new questions have arisen about what subset of such planets even host atmospheres at all and how to disentangle the signatures of stellar activity from atmospheric absorption. Investigations of sub-Neptunes aimed at distinguishing those with primordial atmospheres from a potential population of water worlds must still contend with the confounding influence of aerosols on spectroscopic observations. Along the way, the properties of the aerosols themselves are presenting their own surprises.
The pace of new observational results from JWST and ground-based high-resolution studies is only accelerating. We are in a regime in which our state of knowledge of exoplanet atmospheres in each successive year expands substantially. As such, this review article serves as a snapshot in time of the state of exoplanet observations following the first year of JWST science. We anticipate that some of the open questions presented in this article will be resolved in the near term, whereas others will take future generations of telescopes and scientists to fully answer. What we can surely say is that exoplanet atmospheres have yet to reveal all of their surprises to us.
E.M.R.K. and H.A.K. would like to acknowledge Jacob Bean, Yoni Brande, Thayne Currie, Peter Gao, Jegug Ih, Megan Mansfield, James Rogers, Michael Roman, Morgan Saidel, Arjun Savel, Nicole Wallack, and Zhoujian Zhang who graciously contributed figures to this review. H.A.K. is also grateful to the Woods Hole Geophysical Fluid Dynamics Program, which provided a thoughtful and interactive venue for developing ideas incorporated into several sections of this review. E.M.R.K. would like to thank Jacob Bean and Tad Komacek for insightful discussions while developing this manuscript.
## References * Ackerman & Marley (2001) Ackerman, A. S., & Marley, M. S. 2001, ApJ, 556, 872, doi: 10.1086/321540 * Adams et al. (2022) Adams, D. J., Kataria, T., Batalha, N. E., Gao, P., & Knutson, H. A. 2022, ApJ, 926, 157, doi: 10.3847/1538-4357/ac3d32 * Aguichine et al. (2021) Aguichine, A., Mousis, O., Deleuil, M., & Marcq, E. 2021, ApJ, 914, 84, doi: 10.3847/1538-4357/abfa99 * Ahrer et al. (2023) Ahrer, E.-M., Stevenson, K. B., Mansfield, M., et al. 2023, Nature, 614, 653, doi: 10.1038/s41586-022-05590-4 * Alderson et al. (2023) Alderson, L., Wakeford, H. R., Alam, M. K., et al. 2023, Nature, 614, 664, doi: 10.1038/s41586-022-05591-3 * Alonso et al. (2009) Alonso, R., Guillot, T., Mazeh, T., et al. 2009, A&A, 501, L23, doi: 10.1051/0004-6361/200912505 * Arcangeli et al. (2021) Arcangeli, J., D'esert, J. M., Parmentier, V., Tsai, S. M., & Stevenson, K. B. 2021, A&A, 646, A94, doi: 10.1051/0004-6361/202038865 * Arcangeli et al. (2018) Arcangeli, J., D'esert, J.-M., Line, M. R., et al. 2018, ApJ, 855, L30, 10.3847/2041-8213/aab272 * Arcangeli et al. (2019) Arcangeli, J., D'esert, J.-M., Parmentier, V., et al. 2019, A&A, 625, A136, doi: 10.1051/0004-6361/201834891 * Armstrong et al. (2016) Armstrong, D. J., de Mooij, E., Barstow, J., et al. 2016, Nature Astronomy, 1, 0004, doi: 10.1038/s41550-016-0004 * Arney et al. (2016) Arney, G., Domagal-Goldman, S. D., Meadows, V. S., et al. 2016, Astrobiology, 16, 873, doi: 10.1089/ast.2015.1422 * Atri & Mogan (2021) Atri, D., & Mogan, S. R. C. 2021, MNRAS, 500, L1, doi: 10.1093/mnrasl/slaa166 * August et al. (2023) August, P. C., Bean, J. L., Zhang, M., et al. 2023, arXiv e-prints, arXiv:2305.07753, doi: 10.48550/arXiv.2305.07753 * Barstow & Irwin (2016) Barstow, J. K., & Irwin, P. G. J. 2016, MNRAS, 461, L92, doi: 10.1093/mnrasl/slw109 * Baskin et al. (2013) Baskin, N. J., Knutson, H. A., Burrows, A., et al. 2013, ApJ, 773, 124, doi: 10.1088/0004-637X/773/2/124 * Batalha et al. (2015) Batalha, N., Kalirai, J., Lunine, J., Clampin, M., & Lindler, D. 2015, arXiv e-prints, arXiv:1507.02655, doi: 10.48550/arXiv.1507.02655 * Batalha et al. (2018) Batalha, N. E., Lewis, N. K., Line, M. R., Valenti, J., & Stevenson, K. 2018, ApJ, 856, L34, doi: 10.3847/2041-8213/aab896 * Batalha & Line (2017) Batalha, N. E., & Line, M. R. 2017, AJ, 153, 151, doi: 10.3847/1538-3881/aa5faa * Baxter et al. (2020) Baxter, C., D'esert, J.-M., Parmentier, V., et al. 2020, A&A, 639, A36, 10.1051/0004-6361/201937394 * Bean et al. (2016) Bean, J. L., Abbot, D. S., & Kempton, E. M. R. 2017, ApJ, 841, L24, doi:10.3847/2041-8213/aa738a * Bean et al. (2010) Bean, J. L., Miller-Ricci Kempton, E., & Homeier, D. 2010, Nature, 468, 669, doi:10.1038/nature09596 * Bean et al. (2011) Bean, J. L., D'esert, J.-M., Kabath, P., et al. 2011, ApJ, 743, 92, doi:10.1088/0004-637X/743/1/92 * Bean et al. (2023) Bean, J. L., Xue, Q., August, P. C., et al. 2023, Nature, 618, 43, doi:10.1038/s41586-023-05984-y * Beichman et al. (2014) Beichman, C., Benneke, B., Knutson, H., et al. 2014, arXiv e-prints, arXiv:1411.1754, doi:10.48550/arXiv:1411.1754 * Bell & Cowan (2018) Bell, T. J., & Cowan, N. B. 2018, ApJ, 857, L20, doi:10.3847/2041-8213/aabcc8 * Bell et al. (2019) Bell, T. J., Zhang, M., Cubillos, P. E., et al. 2019, MNRAS, 489, 1995, doi:10.1093/mnras/stz2018 * Bell et al. (2021) Bell, T. J., Dang, L., Cowan, N. B., et al. 2021, MNRAS, 504, 3316, doi:10.1093/mnras/stab1027 * Bell et al. (2023a) Bell, T. J., Welbanks, L., Schlawin, E., et al. 
2023a, arXiv e-prints, arXiv:2309.04042, doi:10.48550/arXiv.2309.04042 * Bell et al. (2023b) Bell, T. J., Kreidberg, L., Kendrew, S., et al. 2023b, arXiv e-prints, arXiv:2301.06350, doi:10.48550/arXiv.2301.06350 * Beltz et al. (2021) Beltz, H., Rauscher, E., Brogi, M., & Kempton, E. M. R. 2021, AJ, 161, 1, 10.3847/1538-3881/abb67b * Beltz et al. (2022) Beltz, H., Rauscher, E., Kempton, E. M. R., et al. 2022, AJ, 164, 140, doi:10.3847/1538-3881/ac897b * Benatti et al. (2019) Benatti, S., Nardiello, D., Malavolta, L., et al. 2019, A&A, 630, A81, doi:10.1051/0004-6361/201935598 * Benneke & Seager (2012) Benneke, B., & Seager, S. 2012, ApJ, 753, 100, doi:10.1088/0004-637X/753/2/100 * Benneke & Seager (2013) --. 2013, ApJ, 778, 153, doi:10.1088/0004-637X/778/2/153 * Benneke et al. (2019a) Benneke, B., Knutson, H. A., Lothringer, J., et al. 2019a, Nature Astronomy, 3, 813, doi:10.1038/s41550-019-0800-5 * Benneke et al. (2019b) Benneke, B., Wong, I., Piaulet, C., et al. 2019b, ApJ, 887, L14, doi:10.3847/2041-8213/ab59dc * Berta et al. (2012) Berta, Z. K., Charbonneau, D., D'esert, J.-M., et al. 2012, ApJ, 747, 35, doi:10.1088/0004-637X/747/1/35 * Beezard et al. (2022) Beezard, B., Charnay, B., & Blain, D. 2022, Nature Astronomy, 6, 537, doi:10.1038/s41550-022-01678-z * Birkby (2018) Birkby, J. L. 2018, in Handbook of Exoplanets, ed. H. J. Deeg & J. A. Belmonte, 16, doi:10.1007/978-3-319-55333-716 * Birkby et al. (2013) Birkby, J. L., de Kok, R. J., Brogi, M., et al. 2013, MNRAS, 436, L35, doi:10.1093/mnrasl/slt107 * Bland-Hawthorn & Gerhard (2016) Bland-Hawthorn, J., & Gerhard, O. 2016, ARA&A, 54, 529, doi:10.1146/annurev-astro-081915-023441* Booth et al. (2017) Booth, R. A., Clarke, C. J., Madhusudhan, N., & Ilee, J. D. 2017, MNRAS, 469, 3994, 10.1093/mnras/stx1103 * Bourrier et al. (2018) Bourrier, V., Lecavelier des Etangs, A., Ehrenreich, D., et al. 2018, A&A, 620, A147, doi:10.1051/0004-6361/201833675 * Brande et al. (2023) Brande, J., Crossfield, I. J. M., Kreidberg, L., et al. 2023, arXiv e-prints, arXiv:2310.07714, doi:10.48550/arXiv.2310.07714 * Brandeker et al. (2022) Brandeker, A., Heng, K., Lendl, M., et al. 2022, A&A, 659, L4, doi:10.1051/0004-6361/202243082 * Brogi et al. (2016) Brogi, M., de Kok, R. J., Albrecht, S., et al. 2016, ApJ, 817, 106, doi:10.3847/0004-637X/817/2/106 * Brogi et al. (2023) Brogi, M., Emeka-Okafor, V., Line, M. R., et al. 2023, AJ, 165, 91, doi:10.3847/1538-3881/acaf5c * Bryant et al. (2023) Bryant, E. M., Bayliss, D., & Van Eylen, V. 2023, MNRAS, 521, 3663, doi:10.1093/mnras/stad626 * Burningham et al. (2021) Burningham, B., Faherty, J. K., Gonzales, E. C., et al. 2021, MNRAS, 506, 1944, doi:10.1093/mnras/stab1361 * Burrows et al. (2008) Burrows, A., Budaj, J., & Hubeny, I. 2008, ApJ, 678, 1436, doi:10.1086/533518 * Burrows & Sharp (1999) Burrows, A., & Sharp, C. M. 1999, ApJ, 512, 843, doi:10.1086/306811 * Burrows (2014) Burrows, A. S. 2014, Nature, 513, 345, doi:10.1038/nature13782 * C'aceres et al. (2014) C'aceres, C., Kabath, P., Hoyer, S., et al. 2014, A&A, 565, A7, doi:10.1051/0004-6361/201321087 * Cahoy et al. (2010) Cahoy, K. L., Marley, M. S., & Fortney, J. J. 2010, ApJ, 724, 189, 10.1088/0004-637X/724/1/189 * Carleo et al. (2022) Carleo, I., Giacobbe, P., Guilluy, G., et al. 2022, AJ, 164, 101, doi:10.3847/1538-3881/ac80bf * Carone et al. (2023) Carone, L., Lewis, D. A., Samra, D., Schneider, A. D., & Helling, C. 2023, arXiv e-prints, arXiv:2301.08492, doi:10.48550/arXiv.2301.08492 * Casasayas-Barris et al. 
(2019) Casasayas-Barris, N., Pall'e, E., Yan, F., et al. 2019, A&A, 628, A9, doi:10.1051/0004-6361/201935623 * Casasayas-Barris et al. (2020) --. 2020, A&A, 635, A206, doi:10.1051/0004-6361/201937221 * Casasayas-Barris et al. (2021) Casasayas-Barris, N., Pall'e, E., Stangret, M., et al. 2021, A&A, 647, A26, doi:10.1051/0004-6361/202039539 * Cauley et al. (2018) Cauley, P. W., Kuckein, C., Redfield, S., et al. 2018, AJ, 156, 189, doi:10.3847/1538-3881/aaddf9 * Chachan et al. (2023) Chachan, Y., Knutson, H. A., Lothringer, J., & Blake, G. A. 2023, ApJ, 943, 112, doi:10.3847/1538-4357/aca614 * Changeat (2022) Changeat, Q. 2022, AJ, 163, 106, doi:10.3847/1538-3881/ac4475 * Changeat et al. (2020) Changeat, Q., Edwards, B., Al-Refaie, A. F., et al. 2020, A&A, 647, A10, doi:10.1051/0004-6361/202039539 * C'aceres et al. (2014) C'aceres, C., Kabath, P., Hoyer, S., et al. 2014, A&A, 565, A7, doi:10.1051/0004-6361/20143612022, ApJS, 260, 3, doi: 10.3847/1538-4365/ac5cc2 * Charbonneau et al. (2000) Charbonneau, D., Brown, T. M., Latham, D. W., & Mayor, M. 2000, ApJL, 529, L45, doi: 10.1086/312457 * Charbonneau et al. (2002) Charbonneau, D., Brown, T. M., Noyes, R. W., & Gilliland, R. L. 2002, ApJ, 568, 377, 10.1086/338770 * Charbonneau et al. (2008) Charbonneau, D., Knutson, H. A., Barman, T., et al. 2008, ApJ, 686, 1341, doi: 10.1086/591635 * Charbonneau et al. (1999) Charbonneau, D., Noyes, R. W., Korzennik, S. G., et al. 1999, ApJL, 522, L145, doi: 10.1086/312234 * Charbonneau et al. (2005) Charbonneau, D., Allen, L. E., Megeath, S. T., et al. 2005, ApJ, 626, 523, doi: 10.1086/429991 * Charnay et al. (2015) Charnay, B., Meadows, V., Misra, A., Leconte, J., & Arney, G. 2015, ApJL, 813, L1, doi: 10.1088/2041-8205/813/1/L1 * Christiansen et al. (2010) Christiansen, J. L., Ballard, S., Charbonneau, D., et al. 2010, ApJ, 710, 97, doi: 10.1088/0004-637X/710/1/97 * Constantinou et al. (2023) Constantinou, S., Madhusudhan, N., & Gandhi, S. 2023, ApJL, 943, L10, doi: 10.3847/2041-8213/acaead * Cooper & Showman (2006) Cooper, C. S., & Showman, A. P. 2006, ApJ, 649, 1048, doi: 10.1086/506312 * Coulombe et al. (2023) Coulombe, L.-P., Benneke, B., Challener, R., et al. 2023, Nature, 620, 292, doi: 10.1038/s41586-023-06230-1 * Cowan & Agol (2008) Cowan, N. B., & Agol, E. 2008, ApJL, 678, L129, doi: 10.1086/588553 * Cowan & Agol (2011) --. 2011, ApJ, 726, 82, doi: 10.1088/0004-637X/726/2/82 * Crossfield (2023) Crossfield, I. J. M. 2023, ApJL, 952, L18, doi: 10.3847/2041-8213/ace35f * Crossfield & Kreidberg (2017) Crossfield, I. J. M., & Kreidberg, L. 2017, AJ, 154, 261, doi: 10.3847/1538-3881/aa9279 * Crossfield et al. (2022) Crossfield, I. J. M., Malik, M., Hill, M. L., et al. 2022, ApJL, 937, L17, 10.3847/2041-8213/ac886b * Crouzet et al. (2014) Crouzet, N., McCullough, P. R., Deming, D., & Madhusudhan, N. 2014, ApJ, 795, 166, doi: 10.1088/0004-637X/795/2/166 * Currie et al. (2023) Currie, T., Biller, B., Lagrange, A., et al. 2023, in Astronomical Society of the Pacific Conference Series, Vol. 534, Astronomical Society of the Pacific Conference Series, ed. S. Inutsuka, Y. Aikawa, T. Muto, K. Tomida, & M. Tamura, 799, doi: 10.48550/arXiv.2205.05696 * Dai et al. (2019) Dai, F., Masuda, K., Winn, J. N., & Zeng, L. 2019, ApJ, 883, 79, doi: 10.3847/1538-4357/ab3a3b * Dang et al. (2018) Dang, L., Cowan, N. B., Schwartz, J. C., et al. 2018, Nature Astronomy, 2, 220, doi: 10.1038/s41550-017-0351-6 * Dattilo et al. (2023) Dattilo, A., Batalha, N. M., & Bryson, S. 
2023, arXiv e-prints, arXiv:2308.00103, doi: 10.48550/arXiv.2308.00103 * David et al. (2016) David, T. J., Hillenbrand, L. A., Petigura, E. A., et al. 2016, Nature, 534, 658, doi: 10.1038/nature18293David, T. J., Cody, A. M., Hedges, C. L., et al. 2019, AJ, 158, 79, doi: 10.3847/1538-3881/ab290f de Kok, R. J., Brogi, M., Snellen, I. A. G., et al. 2013, A&A, 554, A82, doi: 10.1051/0004-6361/201321381 de Wit, J., Gillon, M., Demory, B. O., & Seager, S. 2012, A&A, 548, A128, doi: 10.1051/0004-6361/201219060 de Wit, J., Wakeford, H. R., Gillon, M., et al. 2016, Nature, 537, 69, doi: 10.1038/nature18641 de Wit, J., Wakeford, H. R., Lewis, N. K., et al. 2018, Nature Astronomy, 2, 214, doi: 10.1038/s41550-017-0374-z Deming, D., Line, M. R., Knutson, H. A., et al. 2023, AJ, 165, 104, doi: 10.3847/1538-3881/acb210 Deming, D., Seager, S., Richardson, L. J., & Harrington, J. 2005, Nature, 434, 740, doi: 10.1038/nature03507 Deming, D., & Sheppard, K. 2017, ApJ, 841, L3, doi: 10.3847/2041-8213/aa706c Deming, D., Seager, S., Winn, J., et al. 2009, PASP, 121, 952, doi: 10.1086/605913 Deming, D., Knutson, H., Agol, E., et al. 2011, ApJ, 726, 95, doi: 10.1088/0004-637X/726/2/95 Deming, D., Wilkins, A., McCullough, P., et al. 2013, ApJ, 774, 95, doi: 10.1088/0004-637X/774/2/95 Deming, L. D., & Seager, S. 2017, Journal of Geophysical Research (Planets), 122, 53, doi: 10.1002/2016JE005155 Demory, B.-O., Gillon, M., Madhusudhan, N., & Queloz, D. 2016a, MNRAS, 455, 2018, doi: 10.1093/mnras/stv2239 Demory, B.-O., Seager, S., Madhusudhan, N., et al. 2011, ApJ, 735, L12, doi: 10.1088/2041-8205/735/1/L12 Demory, B.-O., de Wit, J., Lewis, N., et al. 2013, ApJ, 776, L25, doi: 10.1088/2041-8205/776/2/L25 Demory, B.-O., Gillon, M., de Wit, J., et al. 2016b, Nature, 532, 207, doi: 10.1038/nature17169 D'esert, J. M., Vidal-Madjar, A., Lecavelier Des Etangs, A., et al. 2008, A&A, 492, 585, doi: 10.1051/0004-6361:200810355 Diamond-Lowe, H., Berta-Thompson, Z., Charbonneau, D., Dittmann, J., & Kempton, E. M. R. 2020a, AJ, 160, 27, doi: 10.3847/1538-3881/aac6dd Diamond-Lowe, H., Charbonneau, D., Malik, M., Kempton, E. M. R., & Beletsky, Y. 2020b, AJ, 160, 188, doi: 10.3847/1538-3881/aba74f Diamond-Lowe, H., Mendon,ca, J. M.,Charbonneau, D., & Buchhave, L. A. 2023, AJ, 165, 169, doi: 10.3847/1538-3881/acbf39 * Diamond-Lowe et al. (2014) Diamond-Lowe, H., Stevenson, K. B., Bean, J. L., Line, M. R., & Fortney, J. J. 2014, ApJ, 796, 66, doi: 10.1088/0004-637X/796/1/66 * Domagal-Goldman et al. (2014) Domagal-Goldman, S. D., Segura, A., Claire, M. W., Robinson, T. D., & Meadows, V. S. 2014, ApJ, 792, 90, doi: 10.1088/0004-637X/792/2/90 * Dos Santos (2022) Dos Santos, L. A. 2022, arXiv e-prints, arXiv:2211.16243, doi: 10.48550/arXiv.2211.16243 * Dos Santos et al. (2022) Dos Santos, L. A., Vidotto, A. A., Vissapragada, S., et al. 2022, A&A, 659, A62, doi: 10.1051/0004-6361/202142038 * Dos Santos et al. (2023) Dos Santos, L. A., Garc'ia Mun\"oz, A., Sing, D. K., et al. 2023, AJ, 166, 89, doi: 10.3847/1538-3881/ace445 * Dressing & Charbonneau (2015) Dressing, C. D., & Charbonneau, D. 2015, ApJ, 807, 45, doi: 10.1088/0004-637X/807/1/45 * Ehrenreich et al. (2015) Ehrenreich, D., Bourrier, V., Wheatley, P. J., et al. 2015, Nature, 522, 459, doi: 10.1038/nature14501 * Ehrenreich et al. (2020) Ehrenreich, D., Lovis, C., Allart, R., et al. 2020, Nature, 580, 597, doi: 10.1038/s41586-020-2107-1 * Esparza-Borges et al. (2023) Esparza-Borges, E., L'opez-Morales, M., Adams Redai, J. I., et al. 
2023, arXiv e-prints, arXiv:2309.00036, doi: 10.48550/arXiv.2309.00036 * Espinoza & Jones (2021) Espinoza, N., & Jones, K. 2021, AJ, 162, 165, doi: 10.3847/1538-3881/ac134d * Evans et al. (2017) Evans, T. M., Sing, D. K., Kataria, T., et al. 2017, Nature, 548, 58, doi: 10.1038/nature23266 * Evans et al. (2018) Evans, T. M., Sing, D. K., Goyal, J. M., et al. 2018, AJ, 156, 283, doi: 10.3847/1538-3881/aaebf * Fauchez et al. (2019) Fauchez, T. J., Turbet, M., Villanueva, G. L., et al. 2019, ApJ, 887, 194, doi: 10.3847/1538-4357/ab5862 * Feinstein et al. (2023) Feinstein, A. D., Radica, M., Welbanks, L., et al. 2023, Nature, 614, 670, doi: 10.1038/s41586-022-05674-1 * Fernandes et al. (2019) Fernandes, R. B., Mulders, G. D., Pascucci, I., Mordasini, C., & Emsenhuber, A. 2019, ApJ, 874, 81, doi: 10.3847/1538-4357/ab0300 * Fischer et al. (2014) Fischer, D. A., Howard, A. W., Laughlin, G. P., et al. 2014, in Protostars and Planets VI, ed. H. Beuther, R. S. Klessen, C. P. Dullemond, & T. Henning, 715-737, doi: 10.2458/azu_uapress.9780816531240-ch031 * Flagg et al. (2023) Flagg, L., Turner, J. D., Deibert, E., et al. 2023, ApJ, 953, L19, doi: 10.3847/2041-8213/ace529 * Flowers et al. (2019) Flowers, E., Brogi, M., Rauscher, E., Kempton, E. M. R., & Chiavassa, A. 2019, AJ, 157, 209, doi: 10.3847/1538-3881/ab164c * Foreman-Mackey et al. (2019) Foreman-Mackey, D., Morton, T. D., Hogg,D. W., Agol, E., & Sch\"olkopf, B. 2016, AJ, 152, 206, doi: 10.3847/0004-6256/152/6/206 * Fortney (2005) Fortney, J. J. 2005, MNRAS, 364, 649, doi: 10.1111/j.1365-2966.2005.09587.x * Fortney et al. (2021) Fortney, J. J., Dawson, R. I., & Komacek, T. D. 2021, Journal of Geophysical Research (Planets), 126, e06629, doi: 10.1029/2020JE006629 * Fortney et al. (2008) Fortney, J. J., Lodders, K., Marley, M. S., & Freedman, R. S. 2008, ApJ, 678, 1419, doi: 10.1086/528370 * Fortney et al. (2013) Fortney, J. J., Mordasini, C., Nettelmann, N., et al. 2013, ApJ, 775, 80, doi: 10.1088/0004-637X/775/1/80 * Fortney et al. (2010) Fortney, J. J., Shabram, M., Showman, A. P., et al. 2010, ApJ, 709, 1396, doi: 10.1088/0004-637X/709/2/1396 * Fortney et al. (2020) Fortney, J. J., Visscher, C., Marley, M. S., et al. 2020, AJ, 160, 288, 10.3847/1538-3881/abc5bd * Fossati et al. (2023) Fossati, L., Pillitteri, I., Shaikhislamov, I. F., et al. 2023, A&A, 673, A37, doi: 10.1051/0004-6361/202245667 * Fu et al. (2017) Fu, G., Deming, D., Knutson, H., et al. 2017, ApJ, 847, L22, doi: 10.3847/2041-8213/aa8e40 * Fu et al. (2021) Fu, G., Deming, D., Lothringer, J., et al. 2021, AJ, 162, 108, doi: 10.3847/1538-3881/ac1200 * Fu et al. (2022) Fu, G., Espinoza, N., Sing, D. K., et al. 2022, ApJ, 940, L35, doi: 10.3847/2041-8213/ac9977 * Fulton & Petigura (2018) Fulton, B. J., & Petigura, E. A. 2018, AJ, 156, 264, doi: 10.3847/1538-3881/aae828 * Fulton et al. (2017) Fulton, B. J., Petigura, E. A., Howard, A. W., et al. 2017, AJ, 154, 109, doi: 10.3847/1538-3881/aa80eb * Fulton et al. (2021) Fulton, B. J., Rosenthal, L. J., Hirsch, L. A., et al. 2021, ApJS, 255, 14, doi: 10.3847/1538-4365/abfcc1 * Gandhi et al. (2020) Gandhi, S., Brogi, M., & Webb, R. K. 2020a, MNRAS, 498, 194, doi: 10.1093/mnras/staa2424 * Gandhi et al. (2022) Gandhi, S., Kesseli, A., Snellen, I., et al. 2022, MNRAS, 515, 749, doi: 10.1093/mnras/stac1744 * Gandhi et al. (2020) Gandhi, S., Madhusudhan, N., & Mandell, A. 2020b, AJ, 159, 232, doi: 10.3847/1538-3881/ab845e * Gandhi et al. (2023) Gandhi, S., Kesseli, A., Zhang, Y., et al. 
2023, AJ, 165, 242, doi: 10.3847/1538-3881/accd65 * Gao & Powell (2021) Gao, P., & Powell, D. 2021, ApJ, 918, L7, doi: 10.3847/2041-8213/ac139f * Gao et al. (2021) Gao, P., Wakeford, H. R., Moran, S. E., & Parmentier, V. 2021, Journal of Geophysical Research (Planets), 126, e06655, 10.1029/2020JE006655 * Gao et al. (2020) Gao, P., Thorngren, D. P., Lee, E. K. H., et al. 2020, Nature Astronomy, 4, 951, doi: 10.1038/s41550-020-1114-3 * Gaudi (2022) Gaudi, B. S. 2022, in Astrophysics and Space Science Library, Vol. 466, Demographics of Exoplanetary Systems, Lecture Notes of the 3rd Advanced School on Exoplanetary Science, ed. K. Biazzo, V. Bozza, L. Mancini, & A. Sozzetti,237-291, doi: 10.1007/978-3-030-88124-5 * Gaudi et al. (2021) Gaudi, B. S., Meyer, M., & Christiansen, J. 2021, in ExoFrontiers; Big Questions in Exoplanetary Science, ed. N. Madhusudhan, 2-1, doi: 10.1088/2514-3433/abfa8fch2 * Giacalone et al. (2021) Giacalone, S., Dressing, C. D., Jensen, E. L. N., et al. 2021, AJ, 161, 24, doi: 10.3847/1538-3881/abc6af * Giacobbe et al. (2021) Giacobbe, P., Brogi, M., Gandhi, S., et al. 2021, Nature, 592, 205, doi: 10.1038/s41586-021-03381-x * Gibson et al. (2022) Gibson, N. P., Nugroho, S. K., Lothringer, J., Maguire, C., & Sing, D. K. 2022, MNRAS, 512, 4618, doi: 10.1093/mnras/stac091 * Gibson et al. (2020) Gibson, N. P., Merritt, S., Nugroho, S. K., et al. 2020, MNRAS, 493, 2215, doi: 10.1093/mnras/staa228 * Gillon et al. (2017) Gillon, M., Triaud, A. H. M. J., Demory, B.-O., et al. 2017, Nature, 542, 456, doi: 10.1038/nature21360 * Ginzburg et al. (2018) Ginzburg, S., Schlichting, H. E., & Sari, R. 2018, MNRAS, 476, 759, doi: 10.1093/mnras/sty290 * Goyal et al. (2021) Goyal, J. M., Lewis, N. K., Wakeford, H. R., MacDonald, R. J., & Mayne, N. J. 2021, ApJ, 923, 242, doi: 10.3847/1538-4357/ac27b2 * Grant et al. (2023) Grant, D., Lothringer, J. D., Wakeford, H. R., et al. 2023, ApJ, 949, L15, doi: 10.3847/2041-8213/acd544 * Greene et al. (2023) Greene, T. P., Bell, T. J., Ducrot, E., et al. 2023, Nature, 618, 39, doi: 10.1038/s41586-023-05951-7 * Guilluy et al. (2019) Guilluy, G., Sozzetti, A., Brogi, M., et al. 2019, A&A, 625, A107, doi: 10.1051/0004-6361/201834615 * Guilluy et al. (2022) Guilluy, G., Giacobbe, P., Carleo, I., et al. 2022, A&A, 665, A104, doi: 10.1051/0004-6361/202243854 * Gully-Santiago et al. (2023) Gully-Santiago, M., Morley, C. V., Luna, J., et al. 2023, arXiv e-prints, arXiv:2307.08959, doi: 10.48550/arXiv.2307.08959 * Guo et al. (2020) Guo, X., Crossfield, I. J. M., Dragomir, D., et al. 2020, AJ, 159, 239, doi: 10.3847/1538-3881/ab8815 * Gupta & Schlichting (2019) Gupta, A., & Schlichting, H. E. 2019, MNRAS, 487, 24, doi: 10.1093/mnras/stz1230 * Hansen (2008) Hansen, B. M. S. 2008, ApJS, 179, 484, doi: 10.1086/591964 * Harbach et al. (2021) Harbach, L. M., Moschou, S. P., Garraffo, C., et al. 2021, ApJ, 913, 130, doi: 10.3847/1538-4357/abf63a * Hardegree-Ullman et al. (2019) Hardegree-Ullman, K. K., Cushing, M. C., Muirhead, P. S., & Christiansen, J. L. 2019, AJ, 158, 75, doi: 10.3847/1538-3881/ab21d2 * Hardegree-Ullman et al. (2020) Hardegree-Ullman, K. K., Zink, J. K., Christiansen, J. L., et al. 2020, ApJS, 247, 28, doi: 10.3847/1538-4365/ab7230 * He et al. (2018) He, C., H'orst, S. M., Lewis, N. K., et al. 2018, AJ,156, 38, doi: 10.3847/1538-3881/aac883 * Meszaros et al. (2020) Meszaros, M., et al., 2020,, doi: 10.3847/1538-3881/aac883 * Meszaros et al. (2020) Meszaros, M., et al., 2020,, doi: 10.3847/1538-3881/ab1a4 * Meszaros et al. 
(2020) Meszaros, M., et al., 2020,, doi: 10. Release Science Team, Ahrer, E.-M., Alderson, L., et al. 2023, Nature, 614, 649, doi:10.1038/s41586-022-05269-w * Kaltenegger & Sasselov (2010) Kaltenegger, L., & Sasselov, D. 2010, ApJ, 708, 1162, doi:10.1088/0004-637X/708/2/1162 * Kasper et al. (2021) Kasper, D., Bean, J. L., Line, M. R., et al. 2021, ApJ, 921, L18, doi:10.3847/2041-8213/ac30e1 * Kasper et al. (2023) --. 2023, AJ, 165, 7, doi:10.3847/1538-3881/ac9f40 * Kataria et al. (2014) Kataria, T., Showman, A. P., Fortney, J. J., Marley, M. S., & Freedman, R. S. 2014, ApJ, 785, 92, doi:10.1088/0004-637X/785/2/92 * Kawashima & Ikoma (2018) Kawashima, Y., & Ikoma, M. 2018, ApJ, 853, 7, doi:10.3847/1538-4357/aaa0c5 * Kawashima & Ikoma (2019) --. 2019, ApJ, 877, 109, doi:10.3847/1538-4357/ab1b1d * Keating et al. (2019) Keating, D., Cowan, N. B., & Dang, L. 2019, Nature Astronomy, 3, 1092, doi:10.1038/s41550-019-0859-z * Kempton et al. (2017) Kempton, E. M. R., Bean, J. L., & Parmentier, V. 2017, ApJ, 845, L20, doi:10.3847/2041-8213/aa84ac * Kempton et al. (2014) Kempton, E. M. R., Perna, R., & Heng, K. 2014, ApJ, 795, 24, doi:10.1088/0004-637X/795/1/24 * Kempton et al. (2018) Kempton, E. M. R., Bean, J. L., Louie, D. R., et al. 2018, PASP, 130, 114401, doi:10.1098/1538-3873/aadf6f * Kempton et al. (2023) Kempton, E. M. R., Zhang, M., Bean, J. L., et al. 2023, Nature, 620, 67, 10.1038/s41586-023-06159-5 * Kesseli et al. (2022) Kesseli, A. Y., Snellen, I. A. G., Casasayas-Barris, N., Molli'ere, P., & Sanchez-L'opez, A. 2022, AJ, 163, 107, doi:10.3847/1538-3881/ac4336 * Khare et al. (1984) Khare, B. N., Sagan, C., Arakawa, E. T., et al. 1984, Icarus, 60, 127, doi:10.1016/0019-1035(84)90142-8 * Kimura & Ikoma (2022) Kimura, T., & Ikoma, M. 2022, Nature Astronomy, 6, 1296, doi:10.1038/s41550-022-01781-1 * King & Wheatley (2021) King, G. W., & Wheatley, P. J. 2021, MNRAS, 501, L28, doi:10.1093/mnrasl/slaa186 * Kirk et al. (2020) Kirk, J., Alam, M. K., L'opez-Morales, M., & Zeng, L. 2020, AJ, 159, 115, doi:10.3847/1538-3881/ab6e66 * Kitzmann et al. (2023) Kitzmann, D., Stock, J. W., & Patzer, A. B. C. 2023, arXiv e-prints, arXiv:2309.02337, doi:10.48550/arXiv.2309.02337 * Knutson et al. (2008) Knutson, H. A., Charbonneau, D., Allen, L. E., Burrows, A., & Megeath, S. T. 2008, ApJ, 673, 526, doi:10.1086/523894 * Knutson et al. (2009) Knutson, H. A., Charbonneau, D., Burrows, A., O'Donovan, F. T., & Mandushev, G. 2009, ApJ, 691, 866, doi:10.1088/0004-637X/691/1/866 * Knutson et al. (2007) Knutson, H. A., Charbonneau, D., Allen, L. E., et al. 2007, Nature, 447, 183, doi:10.1038/nature05782* Knutson et al. (2014) Knutson, H. A., Dragomir, D., Kreidberg, L., et al. 2014, ApJ, 794, 155, doi: 10.1088/0004-637X/794/2/155 * Koll (2022) Koll, D. D. B. 2022, ApJ, 924, 134, 10.3847/1538-4357/ac3b48 * Koll et al. (2019) Koll, D. D. B., Malik, M., Mansfield, M., et al. 2019, ApJ, 886, 140, doi: 10.3847/1538-4357/ab4c91 * Komacek & Showman (2016) Komacek, T. D., & Showman, A. P. 2016, ApJ, 821, 16, doi: 10.3847/0004-637X/821/1/16 * Komacek et al. (2017) Komacek, T. D., Showman, A. P., & Tan, X. 2017, ApJ, 835, 198, doi: 10.3847/1538-4357/835/2/198 * Komacek et al. (2022) Komacek, T. D., Tan, X., Gao, P., & Lee, E. K. H. 2022, ApJ, 934, 79, doi: 10.3847/1538-4357/ac7723 * Kreidberg et al. (2014a) Kreidberg, L., Bean, J. L., D'esert, J.-M., et al. 2014a, Nature, 505, 69, doi: 10.1038/nature12888 * Koll & Parmentier (2014b) --. 2014b, ApJ, 793, L27, doi: 10.1088/2041-8205/793/2/L27 * Kreidberg et al. 
(2018) Kreidberg, L., Line, M. R., Parmentier, V., et al. 2018, AJ, 156, 17, doi: 10.3847/1538-3881/aac3df * Kreidberg et al. (2019) Kreidberg, L., Koll, D. D. B., Morley, C., et al. 2019, Nature, 573, 87, doi: 10.1038/s41586-019-1497-4 * Krenn et al. (2023) Krenn, A. F., Lendl, M., Patel, J. A., et al. 2023, A&A, 672, A24, doi: 10.1051/0004-6361/202245016 * Krissansen-Totton et al. (2018) Krissansen-Totton, J., Garland, R., Irwin, P., & Catling, D. C. 2018, AJ, 156, 114, doi: 10.3847/1538-3881/aad564 * Lally & Vanderburg (2022) Lally, M., & Vanderburg, A. 2022, AJ, 163, 181, 10.3847/1538-3881/ac53a8 * Lamp'on et al. (2020) Lamp'on, M., L'opez-Puertas, M., Lara, L. M., et al. 2020, A&A, 636, A13, doi: 10.1051/0004-6361/201937175 * Lamp'on et al. (2021) Lamp'on, M., L'opez-Puertas, M., Czesla, S., et al. 2021, A&A, 648, L7, doi: 10.1051/0004-6361/202140423 * Lamp'on et al. (2023) Lamp'on, M., L'opez-Puertas, M., Sanz-Forcada, J., et al. 2023, A&A, 673, A140, doi: 10.1051/0004-6361/202245649 * Langeveld et al. (2022) Langeveld, A. B., Madhusudhan, N., & Cabot, S. H. C. 2022, MNRAS, 514, 5192, doi: 10.1093/mnras/stac1539 * Lavie et al. (2017) Lavie, B., Ehrenreich, D., Bourrier, V., et al. 2017, A&A, 605, L7, doi: 10.1051/0004-6361/201731340 * Lavvas & Arfaux (2021) Lavvas, P., & Arfaux, A. 2021, MNRAS, 502, 5643, doi: 10.1093/mnras/stab456 * Lavvas et al. (2019) Lavvas, P., Koskinen, T., Steinrueck, M. E., Garc'ia Mun'oz, A., & Showman, A. P. 2019, ApJ, 878, 118, doi: 10.3847/1538-4357/ab204e * Lecavelier Des Etangs et al. (2010) Lecavelier Des Etangs, A., Ehrenreich, D., Vidal-Madjar, A., et al. 2010, A&A, 514, A72, doi: 10.1051/0004-6361/200913347* Lee (2019) Lee, E. J. 2019, ApJ, 878, 36, doi: 10.3847/1538-4357/ab1b40 * Lee et al. (2022) Lee, E. J., Karalis, A., & Thorngren, D. P. 2022, ApJ, 941, 186, doi: 10.3847/1538-4357/ac9c66 * Lesjak et al. (2023) Lesjak, F., Nortmann, L., Yan, F., et al. 2023, arXiv e-prints, arXiv:2307.11627, 10.48550/arXiv.2307.11627 * Libby-Roberts et al. (2020) Libby-Roberts, J. E., Berta-Thompson, Z. K., D'esert, J.-M., et al. 2020, AJ, 159, 57, doi: 10.3847/1538-3881/ab5d36 * Libby-Roberts et al. (2022) Libby-Roberts, J. E., Berta-Thompson, Z. K., Diamond-Lowe, H., et al. 2022, AJ, 164, 59, doi: 10.3847/1538-3881/ac75de * Lim et al. (2023) Lim, O., Benneke, B., Doyon, R., et al. 2023, arXiv e-prints, arXiv:2309.07047, doi: 10.48550/arXiv.2309.07047 * Lincowski et al. (2011) Lincowski, A. P., Meadows, V. S., Zieba, S., et al. 2023, arXiv e-prints, arXiv:2308.05899, doi: 10.48550/arXiv.2308.05899 * Line et al. (2011) Line, M. R., Vasisht, G., Chen, P., Angerhausen, D., & Yung, Y. L. 2011, ApJ, 738, 32, doi: 10.1088/0004-637X/738/1/32 * Line et al. (2013) Line, M. R., Wolf, A. S., Zhang, X., et al. 2013, ApJ, 775, 137, doi: 10.1088/0004-637X/775/2/137 * Line et al. (2016) Line, M. R., Stevenson, K. B., Bean, J., et al. 2016, AJ, 152, 203, doi: 10.3847/0004-6256/152/6/203 * Line et al. (2021) Line, M. R., Brogi, M., Bean, J. L., et al. 2021, Nature, 598, 580, doi: 10.1038/s41586-021-03912-6 * Linssen & Oklop'ci'c (2023) Linssen, D. C., & Oklop'ci'c, A. 2023, A&A, 675, A193, doi: 10.1051/0004-6361/202346583 * Linssen et al. (2022) Linssen, D. C., Oklop'ci'c, A., & MacLeod, M. 2022, A&A, 667, A54, doi: 10.1051/0004-6361/202243830 * Lodders & Fegley (2002) Lodders, K., & Fegley, B. 2002, Icarus, 155, 393, 10.1006/icar.2001.6740 * Lothringer et al. (2018) Lothringer, J. D., Barman, T., & Koskinen, T. 
2018, ApJ, 866, 27, doi: 10.3847/1538-4357/aadd9e * Lothringer et al. (2021) Lothringer, J. D., Rustamkulov, Z., Sing, D. K., et al. 2021, ApJ, 914, 12, doi: 10.3847/1538-4357/abf8a9 * Lothringer et al. (2022) Lothringer, J. D., Sing, D. K., Rustamkulov, Z., et al. 2022, Nature, 604, 49, doi: 10.1038/s41586-022-04453-2 * Louden & Wheatley (2015) Louden, T., & Wheatley, P. J. 2015, ApJ, 814, L24, doi: 10.1088/2041-8205/814/2/L24 * Lovelock (1965) Lovelock, J. E. 1965, Nature, 207, 568, doi: 10.1038/207568a0 * Lozovsky et al. (2018) Lozovsky, M., Helled, R., Dorn, C., & Venturini, J. 2018, ApJ, 866, 49, doi: 10.3847/1538-4357/aadd09 * Luque & Pall'e (2022) Luque, R., & Pall'e, E. 2022, Science, 377, 1211, doi: 10.1126/science.abl7164* Lustig-Yaeger et al. (2019) Lustig-Yaeger, J., Meadows, V. S., & Lincowski, A. P. 2019, AJ, 158, 27, doi: 10.3847/1538-3881/ab21e0 * Lustig-Yaeger et al. (2023) Lustig-Yaeger, J., Fu, G., May, E. M., et al. 2023, arXiv e-prints, arXiv:2301.04191, doi: 10.48550/arXiv.2301.04191 * MacDonald & Batalha (2023) MacDonald, R. J., & Batalha, N. E. 2023, Research Notes of the American Astronomical Society, 7, 54, doi: 10.3847/2515-5172/acc46a * MacLeod & Oklop'ci'c (2022) MacLeod, M., & Oklop'ci'c, A. 2022, ApJ, 926, 226, doi: 10.3847/1538-4357/ac46ce * Madhusudhan (2012) Madhusudhan, N. 2012, ApJ, 758, 36, 10.1088/0004-637X/758/1/36 * Madhusudhan (2019) --. 2019, ARA&A, 57, 617, doi: 10.1146/annurev-astro-081817-051846 * Madhusudhan et al. (2016) Madhusudhan, N., Agu'ndez, M., Moses, J. I., & Hu, Y. 2016, Space Sci. Rev., 205, 285, doi: 10.1007/s11214-016-0254-3 * Madhusudhan et al. (2014a) Madhusudhan, N., Amin, M. A., & Kennedy, G. M. 2014a, ApJ, 794, L12, doi: 10.1088/2041-8205/794/1/L12 * Madhusudhan et al. (2017) Madhusudhan, N., Bitsch, B., Johansen, A., & Eriksson, L. 2017, MNRAS, 469, 4102, doi: 10.1093/mnras/stx1139 * Madhusudhan et al. (2014b) Madhusudhan, N., Crouzet, N., McCullough, P. R., Deming, D., & Hedges, C. 2014b, ApJ, 791, L9, doi: 10.1088/2041-8205/791/1/L9 * Madhusudhan et al. (2023) Madhusudhan, N., Sarkar, S., Constantinou, S., et al. 2023, arXiv e-prints, arXiv:2309.05566, doi: 10.48550/arXiv.2309.05566 * Madhusudhan & Seager (2009) Madhusudhan, N., & Seager, S. 2009, ApJ, 707, 24, doi: 10.1088/0004-637X/707/1/24 * Maguire et al. (2023) Maguire, C., Gibson, N. P., Nugroho, S. K., et al. 2023, MNRAS, 519, 1030, doi: 10.1093/mnras/stac3388 * Majeau et al. (2012) Majeau, C., Agol, E., & Cowan, N. B. 2012, ApJ, 747, L20, doi: 10.1088/2041-8205/747/2/L20 * Malik et al. (2019) Malik, M., Kempton, E. M. R., Koll, D. D. B., et al. 2019, ApJ, 886, 142, doi: 10.3847/1538-4357/ab4a05 * Mansfield et al. (2019) Mansfield, M., Kite, E. S., Hu, R., et al. 2019, ApJ, 886, 141, doi: 10.3847/1538-4357/ab4c90 * Mansfield et al. (2018) Mansfield, M., Bean, J. L., Line, M. R., et al. 2018, AJ, 156, 10, 10.3847/1538-3881/aac497 * Mansfield et al. (2020) Mansfield, M., Bean, J. L., Stevenson, K. B., et al. 2020, ApJ, 888, L15, doi: 10.3847/2041-8213/ab5b09 * Mansfield et al. (2021) Mansfield, M., Line, M. R., Bean, J. L., et al. 2021, Nature Astronomy, 5, 1224, doi: 10.1038/s41550-021-01455-4 * Marley & Robinson (2015) Marley, M. S., & Robinson, T. D. 2015, ARA&A, 53, 279, doi: 10.1146/annurev-astro-082214-122522 * May et al. (2022) May, E. M., Stevenson, K. B., Bean, J. L., et al. 2022, AJ, 163, 256, doi: 10.3847/1538-3881/ac6261 * Mayor & Queloz (1995) Mayor, M., & Queloz, D. 1995, Nature, 378, 355, doi: 10.1038/378355a0 * Mazeh et al. 
(2016) Mazeh, T., Holczer, T., & Faigler, S. 2016, A&A,589, A75, doi: 10.1051/0004-6361/201528065 * Mbarek & Kempton (2016) Mbarek, R., & Kempton, E. M. R. 2016, ApJ, 827, 121, doi: 10.3847/0004-637X/827/2/121 * McCullough & MacKenty (2012) McCullough, P., & MacKenty, J. 2012, Considerations for using Spatial Scans with WFC3, Instrument Science Report WFC3 2012-08, 17 pages * McCullough et al. (2014) McCullough, P. R., Crouzet, N., Deming, D., & Madhusudhan, N. 2014, ApJ, 791, 55, doi: 10.1088/0004-637X/791/1/55 * Meier Vald'es et al. (2023) Meier Vald'es, E. A., Morris, B. M., Demory, B. O., et al. 2023, arXiv e-prints, arXiv:2307.06085, doi: 10.48550/arXiv.2307.06085 * Menou (2012) Menou, K. 2012, ApJ, 745, 138, 10.1088/0004-637X/745/2/138 * Mercier et al. (2022) Mercier, S. J., Dang, L., Gass, A., Cowan, N. B., & Bell, T. J. 2022, AJ, 164, 204, doi: 10.3847/1538-3881/ac8f22 * Mikal-Evans et al. (2021) Mikal-Evans, T., Crossfield, I. J. M., Benneke, B., et al. 2021, AJ, 161, 18, doi: 10.3847/1538-3881/abc874 * Mikal-Evans et al. (2022) Mikal-Evans, T., Sing, D. K., Barstow, J. K., et al. 2022, Nature Astronomy, 6, 471, doi: 10.1038/s41550-021-01592-w * Mikal-Evans et al. (2023a) Mikal-Evans, T., Madhusudhan, N., Dittmann, J., et al. 2023a, AJ, 165, 84, doi: 10.3847/1538-3881/aca90b * Mikal-Evans et al. (2023b) Mikal-Evans, T., Sing, D. K., Dong, J., et al. 2023b, ApJ, 943, L17, doi: 10.3847/2041-8213/acb049 * Miles et al. (2023) Miles, B. E., Biller, B. A., Patapis, P., et al. 2023, ApJ, 946, L6, doi: 10.3847/2041-8213/acb04a * Miller-Ricci & Fortney (2010) Miller-Ricci, E., & Fortney, J. J. 2010, ApJ, 716, L74, doi: 10.1088/2041-8205/716/1/L74 * Miller-Ricci et al. (2009) Miller-Ricci, E., Seager, S., & Sasselov, D. 2009, ApJ, 690, 1056, doi: 10.1088/0004-637X/690/2/1056 * Miller-Ricci Kempton & Rauscher (2012) Miller-Ricci Kempton, E., & Rauscher, E. 2012, ApJ, 751, 117, doi: 10.1088/0004-637X/751/2/117 * Miller-Ricci Kempton et al. (2012) Miller-Ricci Kempton, E., Zahnle, K., & Fortney, J. J. 2012, ApJ, 745, 3, doi: 10.1088/0004-637X/745/1/3 * Montet et al. (2014) Montet, B. T., Crepp, J. R., Johnson, J. A., Howard, A. W., & Marcy, G. W. 2014, ApJ, 781, 28, doi: 10.1088/0004-637X/781/1/28 * Moran et al. (2023) Moran, S. E., Stevenson, K. B., Sing, D. K., et al. 2023, ApJ, 948, L11, doi: 10.3847/2041-8213/accb9c * Morley et al. (2013) Morley, C. V., Fortney, J. J., Kempton, E. M. R., et al. 2013, ApJ, 775, 33, doi: 10.1088/0004-637X/775/1/33 * Morley et al. (2015) Morley, C. V., Fortney, J. J., Marley, M. S., et al. 2015, ApJ, 815, 110, doi: 10.1088/0004-637X/815/2/110 * Morley et al. (2015) Morley, C. V., Kreidberg, L., Rustamkulov, Z.,Robinson, T., & Fortney, J. J. 2017, ApJ, 850, 121, doi: 10.3847/1538-4357/aa927b * Morris et al. (2022) Morris, B. M., Heng, K., Jones, K., et al. 2022, A&A, 660, A123, doi: 10.1051/0004-6361/202142135 * Morton et al. (2016) Morton, T. D., Bryson, S. T., Coughlin, J. L., et al. 2016, ApJ, 822, 86, doi: 10.3847/0004-637X/822/2/86 * Mousis et al. (2020) Mousis, O., Deleuil, M., Aguichine, A., et al. 2020, ApJ, 896, L22, doi: 10.3847/2041-8213/ab9530 * Mugnai et al. (2021) Mugnai, L. V., Modirrousta-Galian, D., Edwards, B., et al. 2021, AJ, 161, 284, doi: 10.3847/1538-3881/abf3c3 * Mulders et al. (2015) Mulders, G. D., Pascucci, I., & Apai, D. 2015, ApJ, 814, 130, doi: 10.1088/0004-637X/814/2/130 * Neil et al. (2022) Neil, A. R., Liston, J., & Rogers, L. A. 2022, ApJ, 933, 63, doi: 10.3847/1538-4357/ac609b * Ninan et al. (2020) Ninan, J. 
P., Stefansson, G., Mahadevan, S., et al. 2020, ApJ, 894, 97, 10.3847/1538-4357/ab8559 * Nortmann et al. (2018) Nortmann, L., Pall'e, E., Salz, M., et al. 2018, Science, 362, 1388, doi: 10.1126/science.aat5348 * Oberg et al. (2011) Oberg, K. I., Murray-Clay, R., & Bergin, E. A.-2011, ApJ, 743, L16, doi: 10.1088/2041-8205/743/1/L16 * Oklop'ci'c & Hirata (2018) Oklop'ci'c, A., & Hirata, C. M. 2018, ApJ, 855, L11, doi: 10.3847/2041-8213/aaada9 * Orell-Miquel et al. (2022) Orell-Miquel, J., Murgas, F., Pall'e, E., et al. 2022, A&A, 659, A55, doi: 10.1051/0004-6361/202142455 * Orell-Miquel et al. (2023) Orell-Miquel, J., Lamp'on, M., L'opez-Puertas, M., et al. 2023, arXiv e-prints, arXiv:2307.05191, doi: 10.48550/arXiv.2307.05191 * Owen (2019) Owen, J. E. 2019, Annual Review of Earth and Planetary Sciences, 47, 67, doi: 10.1146/annurev-earth-053018-060246 * Owen & Adams (2014) Owen, J. E., & Adams, F. C. 2014, MNRAS, 444, 3761, doi: 10.1093/mnras/stu1684 * Owen & Wu (2017) Owen, J. E., & Wu, Y. 2017, ApJ, 847, 29, doi: 10.3847/1538-4357/aa890a * Owen et al. (2023) Owen, J. E., Murray-Clay, R. A., Schreyer, E., et al. 2023, MNRAS, 518, 4357, doi: 10.1093/mnras/stac3414 * Pai Asnodkar et al. (2022) Pai Asnodkar, A., Wang, J., Eastman, J. D., et al. 2022, AJ, 163, 155, doi: 10.3847/1538-3881/ac51d2 * Palle et al. (2020a) Palle, E., Oshagh, M., Casasayas-Barris, N., et al. 2020a, A&A, 643, A25, doi: 10.1051/0004-6361/202038583 * Palle et al. (2020b) Palle, E., Nortmann, L., Casasayas-Barris, N., et al. 2020b, A&A, 638, A61, 10.1051/0004-6361/202037719 * Parmentier et al. (2021) Parmentier, V., Showman, A. P., & Fortney, J. J. 2021, MNRAS, 501, 78, doi: 10.1093/mnras/staa3418 * Parmentier et al. (2018) Parmentier, V., Line, M. R., Bean, J. L., et al. 2018, A&A, 617, A110, doi: 10.1051/0004-6361/201833059* Pelletier et al. (2021) Pelletier, S., Benneke, B., Darveau-Bernier, A., et al. 2021, AJ, 162, 73, doi: 10.3847/1538-3881/ac0428 * Pelletier et al. (2023) Pelletier, S., Benneke, B., Ali-Dib, M., et al. 2023, Nature, 619, 491, doi: 10.1038/s41586-023-06134-0 * Perez-Becker & Showman (2013) Perez-Becker, D., & Showman, A. P. 2013, ApJ, 776, 134, doi: 10.1088/0004-637X/776/2/134 * Perez-Benz'alez et al. (2023) Perez-Benz'alez, J., Greklek-McKeon, M., Vissapragada, S., et al. 2023, arXiv e-prints, arXiv:2307.09515, doi: 10.48550/arXiv.2307.09515 * Perna et al. (2010) Perna, R., Menou, K., & Rauscher, E. 2010, ApJ, 719, 1421, doi: 10.1088/0004-637X/719/2/1421 * Petigura et al. (2018) Petigura, E. A., Marcy, G. W., Winn, J. N., et al. 2018, AJ, 155, 89, doi: 10.3847/1538-3881/aaa54c * Petigura et al. (2022) Petigura, E. A., Rogers, J. G., Isaacson, H., et al. 2022, AJ, 163, 179, doi: 10.3847/1538-3881/ac51e3 * Pidhorodetska et al. (2020) Pidhorodetska, D., Fauchez, T. J., Villanueva, G. L., Domagal-Goldman, S. D., & Kopparapu, R. K. 2020, ApJ, 898, L33, 10.3847/2041-8213/aba4a1 * Pinhas et al. (2019) Pinhas, A., Madhusudhan, N., Gandhi, S., & MacDonald, R. 2019, MNRAS, 482, 1485, doi: 10.1093/mnras/sty2544 * Piso et al. (2016) Piso, A.-M. A., Pegues, J., & Oberg, K. I. 2016,\" ApJ, 833, 203, doi: 10.3847/1538-4357/833/2/203 * Plavchan et al. (2020) Plavchan, P., Barclay, T., Gagn'e, J., et al. 2020, Nature, 582, 497, doi: 10.1038/s41586-020-2400-z * Polman et al. (2023) Polman, J., Waters, L. B. F. M., Min, M., Miguel, Y., & Khorshid, N. 2023, A&A, 670, A161, doi: 10.1051/0004-6361/202244647 * Pont et al. (2008) Pont, F., Knutson, H., Gilliland, R. L., Moutou, C., & Charbonneau, D. 
2008, MNRAS, 385, 109, doi: 10.1111/j.1365-2966.2008.12852.x * Pontoppidan et al. (2022) Pontoppidan, K. M., Barrientes, J., Blome, C., et al. 2022, ApJ, 936, L14, doi: 10.3847/2041-8213/ac8a4e * Powell et al. (2019) Powell, D., Louden, T., Kreidberg, L., et al. 2019, ApJ, 887, 170, doi: 10.3847/1538-4357/ab55d9 * Prinoth et al. (2022) Prinoth, B., Hoeijmakers, H. J., Kitzmann, D., et al. 2022, Nature Astronomy, 6, 449, doi: 10.1038/s41550-021-01581-z * Prinoth et al. (2023) Prinoth, B., Hoeijmakers, H. J., Pelletier, S., et al. 2023, arXiv e-prints, arXiv:2308.04523, doi: 10.48550/arXiv.2308.04523 * Rackham et al. (2018) Rackham, B. V., Apai, D., & Giampapa, M. S. 2018, ApJ, 853, 122, doi: 10.3847/1538-4357/aaa08c * Rackham et al. (2023) Rackham, B. V., Espinoza, N., Berdyugina, S. V., et al. 2023, RAS Techniques and Instruments, 2, 148, doi: 10.1093/rasti/rzad009* Radica et al. (2023) Radica, M., Welbanks, L., Espinoza, N., et al. 2023, MNRAS, 524, 835, doi: 10.1093/mnras/stad1762 * Rauscher et al. (2018) Rauscher, E., Suri, V., & Cowan, N. B. 2018, AJ, 156, 235, doi: 10.3847/1538-3881/aae57f * Reggiani et al. (2022) Reggiani, H., Schlaufman, K. C., Healy, B. F., Lothringer, J. D., & Sing, D. K. 2022, AJ, 163, 159, doi: 10.3847/1538-3881/ac4d9f * Richardson et al. (2007) Richardson, L. J., Deming, D., Horning, K., Seager, S., & Harrington, J. 2007, Nature, 445, 892, doi: 10.1038/nature05636 * Rockcliffe et al. (2023) Rockcliffe, K. E., Newton, E. R., Youngblood, A., et al. 2023, AJ, 166, 77, doi: 10.3847/1538-3881/ace536 * Rogers et al. (2021) Rogers, J. G., Gupta, A., Owen, J. E., & Schlichting, H. E. 2021, MNRAS, 508, 5886, doi: 10.1093/mnras/stab2897 * Rogers et al. (2023) Rogers, J. G., Schlichting, H. E., & Owen, J. E. 2023, ApJ, 947, L19, doi: 10.3847/2041-8213/acc86f * Rogers & Seager (2010) Rogers, L. A., & Seager, S. 2010, ApJ, 716, 1208, doi: 10.1088/0004-637X/716/2/1208 * Rogers (2017) Rogers, T. M. 2017, Nature Astronomy, 1, 0131, doi: 10.1038/s41550-017-0131 * Roman et al. (2021) Roman, M. T., Kempton, E. M. R., Rauscher, E., et al. 2021, ApJ, 908, 101, doi: 10.3847/1538-4357/abd549 * Roth et al. (2021) Roth, A., Drummond, B., H'ebrard, E., et al. 2021, MNRAS, 505, 4515, doi: 10.1093/mnras/stab1256 * Rowe et al. (2008) Rowe, J. F., Matthews, J. M., Seager, S., et al. 2008, ApJ, 689, 1345, doi: 10.1086/591835 * Roy et al. (2022) Roy, P.-A., Benneke, B., Piaulet, C., et al. 2022, ApJ, 941, 89, doi: 10.3847/1538-4357/ac9f18 * Rustamkulov et al. (2023) Rustamkulov, Z., Sing, D. K., Mukherjee, S., et al. 2023, Nature, 614, 659, doi: 10.1038/s41586-022-05677-y * Savel et al. (2023) Savel, A. B., Kempton, E. M. R., Rauscher, E., et al. 2023, ApJ, 944, 99, doi: 10.3847/1538-4357/acb141 * Schaefer & Fegley (2010) Schaefer, L., & Fegley, B. 2010, Icarus, 208, 438, doi: 10.1016/j.icarus.2010.01.026 * Schreyer et al. (2023) Schreyer, E., Owen, J. E., Spake, J. J., Bahroloom, Z., & Di Giampasquale, S. 2023, arXiv e-prints, arXiv:2302.10947, doi: 10.48550/arXiv.2302.10947 * Schwartz & Cowan (2015) Schwartz, J. C., & Cowan, N. B. 2015, MNRAS, 449, 4192, doi: 10.1093/mnras/stv470 * Schwartz et al. (2017) Schwartz, J. C., Kashner, Z., Jovmir, D., & Cowan, N. B. 2017, ApJ, 850, 154, doi: 10.3847/1538-4357/aa9567 * Schwarz et al. (2015) Schwarz, H., Brogi, M., de Kok, R., Birkby, J., & Snellen, I. 2015, A&A, 576, A111, doi: 10.1051/0004-6361/201425170 * Seager (2010) Seager, S. 2010, Exoplanet Atmospheres: Physical Processes * Seager & Deming (2009) Seager, S., & Deming, D. 
The field of exoplanet atmospheric characterization has recently made considerable advances with the advent of high-resolution spectroscopy from large ground-based telescopes and the commissioning of the James Webb Space Telescope (JWST). We have entered an era in which atmospheric compositions, aerosol properties, thermal structures, mass loss, and three-dimensional effects can be reliably constrained. While the challenges of remote sensing techniques imply that individual exoplanet atmospheres will likely never be characterized to the degree of detail that is possible for solar system bodies, exoplanets present an exciting opportunity to characterize a diverse array of worlds with properties that are not represented in our solar system. This review article summarizes the current state of exoplanet atmospheric studies for transiting planets. We focus on how observational results inform our understanding of exoplanet properties and ultimately address broad questions about planetary formation, evolution, and diversity. This review is meant to provide an overview of the exoplanet atmospheres field for planetary- and geo-scientists without astronomy backgrounds, and exoplanet specialists, alike. We give special attention to the first year of JWST data and recent results in high-resolution spectroscopy that have not been summarized by previous review articles.
Give a concise overview of the text below.
242
arxiv-format/2012_03057v1.md
Urban Crowdsensing using Social Media: An Empirical Study on Transformer and Recurrent Neural Networks Jerome Heng1, Junhua Liu1 and Kwan Hui Lim1 1Information Systems Technology and Design Pillar, Singapore University of Technology and Design Email: {jerome_heng, junhua_liu}@mymail.sutd.edu.sg, [email protected] ###### ### _Events Dataset_ Table I shows the summary statistics of our event detection dataset. This dataset comprises a set of tweets and the corresponding list of events. As a proxy for events label, we utilize a list of all events in Melbourne that had an event permit issued1. As a further pre-processing, we filtered out events that were likely to not be public, such as film/movie shoots. Using the Twitter API, we then extracted the tweets posted during each event and within 100m of the event location, as well as all tweets in Melbourne from the first event date to the last event date. Following which, this resulted in a dataset of 1.39 million tweets, of which 22k were tweets related to an event Footnote 1: [https://data.melbourne.vic.gov.au/Events/Event-permits-2014-2018-including-film-shoots-phot/sex6-6426](https://data.melbourne.vic.gov.au/Events/Event-permits-2014-2018-including-film-shoots-phot/sex6-6426) Footnote 2: [https://data.melbourne.vic.gov.au/Transport/Pedestrian-Counting-System-2009-to-Present-counts-fb2ak-ttpp](https://data.melbourne.vic.gov.au/Transport/Pedestrian-Counting-System-2009-to-Present-counts-fb2ak-ttpp) ### _Crowd Level Dataset_ Table II summarizes the main statistics of our crowd level prediction dataset. There are two components to this dataset, namely a list of pedestrian sensors and their pedestrian counts and the geo-tagged tweets in the vicinity of these sensors. The pedestrian sensor component comprises 3.1 million pedestrian count readings from 66 sensors, retrieved from Melbourne Open Data3. In addition, we collected 266k geo-tagged tweets posted within Melbourne City between 2010 to 2018. These tweets were then mapped to individual pedestrian sensor locations if their distances differ by less than 100m. Footnote 3: [https://www.census.gov.au/Transport/Pedestrian-Counting-System-2009-to-Present-counts-fb2ak-ttpp](https://www.census.gov.au/Transport/Pedestrian-Counting-System-2009-to-Present-counts-fb2ak-ttpp) Upon inspecting the resultant dataset, we realized that many of the tweets had duplicate coordinates. A likely cause of this issue is the tendency for users to tag tweets with Twitter \"places\" [9], which are polygonal areas corresponding to places of interest that have a standard set of coordinates; in contrast to having their phone GPS report their accurate coordinates. To address this issue, we wrote a script to check for duplicate lat/long coordinates. We set a threshold of 100 maximum duplicates, which left us with a filtered dataset 10% of its original size. ## III Event Detection For the event detection task, one concern is with the imbalance distribution of classes. To address this issue, we first performed undersampling as our dataset is very imbalanced with a minority of non-events class [10]. After this undersampling process, we obtain a dataset with approximately 30% events and 70% non-events, comprising a total of 74,136 tweets. Using this set of tweets, we used 90% for training and 5% each for our development and test set. 
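The two proximity rules described above (keeping only tweets within 100 m of an event or sensor location, and capping the number of tweets that share the exact same coordinates) can be made concrete with a short sketch. This is not the authors' code: the column names (`tweet_id`, `lat`, `lon`) and the haversine-based distance test are assumptions used purely for illustration.

```python
import pandas as pd
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/long points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def drop_overused_coordinates(tweets: pd.DataFrame, max_duplicates: int = 100) -> pd.DataFrame:
    """Remove tweets whose exact (lat, lon) pair occurs more than `max_duplicates`
    times; such repeated coordinates are mostly Twitter 'place' centroids rather
    than true GPS fixes."""
    counts = tweets.groupby(["lat", "lon"])["tweet_id"].transform("count")
    return tweets[counts <= max_duplicates]

def tweets_near(tweets: pd.DataFrame, site_lat: float, site_lon: float,
                radius_m: float = 100.0) -> pd.DataFrame:
    """Keep tweets posted within `radius_m` of a sensor or event location."""
    dist = tweets.apply(lambda r: haversine_m(r["lat"], r["lon"], site_lat, site_lon), axis=1)
    return tweets[dist <= radius_m]
```

Filtering on exact coordinate duplicates is only a cheap proxy for detecting place-tagged tweets; a stricter variant would match coordinates against the known centroids of Twitter place polygons.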
In our experiments, we empirically evaluate the following neural network based models, namely: Long Short-Term Memory (LSTM) [11], Gated Recurrent Unit (GRU) [12], Stacked LSTM (S-LSTM) and DistilBERT [13]. In these recurrent neural networks, we used GloVe embedding [14] to represent the individual words in the tweets. We also empirically select the best performing values for the learning rate, dropout rate, hidden states, batch size on the development set and report the performance on the test set. For our evaluation, we use the commonly used metrics of precision, recall and F1-score. Table III shows the results of our preliminary experiments in terms of precision, recall and F1-score for each class as well as the weighted mean. In terms of Weighted F1-score, LSTM is the best performer, followed by GRU, S-LSTM and DistilBERT. We believe that the smaller training dataset after the undersampling process contributed to a poorer performance, particularly for DistilBERT. Moving forward, we intend to further augment this dataset with more tweets that correspond to the \"Event\" class and repeat the experiments on similar datasets. ## IV Crowd Level Exploratory Analytics By plotting the tweet and pedestrian count by sensor location in a specific year, we observe that there is some correlation between the two; where there are high pedestrian counts near a \\begin{table} \\begin{tabular}{l c c c c c} \\hline \\hline & & Number of & Number of & Number of \\\\ Name & Date & Data Points & crowd sensors & unique users \\\\ \\hline Sensor Counts & 2009-05-01 to 2020-04-30 & 3,132,346 & 66 & - \\\\ Tweets (Sensor-related) & 2010-09-12 to 2018-06-04 & 266,931 & - & 20,176 \\\\ Tweets (max 100 duplicate coords.) & 2010-09-21 to 2018-02-02 & 24,638 & 42 & 5,659 \\\\ Tweets (max 10 duplicate coords.) & 2010-09-21 to 2018-02-02 & 12,801 & 42 & 3,785 \\\\ Flickr (near to Town Hall (West)) & 2010-07-08 to 2020-03-29 & 6,076 & - & 852 \\\\ \\hline \\hline \\end{tabular} \\end{table} TABLE II: Dataset for Crowd Level Prediction \\begin{table} \\begin{tabular}{l c c c c c c c c c} \\hline \\hline & \\multicolumn{2}{c}{Weighted Average} & \\multicolumn{3}{c}{“Event” Class} & \\multicolumn{3}{c}{“Non-Event” Class} \\\\ Model & Precision & Recall & F1 & Precision & Recall & F1 & Precision & Recall & F1 \\\\ \\hline LSTM & 0.7080 & 0.7292 & 0.6863 & 0.6267 & 0.2489 & 0.3563 & 0.7431 & 0.9361 & 0.8285 \\\\ GRU & 0.7230 & 0.7205 & 0.6400 & 0.7299 & 0.1141 & 0.1974 & 0.7200 & 0.9818 & 0.8308 \\\\ S-LSTM & 0.7130 & 0.7078 & 0.6042 & 0.7260 & 0.0476 & 0.0894 & 0.7074 & 0.9923 & 0.8260 \\\\ DistilBERT & 0.4884 & 0.6989 & 0.5750 & 0.0000 & 0.0000 & 0.0000 & 0.6989 & 1.0000 & 0.8227 \\\\ \\hline \\hline \\end{tabular} \\end{table} TABLE III: Results for the Event Detection Taskparticular sensor location, the corresponding number of tweets is likely to also be high. An example is shown in Figure 1. The exceptions to this trend are largely due to some sensors not having any linked tweets, especially when there are many sensors close together (e.g in the city center). These filtering algorithms left us with a very sparse twitter dataset compared to our pedestrian count dataset. For instance, taking the counting sensor at Town Hall (West) in 2017, we had a tweet range from 0-500 per hour and a pedestrian count range from 200-1000 per hour. Based on a scatter plot of the tweets and pedestrian hourly counts, we observed a large proportion of data points where there were no tweets and a varying number of pedestrian counts. 
This makes it challenging to find meaningful correlations of the tweet counts to the pedestrian counts from this scatter plot. This was confirmed by an initial pass through of the data to a few logistic regression and linear regression algorithms which showed little to any usable results - the average accuracy was below 10% for the test set. A dataset used for crowd level prediction must have reduced sparsity, for instance by including data from other sources. Flickr is one such source that we looked at. Flickr is particularly suitable for our purposes, because geo-location data in Flickr comes from the camera's metadata. We used the Flickr API to crawl posts from the same timeframe as the Twitter dataset. Thus, we can use Flickr descriptions to augment the text data from tweets. However, a detailed method of extracting meaningful data patterns from a combined data source requires more investigation. ## V Conclusion Our main contribution is in curating two datasets, one for event detection and one for crowd level prediction. We identify the difficulties of creating a dataset for crowd level regression - particularly the issues relating to data sparsity and geo-location accuracy. We also discussed some preliminary results on the two problems of event detection and crowd level prediction. For the event detection problem, we experimented with an initial set of neural network models, such as LSTM, GRU, Staked LSTM and DistilBERT, and discuss preliminary results and their limitations. For the crowd level prediction problem, we performed exploratory data analytics to identify trends between social media data and pedestrian sensors. ## VI Acknowledgement This research is funded in part by the Singapore University of Technology and Design under grant SRG-ISTD-2018-140. ## References * [1]K. Al-Kodmany (2013) Crowd management and urban design: new scientific approaches. Urban Design International18 (4), pp. 282-295. Cited by: SSI. * [2]S. B. Ranneries, M. E. Kalst, S. A. Nielsen, L. N. Dalgaard, L. D. Christensen, and N. Kanhabua (2016) Wisdom of the local crowd: detecting local events using social media data. In Proceedings of the 8th ACM Conference on Web Science, pp. 352-354. Cited by: SSI. * [3]S. B. Ranneries, M. E. Kalst, S. A. Nielsen, L. N. Dalgaard, L. D. Christensen, and N. Kanhabua (2016) Wisdom of the local crowd: detecting local events using social media data. In Proceedings of the 8th ACM Conference on Web Science, pp. 352-354. Cited by: SSI. * [4]S. Hochreiter and J. Schmidhuber (1997) Long short-term memory. Neural computation9 (8), pp. 1735-1780. Cited by: SSII-A. * [5]S. Hochreiter and J. Schmidhuber (1997) Long short-term memory. Neural computation9 (8), pp. 1735-1780. Cited by: SSII-A. * [6]S. Hochreiter (1997) Learning from imbalanced data sets. Springer. Cited by: SSII-A. * [7]S. Hochreiter (1997) Learning from imbalanced data sets. Springer. Cited by: SSII-A. * [8]S. Hochreiter (1997) Learning from imbalanced data sets. Springer. Cited by: SSII-A. * [9]S. Hochreiter (1997) Learning from imbalanced data sets. Springer. Cited by: SSII-A. * [10]S. Hochreiter (1997) Learning from imbalanced data sets. Springer. Cited by: SSII-A. * [11]S. Hochreiter (1997) Learning from imbalanced data sets. Springer. Cited by: SSII-A. * [12]S. B. Ranneries, M. E. Kalst, S. A. Nielsen, L. N. Dalgaard, L. D. Christensen, and N. Kanhabua (2016) Wisdom of the local crowd: detecting local events using social media data. In Proceedings of the 8th ACM Conference on Web Science, pp. 352-354. 
An important aspect of urban planning is understanding crowd levels at various locations, which typically requires the use of physical sensors. Such sensors are potentially costly and time consuming to implement on a large scale. To address this issue, we utilize publicly available social media datasets and use them as the basis for two urban sensing problems, namely event detection and crowd level prediction. One main contribution of this work is our collected dataset from Twitter and Flickr, alongside ground truth events. We demonstrate the usefulness of this dataset with two preliminary supervised learning experiments.
Give a concise overview of the text below.
104
isprs/20ae3568_2746_433a_a400_b5f624387163.md
**INTERACTIVE ANALYSIS OF POLARIMETRIC SIR-C AND LANDSAT-TM DATA FOR THE SPECTRAL AND TEXTURAL CHARACTERIZATION OF THE LAND COVER IN SW AMAZONIA, BRAZIL** Joao Roberto dos Santos \({}^{1}\) Hermann Johann Heinrich Kux \({}^{1}\) Manfred Keil \({}^{2}\) Maria Silvia Pardi Lacruz \({}^{1}\) Dominic R. Scales \({}^{2}\) \({}^{1}\) Instituto Nacional de Pesquisas Espaciais - INPE, Sao Jose dos Campos, Brazil [email protected] \({}^{2}\) German Aerospace Research Establishment (DLR), Oberpfaffenhofen, Germany [email protected] Commission VII, Working Group 2 **KEYWORDS: Land use, Forestry, SAR, Regrowth, Amazonia** ## 1 Introduction In the frame of studies related to global change, there is a need to monitor tropical forests, not only to estimate the yearly deforestation rate, but also to follow the dynamics of land occupation by man. The frequent cloud coverage in certain tropical areas has made data collection with optical spaceborne sensors an impossible task. Imaging radar offers the potential to map vegetation and land use classes in such areas because of its independence of weather conditions. A cooperation program between the National Brazilian Institute of Space Research (INPE) and the German Aerospace Research Establishment (DLR) is under development, with the objective of evaluating the capabilities of remote sensing techniques to control the environmental impact of deforestation in Amazonia and to map rainforest formations and land use (Keil et al., 1995). As a result of this cooperation, the main objective of the present study is to evaluate the spectral and textural capabilities of SIR-C (Shuttle Imaging Radar) data at different polarizations (HH, VV, HV, LL and Total Power), especially at L-band, to characterize the changes of land cover in a section of West Brazilian Amazonia, in Acre State. An interactive analysis of Landsat-TM data, using the six optical bands, was also performed to identify the main land cover units along the Rio Branco-Sena Madureira road (Acre State). ## 2 Area under study The area under study includes a section along the road Rio Branco - Sena Madureira (BR-364). This area is covered by different tropical vegetation formations and by disturbed areas such as pastureland and natural secondary succession. Since the opening of the BR-364 in the early 70's, large deforestation activities took place, mainly for cattle raising, but also for colonization projects. In the last few years, since the suspension of governmental incentives, the speed of forest clearance has decreased. According to Santos et al. (1994), the annual rate of gross deforestation in Acre State in the timeframe 1991 to 1992 was about 327 km\({}^{2}\), and it is believed that this value is still valid today, due to the strong environmentalist pressure. The plantation of pasture, the improvements obtained in burning and logging, as well as the vegetal successions of deforested areas, strongly influence the equilibrium of the ecosystem in SW Amazonia. ## 3 Materials and Methods The main reference database consisted of Landsat TM images, bands 1 - 5 and 7, from July 1994. A SIR-C dataset of the April '94 Mission was used. The SPRING software package, developed at INPE, as well as EBIS, from DLR, were used. In July 1994 a field survey was performed with the assistance of the Technological Foundation of Acre (FUNTAC) and the University of Acre (UFAC).
During the field campaign, several land use classes were observed, and several vegetation profiles (arboreal and bush individuals along sections of 60 m x 2 m size) were made in natural vegetation regrowth, including: the description and collection of flora composition, DBH above 3 cm, total height, and percentage of crown cover. In this study, the general allometric equation for secondary forest according to Uhl et al. (1988), \(\ln Y=-2.17+1.02\,\ln(\mathrm{DBH})^{2}+0.39\,\ln(\mathrm{Height})\), was used to estimate the biomass values, mainly of regrowth areas. An overview of the entire area, as well as 35 mm photos, were obtained during an overflight. The TM images were analyzed considering the following steps: atmospheric and geometric corrections; image segmentation based on the region-growing algorithm (similarity threshold = 6; area threshold = 10); labeling of segment samples of the thematic classes Forest (F), Initial (IR) and Intermediate (AR) Regrowth, Overgrown Pasture (OP), Fresh Pasture (FP), and Pasture with Bare Soil (PS); application of the ISOSEG classifier; and generation of a thematic map. The SIR-C data were analyzed according to the following procedure: speckle reduction filtering (MAP filter) and merging of SAR data at different polarizations (HH, VV, HV, LL and Total Power) with TM images; application of a new version of the EBIS texture classifier (Evidence Based Interpretation of Satellite Images, developed at DLR); plotting of the mean backscatter values as a function of wavelength and polarization for each land cover identified; and generation of a thematic map. EBIS is an algorithm used for classifying textures, based on co-occurrence feature vectors that are modeled as multinomial density functions (Lohmann, 1991). In addition to the classes mentioned above, the floodplain forest (V) class was included for the SIR-C data. ## 4 Results Figures 1 and 2 are histograms showing the behavior of the land use classes in both the SIR-C/L-band and TM-Landsat data. In the signature plots of these figures, the original data have been used in association with the variance values of the training areas. The combination of bands TM 3 and 5 presents the best performance, as compared to the other bands, for the general discrimination of the set of thematic classes, i.e. the discrimination of forest, regrowth and pasture areas. During the thematic classification, we could observe that TM images alone did not allow the discrimination of "Terra Firme" (Uplands) from "Varzea" (Floodplain) Forests. In contrast, the L-band images, due to the several polarizations available and their textural characteristics, allowed a very good discrimination between these two important forest types. It is known that it is possible to separate different forest types, as well as logged forest, using texture information; in this case, the different moisture and relief conditions of these two environments are the main reasons for their discrimination. Regrowth areas are best discriminated with TM band 4 data, in the succession "Initial" and "Intermediate", while SIR-C/L-band data do not allow this discrimination. The same finding applies to the discrimination between the "regrowth" and "overgrown pasture" classes. For the identification of "initial" regrowth and overgrown pasture, TM band 3 was a better indicator.
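Before turning to the class-by-class discussion, the allometric relation quoted in the Methods can be made concrete with a small numeric sketch. The stem values below are hypothetical, the garbled printed equation is read as ln Y = -2.17 + 1.02 ln(DBH^2) + 0.39 ln(Height), and the plot-to-hectare scaling is a plausible reconstruction rather than the authors' actual processing chain.

```python
import math

def regrowth_biomass_kg(dbh_cm, height_m):
    """Above-ground dry biomass (kg) of one stem with the secondary-forest
    relation quoted in the Methods (Uhl et al., 1988):
    ln Y = -2.17 + 1.02*ln(DBH^2) + 0.39*ln(Height).
    The printed layout is ambiguous; if the intended term is (ln DBH)^2,
    the first log term below must be adapted."""
    ln_y = -2.17 + 1.02 * math.log(dbh_cm ** 2) + 0.39 * math.log(height_m)
    return math.exp(ln_y)

def plot_biomass_ton_per_ha(stems, plot_area_m2=60 * 2):
    """Sum per-stem biomass over a 60 m x 2 m profile and convert
    kg/plot to ton/ha (1 ha = 10,000 m^2, 1 ton = 1,000 kg)."""
    total_kg = sum(regrowth_biomass_kg(d, h) for d, h in stems)
    return total_kg / 1000.0 * (10_000.0 / plot_area_m2)

if __name__ == "__main__":
    # Hypothetical stems (DBH in cm, height in m) from one regrowth profile.
    stems = [(3.5, 4.0), (6.0, 7.5), (4.2, 5.0), (10.0, 9.0)]
    print(f"Plot-level biomass: {plot_biomass_ton_per_ha(stems):.1f} ton/ha")
```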
The misclassification in SIR-C data (L band) is mainly due to larger amounts of shrubs and palms, like _Maximiliana maripa_ and _Orbignia martimiana_, which abound in these former pastures and lead to higher backscatter values (sometimes as corner reflectors) as well as to texture variances. For the discrimination among \"pasture with bare soil\" and \"fresh pasture\" classes, the L-band image presented a similar performance as TM-Landsat data. The soil type and the physical-structural conditions are also an important factor for this discrimination analysis. Generally, optical sensors are most sensitive to plant structure at micrometer scales, whereas radar interacts mainly at centimeter scales. Of special interest to radar are the vertical stems of plants and the trunks of trees, because wave propagation and backscattering through these media are polarization-dependent (NASA, 1989). Being so, the different polarizations provide different views of the canopy's structure. From another overview, Figure 1, shows that the HV polarization of L-band is more sensitive to the vegetation growth and it is appropriated for land occupation/management studies. As far as backscattering from vegetated surface is concerned, three scattering components must be considered: vegetation, soil and also the interaction component of the L-band data studied here. Leaves and stalks of vegetation are relatively transparent to radiation and so the significance of the soil and the interaction component predominate. The variability found in the sample areas in each of the thematic classes of L-band data, shows a certain sensitivity of this sensor to the horizontal and vertical polarizations, but a lower sensitivity to Total Power (TP) polarization. Taking into account the actual interest of the scientific community on specific studies of regrowth areas, it is to say that at these areas, the contribution of the horizontal polarization is moderately higher than the vertical one. The first one is of interest to define parameters such as the spectral and textural amplitude of the physiognomic-structural variations of secondary succession areas (Figure 3). Considering the vegetation data inventoriedduring the field survey and using the **allometric equation** mentioned, a regrowth value of 128,09 ton/ha was calculated, which can be considered as typical for those areas at **intermediate** stage, taking into account the local management conditions. For **initial regrowth** the result was 45,78 ton/ha, which refers to areas that suffered normally 2 clearcuts. At this \"initial\" phase, the regrowth is characterized by a canopy with thin veritiellate branching and horizontal crowns, in vertical direction and horizontal crowns on two well-defined strata. In the lower stratum there is a higher concentration of herbaceous species, normally with large leaves. At the \"intermediate regrowth\", the canopy is more homogeneous with large crowns, with at least 3 not well defined strata. The lower stratum here is frequently missing, including species that present a higher tolerance to shadow. Presently there are more detailed studies being made in this direction, including the association among biomass content and textural information of SIR-C/L-band, merged with TM-Landsat data. ## 5 Conclusions Within the objectives of this study, it was verified that the segmentation technique for region growth is an adequate way to separate at TM-Landsat scene the land use/land cover classes. 
These classes are spectral and textural information sources at the interactive process of image classification. After a certain similarity threshold is adopted, the sensitivity of this segmentation algorithm would be useful for the identification of thematic classes from SIR-C, L-band data. Based on that, it is possible to analyze the contributions of textural information for the different polarizations, associating the amplitude of radiometric variations with the physiognomic-structural characteristics of the targets under investigation. The experience obtained in this study indicates that, in order to obtain an improvement of the thematic classes studied, multiseasonal radar data must be used to monitor the phenological conditions of pasture, regrowth areas and some types of tropical forests. The option for textural classification of the EBIS version in the image processing environment was considered as adequate for the analysis of radar data. ## References * [1] Keil, D.; Scales, D.; Winter, R.; Kux, H.; Santos, J.R. 1995. Tropical Rainforest Investigation in Brazil using Multitemporal ERS-1 SAR data. In: Second ERS-1 Application Workshop, London. * [2] Lohmann, G., 1991. An Evidential Reasoning Approach to the Classification of Satellite Images. Doctoral Thesis Technische Universitat Munchen, DLR-FB 91-29, Oberpfaffenhofen. * Instrumental Panel Report. vol. lIf., 233p * [4] Santos, J.R.; Kux, H.; Pedreira, B.C.G.; Almeida, C.A.; Keil, M.; Silveira, M. 1994. Mapping areas of regrowth in tropical rainforest using a multisensor approach: a case study in Ace. In: International Symposium on Resource and Environmental Monitoring. ISPRS Commission VII, Rio de Janeiro, vol. 30 part 7a. pp. 364-367. * [6] Figure 1: Histogram of land use/land cover classes for the different polarizations of SIR-C/L-band. Figure 2 - Spectral responses for land use/land cover classes in the section Rio Branco - Sena Madureira. Forest (F), Initial Regrowth (IR), Intermediate Regrowth (AR), Overgrown Pasture (OP), Fresh Pasture (FP), Pasture with Bare Soil (PS). Figure 3 - Intermediate regrowth profile at a section close to road Rio Branco - Sena Madureira. 1) _Cecropia leucocoma_; 2) _Apeiba echinata_; 3) _Apeiba echinata_; 4) _Cecropia leucocoma_; 5) _Cecropia leucocoma_; 6) _Apeiba echinata_; 7) _Apeiba echinata_; 8) _Apeiba echinata_; 9) _Guadua sp._; 10) _Cecropia leucocoma_; 11) _Apeiba echinata_; 12) _Sapium sp._; 13) _Apeiba echinata_; 14) _Cecropia sp1_; 15) _Apeiba echinata_; 16) _Rollinia sp._; 17) _Sapium sp._; 18) _Cecropia leucocoma_; 19) _Sapium sp2_; 20) _Urtica sp._; 21) _Cecropia leucocoma_; 22) _Pipper sp._; 23) _Cecropia leucocoma_; 24) _Cecropia leucocoma_; 25) _Visene glutanensis_; 26) _Inga sp._; 27) _Cecropia leucocoma_; 28) _Cecropia leucocoma_; 29) _Cecropia leucocoma_; 30) _Cecropia leucocoma_; 31) _Cacipia sp._; 32) _Sapium sp1._; 33) _Acalida sp._; 34) _Zantoxylin triofofillium_; 35) _Inga sp1_._
The objective of this study is to analyze SIR-C L-band data at several polarizations, combined with TM-Landsat data, to characterize land use/land cover features in SW Amazonia (Acre State, Brazil). Segmentation techniques for TM-Landsat and textural classifiers for SIR-C were used to identify six land use/land cover classes. The spectral and textural characteristics of forest, regrowth and pasture types are briefly discussed. This interactive analysis of optical and microwave sensor data will contribute to monitoring the dynamics of land use in Amazonia.
Give a concise overview of the text below.
121
arxiv-format/2008_05133v1.md
# An Inter- and Intra-Band Loss for Pansharpening Convolutional Neural Networks Jiajun CAI, Bo Huang J. Cai and B. Huang are with the Department of Geography and Resource Management, The Chinese University of Hong Kong, Shatin, Hong Kong (e-mail: cai, [email protected], [email protected]). ## I Introduction Pansharpening is one of the fundamental techniques to improve the quality of remote sensing images. Due to the difficulties in obtaining satellite images with the both high spatial and spectral resolution, sensors equipped in the satellites will synchronously generate pairs of a low spatial resolution multispectral (MS) image and a high spatial resolution panchromatic (PAN) image captured in the same areas. Pansharpening is thus designed to generate pan-sharpened MS images that keep the same spatial resolution as PAN images. A lot of pansharpening methods have been proposed in recent years. Most of these methods can be divided into three categories: component substitution-based, multiresolution analysis-based, and learning-based. The classical component substitution-based methods usually adopt the Brovey transform (BT) [1], the intensity-hue-saturation (IHS) [2], or principal component analysis (PCA) [3] to extract the main component of MS image and replace it by the PAN image to generate a pansharpened result. Multiresolution analysis-based methods use the various wavelet transforms [4]-[6] to decompose MS and PAN images into a series of sub-bands, and the fusion procedure is performed on these corresponding sub-bands from source images. The biggest advantage of aforementioned two categories is their computation efficiency, but they also easily render spectral distortion which will lose actual spectral information from original MS images. Before deep learning is applied to pansharpening, variational optimization and dictionary learning are representative learning-based approaches that have been widely studied. Variational optimization-based methods involve a loss function and some prior regularization terms [7][8] to iteratively optimize fusion results. Dictionary learning-based methods [9][10] will firstly study dictionaries from training samples, and then replace original images by representation coefficients based on these dictionaries to perform fusion process. With a lot of successful applications of deep learning, especially the convolutional neural network (CNN), in computer vision areas, many scholars also proposed various CNN architectures to deal with the pansharpening task. According to the similarities between pansharpening and super-resolution, pansharpening CNN (PNN) [11], which has a similar structure as super-resolution CNN (SRCNN) [12], is firstly proposed to bridge the deep learning and pansharpening. Combining domain knowledge, PanNet [13] is developed to perform the learning process in high-frequency bands using a ResNet-like structure [14]. Yuan et al. [15] incorporated the multi-scale and multi-depth idea into CNN and proposed MSDCNN. Following the development of deep learning, the architectures of networks become deeper and more complex. Since most pansharpening CNNs still use L2 loss (Mean Squared Error) to minimize differences between fusion results and simulated ground truth MS images, it only calculates and optimizes the error between bands with the same wavelength. However, remote sensing images contain abundant spectral information, and adjacent bands are highly correlated. Obviously, these inter-band relations are not considered in current L2 loss. 
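A toy numerical check of this last point, not taken from the letter, is given below: two reconstructions can have exactly the same band-wise L2 error to a target while distorting the relation between the bands very differently, so a purely band-wise loss cannot tell them apart.

```python
import numpy as np

rng = np.random.default_rng(0)
target = 0.5 + rng.random((2, 8, 8))        # toy 2-band "ground truth" patch
noise = 0.05 * rng.standard_normal((8, 8))

# Candidate A: both bands shifted the same way, band ratio roughly preserved.
cand_a = target + np.stack([noise, noise])
# Candidate B: bands shifted in opposite directions, band ratio distorted.
cand_b = target + np.stack([noise, -noise])

def mse(x):
    return float(np.mean((x - target) ** 2))

def ratio_error(x):
    return float(np.mean(np.abs(x[0] / x[1] - target[0] / target[1])))

print(f"band-wise MSE  A: {mse(cand_a):.5f}  B: {mse(cand_b):.5f}")   # identical
print(f"band-ratio err A: {ratio_error(cand_a):.4f}  B: {ratio_error(cand_b):.4f}")  # B worse
```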
In this letter, we propose a novel loss function based on the original L2 loss, named intra- and inter-band (IIB) loss. Our IIB loss includes two parts to regulate each band in the fused images: an intra-band loss which emphasizes keeping it the same as the corresponding band in the target MS image, and an inter-band loss which focuses on reconstructing the same inter-band relations as the target MS image. Fig. 1 adopts three bands images as an example to show an overall framework of a pansharpening CNN with our proposed IIB loss. ### _Proposed IIB Loss_ Since the proposed IIB loss can be directly incorporated with existed pansharpening CNNs, we firstly define the universal CNN-based pansharpening process as \\[F=g(\\downarrow M,\\downarrow P;\\theta) \\tag{1}\\] Where \\(g\\) is the pansharpening CNN which is parametrized by \\(\\theta\\). If L2 loss is adopted to optimize parameters in the network, the optimal \\(\\theta\\) is obtained by \\[\\theta=\\underset{\\theta}{\\text{argmin}}\\sum_{i=1}^{l}\\sum_{b=1}^{B}\\left\\|f_{ b}^{(i)}-m_{b}^{(i)}\\right\\|_{2}^{2} \\tag{2}\\] where \\(f^{(i)}=g(\\downarrow m_{i},\\downarrow p_{i};\\theta)\\), and \\((\\downarrow m_{i},\\downarrow p_{i},m_{i})\\) is the \\(i\\)th training sample. \\(B\\) indicates the total number of bands. Observing the form of L2 loss, we can find it will only calculate the differences between the same band within fusion results and target images. Prior and concurrent works [11]-[15] have proven the reliability of adopting L2 loss to optimize the whole network and then generate convincing fusion results. Therefore, we also use the original L2 loss to maintain intra-band relations in our designed IIB loss, \\[L_{intra}=\\sum_{i=1}^{l}\\sum_{b=1}^{B}\\left\\|f_{b}^{(i)}-m_{b}^{(i)}\\right\\|_{2}^ {2} \\tag{3}\\] For the inter-band relations, inspired by the QNR (quality with no reference) [17], we propose an inter-band loss which supports the training of pansharpening CNNs as follows: \\[L_{Inter}=\\sum_{i=1}^{l}\\sum_{l=1}^{B-1}\\sum_{n=1+1}^{B}\\left\\|Q(f_{l}^{(i)},f_ {n}^{(i)})-Q(m_{l}^{(i)},m_{n}^{(i)})\\right\\|_{2}^{2} \\tag{4}\\] where \\(Q\\) is the universal image quality index [18] which is calculated by \\[Q(x,y)=\\frac{4\\sigma_{xy}\\cdot\\vec{x}\\cdot\\vec{y}}{(\\sigma_{x}^{2}+\\sigma_{y }^{2})(\\vec{x}^{2}+\\vec{y}^{2})}=\\frac{\\sigma_{xy}}{\\sigma_{x}\\cdot\\sigma_{y} }\\times\\frac{2\\cdot\\vec{x}\\cdot\\vec{y}}{\\vec{x}^{2}+\\vec{y}^{2}}\\times\\frac{2 \\cdot\\sigma_{x}\\cdot\\sigma_{y}}{\\sigma_{x}^{2}+\\sigma_{y}^{2}} \\tag{5}\\] in which \\(x\\) and \\(y\\) are images that need to be measured, and \\(\\vec{x}\\) and \\(\\vec{y}\\) are their corresponding means. \\(\\sigma_{x}^{2}\\) and \\(\\sigma_{y}^{2}\\) are the variance of \\(x\\) and \\(y\\), and \\(\\sigma_{xy}\\) denotes the covariance between \\(x\\) and \\(y\\). As shown in the right of equation (5), \\(Q\\) can be decomposed into three factors. The first factor measures the correlation coefficient between \\(x\\) and \\(y\\), and it has a value range of [-1,1]. The second and third factors measure the luminance and contrast between \\(x\\) and \\(y\\), and they both have a value range of [0,1]. Therefore, \\(Q\\) will equal to 1 if and only if \\(x\\)=\\(y\\). In order to contain local statistics into consideration, \\(Q\\) is calculated with a \\(W\\times W\\) sliding window, and the global score is averaged by these local values. 
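A minimal, gradient-free sketch of Eqs. (3)-(5) follows, together with the alpha-weighted combination introduced in Eq. (6) just below. It uses non-overlapping W x W blocks for the local Q statistics (the letter specifies a sliding window but not its stride), adds a small epsilon to the denominator for numerical stability, and works in NumPy rather than the TensorFlow graph actually used for training; these choices are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def q_index(x: np.ndarray, y: np.ndarray, win: int = 8) -> float:
    """Universal image quality index Q (Eq. (5)): product of correlation,
    luminance and contrast terms, computed on win x win blocks and averaged."""
    scores = []
    h, w = x.shape
    for i in range(0, h - win + 1, win):
        for j in range(0, w - win + 1, win):
            a = x[i:i + win, j:j + win].ravel()
            b = y[i:i + win, j:j + win].ravel()
            ma, mb = a.mean(), b.mean()
            va, vb = a.var(), b.var()
            cov = ((a - ma) * (b - mb)).mean()
            num = 4.0 * cov * ma * mb
            den = (va + vb) * (ma ** 2 + mb ** 2) + 1e-12  # eps added for stability
            scores.append(num / den)
    return float(np.mean(scores))

def inter_band_loss(fused: np.ndarray, target: np.ndarray) -> float:
    """Eq. (4): squared difference of Q over every band pair (l, n),
    for images of shape (bands, H, W)."""
    bands = fused.shape[0]
    loss = 0.0
    for l in range(bands - 1):
        for n in range(l + 1, bands):
            loss += (q_index(fused[l], fused[n]) - q_index(target[l], target[n])) ** 2
    return loss

def iib_loss(fused: np.ndarray, target: np.ndarray, alpha: float = 1.0) -> float:
    """Eq. (3) plus alpha times Eq. (4), i.e. the combination of Eq. (6)."""
    intra = float(np.sum((fused - target) ** 2))
    return intra + alpha * inter_band_loss(fused, target)
```

For actual training, a differentiable version of the same expressions would be written with TensorFlow ops so that the inter-band term can be back-propagated together with the intra-band term.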
Then, by combining \\(L_{intra}\\) and \\(L_{Inter}\\), the proposed IIB loss can be written as \\[L_{IIB}=L_{intra}+\\alpha\\cdot L_{Inter} \\tag{6}\\] where \\(\\alpha\\) controls the importance of inter-band constraint, which is empirically set to 1. ## II Experimental Results and Discussion ### _Datasets and Experimental Setting_ We prepared two different datasets which include images from QuickBird (QB) and Worldview-3 (WV3), respectively. The spatial resolution of MS and PAN images from QB is 2.8m and 0.7m. The QB MS images include four bands: Infrared, Red, Green, and Blue. The spatial resolution of MS and PAN images from WV3 is 1.24m and 0.31m, while its MS bands covered by the wavelength of the panchromatic band are selected, which includes Infrared, Red, Yellow, Green, and Blue. Both QB and WV3 datasets consist of 7000 extracted training MS patches of size \\(64\\times 64\\) and their corresponding PAN patches of size 256\\(\\times\\)256. We also prepared 200 MS patches of size 256\\(\\times\\)256 and their corresponding PAN patches of size 1024\\(\\times\\)1024 for testing. The training datasets will be preprocessed according to Wald's protocol mentioned in Section II-A. The testing datasets can be organized in two forms. The first form consists of images prepared according to Wald's protocol, which is called simulated data. The second form directly uses original images, so it is named as actual data. For simulated data, due to the existence of target images, we adopt indicators, including SAM [19], ERGAS [20], and UIQI [18], which need a full reference to evaluate the performance of different settings. Since there is no reference for actual data, QNR [17] with the spectral distortion index \\(D_{A}\\) and spatial distortion index \\(D_{S}\\) are used for evaluating pansharpening results. The effectiveness of our proposed IIB loss is proved by applying it to three representative pansharpening CNNs: PNN [11], DiCNN [21], and PanNet [13]. All deep learning-based methods are implemented on the GPU (NVIDIA GeForce RTX 2080Ti) through an open deep learning framework Tensorflow [22]. Fig. 1: The framework of a pansharpening CNN with our proposed IIB loss ### _Comparisons and Analysis_ In this subsection, we will apply our proposed IIB loss to different pansharpening CNNs and observe their corresponding performances. Tables I and II summarize the objective evaluation based on QB and WV3 datasets at simulated and actual scales, where the up or down arrow indicates the higher or lower the better. We can notice that Tables I and II show a similar pattern. Observing original performances, PanNet obtains the best results in both simulated and actual scale. For the simulated datasets, the original L2 loss can obtain better UIQI, SAM, and ERGAS values since these indicators are averaged based on band-by-band results, but results generated by our IIB loss can still obtain close values. For the actual datasets, it can be found that the inter-band relations studied in the simulated scale have been successfully transferred to the actual scale. The values of \\(D_{3}\\), \\(D_{s}\\), and QNR get dramatic improvement after applying the proposed IIB loss. Fig. 2 shows the visual results of different settings, where the first and second rows are QB images, and the third and fourth rows are WV3 images. If we compare the pansharpening results generated by L2 loss and original MS images first, spectral preservation achieved by different networks is not satisfying enough. 
From the spatial perspective, the proposed IIB loss Fig. 2: Fused results on the actual QB and WV3 data (zoomed in views). does not change spatial details comparing to results obtained by L2 loss, which means the intra-band restriction contained in IIB loss has the relatively same ability to generate pan-sharpened MS images. However, the spectral information is saliently adjusted when we apply IIB loss to pansharpening CNNs. The obvious spectral distortion can be observed in results generated by PNN and DiCNN when they are trained by L2 loss. The spectral residual module is widely adopted in the pansharpening CNNs, like DiCNN and PanNet, which are proposed after PNN to directly obtain spectral information from input LMS images. Although DiCNN shows better results than PNN, it still cannot avoid spectral distortion if we observe red and blue rooftops in the WV3 dataset. When the inter-band restriction is added to network training, we can find the spectral information is corrected even in the network without spectral residual module (PNN). This phenomenon highlights the importance of including the maintenance of inter-band relations in the spectral preservation strategy. ## III Conclusion In this letter, we propose an inter- and intra-band (IIB) loss for pansharpening CNNs. The biggest superiority of IIB loss is that it inherits the advantages of intra-band loss, e.g. L2 loss, and considers the inter-band restriction when we train a specific pansharpening CNN. Experimental results prove the effectiveness of preserving both intra- and inter-band relations by applying IIB loss. ## References * [1] A. R. Gillespie, A. B. Kahle, and R. E. Walker, \"Color enhancement of highly correlated images--II. Channel ratio and \"Chromaticity\" transform techniques,\" _Remote Sens. Environ._, vol. 22, no. 3, pp. 343-365, Aug. 1987. * [2] W. J. Carper, T. M. Lillesand, and R. W. Kiefer, \"The use of intensity-hue-saturation transformations for merging SPOT panchromatic and multispectral image data,\" _Photogramm. Eng. Remote Sens._, vol. 56, no. 4, pp. 459-467, Apr. 1990. * [3] V. P. Shah, N. H. Younan, and R. L. King, \"An efficient pan-sharpening method via a combined adaptive PCA approach and contourlets,\" _IEEE Trans. Geosci. Remote Sens._, vol. 46, no. 5, pp. 1323-1335, Apr. 2008. * [4] Y. Kim, C. Lee, D. Han, Y. Kim, and Y. Kim, \"Improved additive-wavelet image fusion,\" _IEEE Geosci. Remote Sens. Lett._, vol. 8, no. 2, pp. 263-267, Mar. 2011. * [5] Y. Zhang, and G. Hong, \"An IHS and wavelet integrated approach to improve pan-sharpening visual quality of natural colour IKONOS and Quickbird images,\" _Inf. Fusion_, vol.6, no.3, pp. 225-234, Sep. 2005. * [6] V. P. Shah, N. H. Younan, and R. L. King, \"An efficient pan-sharpening method via a combined adaptive PCA approach and contourlets,\" _IEEE Trans. Geosci. Remote Sens._, vol. 46, no. 5, pp. 1323-1335, May. 2008. * [7] C. Ballester, V. Caselles, L. Igual and J. Verdera, \"A variational model for P+XS image fusion\", _Int. J. Comput. Vis._, vol. 69, no. 1, pp. 43-58, Apr. 2006. * [8] P. Liu, L. Xiao, and S. Tang, \"A new geometry enforcing variational model for pan-sharpening\", _IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens._, vol. 9, no. 12, pp. 5726-5739, Dec. 2016. * [9] S. Li and B. Yang, \"A new pan-sharpening method using a compressed sensing technique,\" _IEEE Trans. Geosci. Remote Sens._, vol. 49, no. 2, pp. 738-746, Feb. 2011. * [10] S. Li, H. Yin, and L. 
Fang, \"Remote sensing image fusion via sparse representations over learned dictionaries,\" _IEEE Trans. Geosci. Remote Sens._, vol. 51, no. 9, pp. 4779-4789, Sep. 2013. * [11] G. Masi, D. Cozzolino, L. Verdoliva, and G. Scarpa, \"Pansharpening by convolutional neural networks,\" _Remote Sens._, vol. 8, no. 7, pp. 594, Jul. 2016. * [12] C. Dong, C. C. Loy, K. He, and X. Tang, \"Image super-resolution using deep convolutional networks,\" _IEEE Trans. Pattern Anal. Mach. Intell._, vol. 38, no. 2, pp. 295-307, Feb. 2016. * [13] J. Yang, X. Fu, Y. Hu, Y. Huang, X. Ding, and J. Paisley, \"PanNet: A deep network architecture for pan-sharpening,\" in _Pro. ICCV_, Venice, Italy, 2017, pp. 5449-5457 * [14] K. He, X. Zhang, S. Ren, and J. Sun, \"Deep residual networks for single image super-resolution,\" in _Proc. CVPR_, Las Vegas, NV, USA, pp. 770-778, 2016. * [15] Q. Yuan, Y. Wei, X. Meng, H. Shen, and L. Zhang, \"A Multiscale and Multistepful Convolutional Neural Network for Remote Sensing Imagery Pan-Sharpening,\" _IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens._, vol. 11, no. 3, pp. 978-989, Mar. 2018. * [16] L. Wald, T. Ranchin, and M. Mangolini, \"Fusion of satellite images of different spatial resolutions: Assessing the quality of resulting images,\" _Photogrammetric Engineering and Remote Sensing._, vol. 63, pp. 691-699, Nov. 1997. * [17] L. Alparone, B. Aiazzi, S. Baronti, A. Garzelli, F. Nencini, and M. Selva, \"Multispectral and panchromatic data fusion assessment without reference,\" _Photogramm. Eng. Remote Sensing_, no. 2, pp. 193-200, Feb. 2008. * [18] Z. Wang and A. C. Bovik, \"A universal image quality index,\" _IEEE Signal Process. Lett._, vol. 9, no. 3, pp. 81-84, Mar. 2002. * [19] R. Yuhas, A. F. H. Goetz, and J. W. Boardman, \"Description among semi-arid landscape endmembers using the Spectral Angle Mapper (SAM) algorithm,\" _Sum. Third Annus. JPL Airborne Geosci. Work. JPL Publ._, vol. 1, pp. 147-149, Jun. 1992. * [20] L. Wald. (2000, Jan.) Quality of high resolution synthesised images: Is there a simple criterion?. presented at the Third Conference \"Fusion of Earth data: merging point measurements, raster maps and remotely sensed images. * [21] L. He, Y. Rao, Jun. Li, J. Chanussot, A. Plaza, J. Zhu, and B. Li, \"Pansharpening via detail injection based convolutional neural networks,\" _IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens._, vol.12, no. 4, pp. 1188-1204, Apr. 2019. * [22] M. Abadi et al., \"TensorFlow: Large-scale machine learning on heterogeneous distributed systems,\" arXiv: 1603.04467 [cs], Mar. 2016.
Pansharpening aims to fuse panchromatic and multispectral images from the satellite to generate images with both high spatial and spectral resolution. With the successful applications of deep learning in the computer vision field, a lot of scholars have proposed many convolutional neural networks (CNNs) to solve the pansharpening task. These pansharpening networks focused on various distinctive structures of CNNs, and most of them are trained by L2 loss between fused images and simulated desired multispectral images. However, L2 loss is designed to directly minimize the difference of spectral information of each band, which does not consider the inter-band relations in the training process. In this letter, we propose a novel inter- and intra-band (IIB) loss to overcome the drawback of original L2 loss. Our proposed IIB loss can effectively preserve both inter- and intra-band relations and can be directly applied to different pansharpening CNNs. Pansharpening, Deep Learning, Convolutional Neural Network, Loss Function
Provide a brief summary of the text.
208
arxiv-format/1407_5738v1.md
**The EPN-TAP protocol for the Planetary Science Virtual Observatory** **S. Erard (\\({}^{1}\\)), B. Cecconi (\\({}^{1}\\)), P. Le Sidaner (\\({}^{2}\\)), J. Berthier (\\({}^{3}\\)), F. Henry (\\({}^{1}\\)), M. Molinaro (\\({}^{4}\\)), M. Giardino (\\({}^{5}\\)), N. Bourrel (\\({}^{6}\\)), N. Andre (\\({}^{6}\\)), M. Gangloff (\\({}^{6}\\)), C. Jacquey (\\({}^{6}\\)), F. Topf (\\({}^{7}\\))** _(\\({}^{a}\\)) LESIA, Observatoire de Paris/CNRS/UPMC/Univ. Paris-Diderot 5 pl. J. Janssen 92195 Meudon, France. email: [email protected]_ _(\\({}^{2}\\)) DIO-VO, UMS2201 CNRS, Observatoire de Paris 61 av. de l'Observatoire, 75014 Paris, France_ _(\\({}^{a}\\)) IMCCE, Observatoire de Paris/CNRS 61 av. de l'Observatoire, 75014 Paris, France_ _(\\({}^{a}\\)) INAF - Osservatorio Astronomico di Trieste via G.B. Tiepolo 11, 34143 Trieste, Italy_ _(\\({}^{b}\\)) INAF - Istituto di Astrofisica e Planetologia Spaziali (IAPS) Via del Fosso del Cavaliere 100, 00133 Roma Italy_ _(\\({}^{a}\\)) CDPP, IRAP/CNRS/Univ. Paul Sabatier 9 avenue du colonel Roche, 31068 Toulouse, France_ _(\\({}^{r}\\)) Space Research Institute, Austrian Academy of Sciences Schmiedlstrasse 6, A - 8042 Graz, Austria_ _Corresponding author: S. Erard, LESIA, Observatoire de Paris 5 pl. J. Janssen 92195 Meudon, France. email: [email protected]. Tel : (33) 1 45 07 78 19_ ## 1 - Introduction EPN-TAP is a VO data access protocol designed to support Planetary Science data in the broadest sense. It is intended to access data services of various content, including space-borne, ground-based, experimental (laboratory), and simulated data. It is designed to describe many fields, from surface imaging to spectroscopy, atmospheric structure, electro-magnetic fields, and particle measurements. EPN-TAP is an essential part of the Planetary Science Virtual Observatory (VO), because no prexisting protocol was able to access such a large realm of data (see Erard et al. this issue, companion paper). The EPN-TAP protocol is directly derived from IVOA's Table Access Protocol (TAP) [6], a protocol to access data organized in tables, here adapted to Planetary Science. EPN-TAP is an extension of TAP with extra characterization derived from a Data Model -- similarly to ObsTAP, which is an extension based on the ObsCore Data Model [7]. The Europlanet Data Model was defined to describe many types of Planetary Science data with a standard terminology [5]. EPN-TAP uses a subset of this terminology to define standard query parameters. This subset of the Europlanet Data Model is called EPNCore. Some of these parameters are adapted from the PDAP protocol of IPDA [1] and from the SPASE protocol (Space Physics Archive Search and Extract). Since EPN-TAP is TAP compliant, the discovery of all EPN-TAP data services can be performed using an IVOA registry. EPN-TAP services are described accurately by IVOA registries that include the TAPRegExt extension (see companion paper). Once declared in a registry, EPN-TAP compliant data services are most efficiently queried with a specific EPN-TAP client such as the VESPA tool at VO-Paris. This paper provides a synthetic description of EPN-TAP and discusses the choice made during its definition. EPN-TAP definition includes: - A general framework to implement data services (SQL database, the presence of the epn_core view, etc). - A set of parameters describing the resources and their content (the EPNCore DM), plus optional parameters and attributes. 
- A convention to provide numeric parameters in standard form (units/scales, etc) for the query mechanism. - A set of reference sources to encode the string parameters (e. g. target names, etc). - A set of UCDs defining the parameters in use in the VO context. ## 2 - Main concepts of EPN-TAP EPN-TAP is an extension of IVOA TAP and is compliant with the TAP standard. It typically uses the TAP mechanism [6] with synchronous or asynchronous queries; VOSI for capability and metadata access [3]. TAP is a protocol dedicated to access relational database tables. It uses ADQL (the Astronomical Data Query Language, [10]) to query the databases. To allow similar queries on all EPN-TAP services, we will assume that the EPNCore data model is implemented in the databaseas a view (i. e., a table presenting the parameters). In order to be accessed through EPN-TAP, all databases must therefore include a view called epn_core, which contains at least all the parameters described in section 3.1. The epn_core view mainly contains a list of the \"granules\" available in the database, typically an entry/line for each data element, and is used as a catalog of the accessible content. The parameters describing the granules are mostly related to data description and to the main axes of variation. ### Axes description In practice, the user writes his query on a client interface. The client sends a formatted query to the server. The server in turn looks for matches in the epn_core view and sends back an answer. This process is illustrated in Fig. 1. A standard situation is to search data located in space, time or wavelength, therefore to issue a query based on axis coordinates. In order to handle the multiplicity of situations, most parameters are normalized in the protocol, regardless of the content of the databases. For instance, a spectroscopy database may provide measurements on a wavelength scale in microns, while the user wants to query the data on a wavenumber scale in cm-1 (Fig. 1). A common description must therefore be used, which should not interfere with the way the data are described, nor with the way the user wants to query the data. The EPN-TAP query standard defines the scales and units used for parameters -- e. g., the spectral axes are always described on a frequency scale in Hz. Since the databases do not necessarily use the standard scales/units internally, the epn_core view also has the function to provide the parameters in the expected units once for all. This avoids on-the-fly conversions on the database side, while the data themselves may remain in native form (Fig. 1). This view is used as an interface for the client, and can remain hidden to the user. Similarly, the client interface may propose a variety of scales/units to the user, and convert them in Hz to write the query. It is therefore essential that such transforms are exactly reciprocal on both sides of the query system. A similar system is used for many parameters, e. g., time scales are provided in Julian Days. The EPN-TAP protocol is closely related to the TAP protocol, and mainly differs by the definition of its core parameters. The server side relies on a general framework for TAP, while the client performs most EPN-specific operations and turns them into fully TAP-compliant queries, which Figure 1: Practical implementation for EPN-TAP queries. On the service side, only the epn_core view is converted to standard scales and units. can be handled directly by the service framework through ADQL. 
Parameter names are mostly used as tags to pass the values between the client and the server. Since they are used to handle a variety of situations, science fields, etc, they may not reflect the exact meaning of the parameters in the frame of a specific database. This again is not an issue, since parameter names are not normally seen by the users (depending on the client interface). A particular situation arises with the spatial coordinates, because of the extreme diversity encountered in Planetary Science. In order to simply formulate a query, the general type of coordinate system (e.g. celestial coordinates, geographical coordinates, Cartesian coordinates in a volume, etc) must be known in advance. For this reason the description (provided by the spatial_frame_type parameter) must be included in the column description of the TAP response [6] and in the metadata returned by the service. In the future, this will used to select services in advance. However, only the parameters of the epn_core view can be used for data selection in a TAP query; therefore important service attributes are best stored as parameters even when they remain constant throughout the table (e. g., the same spatial_frame_type parameter can be used to select granules individually). ### Data description Apart from the data description, the epn_core view may include the data itself, or links to data files. The data structure is not necessarily constant among all granules in the epn_core view, and a service can contain a mix of images, spectra, etc. In addition to the granules defined above, at least one \"dataset\" entry is required for each service. Parameters describing \"datasets\" provide the range encompassed by their elements/granules, e.g. coordinates or observing dates. \"Datasets\" and \"granules\" entries are identified using the resource_type parameter. A query on \"dataset\" may be used to return only global information on a service, without a long list of available data products, and is therefore the preferred access mode in discovery phase. For this reason, an EPN-TAP client will preferably default to resource_type = dataset. In the epn_core view, datasets are best located at the beginning for visibility: most VO clients only load a limited number of entries by default, so the last ones are often not displayed. Additional \"datasets\" can be defined inside the epn_core view. Such datasets consist in subsets of granules selected according to various criteria by the data provider. A complex PDS data set for example can be sliced into several subsets accessed independently through EPN-TAP, e. g., to identify different processing levels. This allows data providers to make their data available in EPN-TAP without going through the burden of generating alternative versions of their databases. An important part of the service design is related to the identification of the granules, and is left to the data provider. The simplest situation corresponds to one entry per data file, but complex situations may call for other solutions. For instance, if an image contains both Mars and Phobos, the basic approach is to have one granule with the two target names stored in the target_name parameter. Alternatively, if the target is considered as the main entry, there could be two granules (Mars and Phobos) pointing to the same image file; this permits to provide the coordinates relative to each body with no ambiguity (a similar situation may occur when the data files contain several data products of different types). 
A third possibility would be to combine the first two, and to define three granules pointing to the same image. Although there is no mandatory rule, this third possibility is in general not desirable: redundancy in the epn_core view will result in duplicate answers, which may be both confusing and impractical for the user. Data providers will in general want to give answers that are as explicit as possible.

### Writing and matching queries

Altogether, the epn_core view is composed of many fields (Fig. 2): all mandatory EPN-TAP parameters; possibly optional or extra parameters; and data access information, either data embedded in the view or access information for data files. In the most general case, queries are written from a client and sent to all accessible services. Queries must therefore respect the standard: only mandatory parameters can be queried, and they are used as filters. Services receiving unknown parameters would respond with an error code. Conversely, parameters not present in the query are not used to filter the response.

When receiving a query, the server looks for matching lines in the epn_core view (Fig. 2). The answer is an excerpt of the view containing all its columns, including the EPNCore parameters and possibly the data, embedded in a VOTable. Data access is therefore provided according to the table definition. When only one service is addressed, the VOSI mechanism provides access to the list of fields in the epn_core view. Once this is known, any table field can be queried with TAP, including optional and non-standard parameters, plus the data themselves when they are contained in the epn_core view. This mechanism provides complete access to the data service content (in contrast to the PDAP protocol v1, for example).

Fig. 2: Query of the epn_core view and returned values.

### ObsTAP versus EPN-TAP

Close inspection would confirm deep similarities between the two protocols. Since EPN-TAP parameters accept more values than ObsTAP, it could be interpreted as an enlarged version of ObsTAP. However, this is not the way it is intended, and we stress the need to implement ObsCore on simple but essential data services, which are also valuable for Planetary Science. Examples of such applications include the following use cases:
- The user needs to retrieve a list of the brightest celestial IR sources in the whole sky to check for stellar occultations from a spacecraft. Practically, the user has to identify a catalogue in VizieR including the adequate quantity, say K magnitude. The catalogue may be sorted in the VizieR web interface (not always possible) or transferred to TOPCAT for visualization and analysis (although there may be difficulties, e.g., with coordinate formats).
- The user needs to get scaled IR spectra of reference stars. A current solution is to get a list of reference stars, retrieve their spectral type and magnitude in a given band, grab spectra of similar spectral types either at ESO or IRTF, and scale them to the magnitude of the targets.

Although simple, such use cases may be remarkably difficult to implement for the casual user. One of the hard points is the difficulty of identifying services distributing the required physical quantities, which is the reason why ObsTAP was initially set up. The target_class and o_ucd parameters in ObsTAP help solve this problem, as their counterparts would in EPN-TAP.
ObsCore is obviously very efficient to distribute simple services with mainly one quantity documented for many targets, and its use for astronomical services in support of Solar System observations is encouraged. We cannot stress enough the need to implement use cases or services in Astronomy to support the observation of the Solar System. ## 3 - EPNCore / EPN-TAP parameters The TAP mechanism is used here with a set of specific parameters. The mandatory EPN-TAP parameters constituting the EPNCore have been defined on the basis of real use cases in various fields, so as to handle most data services related to Planetary Science (see list of first services in companion paper). EPN-TAP can also query parameters not included in the EPNCore. Some of these parameters are defined precisely but are relevant only to very specific data services. Those are not mandatory, but they must be implemented as defined in the standard when present. Beside, the names of optional parameters are reserved for this particular usage and must not be used to introduce other quantities. In addition, several optional attributes can be used to define general properties of the service itself, such as a detailed description of coordinate systems in use, the processing level of the data, or a description of the service. Such attributes can also be used as parameters describing all the granules of a service so they can be grabbed by TAP, but their values are in general expected to remain constant. Although EPN-TAP bears many similarities with the ObsTAP protocol, large variations are used to handle the specificities of Planetary Science data. In some instances the ObsCore parameter names have been preserved, but in general the acceptable values are different or constrained differently, and the meaning of the parameters is therefore slightly different. Their names have then been changed to avoid any confusion, since in principle both EPN-TAP and ObsTAP servicescan be queried with the same client. Other concepts have been adapted from the PDS and from SPASE. ### Parameters EPN-TAP parameters can be grouped in several categories: axis ranges, data description, and data access. - Axis range parameters provide the data coordinates in space, time, spectral domain, and photometric domain. They allow the user to focus on particular ranges along these axes. - Data description parameters document the data in a more general way, providing target description (name and type), data origin (instrument and facility plus references), and basic description of the data themselves (data and measurement types). The latter two parameters allow the user to find particular types of data, e. g. surface images, vertical atmospheric profiles, or spectral cubes. - Data access parameters provide links to the data files, or in some cases the data themselves. All those are optional parameters. The complete list of parameters is given in Table 1. All EPN-TAP parameters are documented with a numerical type, unit, UCD, and description (free field), according to VOSI specifications. This information is available in the service response and can be used by the client. Expected values are listed in Table 1. ### Axis parameters Spatial axes are of course more intricate than in Astronomy, given the wide variety of coordinate systems in use. 
A particularity of EPN-TAP is to use a spatial_frame_type parameter that provides the \"flavor\" of the coordinate system and defines the nature of the spatial coordinates (Table 2): either celestial (right ascension and declination + possibly distance), body-fixed (longitude and latitude + possibly elevation), Cartesian (distances), cylindrical, spherical, or healpix for more general situations. The 3 spatial coordinates are defined according to the previous parameters, i.e. their meaning and physical dimension are context-dependent. In addition to the coordinates, the spatial resolution is provided. The exact coordinate system in use is documented through optional parameters spatial_coordinate_description and spatial_origin. Although the meaning of the latter is rather straightforward, the spatial_coordinate_description is more tricky: it is expected to provide complete reference to the system in use and its properties, including: target body; reference ellipsoid or shape model; control point network; latitude definition (planetocentric vs planetographic); orientation (east- vs west-handed). In practice, the use of a comprehensive list of possible systems in preferable. A simple acronym such as Mars_IAU2000 would then define the coordinate frame completely. Although the IVOA STC [13] also aims at providing standard references to coordinate frames, and actually includes some frames in the Solar System, it was not found flexible enough for the need of our community. A specific reference list of frames is therefore being compiled by the Europlanet group from existing international standards. When ready, it will be submitted to IAU for approval. Other data axes are handled more simply. Time is accompanied by a time_sampling_step and an exposure_time parameters; the latter provides the time resolution of observations while the former is used to document regular time series found e.g. in plasma measurements. An optional time_origin parameter can used to specify the place where time is measured, to account for light-path differences when comparing event-based measurements (the definition of \"UTC\" does not cover this). This is in particular required when comparing ground-based and space borne data. Parameter spectral_range is accompanied by a spectral_sampling_step and a spectral_resolution parameters, the latter providing the Full Width at Half Maximum of the measurements. Documenting both resolution and sampling step of the axis parameters is important to handle generic queries in different fields. Similar optional keywords are also reserved to provide spectral values for particles (e. g. mass spectroscopy, which uses different units). Photometric axes are defined in a similar way by documenting the range of the main three angles (incidence, emergence and phase). These parameters will allow the user to search data related to surface reflectance and emissivity, or radiative transfer in the atmospheres, including simulation and laboratory measurements. All axis parameters exist in two versions to provide minimum and maximum values, so as to define a range for searches. Whenever only one value is relevant, it should fill both min/max parameters. ### Data description parameters Targets are referred to by name and class. Possible target classes are limited to a finite number of values, the list of which is part of the standard (Table 3). Target classes not only allow the user to search for a class of object, but also remove any ambiguity between homonyms (e.g. 
Io can be either a satellite or an asteroid). In contrast with ObsCore, the target name is a crucial search parameter for EPN-TAP, since it is the only possible way to identify a Solar System object. Its use is extended to samples when unambiguous (e.g., for meteorites, lunar samples, and other samples from space missions such as Stardust). The dataproduct_type parameter is similar to that of ObsCore, and provides the high-level science organization of the data (Table 4). Although larger than ObsCore's, the list of possible values is again limited and is a part of the standard. The measurement_type parameter is more similar to the o_ucd parameter of ObsCore, since it also provides the UCD of the main physical quantity in the data service. This is not always defined for Planetary Science (see below). Two other descriptive parameters provide references to the instrument generating the data: one for the instrument itself, the other for the "instrument host", i.e., either a spacecraft, a telescope, or a ground-based facility.

The resource_type parameter distinguishes between granules and datasets, i.e., sets of granules. Description parameters for datasets provide the complete set of values for their granules. When several datasets are present, the dataset_id parameter makes it possible to restrict queries to identified datasets, and provides cross-references between datasets and individual granules. The index parameter provides a unique line number in the epn_core view. It is introduced as an EPN-TAP parameter so as to permit cross-references in the database after a first query. This solution is preferred over an internal database index, which may not remain constant when the content is updated.

### Data access parameters

A set of optional parameters is available to provide URLs to data files. They are related to the formatted data files, to previews in standard image formats, or to the original files whenever those are distributed in unusual formats. Additional parameters are available to provide the file size, format, and name. Although the data formats must be described in the epn_core view, support of data formats is not part of the protocol and is left to the visualization tools. The VESPA client uses the preview URL for quick-look visualization of the data in its web interface, and may select destination tools according to data format in some cases. Finally, the file_name parameter is intended to provide a reference to the granule, but also to allow searches in services that encode information in the file names themselves.

### Other parameters

Other optional parameters are available for various purposes. Right Ascension and Declination are always available to provide celestial coordinates in addition to the main coordinate system; they are either redundant with the main coordinates or provide additional coordinates (e.g., the coordinates of the target in celestial images, while the main coordinates identify the region observed). This may be handy, for instance, when sending data to TOPCAT, which uses them directly in plots. A target_distance parameter is also available to document the observer's distance, be the observer a spacecraft or an Earth-based telescope. The Ls parameter may be used to store the heliocentric longitude of the target, which is a standard measure of the season, and is particularly useful to study atmospheric phenomena. The processing_level parameter is similar to ObsCore's calib_level. In EPN-TAP, however, 6 calibration levels are defined to accommodate derived products, especially in imaging and mapping.
They follow the CODMAC nomenclature used in space data archives (Table 5).

The element_name parameter can be used to introduce feature names on a planetary surface, while target_region is reserved for generic names. They are similar to target_name and target_class for local features, including global regions in Solar System bodies such as "atmosphere", "ionosphere", etc. The species parameter can be used to introduce simple molecular formulae, such as H2O or CO2, typically when providing chemical abundances in an atmosphere. Case must follow the standard chemical syntax. For more demanding purposes, InChIKeys may for instance be provided in a specific parameter, but their use is not supported in EPN-TAP. Finally, the "reference" parameter can be used to introduce bibliographic references, typically with a Bibcode.

### Units

The epn_core view must provide all quantities in the EPN-TAP conventional scales and units to make universal queries possible across different datasets -- this is mostly relevant to the axes definition. Honoring this convention does not involve any conversion in the database itself, though. On the other side, an EPN-TAP client should ideally allow the user to enter their preferred scales and units, and convert them to the EPN-TAP standard using the symmetrical conversions.

Concerning spectral quantities, the EPN-TAP convention is to provide them as frequencies measured in Hz, assuming propagation in vacuum. Whenever convenient, the native values can be provided through specific parameters. This may prove helpful when the data are passed to specialized tools such as CASSIS. Beyond unit conventions, it may be stressed, however, that few VO spectral tools are currently adapted to the needs of Planetary Science, which often deals with reflectance spectra.

Spatial coordinates have units related to the type of coordinate frame in use. They are usually provided in degrees/minutes/seconds or Astronomical Units. Longitude and latitude ranges and north-pole orientation follow the IAU convention (which is not intuitive for small bodies); longitudes are always increasing eastwards [12]. Time differences are provided in seconds, while dates are provided in Julian days as double-precision floats to maintain acceptable accuracy (\(\sim\)1 ms); an illustrative conversion sketch is given below.

### References for string values

Non-quantitative EPN-TAP parameters cannot take arbitrary values either. There are typically two cases:
- Values have to be selected from short lists related to the standard definition. This is the case for target_class, dataproduct_type, or resource_type.
- The parameter is associated with a reference list from which values must be selected. This is the case for target_name, whose values should be retrieved from the official IAU nomenclature.

General parameters refer to the IAU thesaurus (target_region) or IAU nomenclature [16, 17] (target_name, element_name). Target names are particularly sensitive since most objects have several names. The official name is expected in the target_name parameter, with proper case. Case is therefore currently an issue, because the ADQL standard does not support case variations. In practice, however, case-sensitivity is supported in some VO frameworks (in particular in DaCHS) and the EPN-TAP standard imposes it. Honoring the ADQL constraints would otherwise force data providers to use lower case and therefore include non-standard names in their epn_core views.
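To make the time-scale convention above concrete (dates expressed as Julian Days), the following Python sketch shows one way a client might convert calendar dates into the values used by time_min and time_max. It is illustrative only and not part of the standard; leap seconds are ignored, which is adequate at the ~1 ms level only away from leap-second boundaries.

```python
# Illustrative helpers for the EPN-TAP time convention (dates as Julian Days, UTC).
from datetime import datetime, timezone

UNIX_EPOCH_JD = 2440587.5  # Julian Day of 1970-01-01T00:00:00 UTC

def datetime_to_jd(dt: datetime) -> float:
    """Julian Day for a timezone-aware datetime."""
    return dt.timestamp() / 86400.0 + UNIX_EPOCH_JD

def jd_to_datetime(jd: float) -> datetime:
    """Inverse transform, reciprocal to datetime_to_jd."""
    return datetime.fromtimestamp((jd - UNIX_EPOCH_JD) * 86400.0, tz=timezone.utc)

if __name__ == "__main__":
    # Reproduces the values used in the example queries of section 5.
    print(datetime_to_jd(datetime(2010, 1, 1, tzinfo=timezone.utc)))  # 2455197.5
    print(datetime_to_jd(datetime(2012, 1, 1, tzinfo=timezone.utc)))  # 2455927.5
```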
As a help to query writers and data providers, the SSODnet name resolver at IMCCE [11] provides the official IAU name of many Solar System bodies with the correct case; the SSODnet database is constantly updated with new object names [16] and by integrating older archives. Currently, services containing data of interest might not be visible if they do not use the recommended IAU nomenclature for planetary bodies. In the long run, a system completely avoiding cases on both sides (client and server) is required to support all situations. Another general issue related to the current version of ADQL is the difficulty to handle multiple-valued string fields. Other parameters are difficult to refer correctly at present, including instrument_name and instrument_host. Although several institutions provide lists of applicable values (e.g., IAU, PDS, Spice, NSSDC), these sources are not comprehensive at present. For instance, the IAU list of ground-based observatories does not include radio-telescopes, and the PDS list does not include orbital telescopes or spacecraft mostly devoted to Astronomy or Earth observation. A project for the Planetary Science VO is therefore to complement these lists, provide a conversion table between various sources (e.g., Spice and NSSDC), and submit them to IAU for approval. Using the CCSDS registry for space missions (SANA) may be an alternative. As mentioned above, the measurement_type parameter introduces a UCD for the main quantity in the data service. EPN-TAP uses \"UCD1+\" from the current IVOA list [8]. However, UCDs in Planetary Science are often not defined, e. g., those related to reflected light or in-situ measurements. A possible enlargement of this list is currently discussed in the IVOA and IPDA working groups. Another difficulty encountered here is that the UCD is related to the physical quantity, not to the type of observation performed (e. g. phys.absorption;em.opt.I is eligible, while stellar_occultation is not). Therefore, some types of measures cannot be searched currently. This may be solved in the future by adopting an observation_type parameter, which is however difficult to define. ### 3.4 Data structure An EPN-TAP service can contain four types of data: (a) scalar data fields in limited number; (b) data contained in one separated file (image, table, etc); (c) data spread on several separated files; (d) data computed on the fly. These situations may be handled as follows: (a) The data may be included in the epn_core view as separated columns with specific, non-standard parameter names. In general dataproduct_type = ca (catalog) is appropriate, and no access_* parameter is needed in the epn_core view. Units and dimensions may be provided in the response VOTable (e. g., using the q.rd definition file of DaCHS). (b) A URL to the external file is provided on each line through the access_url parameter, so that the client can easily download the selected files. This description may be completed by the access_format, access_etssize, and preview_url parameters. The dataproduct_type parameter must be filled according to the data organization type (e.g., image, time_series, etc). (c) A \"main data product\" must be identified, which is described as in (b). Additional data products are linked and described using parameter names derived consistently from the standard ones. The parameter preview_url is actually a common example of such a situation. 
Other examples include images with associated ancillary data in separated files, referred to as e.g., ancillarydata_access_url, and alternative output format referred to as native_access_url. (d) The access_url must point to a computing system that will process the query, e.g. forwarding a query to a computing service with adapted parameters. ## 4 - Setup ### 4.1 Service implementation EPN-TAP services may be implemented in various ways. The first ones have been installed using the GAVO/DaCHS framework; some have been installed successfully on VO-Dance. In addition to the DaCHS installation document [4], tutorials to install EPN-TAP services using DaCHS are available [2]. DaCHS normally expects a PostgreSQL database, but can support MySQL or NoSQL databases through PostGreSQL's foreign data wrapper. The database does not have to be located on the same machine as the framework. This allows the data provider to quickly set up a service from an existing database, with no conversion or duplication (this has been done for the HELIO services and M4ast). When starting from scratch however, building a PostgreSQL database is the most convenient way. Several methods have been used for the test services at VO-Paris, mostly based on IDL/GDL routines, which provide the only versatile interface with PDS3. In many cases the data files must be opened and read to retrieve information about the granules (PDS3 or FITS headers). A dataset catalogue, if complete enough, may suffice to build the database and the epn_core view. A VOTable can be also be used to build the database, e. g., through TOPCAT jdbc extension. Again, all EPNCore mandatory parameters must be present (but can be left empty) and provided in the correct unit. At least one dataset line is required, which summarizes the whole database. IDL routines writing the database and view, together with templates containing generic definitions of the mandatory parameters, are available to help defining new services. In the DaCHS framework, services are defined in a file \"q.rd\" that maps the epn_core view. It contains the list of parameters present in the view, each associated to its attributes: numerical type, unit, UCD, and a short description string. Units are defined according to IVOA [15] and IAU [12] standards. Like every IVOA service, EPN-TAP services are identified in an xml file providing the declaration to the registry. This file contains a description of the service and its content, and references to the TAP standard [14]. It is used by the client to connect to the available services and to display some indication of their content. This information is not reachable by the TAP mechanism and is not included in the service response. ### 4.2 Clients EPN-TAP services can currently be queried in several ways: (a) The VO-Paris VESPA client [20] can be used to query services declared in the OV-Paris registry, or to access local services not yet registered by providing their URL. The EPNCore parameters are entered in the user's preferred unit scales, and converted to EPN-TAP standard. Selected results can be sent to IVOA visualization tools through SAMP. (b) The TOPCAT tool may be used as a low-level client to send general TAP queries to individual databases, visualize data, and make data available to other clients through SAMP [19]. (c) The DaCHS framework includes a client (ADQL query page) which permits to send general TAP queries to local databases individually. This is mostly intended as a maintenance facility for local databases. 
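Whatever the client, the VOSI mechanism mentioned in section 2 can be used to discover the columns actually present in a service's epn_core view before writing advanced queries. The sketch below is illustrative only: the service base URL is a placeholder, and the `/tables` path and the VODataService response layout are assumptions based on the usual VOSI/TAP setup rather than on anything specific to this paper.

```python
# Illustrative sketch: list the columns of a service's epn_core view via VOSI "tables".
import urllib.request
import xml.etree.ElementTree as ET

TAP_BASE = "http://<server address>/tap"  # placeholder, as in the examples of section 5

def local(tag: str) -> str:
    """Strip a possible XML namespace prefix from a tag name."""
    return tag.rsplit("}", 1)[-1]

def epn_core_columns(tap_base: str) -> list[str]:
    with urllib.request.urlopen(tap_base + "/tables") as resp:
        root = ET.parse(resp).getroot()
    columns = []
    for table in (el for el in root.iter() if local(el.tag) == "table"):
        name_el = next((c for c in table if local(c.tag) == "name"), None)
        if name_el is not None and name_el.text and name_el.text.endswith("epn_core"):
            for col in (el for el in table.iter() if local(el.tag) == "column"):
                col_name = next((c for c in col if local(c.tag) == "name"), None)
                if col_name is not None:
                    columns.append(col_name.text)
    return columns

if __name__ == "__main__":
    print(epn_core_columns(TAP_BASE))
```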
An EPN-TAP client may set a default value for some parameters, in particular for resource_type. A query using a single parameter resource_type = granule would reply with the complete list of granules / data files in the service, which is not optimal for resource exploration. Setting the \"dataset\" value alone will return a limited number of matches per service (but at least one) and is the preferred way to list the available services; it may be the client's default. VESPA implements various query modes. The standard one is to provide a generic EPN-TAP interface to write and send queries to all EPN-TAP registered; only mandatory parameters are supported, so that all services are expected to answer correctly. The result of a general query is a page displaying the number of answers from all reachable services; currently the user has to select one service to access its specific answers. An \"Advanced query form\" is also available on the service results page. For a given service, it provides the same interface completed with optional and specific parameters, which are retrieved through the VOSI mechanism. This allows the user to query a specific database using all its parameters. Finally, the \"Custom resource\" mode of VESPA can connect a non-registered service given the server URL and the schema name. This allows for testing a service that is not yet registered. ## 5 - Queries and responses ### EPN-TAP queries A TAP query consists in looking for certain values of the parameters in the data table. Its arguments are therefore the parameters/columns of this table. Such queries use parameters as filters on the database contents, and return only the lines of the table matching the arguments. The client must use the HTTP GET or POST protocols to send queries to services. The query is composed of the URL of the service, and ADQL language [10] is used to express the request. The TAP query is very generic and there is no mandatory parameter associated with it. A typical query is the following: _http://<server address>/tap/sync/request=doquery & lang=adql & query=select * from epn_core where time_min > '2455197.5' and time_max < '2455927.5'_ This will return all kind of data from 2455197.5 (01/01/2010) to 2455927.5 (01/01/2012) in Julian days (target is not specified). Some parameters can be multivalued in the sense that the epn_core view can accommodate several values, in particular when related to datasets. The separator between values is always a space. To query such parameters, the \"like\" operator must always be used instead of the \"=\" operator. These fields include: target_name, target_class, instrument_name, instrument_host_name, measurement_type. _http://<server address>/tap/sync/request=doquery & lang=adql & query=select * from epn_core where time_min between '2455197.5' and '2455927.5' and target_class like 'comet' and target_name like '1P'_ The service will return all data of any type for comet Halley (1P) from 2455197.5 (01/01/2010) to 2455927.5 (01/01/2012) in Julian days. Similarly, a single query can introduce multiple values for a given parameter. ADQL provides standard operations on parameters to combine possible conditions (and, or, like ) as well as parentheses. Standard ADQL wildcards are also implemented. _target_name like 'Mars' or target_name like 'Venus'_ Return data on either Mars or Venus. The current ADQL standard is however causing troubles here: the query on 1P above will also provide results on 11P, 21P, etc, which are different comets. 
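The example above can also be issued programmatically. The sketch below is illustrative: the endpoint form and parameter names (REQUEST, LANG, QUERY, FORMAT) follow the generic TAP convention, the server address is a placeholder, and astropy is used only as one possible VOTable reader.

```python
# Illustrative sketch: send the synchronous TAP query from the example above and read
# the VOTable response.
import io
import urllib.parse
import urllib.request
from astropy.io.votable import parse_single_table

TAP_SYNC = "http://<server address>/tap/sync"  # placeholder, as in the text

ADQL = (
    "SELECT * FROM epn_core "
    "WHERE time_min BETWEEN 2455197.5 AND 2455927.5 "
    "AND target_class LIKE 'comet' AND target_name LIKE '1P'"
)

params = urllib.parse.urlencode(
    {"REQUEST": "doQuery", "LANG": "ADQL", "QUERY": ADQL, "FORMAT": "votable"}
)

with urllib.request.urlopen(TAP_SYNC + "?" + params) as resp:
    table = parse_single_table(io.BytesIO(resp.read())).to_table()

# Each returned row is one matching granule or dataset; all epn_core columns are available.
for row in table:
    print(row["target_name"], row["time_min"], row["dataproduct_type"])
```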
Another limitation of ADQL forces to provide and query most parameters in lower-case, which leads to inconsistencies as detailed in section 3.3. Case sensitive parameters are: target names, URLs, filenames, \"species\" and all non-standard parameters (i.e., defined for a particular service and not listed here). Those are currently handled via the ivo_nocasematch function in DaCHS, and there are plans to implement a similar system in VO-Dance. ### Service response The response of the service is formatted as a VOTable, which must comply with the VOTable standard, version 1.2 or higher. Following the TAP protocol, the response contains information about the service, the query, and the epn_core view; it also contains the data itself or links to data files. The VOTable must contain a RESOURCE element with the attribute _type=\"results\"_ containing a single TABLE element with the results of the query. Additional RESOURCE elements may be present, but the usage of any such element is not defined here and the TAP client may not use them. The Resource element includes INFO elements providing: the URL of the data server, the EPN-TAP query and its status, descriptions of the service and table, and a credit note. The content of the INFO elements is a message suitable for display to the user. The Resource element also includes a TABLE element providing a description of the epn_core view columns, with the fields name, data type, unit, and UCD. This is followed by a data area containing the subset of rows from the epn_core view that match the query. All parameters in the view are therefore available to the client. The data itself is either linked with an access URL or directly embedded in the response VOTable, depending on the service view. The issue of possible format conversion is left to the client or visualization tools. If no result fulfills the query, the TABLE element must be present and empty (i.e., the TABLE element has no DATA element). Otherwise, it may be encoded in binary using the base64 scheme. ## Conclusion The EPN-TAP protocol provides a consistent way to query many services of interest for Planetary Science in the fields of observations, simulations, and laboratory measurements. Although similar to ObsCore in many respects, EPN-TAP has broader focus but is not intended to replace ObsCore - rather to complement it to distribute Planetary Science content. At the moment of writing, the protocol is still in test phase but very close to completion. It is already discussed in IVOA and IPDA working groups, and will be the default protocol implemented on coming Europlanet services. Future steps of development will include: - The improvement of reference lists for the string parameters. In some cases, these lists will be elaborated from existing but incomplete or contradicting references. Coordinate systems in use in the Solar System and instrument hosts appear to be the most sensitive. - Specific UCDs are required to describe the quantities routinely measured in this field, in particular concerning measurements in reflected light and particle properties. This is currently discussed in the IVOA working groups. - An evolution of ADQL to overcome present difficulties related to case handling and multiple valued fields. ## Acknowledgements This work has been conducted in the frame of Europlanet-RI JRA4 work package. The EuroPlaNet-RI project is funded by the European Commission under the 7th Framework Program, grant #228319 \"Capacities Specific Programme\". 
Additional funding was provided in France by the Association Specifique Observatoire Virtual / INSU.

Table 1: EPNCore parameters

| Name | Class | Unit | Description | UCD |
|---|---|---|---|---|
| **Mandatory parameters** | | | *Must be present* | |
| index | Long | | Internal table row index | meta.id |
| resource_type | String | | Can be dataset or granule | meta.id;class |
| dataset_id | String | | Dataset identification & granule reference | meta.id;meta.dataset |
| dataproduct_type | String | | Organization of the data product, from enumerated list | meta.id;class |
| target_name | String | | Standard name of target (from a list depending on target type), case sensitive | meta.id;src |
| target_class | String | | Type of target, from enumerated list | src.class |
| time_min | Float/double | d | Acquisition start time (in JD) | time.start |
| time_max | Float/double | d | Acquisition stop time (in JD) | time.end |
| time_sampling_step_min | Float | s | Min time sampling step | time.interval;stat.min |
| time_sampling_step_max | Float | s | Max time sampling step | time.interval;stat.max |
| time_exp_min | Float | s | Min integration time | time.duration;stat.min |
| time_exp_max | Float | s | Max integration time | time.duration;stat.max |
| spectral_range_min | Float | Hz | Min spectral range (frequency) | em.freq;stat.min |
| spectral_range_max | Float | Hz | Max spectral range (frequency) | em.freq;stat.max |
| spectral_sampling_step_min | Float | Hz | Min spectral sampling step | em.freq.step;stat.min (not standard) |
| spectral_sampling_step_max | Float | Hz | Max spectral sampling step | em.freq.step;stat.max (not standard) |
| spectral_resolution_min | Float | Hz | Min spectral resolution | spect.resolution;stat.min |
| spectral_resolution_max | Float | Hz | Max spectral resolution | spect.resolution;stat.max |
| c1min | Float | deg | Min of first coordinate | pos;stat.min |
| c1max | Float | deg | Max of first coordinate | pos;stat.max |
| c2min | Float | deg | Min of second coordinate | pos;stat.min |
| c2max | Float | deg | Max of second coordinate | pos;stat.max |
| c3min | Float | | Min of third coordinate | pos;stat.min |
| c3max | Float | | Max of third coordinate | pos;stat.max |
| c1_resol_min | Float | deg | Min resolution in first coordinate | pos.resolution;stat.min (not standard) |
| c1_resol_max | Float | deg | Max resolution in first coordinate | pos.resolution;stat.max (not standard) |
| c2_resol_min | Float | deg | Min resolution in second coordinate | pos.resolution;stat.min (not standard) |
| c2_resol_max | Float | deg | Max resolution in second coordinate | pos.resolution;stat.max (not standard) |
| c3_resol_min | Float | | Min resolution in third coordinate | pos.resolution;stat.min (not standard) |
| c3_resol_max | Float | | Max resolution in third coordinate | pos.resolution;stat.max (not standard) |
| spatial_frame_type | String | | Flavor of coordinate system, defines the nature of coordinates | pos.frame |
| incidence_min | Float | | Min incidence angle (solar zenithal angle) | pos.incidenceAng;stat.min (not standard) |
| incidence_max | Float | | Max incidence angle (solar zenithal angle) | pos.incidenceAng;stat.max (not standard) |
| emergence_min | Float | | Min emergence angle | pos.emergenceAng;stat.min (not standard) |
| emergence_max | Float | | Max emergence angle | pos.emergenceAng;stat.max (not standard) |
| phase_min | Float | | Min phase angle | pos.phaseAng;stat.min (not standard) |
| phase_max | Float | | Max phase angle | pos.phaseAng;stat.max (not standard) |
| instrument_host_name | String | | Standard name of the observatory or spacecraft | meta.class |
| instrument_name | String | | Standard name of instrument | meta.id;instr |
| measurement_type | String | | UCD(s) defining the data | meta.ucd |
| **Optional parameters** | | | *Must be used in this sense if present* | |
| access_url | String | | URL of the data file, case sensitive | meta.ref.url |
| access_format | String | | File format type | meta.code.mime |
| access_estsize | Integer | kB | Estimated file size in kB | phys.size;meta.file |
| preview_url | String | | URL of a preview image | meta.id;meta.file |
| native_access_url | String | | URL of the data file in native form, case sensitive | meta.ref.url |
| native_access_format | String | | File format type in native form | meta.code.mime |
| file_name | String | | Name of the data file only, case sensitive | meta.ref.url |
| species | String | | Identifies a chemical species, case sensitive | phys.composition.species (not standard) |
| element_name | String | | Secondary name (can be standard name of region of interest) | meta.id |
| reference | String | | Bibcode or other biblio id | meta.bib |
| ra | Float | | Right ascension | pos.eq.ra;meta.main |
| dec | Float | | Declination | pos.eq.dec;meta.main |
| ls | Float | | Solar longitude | |
| target_distance | Float | km | Observer-target distance | pos.distance |
| particle_spectral_type | String | | | |
| particle_spectral_range_min | Float | | | |
| particle_spectral_range_max | Float | | | |
| particle_spectral_sampling_step_min | Float | | | |
| particle_spectral_sampling_step_max | Float | | | |
| particle_spectral_resolution_min | Float | | | |
| particle_spectral_resolution_max | Float | | | |
| **Relative to service / table header** | | | *Can be used as optional parameters* | |
| processing_level | Integer | | CODMAC calibration level | meta.code;obs.calib |
| publisher | String | | Resource publisher | meta.name |
| reference | String | | Reference publication | meta.bib |
| service_title | String | | Title of resource | meta.id |
| spatial_coordinate_description | String | | Indicates exact spatial frame | |
| spatial_origin | String | | Defines the frame origin | |

Table 2: Spatial Frame Types

| Frame type | Description |
|---|---|
| celestial | 2D angles on the sky: Right Ascension c1 and Declination c2, plus possibly distance from origin c3. Although this is a special case of spherical frame, the order is different. |
| body | 2D angles on a rotating body: longitude c1 and latitude c2, plus possibly altitude/depth c3. Default is IAU 2009 planetocentric convention, east-handed [12]. |
| cartesian | (x, y, z) as (c1, c2, c3). This includes spatial coordinates given in pixels. |
| cylindrical | (r, theta, z) as (c1, c2, c3). Angles are defined in degrees. |
| spherical | (r, theta, phi) as (c1, c2, c3). Angles are defined as in usual spherical systems (E longitude, zenithal angle/colatitude), in degrees. If the data are related to the sky, "celestial" coordinates with RA/Dec must be used. |
| healpix | (H, K) as (c1, c2). |

Table 3: Target Types (enumerated list of values for target_class)

Table 4: Data Product Types

| EPN-TAP value | Type | Description |
|---|---|---|
| im | image | Scalar field with two spatial axes, or association of several such fields. Maps of planetary surfaces are considered as images. |
| sp | spectrum | Measurements organized primarily along a spectral axis, e.g., a series of radiance spectra. |
| ds | dynamic_spectrum | Consecutive spectral measurements through time, organized as a time series. |
| sc | spectral_cube | Sets of spectral measurements with 1 or 2D spatial coverage, e.g., imaging spectroscopy. |
| pr | profile | Scalar or vectorial measurements along 1 spatial dimension, e.g., atmospheric profiles, atmospheric paths, sub-surface profiles, etc. |
| vo | volume | Other measurements with 3 spatial dimensions, e.g., internal or atmospheric structures. |
| mo | movie | Sets of chronological 2D spatial measurements. |
| cu | cube | Multidimensional data with 3 or more axes, e.g., all that is not described by other 3D data types such as spectral cube or volume. |
| ts | time_series | Measurements organized primarily as a function of time (with the exception of dynamical spectra and movies), i.e., usually a scalar quantity. |
| ca | catalog | Lists of events, catalogs of object parameters, lists of features. The primary key may be a qualitative parameter (name, ID, etc.). |
| sv | spatial_vector | List of summit coordinates defining a vector, e.g., vector information from a GIS, spatial footprints, etc. |

Table 5: Processing levels

| CODMAC level / EPN-TAP value | PSA | NASA level | PRODUCT_TYPE (PDS/PSA) | ObsTAP | Description |
|---|---|---|---|---|---|
| 1 | 1a | | UDR | Level 0 | Unprocessed Data Record (low-level encoding, e.g., telemetry from a spacecraft instrument; normally available only to the original team) |
| 2 | 1b | 0 | EDR | Level 1 | Experiment Data Record (often referred to as "raw data": decommutated, but still affected by instrumental effects) |
| 3 | 2 | 1A | RDR | Level 2 | Reduced Data Record ("calibrated" in physical units) |
| 4 | | 1B | REFDR | | Reformatted Data Record (mosaics or composites of several observing sessions, involving some level of data fusion) |
| 5 | 3 | 2-5 | DDR | Level 3 | Derived Data Record (result of data analysis, directly usable by other communities with no further processing) |
| 6 | | | ANCDR | | Ancillary Data Record (extra data specifically supporting a data set, such as coordinates, geometry, etc.) |

## References

* [1] Planetary data access protocol (PDAP). IPDA draft 1.0 (16 April 2013) https://planetarydata.org/projects/previous-projects/copy_of_2011-2012-projects/PDAP Core Specification/pdap-v1-0-16-04-2013/view
* [2] Use cases and tutorials for the Planetary Science VO are available on this page: http://voparis-europlanet.obspm.fr/docum.shtml
* [3] Virtual Observatory Support Interface (VOSI) http://ivoa.net/Documents/VOSI/20101206/index.html
* [4] GAVO/DaCHS implementation: http://vo.ari.uni-heidelberg.de/docs/DaCHS/
* [5] EPN data model version 1.18a (last version to date) can be found here: http://www.europlanet-idis.fi/documents/public_documents/Data_Model_v1.18a.pdf EPN-TAP full documentation: http://voparis-europlanet.obspm.fr/xml/TAPCore/
* [6] TAP protocol http://ivoa.net/Documents/TAP/
* [7] ObsTAP and ObsCore http://ivoa.net/Documents/ObsCore/
* [8] UCD + UType concept http://ivoa.net/Documents/cover/UCDlist-20070402.html
* [10] IVOA Astronomical Data Query Language Version 2.00 http://ivoa.net/Documents/latest/ADQL.html
* [11] Name resolver returning body official names and astronomical coordinates at a specific time: http://vo.imcce.fr/webservices/ssodnet/?resolver
* [12] Report of the IAU Working Group on Cartographic Coordinates and Rotational Elements: 2009. B. A. Archinal et al., Celest Mech Dyn Astr (2011) 109:101-135.
+ Erratum to: Reports of the IAU Working Group on Cartographic Coordinates and Rotational Elements: 2006 & 2009. B. A. Archinal et al, Celest Mech Dyn Astr (2011) 110:401-403. SEE IAU Working Group on Cartographic Coordinates and Rotation Elements of the Planets and Satellites: [http://astrogeology.usgs.gov/Page/groups/name/IAU-WGCCRE](http://astrogeology.usgs.gov/Page/groups/name/IAU-WGCCRE) * [13] Space time and coordinate in IVOA [http://ivoa.net/Documents/latest/SIC_html](http://ivoa.net/Documents/latest/SIC_html) * [14] IVOA Registry interface [http://ivoa.net/Documents/RegistryInterface/](http://ivoa.net/Documents/RegistryInterface/) * [15] Unit in the IVOA [http://ivoa.net/Documents/VOLnits/](http://ivoa.net/Documents/VOLnits/) * [16] IAU's Committee on Small Body Nomenclature handles Minor Planet Names and Designations, Comet Names and Designations, Cross Listed Objects: [http://www.ss.astro.umd.edu/IAU/csbn/](http://www.ss.astro.umd.edu/IAU/csbn/) * [17] In addition, IAU's Working Group for Planetary System Nomenclature (WGPSN) defines feature nameson planetary surfaces: [http://planetarynames.wr.usqs.gov/](http://planetarynames.wr.usqs.gov/) * [18] IAU nomenclature for object types: [http://planetarynames.wr.usqs.gov/Page/Planets](http://planetarynames.wr.usqs.gov/Page/Planets) * [19] EPN-TAP services: using TOPCAT as a client [http://voparis-europlanet.obspm.fr/utilities/Tuto_TopCat.pdf](http://voparis-europlanet.obspm.fr/utilities/Tuto_TopCat.pdf) * [20] VESPA client: [http://vespa.obspm.fr](http://vespa.obspm.fr)
A Data Access Protocol has been set up to search and retrieve Planetary Science data in general. This protocol will allow the user to select a subset of data from an archive in a standard way, based on the IVOA Table Access Protocol (TAP). The TAP mechanism is completed by an underlying Data Model and reference dictionaries. This paper describes the principle of the EPN-TAP protocol and interfaces, underlines the choices that have been made, and discusses possible evolutions. Keywords: Virtual Observatory, Planetary Science, Solar System, Data services, Standards
# Current Trends in Deep Learning for Earth Observation: An Open-source Benchmark Arena for Image Classification Ivica Dimitrovski\\({}^{a,b}\\), Ivan Kitanovski\\({}^{a,b}\\), Dragi Kocev\\({}^{a,c}\\), Nikola Simidjievski\\({}^{a,c,d}\\) \\({}^{a}\\)Bias Variance Labs, d.o.o., Ljubljana, Slovenia \\({}^{b}\\)Faculty of Computer Science and Engineering, University Ss Cyril and Methodius, Skopje, N. Macedonia \\({}^{c}\\)Department of Knowledge Technologies, Jozef Stefan Institute, Ljubljana, Slovenia \\({}^{d}\\)Department of Computer Science and Technology, University of Cambridge, Cambridge,United Kingdom Correspondence to: [ivica,ivan,dragi,nikola]@bvlabs.ai ## 1 Introduction Recent trends in machine learning (ML) have ushered in a new era of image-data analyses, repeatedly achieving great performance across a variety of computer-vision tasks in different domains [1; 2]. Deep learning (DL) approaches have been at the forefront of these efforts - leveraging novel, modular and scalable deep neural network (DNN) architectures able to process large amounts of data. The inherent capabilities of these approaches also extend to various areas of remote sensing, in particular Earth Observation (EO), employed for analyzing different types of large-scale satellite data [3]. Many of these contributions are instances of image-scene classification, such as land-use and/or land-cover (LULC) identification tasks, focusing on image-scene analyses, characterizations, and classifications of changes in the landscape caused either by human activities or by the elements. Historically, from the perspective of ML, many of these tasks have been addressed mostly through the paradigms of either pixel-level [4, 5] or object-level classification tasks [6]. The former refers to classification tasks focusing on each pixel in the image, associating it with the appropriate semantic label. Such approaches typically do not scale well on high-resolution images, but more importantly, many times struggle to capture more high-level patterns in the image that can span over many pixels [7]. The latter, object-level classification methods, focus on analyzing distinguishable and meaningful objects in the image (as a collection of pixels) rather than independent pixels. This generally allows for better scalability and performance; however, such approaches may struggle with images containing more diverse and hardly-distinguishable objects, which prevail in most high-resolution remote-sensing data. Methods based on pixel-level and object-level paradigms have shown decent performance and are still actively researched, mostly as instances of image segmentation and object detection tasks. More recently, however, methods based on a new paradigm of scene-level classification [8, 9] have shown significant performance improvements, focusing on learning semantically meaningful representations of more sophisticated patterns in an image by leveraging the capabilities of deep learning. Deep learning approaches have been successfully applied in various remote-sensing scenarios, be it learning models from scratch or via transfer learning[10, 11], in a fully supervised or self-supervised setting [12, 13], exploiting the heterogeneity [14] and temporal properties [15] of the available data. 
As a result, this synergy of accurate DL approaches, on the one hand, and accessible high-resolution aerial/satellite imagery, on the other, has led to important contributions in various domains ranging from agriculture [16, 17, 18], ecology [19, 20], geology [21] and meteorology [22, 23, 11] to urban mapping/planning[24, 25, 26] and archaeology [27]. Nevertheless, most of these efforts typically focus on very narrow tasks stemming from domain-specific and/or spatially constrained datasets. As a result, models have been evaluated in different settings and under different conditions [28] - hardly reproducible and comparable. These persistent challenges, akin to a lack of standardized and consistent validation and evaluation of novel approaches, have also been identified by the community [29]. Citing the lack of available documentation on the design and evaluation of the employed machine learning approaches, the community highlights the urgent need for standardized benchmarks that will not only enable proper and fair model comparison across datasets and similar tasks but will also facilitate faster progress in designing better and more accurate modeling approaches. Motivated by this, in this work, we introduce _AiTLAS: Benchmark Arena_ - an _open-source EO benchmark suite_ for evaluating state-of-the-art DL approaches for EO image classification. To this end, we present extensive comparative analyses of models derived from ten different state-of-the-art architectures, comparing them on a variety of multi-class and multi-label classification tasks from 22 datasets with different sizes and properties. We benchmark models trained from scratch and in the context of transfer learning, leveraging pre-trained model variants as it is typically performed in practice. While in this work, we mainly focus on EO-image classification tasks, such as LULC, all presented approaches are general and easily extendable to other remote-sensing image classification tasks. More importantly, to ensure reproducibility, facilitate better usability, and further exploitation of the results from our work, we provide _all_of the experimental resources_ - freely available on our repository1. The repository includes the complete study details, such as the trained models, model parameters, train/evaluation configurations, and measured performance scores, as well as the details on all of the datasets and their prepossessed versions (with the appropriate train/validation/test splits) used for training and evaluating the models. Footnote 1: [https://github.com/biasvariancelabs/aitlas-arena](https://github.com/biasvariancelabs/aitlas-arena) To our knowledge, we present a unique systematic review and evaluation of different state-of-the-art DL methods in the context of EO image classification across many classification problems - benchmarked in the same conditions and using the same hardware. Related efforts, while relevant, have mainly focused on evaluating approaches on particular datasets [8, 28, 30, 31]; evaluating different aspects of method-design [32, 14] relevant to remote-sensing classification tasks; or providing a more general overview of the common tasks at hand [33, 34]. In particular, Cheng et al. [8] introduce a dataset and surveys several ML representation-learning approaches commonly used for remote-sensing classification tasks, comparing their performance when combined with traditional convolutional neural network (CNN) architectures. Xia et al. 
[31] also introduce a benchmark dataset for aerial-image classification, providing a comparison similar to Cheng et al. [8] of representation-learning approaches combined with three deep networks. Another, more recent study [28], discusses and compares more recent DL approaches and surveys several applications on three different datasets. In particular, the authors showcase the performance of the different methods for each dataset, as reported in the respective papers. The underlying, persistent conclusions from these studies show that model performances are associated with a particular dataset and study design, presenting difficulties for fair and general model comparisons. This is expected, but in our work, we seek to remedy this issue by training and evaluating all models under the same conditions. In this context, our work is related to one of Zhai et al. [32], which presents a large-scale study on more recent representation-learning approaches, benchmarking different aspects of method design and model parameters. However, Zhai et al. [32] considers a relatively broad scope of different datasets with only a few relevant to remote-sensing and LULC classification. Neumann et al. [14] present a large-scale study on five different benchmark datasets; however, they investigate the effect of transfer learning on these tasks. More specifically, they evaluate different variants of the same model architecture, trained under different circumstances, rather than comparing different model architectures. Another related study by Stewart et al. [35] reports on the comparison of different variants of ResNets on EO-image classification tasks from four datasets. More recently, and arguably most related to our work in terms of the number of evaluated models, Papoutsis et al. [30] present an extensive empirical evaluation of different state-of-the-art DL architectures suitable for EO-image classification tasks, specifically LULC tasks, focusing exclusively on the BigEarthNet [36] dataset. Namely, the authors benchmark different classes of model architectures across different criteria and introduce an efficient and well-performing model tailored specifically for BigEarthNet. In this work, we go beyond all the aforementioned studies, significantly extending the scope of research in two directions: the number of model architectures (and model variants) being evaluated and the datasets being considered. This results in assessing more than 500 different models with different architectures, varying designs, and learning paradigms across 22 datasets. We provide essential study-design principles and model training details that will aid in more systematic and rigorous experiments in future work. The proposed _AiTLAS: Benchmark Arena_ builds on the AiTLAS toolbox [37]2 - a recent open-source library for exploratory and predictive analysis of satellite imaginary pertaining to different remote-sensing tasks. AiTLAS implements various methods and libraries for data handling, processing, and analysis, with PyTorch [38] as a backbone for constructing and learning DL models. By having all of the methods and datasets under the same umbrella, we provide the means for a fair, unbiased, and reproducible comparison of approaches across different criteria that include: overall model performance, data- and task-dependent model performance, model size, and learning efficiency as well as the effect of transfer learning via model pre-training. 
Footnote 2: [https://aitlas.bvlabs.ai](https://aitlas.bvlabs.ai) The results, summarized in Figure 1, show that many of the current state-of-the-art architectures for vision tasks can lead to decent predictive performance when applied to EO image classification tasks. While, in some cases, Figure 1: **Overview of the study**: We benchmarked more than 500 models from 10 different model architectures on tasks from **(a)** 22 datasets with different sizes and properties; comparing them on **(b)** multi-label and **(c)** multi-class classification tasks. We evaluate two versions of each model architecture: (i) trained from scratch (denoted with _darker shading_) and (ii) pre-trained on ImageNet-1K (denoted with _lighter shading_). Note the varying scales in (b) and (c), made purposely for better visibility. Detailed results are presented in Section 4 and Appendices B, C and D. training models from scratch can lead to satisfactory performance, using pre-trained models and fine-tuning them on each dataset leads to the best performance overall. We observed this in all cases, regardless of the dataset properties, the type of classification tasks, or the model architecture. We found more considerable performance gains on tasks from smaller datasets, which, as expected, benefited more from the pre-training process than models trained on larger datasets. In terms of model architectures, our experiments showed that pre-trained Transformer models, i.e. both Vision Transformer [39] and Swin Transformer [40] models, were, in general, able to achieve the best performance. Specifically, Vision Transformer models showed the best performance on various multi-classification tasks, while Swin Transformer models led to much better performance on multi-label tasks, albeit at the cost of much longer training time. Throughout the paper, we further evidence and discuss these findings. In summary, in this paper, we make several contributions. Specifically, we: * an open-source benchmark suite that enables standardized evaluation of machine learning models for Earth Observation (EO) applications; * Provide study-design principles for training and evaluating state-of-the-art deep learning models on various supervised EO image classification tasks from 22 datasets with different sizes and properties; * Implement and benchmark more than 500 models stemming from 10 state-of-the-art architectures, including models trained from scratch and their pre-trained variants; * Investigate models' generalization abilities to unseen in-domain datasets; * Evaluate different pre-training strategies that relate to pre-training models from in-domain EO datasets and investigate their effect on the downstream predictive performance; * Discuss common issues that typically affect the models' performance, specifically in the context of EO tasks. * Provide open-source access to all experimental details, including trained models, dataset details, train/evaluation configurations, and detailed performance scores. ## 2 Data & models ### Data description With the ever-growing availability of remote sensing data, there has been a significant effort by many research groups to prepare, label, and provide proper datasets that will support the development and evaluation of sophisticated machine learning methods. While there are many such datasets, both proprietary and publicly available, in this work, we focus on the latter - open-access publicly available dataset. 
Given this criterion, we select 22 open-access datasets usually considered in different EO studies for benchmarking DL approaches. The selected datasets have varying sizes (number of images), varying image types, image sizes, and formats, and, more importantly, related to different classification tasks. Namely, we consider datasets related to multi-class and multi-label classification tasks, mainly addressing LULC applications. The objective of _multi-class classification_ tasks is to predict one (and only one) class (label) from a set of predefined classes for each image in a dataset. _Multi-label classification_, on the other hand, refers to predicting multiple labels from a predefined set of labels for each image in the dataset [41] (e.g., an image can belong to more than one class simultaneously). In our experimental study, we consider 15 multi-class and seven multi-label datasets. Tables 1 and 2 summarizes the properties of the considered multi-class (MCC) and multi-label (MLC) classification datasets, respectively. The number of images across datasets is quite diverse, ranging from datasets with \\(\\sim 2K\\) \\begin{table} \\begin{tabular}{l l l l l l l l} \\hline \\hline **Name** & **multirow{2}{*}{**multirow{2}{*}{**MCC**}} & \\multirow{2}{*}{**MCC**} & \\multirow{2}{*}{**MCC**} & \\multirow{2}{*}{**MCC**} & \\multirow{2}{*}{**MCC**} & \\multirow{2}{*}{**MCC**} & \\multirow{2}{*}{**MCC**} \\\\ \\cline{1-1} \\cline{6-10} UC Merced [9] & Aerial RGB & 2100 & 256\\(\\times\\)256 & 0.3m & 21 & No & tif \\\\ \\hline WHU-RS19 [42] & Aerial RGB & 1005 & 600\\(\\times\\)600 & 0.5m & 19 & No & jpg \\\\ \\hline AID [31] & Aerial RGB & 10000 & 600\\(\\times\\)600 & 0.5m - 8m & 30 & No & jpg \\\\ \\hline Eurosat [43] & Sat. Multispectral & 27000 & 64\\(\\times\\)64 & 10m & 10 & No & jpg/tif \\\\ \\hline PatternNet [44] & Aerial RGB & 30400 & 256\\(\\times\\)256 & 0.06m - 4.69m & 38 & No & jpg \\\\ \\hline Resisc45 [8] & Aerial RGB & 31500 & 256\\(\\times\\)256 & 0.2m - 30m & 45 & No & jpg \\\\ \\hline RSI-CB256 [45] & Aerial RGB & 24747 & 256\\(\\times\\)256 & 0.3 - 3m & 35 & No & tif \\\\ \\hline RSSCN7 [46] & Aerial RGB & 2800 & 400\\(\\times\\)400 & n/a & 7 & No & jpg \\\\ \\hline SAT6 [47] & RGB + NIR & 405000 & 28\\(\\times\\)28 & 1m & 6 & Yes & mat \\\\ \\hline Siri-Whu [48] & Aerial RGB & 2400 & 200\\(\\times\\)200 & 2m & 12 & No & tif \\\\ \\hline CLRS [49] & Aerial RGB & 15000 & 256\\(\\times\\)256 & 0.26m - 8.85m & 25 & No & tif \\\\ \\hline RSD46-WHU [50] & Aerial RGB & 116893 & 256\\(\\times\\)256 & 0.5m - 2m & 46 & Yes & jpg \\\\ \\hline Optimal 31 [51] & Aerial RGB & 1860 & 256\\(\\times\\)256 & n/a & 31 & No & jpg \\\\ \\hline Brazilian Coffee Scenes (BSC) [52] & Aerial RGB & 2876 & 64\\(\\times\\)64 & 10m & 2 & No & jpg \\\\ \\hline So2Sat [53] & Sat. Multispectral & 400673 & 32\\(\\times\\)32 & 10m & 17 & Yes & h5 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 1: Summary of the multi-class classification (MCC) datasets. 
\\begin{table} \\begin{tabular}{l l l l l l l l l} \\hline \\hline **Name** & **multirow{2}{*}{**MCC**} & \\multirow{2}{*}{**MCC**} & \\multirow{2}{*}{**MCC**} & \\multirow{2}{*}{**MCC**} & \\multirow{2}{*}{**MCC**} & \\multirow{2}{*}{**MCC**} & \\multirow{2}{*}{**MCC**} \\\\ \\cline{1-1} \\cline{6-10} UC Merced (mlc) [54] & Aerial RGB & 2100 & 256\\(\\times\\)256 & 0.3m & 17 & 3.3 & No & tif \\\\ \\hline MLRSNet [55] & Aerial RGB & 109161 & 256\\(\\times\\)256 & 0.1m - 10m & 60 & 5.0 & No & jpg \\\\ \\hline DFC15 [56] & Aerial RGB & 3342 & 600\\(\\times\\)600 & 0.05m & 8 & 2.8 & Yes & png \\\\ \\hline AID (mlc)[57] & Aerial RGB & 3000 & 600\\(\\times\\)600 & 0.5m - 8m & 17 & 5.2 & Yes & jpg \\\\ \\hline PlanetUAS [58] & Aerial RGB & 40479 & 256\\(\\times\\)256 & 3m & 17 & 2.9 & No & jpg/tif \\\\ & & & 200\\(\\times\\)20 & 60m & & & & \\\\ & & & 60x60 & 20m & & & & \\\\ BigEarthNet 19 [36] & Sat. Multispectral & 519284 & 120x120 & 10m & 19 & 2.9 & Yes & tif, json \\\\ \\hline \\hline \\end{tabular} \\begin{tabular}{l l l l l l l l} \\hline \\hline **Name** & **multirow{2}{*}{**MCC**} & \\multirow{2}{*}{**MCC**} & \\multirow{2}{*}{**MCC**} & \\multirow{2}{*}{**MCC**} & \\multirow{2}{*}{**MCC**} & \\multirow{2}{*}{**MCC**} & \\multirow{2}{*}{**MCC**} \\\\ \\cline{1-1} \\cline{6-10} UC Merced (mlc) [54] & Aerial RGB & 2100 & 256\\(\\times\\)256 & 0.3m & 17 & 3.3 & No & tif \\\\ \\hline MLRSNet [55] & Aerial RGB & 109161 & 256\\(\\times\\)256 & 0.1m - 10m & 60 & 5.0 & No & jpg \\\\ \\hline DFC15 [56] & Aerial RGB & 3342 & 600\\(\\times\\)600 & 0.05m & 8 & 2.8 & Yes & png \\\\ \\hline AID (mlc)[57] & Aerial RGB & 3000 & 600\\(\\times\\)600 & 0.5m - 8m & 17 & 5.2 & Yes & jpg \\\\ \\hline PlanetUAS [58] & Aerial RGB & 40479 & 256\\(\\times\\)256 & 3m & 17 & 2.9 & No & jpg/tif \\\\ & & & 200\\(\\times\\)20 & 60m & & & & \\\\ & & & 60x60 & 20m & & & & \\\\ BigEarthNet 19 [36] & Sat. Multispectral & 519284 & 120x120 & 10m & 19 & 2.9 & Yes & tif, json \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 2: Summary of the multi-label classification (MLC) datasets. images to datasets with \\(\\sim 500K\\) images. This also extends toward the number of labels per image, ranging from \\(2\\) to \\(60\\). Figure 1a visualizes the datasets with respect to their sizes, with the x-axis denoting the number of images (on a log scale) and the y-axis indicating the number of labels (with marker size denoting the number of labels per image) for each of the different datasets. Most of the datasets consist of Aerial RGB images (with only a few comprised of satellite multi-spectral data) that are different in spatial resolution, size, and format. Finally, we note the datasets that include predefined splits (for training, validation, and testing) given by the original authors and provide the splits for the ones that are missing, as further discussed in Section 3.1. An extended description of each dataset is given in Appendix D. ### Model architectures Current trends in EO image classification leverage the capabilities of DL architectures for computer vision, learning data representations that often lead to superior predictive performance. We recognize that there are many different approaches stemming from different model architectures and model variants. These can differ in various 'finer' details (e.g., number and width of layers, hyper-parameter values, and learning regimes), often developed for a particular task. 
Rather than seeking a state-of-the-art performance for each EO problem/dataset, in this study, we are interested in providing a more general evaluation framework and benchmarking models by analyzing their characteristics and unique properties through the lens of their predictive performance and learning efficiency across all datasets. Therefore, our model-architecture (and parameter) choices are motivated by different architecture 'classes', such as the traditional convolutional architectures and the more recent attentional and multilayer-perceptron (MLP) architectures. This renders models with different sizes, training/inference time, different abilities in a transfer-learning setting, etc. More specifically, we investigate several architectures which have been traditionally used for EO image classification tasks, such as: AlexNet [60], VGG16 [61], ResNet [62] and DenseNet [63]. Moreover, we investigate more recent architectures, which include EfficientNet [64], ConvNeXt [65], Vision Transformer [39], Swin Transformer [40] and MLPMixer [66], that have shown state-of-the-art performance in various vision tasks. In the following, we provide a brief overview of these architectures, highlighting their properties in Table 3. \\begin{table} \\begin{tabular}{l r r r r r} \\hline \\hline **Model** & **Year** & **\\#Layers** & **\\#Parameters** & **FLOPS** & **Based on** \\\\ \\hline AlexNet [60] & 2012 & 8 & \\(\\sim\\) 57\\(\\cdot 10^{6}\\) & 0.72 G & [67] \\\\ VGG16 [61] & 2014 & 16 & \\(\\sim\\) 134.2\\(\\cdot 10^{6}\\) & 15.47 G & [67] \\\\ ResNet50 [62] & 2015 & 50 & \\(\\sim\\) 23.5\\(\\cdot 10^{6}\\) & 4.09 G & [67] \\\\ ResNet152 [62] & 2015 & 152 & \\(\\sim\\) 58.1\\(\\cdot 10^{6}\\) & 11.52 G & [67] \\\\ DenseNet161 [63] & 2017 & 161 & \\(\\sim\\) 26.4\\(\\cdot 10^{6}\\) & 7.73 G & [67] \\\\ EfficientNet B0 [64] & 2019 & 237 & \\(\\sim\\) 5.2\\(\\cdot 10^{6}\\) & 0.39 G & [67] version: B0 \\\\ Vision Transformer (ViT) [39] & 2020 & 12 & \\(\\sim\\) 86.5\\(\\cdot 10^{6}\\) & 17.57 G & [68] version: b\\_16\\_224 \\\\ MLPMixer [66] & 2021 & 12 & \\(\\sim\\) 59.8\\(\\cdot 10^{6}\\) & 12.61 G & [68] version: b\\_16\\_224 \\\\ ConvNeXt [65] & 2022 & 174 & \\(\\sim\\) 28\\(\\cdot 10^{6}\\) & 4.46 G & [67] version: tiny \\\\ Swin Transformer [40] & 2022 & 24 & \\(\\sim\\) 49.7\\(\\cdot 10^{6}\\) & 11.55 G & [67] version: v2 small \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 3: Summary of the representative model architectures considered in this study. The first class of models we consider relies on convolutional architectures, which, in recent years, have driven many of the advances in computer vision. The architecture of convolutional neural networks (CNN) consists of many (hidden) layers stacked together, designed to process (image) data in the form of multiple arrays. Most typically, CNNs consist of a series of convolutional layers, which apply convolution operation (passing the data through a kernel/filter), forwarding the output to the next layer. This serves as a mechanism for constructing feature maps, with former layers typically learning low-level features (such as edges and contours), subsequently increasing the complexity of the learned features with deeper layers in the network. Convolutional layers are typically followed by pooling operations, which serve as a downsampling mechanism by aggregating the feature maps through local non-linear operations. 
In turn, these feature maps are fed to fully-connected layers, which perform the ML task at hand - in this case, classification. All the layers in a network employ an activation function. In practice, the intermediate, hidden layers employ a non-linear function such as rectified linear unit (ReLU) or Gaussian Error Linear Unit (GELU) as common choices. The choice of activation function in the final layer relates to the tasks at hand, typically a sigmoid function in the case of classification. CNN architectures can also include different normalization and/or dropout operators embedded among the different layers, which can further improve the network's performance. CNN architectures have been widely researched, with models applied in many contexts of remote sensing, and in particular EO image classification [11; 69; 70; 30]. This includes _AlexNet_[60], a pioneering architecture that introduced and successfully demonstrated the utility of the CNN blueprint, mentioned earlier, for computer vision tasks. Namely, even though the architecture of AlexNet has a modest depth (relative to more recent architectures) consisting of eight layers, it remains an efficient baseline approach for a variety of EO tasks [8; 10], leading to decent performance, especially when pre-trained with large image datasets [71]. We also consider the more sophisticated _VGG_[61], which employs a deeper architecture inspired by AlexNet. VGG has shown great performance in a variety of vision tasks, including EO-image classification problems [72; 73; 44]. There are two variants of VGG in practice, VGG16 and VGG19; both extend AlexNet mainly by increasing the depth of the network with 13 and 16 convolutional layers, respectively. In this study, we evaluate the performance of the former _VGG16_. VGGs employ kernels with smaller sizes than the ones typically used in AlexNet, demonstrating that stacking multiple smaller kernels are better able to extract more complex representations than one larger filter. While, in general, increasing the network depth by adding convolutional layers helps for learning more complex and more informative representations thereof, in practice, this can lead to several issues, such as the vanishing gradient problem [74], which impairs the network training. The Residual neural networks (_ResNets_) [62; 75] tackle this issue explicitly by employing skip connections between blocks, therefore enabling better backprop gradient flow, better training, and, in general, better predictive performance. ResNet architecture follows a typical CNN blueprint: Stacking residual blocks (typically same-size CNN layers) and convolutional blocks (typically introducing a bottleneck via different-size CNN layers) together, followed by fully-connected layers. By employing skip connections, the ResNet architecture allows stacking multiple layers in a block, therefore training models with much deeper architectures. Here we investigate two variants with varying depths, _ResNet50_ and _ResNet152_, with 50 and 152 layers, respectively. Since their inception, ResNets have been a prevalent choice in practice. This also extends towards their utility for EO tasks, applied in the context of image classification and semantic segmentation [8; 76; 35; 30]. Dense Convolutional Networks (_DenseNets_) [63] are another well-performing architecture variant of ResNets that has demonstrated state-of-the-art results on many classification tasks, including applications in the domain of remote sensing [77; 78; 79]. 
As the name suggests, DenseNets consist of dense blocks, where each layer is connected to every preceding layer, taking an additional (channel-wise) concatenated input of the feature maps learned in the former layers. This differs from the ResNets, which propagate (element-wise) aggregated feature maps through the network layers. The architecture of DenseNets encourages feature reuse throughout the network, leading to well-performing and more compact models (with fewer trainable parameters than a ResNet of equivalent size), albeit at the cost of increased memory during training. _EfficientNets_[64] are a recent class of lightweight architecture that alleviate such common computational difficulties, typical when scaling deep architectures on larger and/or harder problems. Namely, rather than scaling the architecture in one aspect of increasing the depth (number of layers) [62], width (number of channels) [75] or (input image) resolution [80]; EfficientNets implement compound scaling, that uniformly scales the architecture along the three dimensions simultaneously. Compound scaling seeks an optimal balance between these three dimensions, given the available resources and the task at hand. In turn, such an approach leads to substantially smaller models (than CNN variants of equivalent performance) while retaining state-of-the-art predictive performance. In the context of EO tasks, (variants of) EfficentNets have been successfully applied in different settings [81; 82; 83; 79], and have also been thoroughly investigated in the context of multi-label image classification tasks from BigEarthNet [30]. While there are eight variants of EfficientNets, differing in the size and complexity of the architectures, here we investigate the performance of the baseline _EfficientB0_ architecture with 5.2M parameters, substantially lower than any of the other competing model architectures. Most recently, [65] introduce _ConvNeXt_, a novel class of convolutional architectures that leverage various successful design decisions of preceding convolutional and attentional architectures typically applied for vision tasks. Namely, ConvNeXt models implement various techniques at different levels: from reconfiguring details like activation functions and normalization layers, to redesigning more general architecture details related to residual/convolutional blocks, to modifications in the training strategies. This, in turn, leads to models that achieve good predictive performance, not only better than popular models from the class of convolutional architectures but also better than the more recent attentional architectures, such as transformers, discussed next. While there are several variants of the ConvNeXt architecture that mainly differ in their size, in this study, we evaluate the performance of the smallest variant, _ConvNeXt_tiny_. Note that, to our knowledge, this is the first application of ConvNeXt on EO-image classification tasks. We next take the notion of the recent success of the class of attentional network architectures and study the performance of _Vision Transformers_ (ViT) [39] in the context of EO-image classification tasks. Namely, ViTs inspire by the popular NLP (natural language processing) Transformer architecture [84], leveraging an attention mechanism for vision tasks. Much like the original Transformer that seeks to learn implicit relationships in sequences of word-tokens via multi-head self-attention, ViTs focus on learning such relationships between image patches. 
Typically they employ a standard transformer encoder that takes a lower-dimensional (linear) representation of these image patches together with additional positional embedding from each, in turn, feeding the encoder-output to a standard MLP head. ViTs have shown excellent performance on various vision tasks, particularly when combined with pre-training from large datasets. This also includes several applications in remote sensing [85; 30; 86]. More recent and sophisticated, attentional network architectures such as the _Swin Transformers_ (SwinT) [87; 40] rely on additional visual inductive biases by introducing hierarchy, translation invariance, and locality in the attention mechanism. Like ViTs, SwinT architectures also attempt to learn relationships between image patches but operate on image windows (a group of neighboring image patches). SwinTs focus on computing attention between patches within a window (locality), in turn shifting these windows to allow learning of cross-window attention (translation invariance). Starting with windows with smaller patches and increasing their size at each subsequent stage, SwinTs also allow for learning representations at different granularity (hierarchy). All this leads to SwinTs performing well in practice on a variety of vision tasks, including in the domain of remote sensing [88; 89; 90], often outperforming ViTs and other convolutional architectures. In this study, we evaluate the'small' architecture variant of the latest version of Swin Transformers V2 [40]. In the context of vision tasks, an attention mechanism can be achieved differently (e.g., attending over channels and/or spatial information, etc.) and even employed with typically convolutional architectures[81; 91; 83]. One alternative that builds only on the classical MLP architecture is the _MLPMixer_[66]. Namely, similar to a transformer architecture, an MLPMixer operates on image patches; and contains two main components: A block of MLP layers for'mixing' the spatial, patch-level information on every channel; and a block of MLP layers for'mixing' the channel-information of an image. This renders lightweight models, with performance on par with many much more sophisticated architectures, on a variety of vision problems, both more general as well as specific EO tasks [92; 30; 86]. We employ an MLPMixer with an input size of 224x224 and a patch resolution of 16x16 pixels. From each of the ten highlighted architectures, we evaluate two model versions: trained entirely on a given dataset and fine-tuned models that have been pre-trained on a different image dataset. This results in comparing 20 models on each predictive task, which are available on our repository. ## 3 Experimental design ### Training and evaluation protocol To establish a unified evaluation framework and support the results' reproducibility, we generated train, validation, and test splits using 60%, 20%, and 20% fractions, respectively. All of the data splits were obtained using stratified sampling. This technique ensures that the distribution of the target variable(s) among the different splits remains the same [93]. We performed such stratification for all datasets except the ones which include predefined splits provided by the original authors. More specifically, for the _BigEarthNet_ and _So2Sat_ datasets, we use the train, validation, and test splits as provided in [59; 36; 53]. 
Since _SAT6_, _RSD46-WHU_, _DFC15_ and _AID_ datasets consist only of predefinedtrain and test splits, we further take 20% from the train part for validation. Finally, note that the PlanetUAS dataset was part of a competition, and as such, the test data is not publicly available. Therefore, we generated train, validation, and test splits from the original train data using the 60%, 20%, and 20% fractions, respectively. All the models were trained using the same train splits, with parameters selection/search performed using the same validation splits. Additionally, to overcome over-fitting, we perform early stopping on the validation split for each dataset; the best checkpoint/model found (with the lowest validation loss) is saved and then applied to the original test split to obtain the final assessment of the predictive performance. All the train/validation/test splits for each dataset are available on our repository. To better assess the generalization capabilities of the trained models, we evaluate their performance on different (in-domain) datasets not used for training. Specifically, we present two schemes of this evaluation: (1) performance measured on a holdout set compiled of test images with the same labels but from different datasets; (2) an exhaustive cross-dataset evaluation between pairs of datasets that contain the same labels. The former variant refers to a new test set consisting of 3216 images from the test splits of seven datasets (_RESISC45, UC Merced, CLRS, PatternNet, AID, RSI-CB256_ and _WHU-RS19_) with labels present in all datasets (in our experiments, this results in five common labels: 'Forest', 'Parking', 'River', 'Harbor' and 'Beach'). We employ this evaluation setting only for multi-class classification tasks. In the latter variant, in a pairwise fashion, we evaluate every model on test splits from other datasets not used for training it. We measure the performance only on images with labels shared between the pairs of source (used for training the model) and target (used for evaluating the model) datasets. We employ this setting in both multi-class and multi-label classification scenarios. Note that in all cases, the models are only evaluated on the unseen datasets without additional fine-tuning. These configurations are also available on our repository. During training, we perform _data augmentation_ for each dataset by first resizing all the images to 256x256, followed by selecting a random crop of size 224x224. We then perform random horizontal and/or vertical flips. During evaluation/testing, we first resize the images to 256x256, followed by a central crop of size 224x224. We believe that this, in general, helps our models to generalize better on a given dataset. Also note that in the study, we are using only RGB images. In the case of the multispectral datasets (_Eurosat_, _So2Sat_ and _BigEarthNet_), we computed the images in the RGB color space by combining the red (B04), green (B03), and blue (B02) bands. For the _Brazilian Coffee Scenes_ dataset, we use images in green, red, and near-infrared spectral bands since these are most useful and representative for distinguishing vegetation areas, as suggested by the authors. Since we train models on 22 datasets with a different number of classes, different training samples, and class distributions (as shown in Tables 1 and 2), we perform a hyperparameters search for each model and each dataset, to account for these variations. 
Namely, we search over different learning-rate values: \\(0.01\\), \\(0.001\\), and \\(0.0001\\). We use _ReduceLROnPlateau_ as a learning scheduler which reduces the learning rate when the loss has stopped improving. Models often benefit from reducing the learning rate by a factor once learning stagnates. This scheduler tracks the values of the loss measure, reducing the learning rate by a given factor when there is no improvement for a certain number of epochs (denoted as 'patience'). In our experiments, we track the value of the validation loss with patience set to 5 and a reduction factor set to 0.1 (the new learning rate will be \\(lr*factor\\)). The maximum number of epochs is set to \\(100\\). Additionally, we also apply early stop criteria if no improvements in the validation loss are observed over \\(10\\) epochs. We use fixed values for some of the hyperparameters, such as batch size, which we set to \\(128\\). For optimization, we use _RAdam optimizer_[94] without weight decay. RAdam is a variant of the standard Adam [95], with a mechanism that rectifies the variance from the adaptive learning rate. This, in turn, allows for an automated warm-up tailored to the particular dataset at hand. For each model architecture, we train two variants: (i) models trained entirely on a given dataset and (2) fine-tuned models previously trained on a different (and larger) image dataset. The former, which we refer to as models 'trained from scratch', refer to models trained only on the dataset at hand and initialized with random weights in the training procedure. The latter leverages transfer learning via model pre-training. The next section provides further details on how we use and fine-tune these pre-trained models. All models were trained on NVIDIA A100-PCIe GPUs with 40 GB of memory running CUDA version 11.5. We used the AiTLAS toolbox 3 to configure and run the experiments. All configuration files for each experiment are also available in our repository, along with the trained models. We believe this provides a standardized evaluation framework for EO image classification tasks. Footnote 3: [https://github.com/biasvariancelabs/aitlas](https://github.com/biasvariancelabs/aitlas) ### Transfer learning strategy In this study, we take the notion of _transfer learning_ as a strategy that can lead to performance improvements of vision models on image classification tasks [32], in particular in EO domains [96]. In our problem setting, transfer learning allows downstream, task-specific models to leverage learned representations from model architectures pre-trained on much larger image datasets. This, in turn, often leads to (fine-tuned) models with much better generalization power using fewer training data (and training iterations), which is especially useful for tasks that stem from smaller datasets. In the case of DL models for image classification, two strategies are often used for performing transfer learning: (1) fine-tuning the model weights only for the last classifier layer or (2) fine-tuning the model weights of all layers in the network. The former approach retains the values of all but the last layer's weights of the model from the pre-training, keeping them 'frozen' during fine-tuning. The latter, on the other hand, allows the weights to change throughout the entire network during fine-tuning. In practice, this can lead to better generalization [97; 98] and higher accuracy. In our experiments, we implement the latter approach. 
Starting with a pre-train model, we fine-tune each network entirely (the entire parameter set) for each specific dataset. Note that the choice of the pre-training dataset, and its relation to the domain of the downstream task, may also influence the predictive performance of the fine-tuned model [14]. Since here we are interested in a more general evaluation that considers 22 different datasets, we evaluate a standard approach for transfer learning using pre-trained model architectures on the ImageNet-1K [60] dataset (version V1). More specifically, we use implementations from the PyTorch vision catalog [67] for most models, except ViT and MLPMixer, for which we base the implementations on [68]. Furthermore, to evaluate the effect of the pre-training dataset on the performance of the downstream model, in a set of smaller-scale experiments, we benchmark architectures that have been pre-trained using different 'in-domain' EO datasets. In particular, we evaluate two strategies: (i) models pre-trained entirely on an EO dataset and (ii) models pre-trained on both ImageNet-1K and an EO dataset. The latter relates to a two-stage pre-training strategy, where models are first pre-trained on ImageNet-1K, followed by intermediate tuning on an in-domain EO dataset, and finally, fine-tuning them on the target EO dataset. We evaluate these pre-training strategies by comparing models from two architectures (ViT and DenseNet) using four in-domain EO datasets for pre-training. ### Evaluation measures Evaluating the performance of machine learning models is a non-trivial task that is specific to the learning task at hand and dependent on the general objectives of the model being learned. Different evaluation metrics capture different aspects of the models' behavior and their predictive capabilities measured on image samples not used for training. Since the goal of this study analyzing the predictive performance of different DL models across different datasets on multi-class and multi-label classification tasks - we examine the experimental work through the lens of evaluation measures most suitable for these two tasks. More specifically, for multi-class classification tasks, we report the following measures: Accuracy, Macro Precision, Weighted Precision, Macro Recall, Weighted Recall, Macro F1 score, and Weighted F1 score. Note that, since for these tasks, the micro-averaged measures such as F1 score, Micro Precision, and Micro Recall have values equal to accuracy, we do not report them. Note that, for image classification tasks, it is customary to report _top-n accuracy_ (typically \\(n\\) is set to 1 or 5) [60], where the score is computed based on the correct label being among the \\(n\\) most probable labels outputted by the model. In this paper, we report _top-1 accuracy_, denoted as 'Accuracy' unless stated otherwise. For multi-label classification tasks, we report Micro Precision, Macro Precision, Weighted Precision, Micro Recall, Macro Recall, Weighted Recall, Micro F1 score, Macro F1 score, Weighted F1 score, and mean average precision (mAP). Since all measures, but mAP, require setting a threshold on the predictions, we choose a threshold value of \\(0.5\\) for all models and settings. Further details and definitions of the evaluation measures used in the study are given in Appendix A. We also provide additional performance details in terms of confusion matrices of each experiment, allowing for a more detailed (per class/label) analysis of model performance (reported in Appendix D). 
## 4 Results We present the results of a large-scale study comparing different DL models for multi-class (MCC) and multi-label classification (MLC) tasks from 22 datasets. To this end, we evaluate models from 10 architectures: AlexNet, VGG16, ResNet50, ResNet152, DenseNet162, EfficientNetB0, ConvNeXt, Vision Transformer (ViT), Swin Transformer (SwinT) and MLPMixer. For each model architecture, we evaluate two variants: (i) models trained from scratch and (2) fine-tuned models previously trained on the ImageNet-1K dataset. We additionally assess the performance of models pre-trained using in-domain EO datasets. In the remainder, we outline and discuss the following:* Performance of models trained from scratch with respect to the two types of tasks * Benefits of pre-training models of different architectures and their effect in view of the dataset properties * Models' ability to generalize on unseen in-domain datasets * The choice of the pre-trained dataset and its effect on the performance of the downstream model * The 'performance vs. cost of model training' trade-off between the considered modeling approaches * Common issues that affect the models' predictive performance in the context of EO applications. Detailed results of each experiment, with additional performance measures, are given in Appendices B, C and D. ### Training models from scratch We begin by analyzing the performance of models trained from scratch, i.e., models initialized with random weights during training. Tables 4 and 5 present these results for the MCC and MLC tasks, respectively. Table 4 reports the accuracy (%) of the models learned from scratch for the 15 MCC datasets. It also reports the rank of the models, estimated based on their performance and averaged over the 15 datasets. The results show that, in general, convolutional architectures, especially the DenseNet, the EfficientNet, and the two ResNets, consistently perform well. This is even more evident for datasets such as PatternNet, RSI-CB256, and SAT6, where the DenseNet (and the other top-ranked models) lead to near-perfect results (accuracy greater than 99%). More specifically, DenseNet is the best-performing model in more than half of the tasks (9 out of 15) and achieves accuracy greater than 90% in 8 tasks. These performances are generally much lower for smaller datasets, such as WHU-RS19, Optimal31, UC Merced, SIRI-WHU, RSSCN7, and CLRS. However, the most challenging task is _So2SAT_, where EfficientNetB0 achieves the highest accuracy of 65.17%, while many of the models trail behind with a performance of 55-60%. These results are consistent with previous findings [35], suggesting clear signs of over-fitting, influenced by the quality and size of the images in the dataset. The two transformer architectures (SwinT and ViT), the MLPMixer, and the latest ConvNeXt models are ranked at the bottom (only better than AlexNet), with lower, but, in many cases, still practically comparable performance to the leading DenseNets. Next, we shift our focus to MLC tasks. Table 5 reports the mean average precision (%) of the models learned from scratch across the 7 MLC datasets. While DenseNets rank the best, they achieve the best result in only 1 out of 7 tasks. The second-ranked SwinT models achieve the best performance in 3 tasks with comparable performance in the remaining 4. Unlike the MCC tasks, the performance difference to other convolutional models (i.e., the two ResNets and the EfficientNetB0) here is much smaller. 
Moreover, most models were only able to achieve high performance (above 90%) on two tasks, _DFC15_ and _MLRSNet_, with DenseNet and ResnNet50 achieving the best results. However, this is an expected result, as MLC tasks are generally more challenging than MCC tasks. This can be attributed to two things in particular: First, in many cases, the semantic labels can be very similar, which makes many of the models struggle. Second, MLC datasets tend to have a more significant class/label imbalance, in contrast to MCC datasets' more uniform class distribution. In this context, the most challenging MLC tasks overall are _PlanetUAS_ and _BigEarthNet43_, where the best performing SwinT models achieve mAP od 65.229% and 67.487%, respectively. Finally, similar to the previous MCC analysis, ViT, MLPMixer, and ConvNeXt remain only better ranked than AlexNet. Nevertheless, their performance on these MLC tasks is much more competitive, for instance, in the case of ViT, which is the best model on the _UC Merced_ task. ### The benefits of model pre-training While training models from scratch leads to decent performance, in practice, leveraging pre-trained models can lead to significant performance improvements on image classification tasks [32], and in particular on tasks in EO domains [96]. This is also the general conclusion from our analysis. When using models that were first pre-trained on ImageNet-1K and then fine-tuned on the specific datasets, we found that: _Pre-trained models lead to substantial performance improvements compared to models trained from scratch_. Figure 2 illustrates this performance-improvement trend for different models across the 22 MCC and MLC tasks. We find that pre-training significantly improves the performance of all the evaluated models. 
Notably, we observe that the transformer models, based on either ViT or SwinT \\begin{table} \\begin{tabular}{l|c c c c c c c c c} \\hline \\hline Dataset \\textbackslash{}Model & AlexNet & VGG16 & ResNet50 & ResNet152 & DenseNet161 & EfficientNetB0 & ViT & MLPMixer & ConvNeXt & SwinT \\\\ \\hline WHU-RS19 & 66.169 & 68.657 & 79.602 & 80.597 & **80.597** & 75.622 & 74.627 & 69.652 & 72.139 & 78.607 \\\\ Optimal31 & 55.108 & 56.720 & 67.204 & 62.903 & **71.237** & 68.548 & 62.634 & 59.140 & 58.871 & 66.129 \\\\ UC Merced & 81.190 & 78.871 & 85.238 & 84.048 & **86.190** & 84.286 & 83.095 & 82.381 & 84.286 & 81.429 \\\\ SIRI-WHU & 83.750 & 84.792 & **88.958** & 88.750 & 86.667 & 86.042 & 86.250 & 82.5 & 84.167 & 85.833 \\\\ RSSCN7 & 80.536 & 81.607 & 82.679 & **87.321** & 83.929 & 86.071 & 83.214 & 83.036 & 82.5 \\\\ BCS & 89.410 & 89.410 & 89.236 & 88.542 & **90.799** & 85.417 & 87.847 & 86.285 & 84.375 & 89.236 \\\\ AID & 81.350 & 81.950 & 89.050 & 89.9 & **93.300** & 90.050 & 79.350 & 71.750 & 81.1 & 87.700 \\\\ CLLRs & 71.4 & 76.067 & 85.567 & 82.3 & **86.167** & 82.267 & 65.467 & 61.133 & 69.167 & 80 \\\\ RSI-CB256 & 97.354 & 89.828 & 98.828 & **99.152** & 99.131 & 99.111 & 98.121 & 98.424 & 98.444 & 99.091 \\\\ Euroast & 96.167 & 97.185 & 97 & 97.407 & 97.630 & **97.796** & 95.037 & 95.5 & 95.426 & 95.722 \\\\ PatternNet & 97.829 & 97.911 & 99.063 & 98.882 & **99.243** & 98.832 & 96.694 & 98.832 & 97.829 & 98.520 \\\\ RESISC45 & 82.159 & 83.889 & 92.333 & 90.683 & **93.460** & 91.365 & 81.016 & 69.413 & 85.937 & 88.730 \\\\ BSI46-WHU & 86.032 & 88.625 & 90.549 & 89.944 & 92.211 & 90.612 & 86.466 & 81.253 & 88.693 & 91.806 \\\\ So2Sat & 56.511 & 62.271 & 59.587 & 61.477 & 55.428 & **65.173** & 55.333 & 53.580 & 60.154 & 57.128 \\\\ SAT6 & 99.272 & 99.564 & **100** & 99.998 & 99.995 & 99.998 & 99.985 & 99.984 & 99.998 & 99.980 \\\\ \\hline _Avg. Rank_ & 8.13 & 6.60 & 3.27 & 3.47 & **2.00** & 3.33 & 7.33 & 8.07 & 6.60 & 5.47 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 4: Accuracy (%) of models trained from scratch on multi-class classification datasets. Bold indicates best performing model for a given dataset. We report the _average rank_ of a model (lower is better), ranked based on the performance and averaged across the 15 datasets. \\begin{table} \\begin{tabular}{l|c c c c c c c c c c} \\hline \\hline Dataset \\textbackslash{}Model & AlexNet & VGG16 & ResNet50 & ResNet152 & DenseNet161 & EfficientNetB0 & ViT & MLPMixer & ConvNeXt & SwinT \\\\ \\hline AID (mlc) & 68.780 & 69.206 & 70.867 & 69.646 & 71.218 & **72.889** & 65.581 & 64.235 & 65.595 & 69.548 \\\\ UC Merced (mlc) & 75.516 & 76.797 & 79.867 & 73.657 & 85.414 & 79.874 & **87.142** & 75.677 & 72.271 & 81.071 \\\\ DFC15 & 88.099 & 89.871 & 84.675 & 94.188 & **95.848** & 93.973 & 94.164 & 91.663 & 89.564 & 94.349 \\\\ Planet UAS & 60.282 & 60.682 & 64.192 & 64.956 & 64.738 & 63.868 & 59.414 & 58.550 & 61.277 & **65.229** \\\\ MLRNet & 90.850 & 91.524 & **95.259** & 93.982 & 94.745 & 94.395 & 87.250 & 85.281 & 90.710 & 94.099 \\\\ BigEarthNet 19 & 75.711 & 77.989 & 78.726 & 78.519 & 79.725 & 79.211 & 75.871 & 77.005 & 77.909 & **80.586** \\\\ BigEarthNet 43 & 56.082 & 58.969 & 64.343 & 62.736 & 63.390 & 62.173 & 57.410 & 58.772 & 60.472 & **67.487** \\\\ \\hline _Avg. Rank_ & 8.57 & 6.57 & 3.00 & 4.71 & **2.14** & 3.86 & 7.29 & 8.57 & 7.71 & 2.57 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 5: Mean average precision (mAP %) of models trained from scratch on multi-label classification datasets. 
Bold indicates best performing model for a given dataset. We report the _average rank_ of a model (lower is better), ranked based on the performance and averaged across the 7 datasets. architectures, benefit the most from pre-training, followed by MLPMixer and ConvNeXt models. This is a significant improvement over the models trained from scratch. These results, especially for the case of ViT, are consistent with previously reported findings [39; 30]. Tables 6 and 7 present the detailed results of these analyses for MCC and MLC tasks, respectively. Similar to the analyses in the previous section, we report model accuracy (%) in the case of MCC tasks and mean average precision (%) in the case of MLC tasks. We also report the rank of the models, averaged over the respective datasets. Consider \\begin{table} \\begin{tabular}{l|c c c c c c c c c} \\hline \\hline Dataset\\textbackslash{}Model & AlexNet & VGG16 & ResNet50 & ResNet152 & DenseNet161 & EfficientNetB0 & ViT & MLPMixer & ConvNeXt & SwirlT \\\\ \\hline WHIU-RS19 & 93.532 & 99.005 & 99.502 & 98.01 & **100** & 99.502 & 99.502 & 98.507 & 99.005 & 99.502 \\\\ Optimal31 & 80.914 & 88.71 & 92.204 & 92.473 & 94.355 & 91.667 & **94.624** & 92.742 & 93.011 & 92.473 \\\\ UC Merced & 92.143 & 95.476 & 98.571 & **98.810** & 98.333 & 98.571 & 98.333 & 98.333 & 97.857 & 98.571 \\\\ SIRI-WHU & 92.292 & 93.958 & 95 & **96.25** & 95.625 & 95 & 95.625 & 95.208 & **96.25** & 95.625 \\\\ RSSCN7 & 91.964 & 93.929 & 95 & 95 & 94.821 & 95.336 & **95.893** & 95.179 & 94.643 & 95.179 \\\\ BCS & 89.853 & 90.972 & 92.014 & 92.361 & 92.708 & 91.319 & 92.014 & 93.056 & 91.493 & **93.403** \\\\ AID & 92.9 & 96.1 & 96.55 & 97.2 & 97.25 & 96.25 & **97.750** & 96.7 & 96.95 & 97.4 \\\\ CLRS & 84.1 & 89.9 & 91.567 & 91.9 & 92.2 & 90.5 & **93.200** & 90.1 & 91.1 & 92.533 \\\\ RSI-CB256 & 99.354 & 99.051 & 99.677 & **99.859** & 99.737 & 99.717 & 99.758 & 99.657 & 99.596 & 99.677 \\\\ Eurosat & 97.574 & 98.148 & 98.833 & **99** & 98.889 & 98.907 & 98.722 & 98.741 & 98.778 & 98.944 \\\\ PatternNet & 99.161 & 99.424 & **99.737** & 99.49 & **99.377** & 99.539 & 99.655 & 99.704 & 99.671 & 99.688 \\\\ RSSIC45 & 90.492 & 93.905 & 96.46 & 95.54 & 96.508 & 94.873 & **97.079** & 95.952 & 96.27 & 95.87 \\\\ RSD46-WHU & 90.646 & 92.422 & 94.158 & 94.404 & **94.507** & 93.387 & 94.238 & 93.673 & 93.627 & 93.536 \\\\ So2Sat & 59.203 & 65.375 & 61.903 & 65.169 & 65.756 & 65.801 & **68.551** & 67.066 & 66.169 & 65.950 \\\\ SAT6 & 99.98 & 99.993 & **100** & **100** & **100** & 99.988 & 99.998 & 99.995 & 99.999 & 99.999 \\\\ \\hline _Avg. Rank_ & 9.93 & 8.67 & 4.67 & 3.80 & 3.13 & 5.87 & **3.07** & 5.33 & 5.47 & 3.20 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 6: Accuracy (%) of models pre-trained on ImageNet-1K on multi-class classification datasets. Bold indicates best performing model for a given dataset. We report the _average rank_ of a model (lower is better), ranked based on the performance and averaged across the 15 datasets. Figure 2: **Comparison of average performance improvement** of models from the 10 different architectures when trained from scratch (**red**) and employing pre-trained models (**blue**) across (**left**) MCC and (**right**) MLC datasets. Error bars indicate confidence interval of 68%. Models are ordered (worst to best) based on the average performance-rank of the pre-trained variants across all of the 22 datasets. Model pre-training leads to substantial performance improvements. 
ing MCC tasks (Table 6), most models achieve very good performance (accuracy over 90%) on 14 (out of 15) tasks, with (almost) perfect results in five of those. Notably, we observed significant performance improvements, compared to model counterparts trained from scratch, on smaller datasets (such as _WHU-RS19, Optimal31, UC Merced, SIRI-WHU, RSSCN7, and CLRS_), reaffirming the utility of transfer learning from large datasets in the context of EO image classification tasks. In terms of model architectures, the ViT ranks at the top among the model architectures, achieving the best performance in 6 out of 15 cases, followed by DenseNet161, SwinT, and ResNet152 with lower but comparable performance. Transformer architectures, and ViTs in particular, typically require large amounts of training data [39, 99] for learning robust, good performing models. As a result, using pre-trained models and fine-tuning them leads to substantial performance improvements, compared to training them from scratch. The performance of ViTs is further highlighted for the case of the challenging _So2SAT_ task, where the ViT model leads to an accuracy of 68.55%, in contrast to the next ranked DenseNet and SwinT with an accuracy of 65.75% and 65.95%, respectively. In this specific case of _So2SAT_, we observed that over-fitting remains an issue, even for pre-trained models. Our further investigation of the train/validation loss trends showed that, regardless of the model at hand, with the training loss decreasing, the validation errors increase almost instantly (after 1-2 epochs) - a typical trend observed in over-fitting models (see Figure D.46 that illustrates such behavior in a ViT model). This, fortunately, is not the case for the remaining tasks, where we observed a decent performance overall. Most models, especially the top half ranked, achieved stable and mostly comparable performance. The benefits of pre-training models also extend to MLC tasks (Table 7), in several cases with significant performance gains, compared to model counterparts trained from scratch. In particular, we found that pre-training can lead to minor improvements (1%-2%) on challenging tasks such as _PlanetUAS_ and _BigEarthNet43_ (mAP of 67.837% and 67.733% achieved by SwinTs); to more considerable improvements (up to 15%) in some cases such as _AID_ and _UCMerced_ (mAP of 82.298% and 96.83% obtained by ConvNeXt and SwinT, respectively). Also, in this case, we found that the transformer models benefited the most from pre-training. This is in line with studies[87, 40] that highlight the significance of pre-training to the generalization performance of these types of models. Notably, SwinT models ranked the best overall and achieved the best performance on 6 (out of the 7) tasks. They are followed by ViT and ConvNeXt, with comparable performance on most tasks. 
\\begin{table} \\begin{tabular}{l|c c c c c c c c c c} \\hline \\hline Dataset \\textbackslash{}Model & AlexNet & VGG16 & ResNet50 & ResNet152 & DenseNet161 & EfficientNetB0 & ViT & MLPMixer & ConvNeXt & SwinT \\\\ \\hline AID (mlc) & 75.906 & 79.893 & 80.758 & 80.942 & 81.708 & 78.002 & 81.539 & 80.879 & **82.298** & 82.254 \\\\ UC Merced (mlc) & 92.638 & 92.848 & 95.665 & 96.01 & 96.056 & 95.384 & 96.699 & 96.341 & 96.631 & **96.831** \\\\ DFCI5 & 94.057 & 96.566 & 97.662 & 97.6 & 97.529 & 96.787 & 97.617 & 97.941 & 97.994 & **98.111** \\\\ Planet UAS & 64.048 & 65.584 & 65.528 & 64.825 & 66.339 & 64.157 & 66.804 & 67.330 & 66.447 & **67.837** \\\\ MLRNet & 93.399 & 94.633 & 96.272 & 96.432 & 96.306 & 95.391 & 96.41 & 95.049 & 95.807 & **96.620** \\\\ BigEarthNet 19 & 77.147 & 78.418 & 79.983 & 79.776 & 79.686 & 80.221 & 77.31 & 77.288 & 80.283 & **81.384** \\\\ BigEarthNet 43 & 58.554 & 61.205 & 66.256 & 64.066 & 64.229 & 64.589 & 58.997 & 59.648 & 66.166 & **67.733** \\\\ \\hline _Avg. Rank_ & 10.00 & 7.86 & 5.14 & 5.43 & 5.00 & 6.86 & 4.86 & 5.71 & 3.00 & **1.14** \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 7: Mean average precision (mAP %) of models pre-trained on ImageNet-1K on multi-label classification datasets. Bold indicates best performing model for a given dataset. We report the _average rank_ of a model (lower is better), ranked based on the performance and averaged across the 7 datasets. ### Generalization capabilities to unseen data We further investigate the generalization ability of the trained models by evaluating their performance across datasets not used during training. In particular, we present results from two evaluation settings: (1) performance measured on a holdout set compiled of test images with shared labels and (2) an exhaustive cross-dataset evaluation between pairs of datasets with overlapping labels. First, we analyze the predictive performance of all models when applied to the same holdout set with 3216 images sampled from the test splits from seven MCC datasets (_RESISC45_, _UC Merced, CLRS, PatternNet, AID, RSI-CB256_ and _WHU-RS19_) using only images with labels shared among the seven datasets: 'Forest', 'Parking', 'River', 'Harbor', and 'Beach'. Figure C.1 (in Appendix C) presents further details of the distribution of images in the holdout set w.r.t. source datasets and labels. We evaluate and report the predictive performance of pre-trained models from all ten architectures. Note that here we only evaluate the models on the holdout set without additional fine-tuning. Table 8 reports the predictive performance assessed using accuracy (%) as an evaluation measure. The results show that ViT models are able to generalize well to unseen images from other in-domain datasets. Namely, in many cases, ViT models perform better than the competitors, further supporting previous results regarding their performance on MCC tasks. The performance of ViTs is followed by models based on more recent architectures, such as SwinT, MLPMixer, and ConvNeXt, which show worse but, in many cases, practically comparable performance. With respect to specific datasets, our experiments show that models fine-tuned on the _CLRS_ and _RESISC45_ datasets were able to achieve much better performance than the others (with ViT models achieving 92.6% in the case of _CLRS_). 
We hypothesize that such performance may be related to the particular properties of these datasets: Both _CLRS_ and _RESISC45_ are multi-resolution datasets (containing images at different spatial resolutions) with a large number of diverse labels. However, this is not the case for models fine-tuned on _PatternNet_ and _RSI-CB256_. While models trained and evaluated on these datasets separately show great performance ( 99% accuracy), this performance decreases significantly when evaluated on a holdout set (down to 66.79% and 65.2% for _RSI-CB256_ and _PatternNet_, respectively). These results, along with results from models learned from scratch (Table 4), are indicative of both datasets being easily learned, producing models that are not able to generalize well to other unseen images and classification tasks. In the second experimental setup, we employ the following pairwise evaluation scheme. We consider pairs of \\begin{table} \\begin{tabular}{l|r r r r r r r r r} \\hline \\hline Dataset \\textbackslash{} Model & AlexNet & VGG16 & ResNet50 & ResNet152 & DenseNet161 & EfficientNetB0 & ViT & MLPMixer & ConvNeXt & SwinT \\\\ \\hline RESISC45 & 66.853 & 78.514 & 81.063 & 84.08 & 84.111 & 77.985 & **86.007** & 82.121 & 84.422 & 83.706 \\\\ UC Merced & 63.371 & 67.04 & 76.657 & 73.01 & 74.254 & 74.44 & 75.995 & **79.478** & 75.902 & 72.326 \\\\ CLRS & 80.037 & 83.427 & 89.801 & 88.557 & 89.024 & 86.07 & **92.6** & 89.646 & 89.303 & 90.299 \\\\ PatternNet & 43.501 & 52.332 & 56.965 & 54.54 & 56.716 & 60.044 & 64.739 & 62.687 & 59.391 & **65.205** \\\\ AID & 71.393 & 69.714 & 79.384 & 80.1 & 66.169 & 77.892 & **83.862** & 77.954 & 79.851 & 79.789 \\\\ RSI-CB256 & 56.872 & 61.412 & 58.893 & 63.65 & 64.832 & 61.723 & 60.14 & **66.791** & 64.677 & 66.294 \\\\ WHU-RS19 & 61.101 & 62.624 & 71.953 & 73.321 & 72.388 & 68.284 & 72.917 & 74.036 & **74.876** & 71.144 \\\\ \\hline _Avg. Rank_ & 9.71 & 8.71 & 5.43 & 5.29 & 5.86 & 6.86 & **2.14** & 3.29 & 3.57 & 4.14 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 8: Accuracy (%) of models pre-trained on ImageNet-1K and fine-tuned on a specific source dataset and evaluated on the common test dataset with shared labels. Bold indicates best performing model for a given source dataset. _source_ and _target_ datasets: We take pre-trained models that have been fine-tuned on a _source_ dataset and evaluate them on test images from a _target_ dataset. Note that we only _evaluate_ the models on the target dataset without additional fine-tuning. We measure the performance only on a subset of images with shared labels between the source and target datasets. Therefore, for this experiment, we selected datasets with at least 0.15 IoU4 overlap of labels with at least one other dataset. This resulted in pairs from 12 (out of 15) MCC datasets and 4 (out of 7) MLC datasets, yielding 256 comparisons of pre-trained models from each of the ten considered architectures. Figures 3 and 4 present the performance of the best model for each MCC and MLC comparison in terms of accuracy (%) and mAP (%), respectively. They also provide a summary of the overlap between each pair of datasets in terms of IoU. Detailed results of all comparisons, per architecture, are given in Appendix C. Footnote 4: Intersection over Union (IoU), measures the overlap between two sets. 
The results support our earlier findings that the _transformer_-based models, in particular the ViT models (on MCC tasks) and the SwinT models (on MLC tasks), perform best when applied to other in-domain datasets. More specifically, when considering MCC tasks, the transformer-based models perform best in almost 2/3 of the comparisons, with the ViT models alone performing best in \\(\\sim\\)40% of them. ViTs are followed by SwinT, ConvNeXt, and MLPMixer models that, in many cases, showed practically comparable performance. We observed that convolutional models such as DenseNets, which exhibited good performance in our previous analyses (when evaluated on test images from the same dataset), generally lead to worse performance than models from more recent architectures. The dominance of the transformer-based models also extends to MLC tasks, with SwinT models producing the best overall performance, followed closely by ViT models. Note that these empirical results are also consistent with other studies [100; 99; 101], which highlight the robustness and good generalization capabilities of transformer-based models for general-domain images.

Figure 3: **Model generalization on multi-class classification tasks:** Comparison of the best performing pre-trained models (**left**) from the 10 different architectures (**color-coded**) in terms of **accuracy** (% acc. is indicated in each field); the models are fine-tuned on _source_ dataset and evaluated on images with common/overlapping labels in _target_ dataset. The heatmap (**right**) reports the label overlap between each pair of datasets, in terms of IoU. Transformer-based models, in particular the ViT models, perform the best when evaluated on other in-domain MCC datasets.

Figure 4: **Model generalization on multi-label classification tasks:** Comparison of the best performing pre-trained models (**left**) from the 10 different architectures (**color-coded**) in terms of mean average precision (% mAP is indicated in each field); the models are fine-tuned on _source_ dataset and evaluated on images with common/overlapping labels in _target_ dataset. The heatmap (**right**) reports the label overlap between each pair of datasets, in terms of IoU. In general, transformer-based models, in particular the SwinT models, lead to the best performance on MLC tasks.

### Domain-adaptive transfer learning

Having demonstrated the practical benefits and generalization capabilities of using pre-trained models, we further investigate the impact of the pre-training dataset on the performance of the downstream model. As we focus on particular domains of interest that leverage satellite imagery, we evaluate whether and how choosing more appropriate in-domain EO pre-training datasets (and strategies) affects downstream predictive performance. Our experimental setup investigates two different strategies for such in-domain pre-training: (i) in-domain only, where models are pre-trained entirely on an EO dataset; and (ii) two-stage pre-training, where models are pre-trained on a combination of ImageNet-1K and an EO dataset. The former strategy is analogous to the ImageNet-1K pre-training strategy but uses an EO dataset instead. In the second strategy, on the other hand, the models are first pre-trained on ImageNet-1K, followed by intermediate tuning on an in-domain EO dataset, before fine-tuning the models on the target EO dataset (a sketch of this procedure is given at the end of this subsection). Rather than evaluating all architectures, in this set of experiments, we evaluate two types of architectures: a ViT and a DenseNet161, as representatives of transformer and convolutional architectures that have shown overall good performance in our previous experiments.
Specifically, we analyze their performance on six tasks (3 MCC and 3 MLC) that proved somewhat challenging for these models: _CLRS_, _Optimal31_, _So2Sat_, _AID (mlc)_, _PlanetUAS_, and _BigEarthNet 19_. We select four different in-domain datasets for our pre-training: _SAT6_, _RSD46-WHU_, _MLRSNet_, and _RESISC45_, chosen based on the overall performance achieved in the previous analyses, their size (number of images), and their heterogeneity (in terms of semantic labels). Table 9 reports the results of these experiments.

\\begin{table} \\end{table} Table 9: Comparison of pre-training strategies for (a) Vision Transformers (ViT) and (b) DenseNet161 using 4 in-domain EO datasets (SAT6, RSD46-WHU, MLRSNet, RESISC45) and ImageNet-1K. We report their performance on 3 multi-class and 3 multi-label classification tasks, in terms of accuracy (% Acc.) and mean average precision (% mAP), respectively.

Our general conclusion regarding pre-training remains: Pre-trained models based entirely on EO datasets can still outperform their counterparts trained from scratch. However, we find that the choice of the pre-training dataset has a significant impact on the downstream performance and is not necessarily related to the quality of the pre-training dataset (measured as stand-alone performance) or solely to its size. For instance, we found that models pre-trained entirely using _SAT6_ (a dataset on which most models performed very well) performed much worse than the other pre-trained counterparts and, in some cases, even worse than models trained from scratch. This is not the case when pre-training models on _RSD46-WHU_, _MLRSNet_, and _RESISC45_, which led to better performance compared to their counterparts trained from scratch (for both ViT and DenseNet models), albeit worse than models pre-trained on ImageNet-1K. Importantly, we found that using a combined pre-training procedure, with ImageNet-1K followed by an in-domain dataset, can lead to improvements (up to 5%), especially when combined with the _MLRSNet_ or _RESISC45_ datasets. This is specifically the case for _Optimal31_ and _AID_ (_mlc_), where models from both ViT and DenseNet161 architectures were able to outperform their counterparts pre-trained only on ImageNet-1K. These results suggest that using datasets for intermediate fine-tuning that contain images at different resolutions with heterogeneous (but potentially semantically similar) labels, in addition to ImageNet-1K, can lead to performance improvements. However, in most cases, we observed neither practical nor significant benefits of using a combined pre-training procedure with an additional in-domain dataset that would justify the additional computational overhead for training such models.
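To make the two-stage procedure concrete, the following minimal sketch (not the actual training pipeline of the AiTLAS toolbox) fine-tunes an ImageNet-1K-initialized timm backbone first on an intermediate EO dataset and then on the target task; the random stand-in data, the class counts, and the `fit`/`dummy_loader` helpers are placeholders for the real dataloaders and training setup.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
import timm

def fit(model, loader, epochs=1, lr=1e-4, device="cpu"):
    """Minimal fine-tuning loop (stand-in for the benchmark's training setup)."""
    model.to(device).train()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(images.to(device)), labels.to(device))
            loss.backward()
            opt.step()
    return model

def dummy_loader(num_classes, n=8):
    """Random stand-in data; replace with the actual EO dataloaders."""
    x = torch.randn(n, 3, 224, 224)
    y = torch.randint(0, num_classes, (n,))
    return DataLoader(TensorDataset(x, y), batch_size=4)

# Stage 0: start from ImageNet-1K weights.
model = timm.create_model("vit_base_patch16_224", pretrained=True)

# Stage 1: intermediate tuning on an in-domain EO dataset (placeholder label count).
model.reset_classifier(num_classes=46)
model = fit(model, dummy_loader(46))

# Stage 2: fine-tuning on the target EO task (placeholder label count).
model.reset_classifier(num_classes=25)
model = fit(model, dummy_loader(25))
```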
### The 'performance vs. training cost' trade-off

Having established the performance of our evaluated models and demonstrated the clear benefits of using pre-trained models, we focus here on another line of comparison: the cost of model training. Recall from Section 2.2, and in particular Table 3, that we study model architectures that differ significantly in the number of learnable parameters. Typically, larger models require more computing resources and much more training time than smaller models. In our experimental setup, we train all models on the same computing infrastructure, under the same conditions, and with the same training/evaluation setup (in terms of hyperparameters and data partitioning). Therefore, we can directly analyze the 'performance vs. training cost' (in terms of total training time) trade-off for each model variant from the ten different architectures (either pre-trained or trained from scratch) across the 22 datasets. This way, we can explicitly measure the benefits of each model and make further modeling decisions based on the performance of the models and the 'cost' of training them.

Figure 5 illustrates the trade-off for the top-3 best performing model architectures overall (as shown in Tables 6 and 7), DenseNet, ViT, and SwinT, applied to the 22 MCC and MLC tasks. While the performance analyses showed many similarities between these models, the difference between them in terms of training times is much more pronounced. In general, ViT requires less training time than both DenseNets and SwinTs. DenseNets have nearly a quarter of the number of parameters of ViT but achieve almost half the FLOPS (floating-point operations per second). For MCC tasks, ViT models generally result in comparable/better predictive performance than DenseNet models and, in many cases, require half the training time. SwinT models, on the other hand, are much more demanding. In almost all cases, training SwinT models takes up to 2-3 times longer than training ViTs and DenseNets. This is also true for MLC tasks, where SwinT models deliver the best performance but at the cost of significant training time. These findings further support previous results [87], which point out that Swin transformers (the 'small' variant) have slower training and inference performance than Vision Transformers, which have significantly more parameters but achieve considerably more FLOPS. For an extended illustration of these trade-offs, covering all 10 model architectures, see Figure B.1 in the Appendix B.

Figure 5: **Performance vs. total training time** comparison of the overall top-3 performing _pre-trained_ model architectures, ViT, DenseNet161 and SwinT (denoted with different markers); evaluated on **(left)** MCC and **(right)** MLC datasets (color-coded). Performance is reported as accuracy (%) and mean average precision (mAP %) for MCC and MLC tasks, respectively. Note the log scale of the total training time (seconds).

We can further analyze these training-time trends for each model and dataset, as presented in Figure 6. In particular, Figure 6 illustrates the training (fine-tuning) times of each pre-trained model as a fraction of the cumulative training time of all models summed across all (a) multi-class and (b) multi-label datasets. This shows that, in many cases, ViT models can be trained almost twice as fast as the models of the other best-performing architectures, such as DenseNet and SwinT.
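As a rough illustration of how such parameter and per-step cost comparisons can be produced, the sketch below counts parameters and times a few training steps for two timm models; the model identifiers, batch size, and number of steps are illustrative choices rather than the exact measurement protocol used in the study.

```python
import time
import torch
import timm

def profile(model_name, batch_size=8, steps=5, device="cpu"):
    """Report parameter count and average time per forward/backward step."""
    model = timm.create_model(model_name, pretrained=False, num_classes=10).to(device).train()
    n_params = sum(p.numel() for p in model.parameters())
    x = torch.randn(batch_size, 3, 224, 224, device=device)
    y = torch.randint(0, 10, (batch_size,), device=device)
    loss_fn = torch.nn.CrossEntropyLoss()
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    start = time.time()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return n_params, (time.time() - start) / steps

for name in ["densenet161", "vit_base_patch16_224"]:  # illustrative model identifiers
    params, step_time = profile(name)
    print(f"{name}: {params / 1e6:.1f}M params, {step_time:.2f}s per step")
```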
The training cost of ViT models is similar to that of EfficientNetB0, ConvNeXt, and MLPMixer, which are efficient but generally perform worse on these tasks. We can also observe that the SwinT model variants are the slowest to train on all 22 tasks compared to the other architectures. This is also evident when comparing the time for each epoch (see Appendix B), with SwinT models taking twice as long to train as DenseNet161 models, the next slowest architecture. We also observed that fine-tuning pre-trained models almost halves the training time compared to training models from scratch, even though they take about the same time per epoch. Note, however, that we have not accounted for the time required to pre-train each model, which certainly increases the overall training times significantly. This is generally expected behavior but may help in the design and planning of DL pipelines for similar EO applications. Additional results presenting models' training costs can be found in Appendix B.

Figure 6: **Total training time of pre-trained models** for each of the (a) MCC and (b) MLC datasets. The training time of each model architecture (denoted with different colors) is depicted as a fraction (%) of the cumulative training time for each dataset. Furthermore, (c) and (d) illustrate the average time per epoch of each model variant on (c) MCC and (d) MLC tasks, comparing the (red) pre-trained model variants (from (a) and (b)) to their counterparts (blue) trained from scratch.

### A closer look at several tasks

To better understand the performance of the learned models on the various MCC and MLC tasks, we examine the model decisions in detail, focusing on datasets (and classes) where the models tend to perform poorly. We hypothesize that these cases are related to several overarching issues that often affect the performance of the models:

* High inter-class similarity between images from different classes;
* Many EO image-classification tasks that are formulated as MCC are, in fact, MLC problems: in many cases an image has a single label, but more than one class/concept is present;
* Presence of abstract/complex/compound classes within the datasets, which makes it difficult to detect useful and consistent patterns;
* Absence of additional spatio-temporal data that captures the dynamics of land-cover changes.

To investigate these issues, we simultaneously analyze the models' confusion matrices and visualizations of localized activation maps that highlight the distinguishing parts of the image responsible for the model decision. To generate such visualizations, we use Gradient-weighted Class Activation Mapping (GradCAM) [102], which is typically used to diagnose model predictions for various deep learning architectures [103], including Earth Observation applications [30, 104]. GradCAM uses the gradients of the target class flowing into the last convolutional layer and produces a coarse localization map highlighting the regions of the image that are important for predicting that class. In this set of analyses, we select several cases from the considered datasets, especially those containing classes/land types for which the models perform poorly (based on the various evaluation scores, as reported in Appendix D), and calculate/visualize the corresponding GradCAM maps.
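For reference, a minimal from-scratch version of the GradCAM computation (the study itself relies on the pytorch-grad-cam library [103]) can be sketched with forward and backward hooks on the last convolutional block; the randomly initialized ResNet-50 and random input below are stand-ins for a fine-tuned model and a real image.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

model = resnet50(weights=None).eval()   # placeholder weights; use a fine-tuned model in practice
target_layer = model.layer4[-1]         # last convolutional block

activations, gradients = {}, {}
target_layer.register_forward_hook(lambda m, i, o: activations.update(value=o))
target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(value=go[0]))

def grad_cam(image, class_idx):
    """Coarse localization map for `class_idx`, upsampled to the input size."""
    logits = model(image)
    model.zero_grad()
    logits[0, class_idx].backward()
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)   # global-average-pooled gradients
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

heatmap = grad_cam(torch.randn(1, 3, 224, 224), class_idx=0)      # random stand-in image
```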
We start by investigating the inter-class similarities between images assigned to different classes. This is a common problem in practice in many similar EO applications, caused by the presence of visually similar (often indistinguishable) objects in an image. Figure 7 illustrates this problem using GradCAM activation maps of some sample images with their respective classes/labels from the different datasets. Our qualitative analyses show that the predictive models are generally able to focus on the correct parts of the images (with distinguishable patterns) but cannot identify the correct object. This is the case, for example, when distinguishing between a 'church' and a 'palace' or a 'terrace' and a 'rectangular farmland', which are visually very similar but semantically different. As expected, the models also struggle with cases where the image labels are also semantically similar, such as in the distinction between 'railway' and 'railway station' or 'river' and 'harbor', which even a human expert would have difficulty classifying. Similar cases can be further analyzed by examining the confusion matrices. For example, the most challenging dataset, _So2Sat_, contains many such examples (see Figure D.45 in Appendix D.15), which are the reason for the poor overall performance of the models.

Figure 7: GradCAM visualizations calculated for example images with high inter-class similarity. The input images with their ground-truth label are shown in the first row, while the corresponding activation maps with predicted labels are shown below in the second row. The datasets for the images and the models used to predict the labels are as follows, from left to right: (1) Resisc45, ViT model (2) Resisc45, ViT model (3) UC Merced, ResNet152 model (4) CLRS, ViT model and (5) SIRI-WHU, ResNet152 model.

The second issue that we highlight is related to the fact that, in many cases, multiple land-cover classes/concepts are present in a single image, but the image itself is assigned to only one class - making it a multi-class instead of a multi-label problem. Figure 8 shows several activation maps illustrating this issue. For example, consider the image-pair on the far left: The image is labeled only as 'river', but we can also see an 'overpass' (a label also present in the dataset) that causes the model to make an 'incorrect' prediction, albeit with a probability of 0.54. Similar situations can be observed for the remaining images: Objects from other classes that are substantially present in an image are detected, thus confusing the models. This, however, shows that the models have been trained well and are performing as expected, but instead of outputting multiple labels (as in a typical MLC setting), they have to choose a single one - which can lead to errors and lower performance.

Figure 8: GradCAM visualizations that illustrate the MCC/MLC issues. The input images with their ground-truth label are shown in the first row, while the corresponding activation maps with predicted labels are shown below in the second row. The datasets for the images and the models used to predict the labels are as follows, from left to right: (1) Resisc45, ViT model (2) Resisc45, ViT model (3) UC Merced, ResNet152 model (4) CLRS, ViT model and (5) SIRI-WHU, ResNet152 model.

To evaluate the third issue, which relates to complex/compound classes, we examine samples with lower F1 scores. Complex/compound classes refer to classes that consist of objects with different physical properties and spatial distribution, making it very difficult to detect useful and consistent patterns. This is also true for abstract classes, where the semantic gap (in terms of labels) is challenging to overcome, which is typically the case when the features learned from the models differ from human interpretation. Figure 9 illustrates these problems using the respective activation maps. In particular, in the case of _AID_ (the two pairs of images on the far left), the model confuses 'school' with 'commercial', the latter being quite vague, for which the semantic gap is not easily dealt with. In the second case, the model has difficulty distinguishing between 'park' and 'resort' (which is also evident in the confusion matrix in Figure D.9 in Appendix D.3). This could be because these classes consist of common objects but have different spatial distributions. Similar problems can be seen in the cases of _CLRS_ and _SIRI-WHU_ (the last three image pairs), where labels such as 'industrial' or 'meadow' are confused with labels such as 'commercial/residential/park', which are visually and semantically almost indistinguishable from the ground truth. Similar problems exist in MLC datasets, such as _BigEarthNet_, that contain multiple complex/compound classes. From the evaluation details (see Appendix D.17), we can see that complex/compound classes such as 'Complex cultivation patterns', 'Land principally occupied by agriculture, with significant areas of natural vegetation', and 'Industrial and commercial units' have lower F1 scores.

Figure 9: GradCAM visualizations for images with complex/compound classes. The input images with their ground-truth label are shown in the first row, while the corresponding activation maps with predicted labels are shown below in the second row. The datasets for the images and the models used to predict the labels are as follows, from left to right: (1) AID, ViT model (2) AID, ViT model (3) CLRS, ViT model (4) SIRI-WHU, ResNet152 model and (5) SIRI-WHU, ResNet152 model.

Finally, our analysis shows that for some tasks (such as _So2Sat_), one needs additional and more sophisticated (spatio-temporal) data to improve the performance of the predictive models. For example, the _So2Sat_ dataset is very challenging, not only because of the high inter-class similarity but also because of the relatively low spatial resolution of the images. Images labeled 'Open high rise' or 'Compact low rise' are often confused with 'Open middle rise' or 'Lightweight low rise', respectively, which is hardly surprising without additional data that can capture such subtle and often subjective differences. Moreover, in the case of _BigEarthNet_, classes such as 'Permanent crops', 'Coastal wetlands', and 'Natural grassland and sparsely vegetated areas' require additional spatio-temporal data that capture the dynamics caused by frequent land cover changes, making the process of classification more reliable and thus more accurate.

## 5 Conclusions

We present a systematic review and evaluation of several modern DL architectures applied in Earth Observation. Specifically, we introduce _AiTLAS: Benchmark Arena_ - an _open-source EO benchmark suite_ - and demonstrate its utility with a comprehensive comparative analysis of models from ten different state-of-the-art DL architectures, comparing them on a variety of multi-class and multi-label image classification tasks from 22 datasets. We compare models trained from scratch and pre-trained models under the same conditions and with the same hardware. We evaluate more than 500 models with different architectures and learning paradigms across tasks from 22 datasets with different sizes and properties. To our knowledge, the evaluation of these different setups (in terms of machine learning tasks, model setups, model architectures, and datasets) makes this the largest and most comprehensive empirical study of deep learning methods applied to EO datasets to date.
All of the important details about the study design, the results, and the trained models are freely available. This will contribute to more systematic and rigorous experiments in future work and, more importantly, will enable better usability and faster development of novel approaches. We believe that both this study and the associated repository can serve as a starting point and a guiding design principle for evaluating and documenting machine learning approaches in the different domains of EO. More importantly, we hope that with further involvement from the community, AiTLAS: Benchmark Arena can become a reference point for further studies in this highly active research area. More broadly, we believe that this work, along with the developed resources, will strongly impact the AI and EO research communities. First, such ready-to-use resources containing trained models, clear experimental designs, and detailed results will facilitate better adoption of sophisticated modeling approaches in the EO community - bringing the EO and AI communities closer together. Second, it demonstrates the FAIRfication process of AI4EO resources, i.e., making resources adhere to the FAIR principles (Findable, Accessible, Interoperable, and Reusable [105]). Finally, it contributes to the 'Green AI' initiative by saving additional computational overhead. Since all experimental details, especially the trained models, are publicly available, other experts and researchers can compare, reproduce, and reuse these resources, reducing the need to (repeatedly) run unnecessary experiments.

## Reproducibility

All the necessary details, in terms of the trained models, model parameters and implementations, as well as details on all of the used datasets and their preprocessed versions, are available at [https://github.com/biasvariancelabs/aitlas-arena](https://github.com/biasvariancelabs/aitlas-arena). All the models were trained/fine-tuned on NVIDIA A100-PCIE-40GB GPUs, running CUDA Version 11.5 (www.nvidia.com/en-gb/data-center/a100/). Note that we do not host the datasets. To obtain them, please refer to each of the respective studies (referenced in Tables 1 and 2) or follow the links provided in our repository. The study was performed using the AiTLAS Toolbox [37], a library for exploratory and predictive analysis of satellite imagery pertaining to different remote-sensing tasks, available at [https://aitlas.bvlabs.ai](https://aitlas.bvlabs.ai).

## Acknowledgements

We acknowledge the support of the European Space Agency (ESA) through the activity AiTLAS - AI4EO rapid prototyping environment. We thank Sofija Dimitrovska for her thoughtful feedback.

## References

* (1) A. Khan, A. Sohail, U. Zahoora, A. S. Qureshi, A survey of the recent architectures of deep convolutional neural networks, Artificial Intelligence Review 53 (2020) 5455-5516. * (2) S. Khan, M. Naseer, M. Hayat, S. W. Zamir, F. S. Khan, M. Shah, Transformers in vision: A survey, ACM Comput. Surv. (2021). doi:10.1145/3505244. * (4) D. Tuia, F. Ratte, F. Pacifici, M. F. Kanevski, W. J.
Emery, Active learning methods for remote sensing image classification, IEEE Transactions on Geoscience and Remote Sensing 47 (2009) 2218-2232. doi:10.1109/TGRS.2008.2010404. * (5) M. Li, S. Zang, B. Zhang, S. Li, C. Wu, A review of remote sensing image classification techniques: the role of spatio-contextual information, European Journal of Remote Sensing 47 (2014) 389-411. doi:10.5721/EuJRS20144723. * (6) T. Blaschke, Object based image analysis for remote sensing, Isprs Journal of Photogrammetry and Remote Sensing 65 (2010) 2-16. * Zeitschrift fur Geoinformationssysteme, 2001. * (8) G. Cheng, J. Han, X. Lu, Remote sensing image scene classification: Benchmark and state of the art, Proceedings of the IEEE 105 (2017) 1865-1883. doi:10.1109/JPROC.2017.2675998. * (9) Y. Yang, S. Newsam, Bag-of-visual-words and spatial extensions for land-use classification, in: Proceedings of the 18th SIGSPATIAL International Conference on Advances in Geographic Information Systems, Association for Computing Machinery, 2010, p. 270-279. * (10) D. Marmanis, M. Datcu, T. Esch, U. Stilla, Deep learning earth observation classification using imagenet pretrained networks, IEEE Geoscience and Remote Sensing Letters 13 (2016) 105-109. doi:10.1109/LGRS.2015.2499239. * (11) H. Chen, V. Chandrasekar, H. Tan, R. Cifelli, Rainfall estimation from ground radar and tmm precipitation radar using hybrid deep neural networks, Geophysical Research Letters 46 (2019) 10669-10678. doi:[https://doi.org/10.1029/2019GL084771](https://doi.org/10.1029/2019GL084771). * (12) J. Castillo-Navarro, B. Le Saux, A. Boulch, S. Lefevre, Energy-based models in earth observation: From generation to semisupervised learning, IEEE Transactions on Geoscience and Remote Sensing 60 (2022) 1-11. doi:10.1109/TGRS.2021.3126428. * (13) Y. Wang, C. M. Albrecht, N. A. A. Braham, L. Mou, X. X. Zhu, Self-supervised learning in remote sensing: A review, CoRR (2022). doi:10.48550/ARXIV.2206.13188. * 2020 IEEE International Geoscience and Remote Sensing Symposium, 2020, pp. 6730-6733. doi:10.1109/IGARSS39084.2020.9324501. * (15) D. Ienco, R. Gaetano, C. Dupaquier, P. Maurel, Land cover classification via multitemporal spatial data by deep recurrent neural networks, IEEE Geoscience and Remote Sensing Letters 14 (2017) 1685-1689. doi:10.1109/LGRS.2017.2728698. * (16) A. Chlingaryan, S. Sukkarieh, B. Whelan, Machine learning approaches for crop yield prediction and nitrogen status estimation in precision agriculture: A review, Computers and Electronics in Agriculture 151 (2018) 61-69. doi:[https://doi.org/10.1016/j.compag.2018.05.012](https://doi.org/10.1016/j.compag.2018.05.012). * (17) M. D. Johnson, W. W. Hsieh, A. J. Cannon, A. Davidson, F. Bedard, Crop yield forecasting on the canadian prariies by remotely sensed vegetation indices and machine learning methods, Agricultural and Forest Meteorology 218-219 (2016) 74-84. doi:[https://doi.org/10.1016/j.agrformet.2015.11.003](https://doi.org/10.1016/j.agrformet.2015.11.003). * (18) J. Xu, J. Yang, X. Xiong, H. Li, J. Huang, K. Ting, Y. Ying, T. Lin, Towards interpreting multi-temporal deep learning models in crop mapping, Remote Sensing of Environment 264 (2021) 112599. doi:[https://doi.org/10.1016/j.rse.2021.112599](https://doi.org/10.1016/j.rse.2021.112599). * (19) B. Ayhan, C. Kwan, B. Budavari, L. Kwan, Y. Lu, D. Perez, J. Li, D. Skarlatos, M. Vlachos, Vegetation detection using deep learning and conventional methods, Remote Sensing 12 (2020). doi:10.3390/rs12152502. * (20) Y.-H. Jo, D.-W. Kim, H. 
Kim, Chlorophyll concentration derived from microwave remote sensing measurements using artificial neural network algorithm, Journal of Marine Science and Technology 26 (2018). doi:10.6119/JNST.2018.02_(1).0004. * (21) H. Shirmand, E. Farahbakhsh, R. D. Muller, R. Chandra, A review of machine learning in processing remote sensing data for mineral exploration, Remote Sensing of Environment 268 (2022) 112750. doi:[https://doi.org/10.1016/j.rse.2021.112750](https://doi.org/10.1016/j.rse.2021.112750). * (22) X. Zhang, Q. Zhang, G. Zhang, Z. Nie, Z. Gui, H. Que, A novel hybrid data-driven model for daily land surface temperature forecasting using long short-term memory neural network based on ensemble empirical mode decomposition, International Journal of Environmental Research and Public Health 15 (2018). * 2289. doi:10.1175/JHM-D-19-0110.1. * (24) N. Longbotham, C. Chaapel, L. Bleiler, C. Padwick, W. J. Emery, F. Pacifici, Very high resolution multiangle urban classification analysis, IEEE Transactions on Geoscience and Remote Sensing 50 (2012) 1155-1170. doi:10.1109/TGRS.2011.2165548. * (25) Z. Lv, T. Liu, J. A. Benediktsson, N. Falco, Land cover change detection techniques: Very-high-resolution optical images: A review, IEEE Geoscience and Remote Sensing Magazine 10 (2022) 44-63. doi:10.1109/MGRS.2021.3088865. * (26) B. Huang, B. Zhao, Y. Song, Urban land-use mapping using a deep convolutional neural network with high spatial resolution multispectral remote sensing imagery, Remote Sensing of Environment 214 (2018) 73-86. doi:[https://doi.org/10.1016/j.rse.2018.04.050](https://doi.org/10.1016/j.rse.2018.04.050). * (27) M. Somrak, S. Dzeroski, Z. Kokalj, Learning to classify structures in als-derived visualizations of ancient maya settlements with CNN, Remote. Sens. 12 (2020) 2215. doi:10.3390/rs12142215. * (28) G. Cheng, X. Xie, J. Han, L. Guo, G.-S. Xia, Remote sensing image scene classification meets deep learning: Challenges, methods, benchmarks, and opportunities, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 13 (2020) 3735-3756. doi:10.1109/JSTARS.2020.3005403. * (29) R. Schneider, M. Bonavita, A. Geer, R. Arcucci, P. Dueben, C. Vitolo, B. Le Saux, B. Demir, P.-P. Mathieu, Esa-ecmwf report on recent progress and research directions in machine learning for earth system observation and prediction, npj Climate and Atmospheric Science 5 (2022) 51. doi:10.1038/s41612-022-00269-z. * (30) I. Papoutsis, N.-I. Bountos, A. Zavras, D. Michail, C. Tryfonopoulos, Efficient deep learning models for land cover image classification, arXiv:2111.09451 (2022). * (31) G.-S. Xia, J. Hu, F. Hu, B. Shi, X. Bai, Y. Zhong, L. Zhang, X. Lu, AID: A benchmark data set for performance evaluation of aerial scene classification, IEEE Transactions on Geoscience and Remote Sensing 55 (2017) 3965-3981. * (32) X. Zhai, J. Puigcerver, A. Kolesnikov, P. Ruyssen, C. Riquelme, M. Lucic, J. Djolonga, A. S. Pinto, M. Neumann, A. Dosovitskiy, L. Beyer, O. Bachem, M. Tschannen, M. Michalski, O. Bousquet, S. Gelly, N. Houlsby, A large-scale study of representation learning with the visual task adaptation benchmark, arXiv:1910.04867 (2019). * (33) L. Zhang, L. Zhang, B. Du, Deep learning for remote sensing data: A technical tutorial on the state of the art, IEEE Geoscience and Remote Sensing Magazine 4 (2016) 22-40. doi:10.1109/MGRS.2016.2540798. * (34) X. X. Zhu, D. Tuia, L. Mou, G.-S. Xia, L. Zhang, F. Xu, F. 
Fraundorfer, Deep learning in remote sensing: A comprehensive review and list of resources, IEEE Geoscience and Remote Sensing Magazine 5 (2017) 8-36. doi:10.1109/MGRS.2017.2762307. * (35) A. J. Stewart, C. Robinson, I. A. Corley, A. Ortiz, J. M. L. Ferres, A. Banerjee, Torchgeo: deep learning with geospatial data, CoRR abs/2111.08872 (2021). arXiv:2111.08872. * (36) G. Sumbul, A. de Wall, T. Kreuziger, F. Marcelino, H. Costa, P. Benevides, M. Caetano, B. Demir, V. Markl, BigEarthNet-MM: A large-scale, multimodal, multilabel benchmark archive for remote sensing image classification and retrieval [software and data sets], IEEE Geoscience and Remote Sensing Magazine 9 (2021) 174-180. * (37) I. Dimitrovski, I. Kitanovski, P. Panov, N. Simidjievski, D. Kocev, Aitlas: Artificial intelligence toolbox for earth observation, CoRR abs/2201.08789 (2022). arXiv:2201.08789. * (38) A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, S. Chintala, Pytorch: An imperative style, high-performance deep learning library, in: Advances in Neural Information Processing Systems 32, Curran Associates, Inc., 2019, pp. 8024-8035. * (39) A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, et al., An image is worth 16x16 words: Transformers for image recognition at scale, arXiv preprint arXiv:2010.11929 (2020). * (40) Z. Liu, H. Hu, Y. Lin, Z. Yao, Z. Xie, Y. Wei, J. Ning, Y. Cao, Z. Zhang, L. Dong, F. Wei, B. Guo, Swin transformer v2: Scaling up capacity and resolution, in: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 11999-12009. doi:10.1109/CVPR52688.2022.01170. * (41) G. Tsoumakas, I. Katakis, Multi-label classification: An overview, International Journal of Data Warehousing and Mining 3 (2009) 1-13. * ISPRS Archives 38 (2010). * (43) P. Helber, B. Bischke, A. Dengel, D. Borth, Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (2019). * (44) W. Zhou, S. Newsam, C. Li, Z. Shao, Patternnet: A benchmark dataset for performance evaluation of remote sensing image retrieval, ISPRS journal of photogrammetry and remote sensing 145 (2018) 197-209. * (45) H. Li, X. Dou, C. Tao, Z. Wu, J. Chen, J. Peng, M. Deng, L. Zhao, Rsi-cb: A large-scale remote sensing image classification benchmark using crowdsourced data, Sensors 20 (2020) 1594. doi:doi.org/10.3390/s20061594. * (46) Q. Zou, L. Ni, T. Zhang, Q. Wang, Deep learning based feature selection for remote sensing scene classification, IEEE Geoscience and Remote Sensing Letters 12 (2015) 2321-2325. doi:10.1109/LGRS.2015.2475299. * (47) S. Basu, S. Ganguly, S. Mukhopadhyay, R. DiBiano, M. Karki, R. Nemani, Deepsat: A learning framework for satellite imagery, in: Proceedings of the 23rd SIGSPATIAL International Conference on Advances in Geographic Information Systems, SIGSPATIAL '15, Association for Computing Machinery, 2015. * (48) Q. Zhu, Y. Zhong, B. Zhao, G.-S. Xia, L. Zhang, Bag-of-visual-words scene classifier with local and global features for high spatial resolution remote sensing imagery, IEEE Geoscience and Remote Sensing Letters 13 (2016) 747-751. * (49) H. Li, H. Jiang, X. Gu, J. Peng, W. Li, L. Hong, C. 
Tao, Clrs: Continual learning benchmark for remote sensing image scene classification, Sensors 20 (2020). * (50) Y. Long, Y. Gong, Z. Xiao, Q. Liu, Accurate object localization in remote sensing images based on convolutional neural networks, IEEE Transactions on Geoscience and Remote Sensing 55 (2017) 2486-2498. * (51) Q. Wang, S. Liu, J. Chanussot, X. Li, Scene classification with recurrent attention of vhr remote sensing images, IEEE Transactions on Geoscience and Remote Sensing 57 (2019) 1155-1167. * (52) O. A. Penatti, K. Nogueira, J. A. Dos Santos, Do deep features generalize from everyday objects to remote sensing and aerial scenes domains?, in: Proceedings of the IEEE conference on computer vision and pattern recognition workshops, 2015, pp. 44-51. * (53) X. X. Zhu, J. Hu, C. Qiu, Y. Shi, J. Kang, L. Mou, H. Bagheri, M. Haberle, Y. Hua, R. Huang, L. Hughes, H. Li, Y. Sun, G. Zhang, S. Han, M. Schmitt, Y. Wang, So2sat lcz42: A benchmark data set for the classification of global local climate zones [software and data sets], IEEE Geoscience and Remote Sensing Magazine 8 (2020) 76-89. * (54) B. Chaudhuri, B. Demir, S. Chaudhuri, L. Bruzzone, Multilabel remote sensing image retrieval using a semisupervised graph-theoretic method, IEEE Transactions on Geoscience and Remote Sensing 56 (2018) 1144-1158. * (55) X. Qi, P. Zhu, Y. Wang, L. Zhang, J. Peng, M. Wu, J. Chen, X. Zhao, N. Zang, P. T. Mathiopoulos, MIrsnet: A multi-label high spatial resolution remote sensing dataset for semantic scene understanding, ISPRS Journal of Photogrammetry and Remote Sensing 169 (2020) 337-350. * (56) Y. Hua, L. Mou, X. X. Zhu, Recurrently exploring class-wise attention in a hybrid convolutional and bidirectional LSTM network for multi-label aerial image classification, ISPRS Journal of Photogrammetry and Remote Sensing 149 (2019) 188-199. * (57) Y. Hua, L. Mou, X. X. Zhu, Relation network for multilabel aerial image classification, IEEE Transactions on Geoscience and Remote Sensing 58 (2020) 4558-4572. * (58) Kaggle, Planet: Understanding the amazon from space, 2022. URL: [https://www.kaggle.com/competitions/](https://www.kaggle.com/competitions/)planet-understanding-the-amazon-from-space, last accessed 21 May 2022. * 2019 IEEE International Geoscience and Remote Sensing Symposium (2019) 5901-5904. * (60) A. Krizhevsky, I. Sutskever, G. E. Hinton, Imagenet classification with deep convolutional neural networks, Advances in neural information processing systems 25 (2012) 1097-1105. * (61) K. Simonyan, A. Zisserman, Very deep convolutional networks for large-scale image recognition, arXiv preprint arXiv:1409.1556 (2014). * (62) K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in: Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770-778. * (63) G. Huang, Z. Liu, L. Van Der Maaten, K. Q. Weinberger, Densely connected convolutional networks, in: Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 4700-4708. * (64) M. Tan, Q. Le, Efficientnet: Rethinking model scaling for convolutional neural networks, in: International conference on machine learning, PMLR, 2019, pp. 6105-6114. * (65) Z. Liu, H. Mao, C.-Y. Wu, C. Feichtenhofer, T. Darrell, S. Xie, A convnet for the 2020s, arXiv preprint arXiv:2201.03545 (2022). * (66) I. O. Tolstikhin, N. Houlsby, A. Kolesnikov, L. Beyer, X. Zhai, T. Unterthiner, J. Yung, A. Steiner, D. Keysers, J. 
Uszkoreit, et al., Mlp-mixer: An all-mlp architecture for vision, Advances in Neural Information Processing Systems 34 (2021). * (67) S. Marcel, Y. Rodriguez, Torchvision the machine-vision package of torch, in: Proceedings of the 18th ACM international conference on Multimedia, 2010, pp. 1485-1488. * (68) R. Wightman, Pytorch image models, [https://github.com/rwightman/pytorch-image-models](https://github.com/rwightman/pytorch-image-models), 2019. doi:10.5281/zenodo.4414861. * (69) Q. Weng, Z. Mao, J. Lin, W. Guo, Land-use classification via extreme learning classifier based on deep convolutional features, IEEE Geoscience and Remote Sensing Letters 14 (2017) 704-708. doi:10.1109/LGRS.2017.2672643. * (70) M. Castelluccio, G. Poggi, C. Sansone, L. Verdoliva, Land use classification in remote sensing images by convolutional neural networks, CoRR (2015). doi:10.48550/ARXIV.1508.00092. * (71) X. Han, Y. Zhong, L. Cao, L. Zhang, Pre-trained alcnet architecture with pyramid pooling and supervision for high spatial resolution remote sensing image scene classification, Remote Sensing 9 (2017). doi:10.3390/rs9088848. * (72) J. Kang, M. Korner, Y. Wang, H. Taubenbock, X. X. Zhu, Building instance classification using street view images, ISPRS journal of photogrammetry and remote sensing 145 (2018) 44-59. * (73) F. Hu, G.-S. Xia, J. Hu, L. Zhang, Transferring deep convolutional neural networks for the scene classification of high-resolution remote sensing imagery, Remote Sensing 7 (2015) 14680-14707. doi:10.3390/rs71114680. * (74) I. Goodfellow, Y. Bengio, A. Courville, Deep Learning, MIT Press, 2016. [http://www.deeplearningbook.org](http://www.deeplearningbook.org). * (75) S. Zagoruyko, N. Komodakis, Wide residual networks, CoRR (2016). doi:10.48550/ARXIV.1605.07146. * (76) N. Audebert, B. Le Saux, S. Lefevre, Beyond rgb: Very high resolution urban remote sensing with multimodal deep networks, ISPRS Journal of Photogrammetry and Remote Sensing 140 (2018) 20-32. * (77) J. Zhang, C. Lu, X. Li, H.-J. Kim, J. Wang, A full convolutional network based on densenet for remote sensing scene classification, Mathematical Biosciences and Engineering 16 (2019) 3345-3367. doi:10.3934/mbe.2019167. * (78) W. Tong, W. Chen, W. Han, X. Li, L. Wang, Channel-attention-based densenet network for remote sensing image scene classification, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 13 (2020) 4121-4132. doi:10.1109/JSTARS.2020.3009352. * (79) F. Chen, J. Y. Tsou, Drsnet: Novel architecture for small patch and low-resolution remote sensing image scene classification, International Journal of Applied Earth Observation and Geoinformation 104 (2021) 102577. doi:[https://doi.org/10.1016/j.jag.2021.102577](https://doi.org/10.1016/j.jag.2021.102577). * (80) T.-Y. Lin, P. Dollar, R. Girshick, K. He, B. Hariharan, S. Belongie, Feature pyramid networks for object detection, in: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 936-944. doi:10.1109/CVPR.2017.106. * (81) S. Liu, C. He, H. Bai, Y. Zhang, J. Cheng, Light-weight attention semantic segmentation network for high-resolution remote sensing images, in: IGARSS 2020-2020 IEEE International Geoscience and Remote Sensing Symposium, IEEE, 2020, pp. 2595-2598. * (82) Z. Tian, W. Wang, B. Tian, R. Zhan, J. Zhang, Resolution-aware network with attention mechanisms for remote sensing object detection., ISPRS Annals of Photogrammetry, Remote Sensing & Spatial Information Sciences 5 (2020). * (83) H. Alhichri, A. S. 
Alswayed, Y. Bazi, N. Ammour, N. A. Alajlan, Classification of remote sensing images using efficientnet-b3 cnn model with attention, IEEE Access 9 (2021) 14078-14094. doi:10.1109/ACCESS.2021.3051085. * (84) J. Devlin, M.-W. Chang, K. Lee, K. Toutanova, Bert: Pre-training of deep bidirectional transformers for language understanding, arXiv preprint arXiv:1810.04805 (2018). * (85) Y. Bazi, L. Bashmal, M. M. A. Rahhal, R. A. Dayil, N. A. Ajlan, Vision transformers for remote sensing image classification, Remote Sensing 13 (2021). doi:10.3390/rs13030516. * (86) N. Gong, C. Zhang, H. Zhou, K. Zhang, Z. Wu, X. Zhang, Classification of hyperspectral images via improved cycle-mlp, IET Computer Vision 16 (2022) 468-478. doi:[https://doi.org/10.1049/cviz.12104](https://doi.org/10.1049/cviz.12104). * (87) Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, B. Guo, Swin transformer: Hierarchical vision transformer using shifted windows, in: 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 9992-10002. doi:10.1109/ICCV48922.2021.00986. * (88) L. Scheibenreif, J. Hanna, M. Mommert, D. Borth, Self-supervised vision transformers for land-cover segmentation and classification, in: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2022, pp. 1421-1430. doi:10.1109/CVPRW56347.2022.00148. * (89) C. Zhang, L. Wang, S. Cheng, Y. Li, Swinsunet: Pure transformer network for remote sensing image change detection, IEEE Transactions on Geoscience and Remote Sensing 60 (2022) 1-13. doi:10.1109/TGRS.2022.3160007. * (90) D. Wang, J. Zhang, B. Du, G.-S. Xia, D. Tao, An empirical study of remote sensing pretraining, IEEE Transactions on Geoscience and Remote Sensing (2022) 1-1. doi:10.1109/TGRS.2022.3176603. * (91) Z. Xu, W. Zhang, T. Zhang, Z. Yang, J. Li, Efficient transformer for remote sensing image segmentation, Remote Sensing 13 (2021). doi:10.3390/rs13183585. * (92) Z. Meng, F. Zhao, M. Liang, Ss-mlp: A novel spectral-spatial mlp architecture for hyperspectral image classification, Remote Sensing 13 (2021). doi:10.3390/rs13204060. * Volume Part III, Springer-Verlag, 2011, p. 145-158. * (94) L. Liu, H. Jiang, P. He, W. Chen, X. Liu, J. Gao, J. Han, On the variance of the adaptive learning rate and beyond, in: Proceedings of the Eighth International Conference on Learning Representations (ICLR 2020), 2020. * (95) D. P. Kingma, J. Ba, Adam: A method for stochastic optimization, arXiv preprint arXiv:1412.6980 (2014). * (96) V. Risojevic, V. Stojnic, Do we still need ImageNet pre-training in remote sensing scene classification?, arXiv abs/2111.03690 (2021). arXiv:2111.03690. * Volume 2, 2014, p. 3320-3328. * (98) S. Kornblith, J. Shlens, Q. V. Le, Do better imagenet models transfer better?, in: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 2656-2666. * (99) S. Paul, P.-Y. Chen, Vision transformers are robust learners, Proceedings of the AAAI Conference on Artificial Intelligence 36 (2022) 2071-2081. * (100) S. Bhojanapalli, A. Chakrabarti, D. Glasner, D. Li, T. Unterthiner, A. Veit, Understanding robustness of transformers for image classification, in: 2021 IEEE/CVF International Conference on Computer Vision (ICCV), IEEE Computer Society, Los Alamitos, CA, USA, 2021, pp. 10211-10221. * (101) C. Zhang, M. Zhang, S. Zhang, D. Jin, Q. feng Zhou, Z. Cai, H. Zhao, S. Yi, X. Liu, Z. 
Liu, Delving deep into the generalization of vision transformers under distribution shifts, 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2022) 7267-7276. * (102) R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, D. Batra, Grad-cam: Visual explanations from deep networks via gradient -based localization, in: 2017 IEEE International Conference on Computer Vision (ICCV), 2017, pp. 618-626. * [103] J. Gildenblat, contributors, Pytorch library for cam methods, [https://github.com/jacobgi/pytorch-grad-cam](https://github.com/jacobgi/pytorch-grad-cam), 2021. * [104] J. Li, D. Lin, Y. Wang, G. Xu, Y. Zhang, C. Ding, Y. Zhou, Deep discriminative representation learning with attention map for scene classification, Remote Sensing 12 (2020). * [105] M. D. Wilkinson, M. Dumontier, I. J. Aalbersberg, G. Appleton, M. Axton, A. Baak, N. Blomberg, J.-W. Boiten, L. B. da Silva Santos, P. E. Bourne, et al., The FAIR guiding principles for scientific data management and stewardship, Scientific Data 3 (2016) 1-9. **Current Trends in Deep Learning for Earth Observation:** **An Open-source Benchmark Arena for Image Classification** _Ivica Dimitrovski, Ivan Kitanovski, Dragi Kocev, Nikola Simidjievski_ **Supplementary Material** **Table of Contents** * 1 Evaluation metrics * 2 Training Time Details * 3 Extended results on model generalization performance \t* 3.1 Evaluation on a same holdout set \t* 3.2 Results from pairwise comparisons * 4 Detailed data descriptions & extended results per task \t* 4.1 UC Merced \t* 4.2 WHU-RS19 \t* 4.3 AID \t* 4.4 Eurosat \t* 4.5 PatternNet \t* 4.6 Resisc45 \t* 4.7 RSI-CB256 \t* 4.8 RSSCN7 \t* 4.9 SAT6 * 4.10 Siri-Whu * 4.11 CLRS * 4.12 RSD46-WHU * 4.13 Brazilian Coffee Scenes * 4.14 Optimal 31 * 4.15 So2Sat * 4.16 UC Merced multi-label * 4.17 BigEarthNet * 4.18 MLRSNet * 4.19 DFC15 * 4.20 Planet UAS * 4.21 AID multi-label Evaluation metrics The predictive performance of machine learning models is typically assessed using different evaluation measures that capture different aspects of the models' behavior. Selecting the proper evaluation measures requires knowledge of the task and problem at hand. In order to have an unbiased and fair view of the performance, one needs to consider the models' performance along several measures and then compare their performance. In this study, we assess the performance of the models using a variety of different measures available for the machine learning tasks studied here: multi-class and multi-label classification. **Multi-class classification** refers to the task where a sample can be assigned to exactly one class/label selected from a predefined set of possible classes/labels. Here, we overview several evaluation measures used for this task. Most widely used evaluation measure is _accuracy_ due to its intuitive interpretation and straightforward calculation. It denotes the percentage of correctly labeled samples. _Precision_ and _Recall_ are defined for binary tasks (two classes, often called positive and negative class) by default. To extend the binary measures to multi-class classification tasks, we adopt the One-vs-Rest (One-vs-All) approach which converts a multi-class task into a series of binary tasks for each class/label in the target. Within this approach the sample from given class/label is treated as positive, and the samples from all the other classes/labels are treated as negative. 
To calculate most of the evaluation measures, we need to define the following concepts: True Positives (TP), True Negatives (TN), False Positives (FP) and False Negatives (FN). These concepts combined together form the confusion matrix for the performance of a given model over a given dataset. The TP, TN, FP and FN are defined as follows:

* TP: the label is positive and the prediction is also positive
* TN: the label is negative and the prediction is also negative
* FP: the label is negative but the prediction is positive
* FN: the label is positive but the prediction is negative

_Precision_ is then calculated as the fraction of correctly predicted positive observations from the total predicted positive observations:
Taking into account this transformation, we can apply the formulas from above to calculate the same evaluation measures for multi-label classification tasks. While these evaluation measures are threshold dependent, we additionally use the the _mean average precision_ (mAP) - a threshold independent evaluation measure widely used in image classification tasks. mAP is calculated as the mean over the average precision values of the individual labels. Average precision summarizes a precision-recall curve as the weighted mean of the precision values obtained at each threshold, with the increase in recall from the previous threshold used as the weight: \\[AP=\\sum_{n}(R_{n}-R_{n-1})P_{n}\\] Where \\(P_{n}\\) and \\(R_{n}\\) are the precision and recall at the n-th threshold. It is a useful metric to compare how well models are ordering the predictions, without considering any specific decision threshold. For the multi-label classification task, we report the following evaluation measures: Micro Precision, Macro Precision, Weighted Precision, Micro Recall, Macro Recall, Weighted Recall, Micro F1 score, Macro F1 score, Weighted F1 score and mean average precision (mAP). For all measures but mAP, that require a threshold on the predictions, we set it to 0.5 for all the models and settings. For both tasks, we provide the means to perform even more detailed analysis of the performance by reporting the confusion matrices as a performance summary of the models. The confusion matrices provide detailed per class/label view of the models' performance. ## Appendix B Additional Results Figure B.2: **Training time of models trained from scratch and pre-trained models for each of the (a,b) MCC and (d,e) MLC datasets. The training time of each model architecture (denoted with different colors) is depicted as a fraction (%) of the cumulative training time for each dataset. 
Furthermore, (c) and (f) illustrate the average time per epoch of each model variant on (c) MCC and (f) MLC tasks, comparing the (red) pre-trained model variants (from (a) and (b)) to their counterparts (blue) trained from scratch.** \\begin{table} \\begin{tabular}{l|c c c c c c c c c c c c c} \\hline \\hline Source \\(\\backslash\\)Target dataset & RESISC45 & UC Merced & CLRS & Optimal31 & PatternNet & AID & RSI-CB256 & WHU-8519 & SIRI-WHU & RSD46-WHU & Eurostat & SAT6 & RESCN7 \\\\ \\hline RESISC45 & 96.54 & 72.205 & 68.485 & 98.656 & 85.066 & 80.684 & 73.38 & 83.465 & 59.583 & 47.194 & 18.63 & 0.331 & 52.5 \\\\ UC Merced & 61.729 & 98.81 & 46.417 & 73.889 & 89.743 & 63.956 & 89.713 & 69.811 & 22.5 & 34.018 & 15.188 & 13.09 & 47.5 \\\\ CLRS & 81.553 & 70.01 & 91.933 & 84.896 & 80.469 & 88.482 & 79.984 & 85.119 & 70.0 & 71.065 & 46.259 & 2.62 & 51.75 \\\\ Optimal31 & 87.972 & 72.667 & 65.99 & 92.473 & 86.797 & 82.217 & 87.874 & 56.5 & 38.252 & 6.273 & 18.893 & 42.5 \\\\ PatternNet & 51.653 & 49.412 & 33.681 & 65.424 & 99.497 & 35.756 & 61.463 & 28.571 & 30.0 & 30.013 & 24.875 & 0.0 & 15.0 \\\\ AID & 79.143 & 61.667 & 69.298 & 80.769 & 73.125 & 97.72 & 73.683 & 98.889 & 35.5 & 45.048 & 11.438 & 17.338 & 44.688 \\\\ RSI-CB256 & 53.312 & 73.125 & 37.755 & 62.5 & 78.724 & 58.483 & 98.959 & 64.84 & 27.5 & 44.325 & 27.643 & 19.288 & 20.625 \\\\ WHIU-8519 & 72.798 & 87.07 & 75.708 & 79.167 & 66.255 & 86.567 & 73.755 & 98.01 & 40.0 & 51.83 & 10.409 & 0.141 & 49.5 \\\\ SIRI-WHU & 66.19 & 50.0 & 67.262 & 73.333 & 74.375 & 66.288 & 40.233 & 57.895 & 96.25 & 84.061 & 55.062 & 18.092 & 27.083 \\\\ RSID6-WHU & 44.396 & 43.75 & 49.062 & 56.25 & 45.438 & 43.959 & 39.045 & 51.923 & 46.25 & 94.041 & 95.5 & 19.68 & 40.0 \\\\ EArost & 28.857 & 6.667 & 38.167 & 27.083 & 3.75 & 50.476 & 29.62 & 27.907 & 71.667 & 85.222 & 99.0 & 1.034 & 47.083 \\\\ SATS & 0.0 & 10.0 & 26.25 & 0.0 & 0.0 & 46.429 & 19.681 & 0.0 & 15.0 & 6.825 & 2.833 & 10.060 & 68.75 \\\\ RSSCN7 & 76.19 & 87.5 & 73.167 & 83.333 & 83.75 & 86.786 & 55.977 & 94.231 & 88.333 & 52.311 & 36.176 & 0.087 & 95.0 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table C.6: DenseNet161 on MCC tasks: Generalization performance in terms of accuracy (%) of ImageNet-1K pre-trained models, fine-tuned on _source_ and evaluated on images with shared labels in _target_ dataset. 
\\begin{table} \\begin{tabular}{l|c c c c c c c c c c c c} \\hline \\hline Source \\(\\backslash\\)Target dataset & RESISC45 & UC Merced & CLRS & Optimal31 & PatternNet & AID & RSI-CB256 & WHU-8519 & SIRI-WHU & RSD46-WHU & Eurostat & SAT6 & RESCN7 \\\\ \\hline RESISC45 & 96.508 & 75.0 & 69.015 & 98.925 & 87.862 & 78.975 & 76.543 & 81.89 & 60.0 & 49.385 & 10.111 & 29.235 & 56.25 \\\\ UC Merced & 63.271 & 98.333 & 50.917 & 73.889 & 87.059 & 68.025 & 92.214 & 75.472 & 16.875 & 31.995 & 12.25 & 5.067 & 55.0 \\\\ CLRS & 80.994 & 72.5 & 92.28 & 88.021 & 81.146 & 83.844 & 79.502 & 86.31 & 72.857 & 71.199 & 38.519 & 1.229 & 49.75 \\\\ Optimal31 & 88.641 & 77.0 & 65.669 & 49.357 & 86.461 & 83.603 & 78.283 & 78.74 & 55.5 & 40.67 & 46.09 & 35.969 & 44.167 \\\\ PatternNet & 52.105 & 60.294 & 32.038 & 69.792 & 99.737 & 41.246 & 63.1 & 39.683 & 20.833 & 25.006 & 19.5 & 0.021 & 20.265 \\\\ AID & 66.25 & 39.444 & 45.175 & 76.923 & 58.755 & 88.855 & 49.683 & 93.333 & 24.5 & 26.295 & 39.125 & 20.976 & 38.125 \\\\ RSI-CB256 & 52.208 & 68.75 & 41.75 & 55.583 & 72.292 & 63.125 & 99.737 & 71.233 & 30.833 & 45.03 & 39.464 & 0.145 & 23.75 \\\\ WHIU-8519 & 74.821 & 91.0 & 59.167 & 79.861 & 69.167 & 87.396 & 69.295 & 100.00 & 45.0 & 51.233 & 1.591 & 0.007 & 55.55 \\\\ SIRI-WHU & 55.833 & 33.75 & 65.714 & 55.0 & 53.333 & 59.945 & 43.111 & 56.579 & 95.625 & 76.269 & 26.188 & 28.087 & 38.383 \\\\ RSDD46-WHU & 41.758 & 44.375 & 49.271 & 50.0 & 41.25 & 47.5 & 36.353 & 53.846 & 38.75 & 94.507 & 96.0 & 0.714 & 48.75 \\\\ Eurostat & 5.857 & 1.657 & 20.5 & 0.0 & 5.417 & 2.381 & 39.402 & 25.581 & 57.5 & 76.601 & 98.889 & 0.2 & 30.417 \\\\ SATS & 0.0 & 5.0 & 32.083 & 0.0 & 0.0 & 50.083 & 33.511 & 0.0 & 0.0 & 0.89 & 0.0 & 100.0 & 63.75 \\\\ RSSCN7 & 85.0 & 97.5 & 80.5 & 80.556 & 93.125 & 91.429 & 59.013 & 92.308 & 81.667 & 63.316 & 11.941 & 0.921 & 94.821 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table C.3: VGG16 on MCC tasks: Generalization performance in terms of accuracy (%) of ImageNet-1K pre-trained models, fine-tuned on _source_ and evaluated on images with shared labels in _target_ dataset. 
\\begin{table} \\begin{tabular}{l|c|c c c c c c c c c c c c} \\hline \\hline Source \\(\\backslash\\)Target dataset & RESISC45 & UC Merced & CLRS & Optimal31 & PatternNet & AID & RSI-CB256 & WHU-8519 & SIRI-WHU & RSD46-WHU & Eurostat & SAT6 & RESCN7 \\\\ \\hline RESISC45 & 96.54 & 72.2095 & 68.485 & 98.656 & 85.066 & 80.684 \\begin{table} \\begin{tabular}{l|c c c c c c c c c c c c c c} \\hline \\hline Source \\(\\backslash\\)Target dataset & REISSC45 & UC Merced & CLRS & Optimal31 & PatternNet & AID & RSI-CB256 & WUH-08519 & SIRI-WHU & RSD46-WHU & Eurostat & SAT6 & RESCN7 \\\\ \\hline REISSC45 & 95.952 & 81.842 & 65.947 & 98.925 & 86.25 & 78.455 & 78.164 & 83.465 & 52.5 & 47.255 & 11.926 & 1.255 & 47.5 \\\\ UC Merced & 64.474 & 98.333 & 52.417 & 72.778 & 91.471 & 68.966 & 93.187 & 78.922 & 27.5 & 36.929 & 29.75 & 27.488 & 53.75 \\\\ CLRS & 80.733 & 67.07 & 90.1 & 83.333 & 80.365 & 84.82 & 80.305 & 84.524 & 69.286 & 68.252 & 34.37 & 3.88 & 49.25 \\\\ Optimal31 & 87.12 & 74.467 & 66.5938 & 92.742 & 86.797 & 82.679 & 80.303 & 86.614 & 64.0 & 39.678 & 8.364 & 6.486 & 41.667 \\\\ PatternNet & 52.481 & 68.529 & 39.792 & 67.078 & 99.904 & 51.868 & 67.686 & 49.206 & 48.333 & 29.662 & 23.123 & 0.234 \\\\ AID & 77.929 & 68.333 & 67.544 & 78.205 & 71.688 & 97.67 & 68.417 & 99.444 & 99.45 & 46.179 & 34.438 & 29.456 & 34.129 \\\\ RSI-CB256 & 56.169 & 82.5 & 45.5 & 67.5 & 80.139 & 63.321 & 99.657 & 80.832 & 32.5 & 45.438 & 31.69 & 13.455 & 29.983 \\\\ WHI-08519 & 68.452 & 84.0 & 91.677 & 77.788 & 69.306 & 86.816 & 84.44 & 98.907 & 47.143 & 58.095 & 25.6 & 6.472 & 55.22 \\\\ SIRI-WHU & 67.262 & 38.75 & 72.024 & 80.0 & 77.292 & 66.557 & 34.694 & 66.589 & 95.208 & 82.999 & 40.25 & 0.978 & 33.333 \\\\ RSID6-WHU & 41.264 & 41.25 & 49.375 & 50.0 & 49.25 & 54.821 & 43.696 & 66.385 & 50.0 & 93.667 & 96.833 & 7.126 & 53.75 \\\\ EArost & 53.857 & 35.0 & 63.5 & 54.167 & 60.208 & 93.333 & 56.929 & 81.395 & 70.0 & 68.473 & 98.741 & 6.685 & 60.833 \\\\ SATS & 97.143 & 50.0 & 55.833 & 100.0 & 100.0 & 61.697 & 6.117 & 90.909 & 0.0 & 9.125 & 0.999 & 77.5 \\\\ RSSCN7 & 84.286 & 95.0 & 81.333 & 86.111 & 91.562 & 92.857 & 65.844 & 96.154 & 95.833 & 62.362 & 13.588 & 2.358 & 95.179 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table C.8: ViT on MCC tasks: Generalization performance in terms of accuracy (%) of ImageNet-1K pre-trained models, fine-tuned on _source_ and evaluated on images with shared labels in _target_ dataset. 
\\begin{table} \\begin{tabular}{l|c c c c} \\hline \\hline Source \\textbackslash{}Target dataset & MLRSNet & AID (mlc) & UC merced (mlc) & DFC15 \\\\ \\hline MLRSNet & 96.306 & 77.359 & 80.212 & 52.007 \\\\ AID (mlc) & 62.522 & 81.709 & 55.913 & 43.452 \\\\ UC merced (mlc) & 55.494 & 54.165 & 96.057 & 52.486 \\\\ DFC15 & 52.947 & 73.871 & 53.681 & 97.532 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table C.13: VGG16 on MLC tasks: Generalization performance in terms of mean average precision (% mAP) of ImageNet-1K pre-trained models, fine-tuned on _source_ and evaluated on images with shared labels in _target_ dataset.
\\begin{table} \\begin{tabular}{l|c c c c} \\hline \\hline Source \\textbackslash{}Target dataset & MLRSNet & AID (mlc) & UC merced (mlc) & DFC15 \\\\ \\hline MLRSNet & 96.432 & 76.818 & 81.111 & 48.062 \\\\ AID (mlc) & 62.899 & 80.943 & 53.994 & 48.793 \\\\ UC merced (mlc) & 51.062 & 51.454 & 96.007 & 48.286 \\\\ DFC15 & 53.242 & 73.216 & 56.132 & 97.606 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table C.15: ResNet152 on MLC tasks: Generalization performance in terms of mean average precision (% mAP) of ImageNet-1K pre-trained models, fine-tuned on _source_ and evaluated on images with shared labels in _target_ dataset.
\\begin{table} \\begin{tabular}{l|c c c c} \\hline \\hline Source \\textbackslash{}Target dataset & MLRSNet & AID (mlc) & UC merced (mlc) & DFC15 \\\\ \\hline MLRSNet & 96.432 & 76.818 & 81.111 & 48.062 \\\\ AID (mlc) & 62.899 & 80.943 & 53.994 & 48.793 \\\\ UC merced (mlc) & 51.062 & 51.454 & 96.007 & 48.286 \\\\ DFC15 & 53.242 & 73.216 & 56.132 & 97.606 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table C.16: DenseNet161 on MLC tasks: Generalization performance in terms of mean average precision (% mAP) of ImageNet-1K pre-trained models, fine-tuned on _source_ and evaluated on images with shared labels in _target_ dataset.
\\begin{table} \\begin{tabular}{l|c c c c} \\hline \\hline Source \\textbackslash{}Target dataset & MLRSNet & AID (mlc) & UC merced (mlc) & DFC15 \\\\ \\hline MLRSNet & 95.807 & 76.333 & 80.243 & 56.256 \\\\ AID (mlc) & 62.766 & 82.298 & 57.133 & 42.201 \\\\ UC merced (mlc) & 56.192 & 53.227 & 96.43 & 50.419 \\\\ DFC15 & 55.426 & 74.261 & 65.665 & 97.994 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table C.20: ConvNeXt on MLC tasks: Generalization performance in terms of mean average precision (% mAP) of ImageNet-1K pre-trained models, fine-tuned on _source_ and evaluated on images with shared labels in _target_ dataset.
\\begin{table} \\begin{tabular}{l|c c c c} \\hline \\hline Source \\textbackslash{}Target dataset & MLRSNet & AID (mlc) & UC merced (mlc) & DFC15 \\\\ \\hline MLRSNet & 96.62 & 78.452 & 81.812 & 59.951 \\\\ AID (mlc) & 61.463 & 82.254 & 58.374 & 52.293 \\\\ UC merced (mlc) & 58.263 & 55.748 & 96.831 & 63.277 \\\\ DFC15 & 60.869 & 76.023 & 63.476 & 98.111 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table C.21: SwinT on MLC tasks: Generalization performance in terms of mean average precision (% mAP) of ImageNet-1K pre-trained models, fine-tuned on _source_ and evaluated on images with shared labels in _target_ dataset.
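The MLC tables above report mean average precision (mAP). A minimal sketch of one common way to compute it is given below, assuming mAP is the unweighted mean of per-label average precision (sklearn's definition of AP); the exact averaging used in the benchmark may differ.

```python
# Minimal sketch of macro mAP for the MLC tasks; assumes mAP is the unweighted
# mean of per-label average precision.
import numpy as np
from sklearn.metrics import average_precision_score

def mean_average_precision(y_true, y_score):
    """y_true: (N, L) binary multi-label matrix; y_score: (N, L) predicted scores."""
    aps = [average_precision_score(y_true[:, l], y_score[:, l])
           for l in range(y_true.shape[1]) if y_true[:, l].any()]  # skip absent labels
    return 100.0 * float(np.mean(aps))
```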
\\begin{table} \\begin{tabular}{l|c c c c} \\hline \\hline Source \\textbackslash{}Target dataset & MLRSNet & AID (mlc) & UC merced (mlc) & DFC15 \\\\ \\hline MLRSNet & 95.391 & 76.929 & 77.914 & 44.973 \\\\ AID (mlc) & 58.154 & 78.003 & 50.246 & 41.025 \\\\ UC merced (mlc) & 49.899 & 51.137 & 95.383 & 49.322 \\\\ DFC15 & 46.145 & 67.706 & 45.535 & 96.784 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table C.17: EfficientNetB0 on MLC tasks: Generalization performance in terms of mean average precision (% mAP) of ImageNet-1K pre-trained models, fine-tuned on _source_ and evaluated on images with shared labels in _target_ dataset.
\\begin{table} \\begin{tabular}{l|c c c c} \\hline \\hline Source \\textbackslash{}Target dataset & MLRSNet & AID (mlc) & UC merced (mlc) & DFC15 \\\\ \\hline MLRSNet & 95.048 & 77.694 & 78.951 & 54.953 \\\\ AID (mlc) & 62.023 & 80.878 & 53.97 & 48.563 \\\\ UC merced (mlc) & 53.876 & 52.737 & 96.34 & 52.036 \\\\ DFC15 & 55.864 & 79.241 & 58.727 & 97.941 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table C.19: MLPMixer on MLC tasks: Generalization performance in terms of mean average precision (% mAP) of ImageNet-1K pre-trained models, fine-tuned on _source_ and evaluated on images with shared labels in _target_ dataset.
### Detailed data descriptions & extended results per task
#### D.1 UC Merced
The UC Merced dataset [9] consists of 2100 images divided into 21 land-use scene classes. Each class has 100 RGB aerial images, which are 256x256 pixels and have a spatial resolution of 0.3m per pixel. The images were manually extracted from large images from the United States Geological Survey (USGS) National Map of the following US regions: Birmingham, Boston, Buffalo, Columbus, Dallas, Harrisburg, Houston, Jacksonville, Las Vegas, Los Angeles, Miami, Napa, New York, Reno, San Diego, Santa Barbara, Seattle, Tampa, Tucson, and Ventura. Samples from the dataset can be seen on Figure D.1. The 21 classes are: agricultural, airplane, baseball diamond, beach, buildings, chaparral, dense residential, forest, freeway, golf course, harbor, intersection, medium density residential, mobile home park, overpass, parking lot, river, runway, sparse residential, storage tanks, and tennis courts. The authors have not set predefined train-test splits, so we have made our own for this study (Figure D.2); a sketch of one way to construct such splits is given after the figures below. The detailed results for all pre-trained models are shown on Table D.1 and for all the models learned from scratch are presented on Table D.2. The best performing model is the pre-trained ResNet152. The results on a class level are shown in the per-class table below, along with a confusion matrix on Figure D.3.
Figure D.1: Example images with labels from the UC Merced dataset.
Figure D.2: Class distribution for the UC Merced dataset.
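As a concrete illustration of the split construction mentioned above, the following sketch creates stratified train/validation/test splits for a dataset without predefined splits; the split ratios and random seed are assumptions, not necessarily the ones used in the study.

```python
# Illustrative stratified train/validation/test split for datasets that ship
# without predefined splits (ratios and seed are assumptions).
from sklearn.model_selection import train_test_split

def make_splits(paths, labels, test_size=0.2, val_size=0.2, seed=42):
    trainval_p, test_p, trainval_y, test_y = train_test_split(
        paths, labels, test_size=test_size, stratify=labels, random_state=seed)
    train_p, val_p, train_y, val_y = train_test_split(
        trainval_p, trainval_y, test_size=val_size, stratify=trainval_y, random_state=seed)
    return (train_p, train_y), (val_p, val_y), (test_p, test_y)
```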
\\begin{table} \\begin{tabular}{l r r r} \\hline \\hline Label & Precision & Recall & F1 score \\\\ \\hline agricultural & 100.00 & 100.00 & 100.00 \\\\ airplane & 100.00 & 100.00 & 100.00 \\\\ baseballdiamond & 100.00 & 100.00 & 100.00 \\\\ beach & 100.00 & 100.00 & 100.00 \\\\ buildings & 94.74 & 90.00 & 92.31 \\\\ chaparral & 100.00 & 100.00 & 100.00 \\\\ denseresidential & 99.91 & 100.00 & 95.24 \\\\ forest & 100.00 & 100.00 & 100.00 \\\\ freeway & 100.00 & 100.00 & 100.00 \\\\ golfcourse & 100.00 & 100.00 & 100.00 \\\\ harbor & 100.00 & 100.00 & 100.00 \\\\ intersection & 100.00 & 100.00 & 100.00 \\\\ mediumresidential & 100.00 & 90.00 & 94.74 \\\\ mobilehomepark & 100.00 & 95.00 & 97.44 \\\\ overpass & 100.00 & 100.00 & 100.00 \\\\ parkinglot & 100.00 & 100.00 & 100.00 \\\\ river & 100.00 & 100.00 & 100.00 \\\\ runway & 100.00 & 100.00 & 100.00 \\\\ sparseresidential & 95.24 & 100.00 & 97.56 \\\\ storagetanks & 95.24 & 100.00 & 97.56 \\\\ tenniscourt & 100.00 & 100.00 & 100.00 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table D.3: Per class results for the pre-trained ResNet152 model on the UC Merced dataset.
Figure D.3: Confusion matrix for the pre-trained ResNet152 model on the UC Merced dataset.
### WHU-RS19
WHU-RS19 is a set of satellite images exported from Google Earth, which provides high-resolution satellite imagery with up to 0.5m resolution in the red, green and blue spectral bands [42]. It contains 19 classes of meaningful scenes in high-resolution satellite imagery, including: airport, beach, bridge, commercial area, desert, farmland, football field, forest, industrial area, meadow, mountain, park, parking lot, pond, port, railway station, residential area, river, and viaduct. For each class, there are about 50 samples, with a total of 1005 images in the entire dataset. The data does not come with predefined train and test splits, so, as for the other datasets, we have made our own (Figure D.5). The size of the images is 600x600 pixels. The image samples of the same class are collected from different regions in satellite images of different resolutions, and thus might have different scales, orientations and illuminations.
Figure D.4: Example images with labels from the WHU-RS19 dataset.
Figure D.5: Class distribution for the WHU-RS19 dataset.
\\begin{table} \\begin{tabular}{l r r r} \\hline \\hline Label & Precision & Recall & F1 score \\\\ \\hline Airport & 100.00 & 100.00 & 100.00 \\\\ Beach & 100.00 & 100.00 & 100.00 \\\\ Bridge & 100.00 & 100.00 & 100.00 \\\\ Commercial & 100.00 & 100.00 & 100.00 \\\\ Desert & 100.00 & 100.00 & 100.00 \\\\ Farmland & 100.00 & 100.00 & 100.00 \\\\ footballField & 100.00 & 100.00 & 100.00 \\\\ Forest & 100.00 & 100.00 & 100.00 \\\\ Industrial & 100.00 & 100.00 & 100.00 \\\\ Meadow & 100.00 & 100.00 & 100.00 \\\\ Mountain & 100.00 & 100.00 & 100.00 \\\\ Park & 100.00 & 100.00 & 100.00 \\\\ Parking & 100.00 & 100.00 & 100.00 \\\\ Pond & 100.00 & 100.00 & 100.00 \\\\ Port & 100.00 & 100.00 & 100.00 \\\\ railwayStation & 100.00 & 100.00 & 100.00 \\\\ Residential & 100.00 & 100.00 & 100.00 \\\\ River & 100.00 & 100.00 & 100.00 \\\\ Viaduct & 100.00 & 100.00 & 100.00 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table D.6: Per class results for the pre-trained DenseNet161 model on the WHU-RS19 dataset.
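For reference, the per class tables report the standard precision, recall and F1 score, computed per class \\(c\\) from the true positives \\(TP_c\\), false positives \\(FP_c\\) and false negatives \\(FN_c\\):

\\[ \\mathrm{Precision}_c = \\frac{TP_c}{TP_c + FP_c}, \\qquad \\mathrm{Recall}_c = \\frac{TP_c}{TP_c + FN_c}, \\qquad F1_c = \\frac{2\\,\\mathrm{Precision}_c \\cdot \\mathrm{Recall}_c}{\\mathrm{Precision}_c + \\mathrm{Recall}_c}. \\]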
Figure D.6: Confusion matrix for the pre-trained DenseNet161 model on the WHU-RS19 dataset.
### AID
Aerial Image Dataset (AID) is a large-scale aerial image dataset generated by collecting sample images from Google Earth imagery. The goal of AID is to advance the state-of-the-art in scene classification of remote sensing images. For creating AID, more than ten thousand aerial scene images were collected and annotated. It consists of 10000 RGB images with a 600x600 pixel resolution (Figure D.7). The dataset is made up of the following 30 classes (aerial scene types): airport, bare land, baseball field, beach, bridge, center, church, commercial, dense residential, desert, farmland, forest, industrial, meadow, medium residential, mountain, park, parking, playground, pond, port, railway station, resort, river, school, sparse residential, square, stadium, storage tanks and viaduct. All the images were labeled by specialists in the field of remote sensing image interpretation. All samples from each class are chosen from different countries and regions around the world, mainly from China, the USA, England, France, Italy, Japan and Germany. They are extracted at different times and seasons under different imaging conditions. Although all images have a 600x600 pixel resolution, their spatial resolution varies from 8m to 0.5m. The dataset has no predefined train-test splits, so for properly conducting the study we have made train, test and validation splits. The distribution of the splits is presented on Figure D.8. Detailed results for all pre-trained models are shown on Table D.7 and for all the models learned from scratch are presented on Table D.8. The best performing model is the pre-trained Vision Transformer. The results on a class level are shown on Table D.9 along with a confusion matrix on Figure D.9.
Figure D.7: Example images with labels from the AID dataset.
Figure D.8: Class distribution for the AID dataset.
\\begin{table} \\begin{tabular}{l r r r} \\hline \\hline Label & Precision & Recall & F1 score \\\\ \\hline Airport & 98.61 & 98.61 & 98.61 \\\\ BareLand & 98.41 & 100.00 & 99.20 \\\\ BaseballField & 97.78 & 100.00 & 98.88 \\\\ Beach & 100.00 & 100.00 & 100.00 \\\\ Bridge & 100.00 & 100.00 & 100.00 \\\\ Center & 87.72 & 96.15 & 91.74 \\\\ Church & 93.48 & 89.58 & 91.49 \\\\ Commercial & 95.71 & 95.71 & 95.71 \\\\ DenseResidential & 98.80 & 100.00 & 99.39 \\\\ Desert & 100.00 & 100.00 & 100.00 \\\\ Farmland & 100.00 & 100.00 & 100.00 \\\\ Forest & 100.00 & 100.00 & 100.00 \\\\ Industrial & 94.94 & 96.15 & 95.54 \\\\ Meadow & 100.00 & 100.00 & 100.00 \\\\ MediumResidential & 98.28 & 98.28 & 98.28 \\\\ Mountain & 100.00 & 100.00 & 100.00 \\\\ Park & 94.44 & 97.14 & 95.77 \\\\ Parking & 100.00 & 100.00 & 100.00 \\\\ Playground & 98.63 & 97.30 & 97.96 \\\\ Pond & 98.81 & 98.81 & 98.81 \\\\ Port & 97.44 & 100.00 & 98.70 \\\\ RailwayStation & 96.23 & 98.08 & 97.14 \\\\ Resort & 94.12 & 82.76 & 88.07 \\\\ River & 98.80 & 100.00 & 99.39 \\\\ School & 91.38 & 88.33 & 89.83 \\\\ SparseResidential & 98.36 & 100.00 & 99.17 \\\\ Square & 98.44 & 95.45 & 96.92 \\\\ Stadium & 96.49 & 94.83 & 95.65 \\\\ StorageTanks & 100.00 & 100.00 & 100.00 \\\\ Viaduct & 100.00 & 98.81 & 99.40 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table D.9: Per class results for the pre-trained Vision Transformer on the AID dataset.
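The pre-trained results in this appendix come from ImageNet-1K backbones whose classification head is adapted to the scene classes of each dataset. The sketch below shows one way to set this up with torchvision for a 30-class dataset such as AID; it is illustrative only, and the optimiser, schedule and augmentation used in the benchmark are not reproduced here.

```python
# Sketch of adapting an ImageNet-1K pre-trained backbone to a scene-classification
# dataset (e.g., 30 classes for AID). The backbone choice, weights enum and
# training recipe are assumptions, not the benchmark's exact setup.
import torch.nn as nn
from torchvision import models

def build_model(num_classes=30, pretrained=True):
    weights = models.ResNet50_Weights.IMAGENET1K_V1 if pretrained else None
    model = models.resnet50(weights=weights)
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # replace the 1000-way head
    return model
```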
\\begin{table} \\begin{tabular}{l r r r r r r r r r r} \\hline \\hline Model \\textbackslash{}Metric & & & & & & & & & & \\\\ \\hline AlexNet & 81.35 & 81.23 & 81.32 & 81.14 & 81.35 & 81.07 & 81.23 & 19.46 & 1927 & 84 \\\\ VGG16 & 81.95 & 81.80 & 82.04 & 81.52 & 81.95 & 81.50 & 81.84 & 19.65 & 1356 & 54 \\\\ ResNet50 & 89.05 & 89.09 & 89.23 & 88.82 & 89.05 & 88.85 & 89.04 & 19.66 & 1514 & 62 \\\\ ResNet152 & 89.90 & 90.08 & 90.09 & 89.60 & 89.90 & 89.73 & 89.88 & 22.25 & 1513 & 53 \\\\ DenseNet161 & **93.30** & 93.32 & 93.42 & 93.13 & 93.30 & 93.17 & 93.30 & 24.48 & 2228 & 76 \\\\ EfficientNetB0 & 90.05 & 90.19 & 90.32 & 89.88 & 90.05 & 89.92 & 90.08 & 19.33 & 1121 & 43 \\\\ ConvNeXt & 81.10 & 81.51 & 81.18 & 80.87 & 81.10 & 81.03 & 80.98 & 19.15 & 1915 & 96 \\\\ Vision Transformer & 79.35 & 79.27 & 79.27 & 79.51 & 79.35 & 79.30 & 79.21 & 19.63 & 1060 & 39 \\\\ MLP Mixer & 71.75 & 72.02 & 71.87 & 72.01 & 71.75 & 71.73 & 71.52 & 19.06 & 953 & 35 \\\\ Swin Transformer & 87.70 & 87.96 & 87.85 & 87.62 & 87.70 & 87.66 & 87.66 & 46.94 & 4647 & 84 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table D.8: Detailed results for models trained from scratch on the AID dataset.
Figure D.9: Confusion matrix for the pre-trained Vision Transformer model on the AID dataset.
### Eurosat
EuroSAT [43] is a land use and land cover classification dataset based on Sentinel-2 satellite images covering 13 spectral bands and consisting of 10 classes with in total 27000 labeled and geo-referenced images. The dataset provides RGB and multi-spectral (MS) versions of the data. The spectral bands and their respective spatial resolutions are presented on Table D.10. The 10 image classes are the following: Annual Crop, Forest, Herbaceous Vegetation, Highway, Industrial, Pasture, Permanent Crop, Residential, River, Sea/Lake. Some samples from the dataset are presented on Figure D.10. The class distribution of our train, test and validation splits is provided on Figure D.11. Detailed results for all pre-trained models are shown on Table D.11 and for all the models learned from scratch are presented on Table D.12. The best performing model is the pre-trained ResNet152 model. The results on a class level are shown on Table D.13 along with a confusion matrix on Figure D.12.
Figure D.11: Class distribution for the Eurosat dataset.
\\begin{table} \\begin{tabular}{c c} \\hline **Band** & **Spatial resolution \\(m\\)** \\\\ \\hline B01 - Aerosols & 60 \\\\ B02 - Blue & 10 \\\\ B03 - Green & 10 \\\\ B04 - Red & 10 \\\\ B05 - Red edge 1 & 20 \\\\ B06 - Red edge 2 & 20 \\\\ B07 - Red edge 3 & 20 \\\\ B08 - NIR & 10 \\\\ B08A - Red edge 4 & 20 \\\\ B09 - Water vapor & 60 \\\\ B10 - Cirrus & 60 \\\\ B11 - SWIR 1 & 20 \\\\ B12 - SWIR 2 & 20 \\\\ \\hline \\end{tabular} \\end{table} Table D.10: Eurosat bands and spatial resolutions.
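Given the band list in Table D.10, the RGB version of a EuroSAT patch corresponds to the B04/B03/B02 (red/green/blue) bands of the multi-spectral stack. The following sketch assembles such an RGB composite; the band ordering of the stored array and the display scaling constant are assumptions, not taken from the dataset release.

```python
# Sketch of extracting the RGB composite (bands B04/B03/B02) from a 13-band
# EuroSAT multi-spectral patch, following Table D.10.
import numpy as np

BANDS = ("B01", "B02", "B03", "B04", "B05", "B06", "B07",
         "B08", "B08A", "B09", "B10", "B11", "B12")

def to_rgb(ms_patch):
    """ms_patch: array of shape (13, H, W) with bands ordered as in BANDS."""
    idx = {b: i for i, b in enumerate(BANDS)}
    rgb = np.stack([ms_patch[idx["B04"]], ms_patch[idx["B03"]], ms_patch[idx["B02"]]], axis=-1)
    return np.clip(rgb / 2750.0, 0.0, 1.0)  # crude reflectance scaling for display
```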
\\begin{table} \\begin{tabular}{l c c c} \\hline \\hline Label & Precision & Recall & F1 score \\\\ \\hline Annual Crop & 98.66 & 98.33 & 98.50 \\\\ Forest & 99.17 & 99.50 & 99.33 \\\\ Herbaceous Vegetation & 98.01 & 98.67 & 98.34 \\\\ Highway & 99.20 & 98.80 & 99.00 \\\\ Industrial & 99.40 & 99.00 & 99.20 \\\\ Pasture & 98.74 & 98.25 & 98.50 \\\\ Permanent Crop & 98.59 & 97.60 & 98.09 \\\\ Residential & 99.50 & 100.00 & 99.75 \\\\ River & 99.20 & 99.60 & 99.40 \\\\ Sea Lake & 99.50 & 99.83 & 99.67 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 13: Per class results for the pre-trained ResNet152 model on the Eurosat dataset. Figure D.12: Confusion matrix for the pre-trained ResNet152 model on the Eurosat dataset. ### PatternNet PatternNet is a large-scale remote sensing dataset that was collected specifically for Remote sensing image retrieval. It contains 38 classes: airplane, baseball field, basketball court, beach, bridge, cemetery, chaparral, christmas tree farm, closed road, coastal manison, crosswalk, dense residential, ferry terminal, football field, forest, freeway, golf course, harbor, intersection, mobile home park, nursing home, oil gas field, oil well, overpass, parking lot, parking space, railway, river, runway, runway marking, shipping yard, solar panel, sparse residential, storage tank, swimming pool, tennis court, transformer station and wastewater treatment plant. There are a total of 38 classes with 800 images of size 256x256 pixels for each class. The class distribution of the train, test and validation splits we generated is presented on Figure D.14, since the dataset does not have predefined ones. PatternNet dataset has the following main characteristics: It's the largest publicly available dataset specifically designed for remote sensing image retrieval. It has a higher spatial resolution, so that the classes of interest constitute a larger portion of the image. It has high inter-class similarity and high intra-class diversity. Some sample images are shown on Figure D.13. Detailed results for all pre-trained models are shown on Table D.14 and for all the models learned from scratch are presented on Table D.15. The best performing models are the pre-trained DenseNet161 and ResNet50 models. The results on a class level are show on Table D.16 along with a confusion matrix on Figure D.15. \\begin{table} \\begin{tabular}{l c c c c c c c c c} Model \\textbackslash{}Metric & & & & & & & & & & \\\\ \\hline AlexNet & 99.16 & 99.17 & 99.17 & 99.16 & 99.16 & 99.16 & 15.17 & 637 & 32 \\\\ VGG16 & 99.42 & 99.43 & 99.43 & 99.42 & 99.42 & 99.42 & 99.42 & 37.74 & 1321 & 25 \\\\ ResNet50 & **99.74** & 99.74 & 99.74 & 99.74 & 99.74 & 99.74 & 99.74 & 29.10 & 1193 & 31 \\\\ ResNet152 & 99.49 & 99.49 & 99.49 & 99.49 & 99.49 & 99.49 & 99.49 & 62.94 & 1070 & 7 \\\\ DenseNet161 & **99.74** & 99.74 & 99.74 & 99.74 & 99.74 & 99.74 & 99.74 & 68.87 & 3168 & 36 \\\\ EfficientNetB0 & 99.54 & 99.54 & 99.54 & 99.54 & 99.54 & 99.54 & 99.54 & 25.86 & 569 & 12 \\\\ ConvNext & 99.67 & 99.67 & 99.67 & 99.67 & 99.67 & 99.67 & 99.67 & 45.93 & 1378 & 20 \\\\ Vision Transformer & 99.65 & 99.66 & 99.66 & 99.65 & 99.65 & 99.65 & 99.65 & 48.50 & 1067 & 12 \\\\ MLP Mixer & 99.70 & 99.71 & 99.71 & 99.70 & 99.70 & 99.70 & 99.70 & 33.80 & 1521 & 35 \\\\ Swin Transformer & 99.69 & 99.69 & 99.69 & 99.69 & 99.69 & 99.69 & 98.65 & 2357 & 7 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table D.14: Detailed results for pre-trained models on the PatternNet dataset. 
\\begin{table} \\begin{tabular}{l c c c c c c c c c c} Model \\(\\backslash\\)Metric & \\multicolumn{1}{c}{\\(\\backslash\\)} & \\multicolumn{1}{c}{\\(\\backslash\\)} & \\multicolumn{1}{c}{\\(\\backslash\\)} & \\multicolumn{1}{c}{\\(\\backslash\\)} & \\multicolumn{1}{c}{\\(\\backslash\\)} & \\multicolumn{1}{c}{\\(\\backslash\\)} & \\multicolumn{1}{c}{\\(\\backslash\\)} & \\multicolumn{1}{c}{\\(\\backslash\\)} \\\\ \\hline AlexNet & 97.83 & 97.83 & 97.83 & 97.83 & 97.83 & 97.82 & 97.82 & 13.75 & 1141 & 68 \\\\ VGG16 & 97.91 & 97.93 & 97.93 & 97.91 & 97.91 & 97.91 & 97.91 & 37.47 & 2061 & 40 \\\\ ResNet50 & 99.06 & 99.07 & 99.07 & 99.06 & 99.06 & 99.06 & 99.06 & 35.65 & 3030 & 70 \\\\ ResNet152 & 98.88 & 98.89 & 98.89 & 98.88 & 98.88 & 98.88 & 98.88 & 69.05 & 6905 & 88 \\\\ DenseNet161 & **99.24** & 99.25 & 99.25 & 99.24 & 99.24 & 99.24 & 99.24 & 71.08 & 5260 & 59 \\\\ EfficientNetB0 & 98.83 & 98.84 & 98.84 & 98.83 & 98.83 & 98.83 & 98.83 & 27.54 & 2286 & 68 \\\\ ConvNeXt & 97.83 & 97.83 & 97.83 & 97.83 & 97.83 & 97.82 & 97.82 & 45.06 & 4326 & 81 \\\\ Vision Transformer & 96.69 & 96.69 & 96.69 & 96.69 & 96.69 & 96.68 & 96.68 & 49.05 & 3237 & 51 \\\\ MLP Mixer & 98.83 & 98.84 & 98.84 & 98.83 & 98.83 & 98.83 & 98.83 & 34.54 & 2038 & 44 \\\\ Swin Transformer & 98.52 & 98.53 & 98.53 & 98.52 & 98.52 & 98.52 & 98.52 & 138.59 & 12612 & 76 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 15: Detailed results for models trained from scratch on the PatternNet dataset. Figure D.13: Example images with labels from the PatternNet dataset. Figure D.14: Class distribution for the PatternNet dataset. \\begin{table} \\begin{tabular}{l r r r} \\hline \\hline Label & Precision & Recall & F1 score \\\\ \\hline airplane & 100.00 & 100.00 & 100.00 \\\\ baseball field & 100.00 & 100.00 & 100.00 \\\\ basketball court & 99.37 & 98.75 & 99.06 \\\\ beach & 100.00 & 100.00 & 100.00 \\\\ bridge & 98.77 & 100.00 & 99.38 \\\\ cemetery & 100.00 & 100.00 & 100.00 \\\\ chaparral & 100.00 & 100.00 & 100.00 \\\\ christmas tree farm & 100.00 & 100.00 & 100.00 \\\\ closed\\_road & 99.38 & 100.00 & 99.69 \\\\ coastal\\_mansion & 98.73 & 97.50 & 98.11 \\\\ crosswalk & 100.00 & 100.00 & 100.00 \\\\ dense\\_residential & 100.00 & 100.00 & 100.00 \\\\ ferry terminal & 100.00 & 98.75 & 99.37 \\\\ football field & 100.00 & 100.00 & 100.00 \\\\ forest & 100.00 & 100.00 & 100.00 \\\\ freeway & 100.00 & 100.00 & 100.00 \\\\ golf course & 100.00 & 100.00 & 100.00 \\\\ harbor & 100.00 & 100.00 & 100.00 \\\\ intersection & 99.38 & 100.00 & 99.69 \\\\ mobile home park & 100.00 & 100.00 & 100.00 \\\\ nursing home & 100.00 & 99.38 & 99.69 \\\\ oil gas field & 100.00 & 100.00 & 100.00 \\\\ oil well & 100.00 & 100.00 & 100.00 \\\\ overpass & 100.00 & 100.00 & 100.00 \\\\ parking lot & 100.00 & 100.00 & 100.00 \\\\ parking space & 100.00 & 100.00 & 100.00 \\\\ railway & 100.00 & 100.00 & 100.00 \\\\ river & 100.00 & 100.00 & 100.00 \\\\ runway & 100.00 & 99.38 & 99.69 \\\\ runway marking & 99.38 & 100.00 & 99.69 \\\\ shipping yard & 100.00 & 100.00 & 100.00 \\\\ solar panel & 100.00 & 100.00 & 100.00 \\\\ sparse residential & 96.91 & 98.13 & 97.52 \\\\ storage tank & 99.38 & 99.38 & 99.38 \\\\ swimming pool & 100.00 & 100.00 & 100.00 \\\\ tennis court & 100.00 & 99.38 & 99.69 \\\\ transformer station & 99.38 & 100.00 & 99.69 \\\\ wastewater treatment plant & 99.38 & 99.38 & 99.38 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table D.16: Per class results for the pre-trained DenseNet161 model on the PatternNet dataset. 
Figure D.15: Confusion matrix for the pre-trained DenseNet161 model on the PatternNet dataset.
### RESISC45
The RESISC45 [8] dataset is a publicly available benchmark for Remote Sensing Image Scene Classification (RESISC), created by Northwestern Polytechnical University (NWPU). This dataset contains 31500 images, covering 45 scene classes with 700 images in each class. The 45 scene classes are as follows: airplane, airport, baseball diamond, basketball court, beach, bridge, chaparral, church, circular farmland, cloud, commercial area, dense residential, desert, forest, freeway, golf course, ground track field, harbor, industrial area, intersection, island, lake, meadow, medium residential, mobile home park, mountain, overpass, palace, parking lot, railway, railway station, rectangular farmland, river, roundabout, runway, sea ice, ship, snowberg, sparse residential, stadium, storage tank, tennis court, terrace, thermal power station, and wetland. Accordingly, these classes contain a variety of spatial patterns, some homogeneous with respect to texture, some homogeneous with respect to color, others not homogeneous at all. The images have a size of 256x256 pixels in the RGB color space. The spatial resolution varies from about 30m to 0.2m per pixel for most of the scene classes except for the classes of island, lake, mountain, and snowberg that have lower spatial resolutions. The 31500 images cover more than 100 countries and regions all over the world, including developing, transition, and highly developed economies (Figure D.16). The class distribution of our generated train, test and validation splits is shown on Figure D.17. Detailed results for all pre-trained models are shown on Table D.17 and for all the models learned from scratch are presented on Table D.18. The best performing model is the pre-trained Vision Transformer model. The results on a class level are shown on Table D.19 along with a confusion matrix on Figure D.18.
\\begin{table} \\begin{tabular}{l c c c c c c c c c c} \\hline \\hline Model \\textbackslash{}Metric & & & & & & & & & & \\\\ \\hline AlexNet & 82.16 & 82.29 & 82.29 & 82.16 & 82.16 & 82.10 & 82.10 & 10.91 & 633 & 43 \\\\ VGG16 & 83.89 & 84.00 & 84.00 & 83.89 & 83.89 & 83.84 & 83.84 & 38.37 & 2993 & 63 \\\\ ResNet50 & 92.33 & 92.40 & 92.40 & 92.33 & 92.33 & 92.33 & 92.33 & 31.31 & 1941 & 47 \\\\ ResNet152 & 90.68 & 90.79 & 90.79 & 90.68 & 90.68 & 90.69 & 90.69 & 64.83 & 4084 & 48 \\\\ DenseNet161 & **93.46** & 93.50 & 93.50 & 93.46 & 93.46 & 93.46 & 93.46 & 71.22 & 5484 & 62 \\\\ EfficientNetB0 & 91.37 & 91.47 & 91.47 & 91.37 & 91.37 & 91.38 & 91.38 & 27.66 & 2102 & 61 \\\\ ConvNeXt & 85.94 & 86.30 & 86.30 & 85.94 & 85.94 & 86.05 & 86.05 & 46.51 & 2279 & 34 \\\\ Vision Transformer & 81.02 & 81.18 & 81.18 & 81.02 & 81.02 & 80.98 & 80.98 & 50.21 & 2611 & 37 \\\\ MLP Mixer & 69.41 & 69.67 & 69.67 & 69.41 & 69.41 & 69.22 & 69.22 & 35.69 & 1285 & 21 \\\\ Swin Transformer & 88.73 & 88.82 & 88.82 & 88.73 & 88.73 & 88.71 & 88.71 & 144.87 & 14487 & 85 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table D.18: Detailed results for models trained from scratch on the Resisc45 dataset.
Figure D.17: Class distribution for the Resisc45 dataset.
\\begin{table} \\begin{tabular}{l r r r} \\hline \\hline Label & Precision & Recall & F1 score \\\\ \\hline airplane & 99.28 & 98.57 & 98.92 \\\\ airport & 95.89 & 100.00 & 97.90 \\\\ baseball\\_diamond & 97.89 & 99.29 & 98.58 \\\\ basketball\\_court & 97.22 & 100.00 & 98.59 \\\\ beach & 98.59 & 100.00 & 99.29 \\\\ bridge & 97.87 & 98.57 & 98.22 \\\\ chaparral & 97.90 & 100.00 & 98.94 \\\\ church & 90.85 & 92.14 & 91.49 \\\\ circular\\_farmland & 98.59 & 100.00 & 99.29 \\\\ cloud & 100.00 & 99.29 & 99.64 \\\\ commercial\\_area & 95.07 & 96.43 & 95.74 \\\\ dense\\_residential & 94.20 & 92.86 & 93.53 \\\\ desert & 97.86 & 97.86 & 97.86 \\\\ forest & 97.79 & 95.00 & 96.38 \\\\ freeway & 99.27 & 97.14 & 98.19 \\\\ golf\\_course & 98.58 & 99.29 & 98.93 \\\\ ground\\_track\\_field & 100.00 & 99.29 & 99.64 \\\\ harbor & 100.00 & 100.00 & 100.00 \\\\ industrial\\_area & 94.96 & 94.29 & 94.62 \\\\ intersection & 97.86 & 97.86 & 97.86 \\\\ island & 98.59 & 100.00 & 99.29 \\\\ lake & 93.75 & 96.43 & 95.07 \\\\ meadow & 95.00 & 95.00 & 95.00 \\\\ medium\\_residential & 91.61 & 93.57 & 92.58 \\\\ mobile\\_home\\_park & 97.22 & 100.00 & 98.59 \\\\ mountain & 95.74 & 96.43 & 96.09 \\\\ overpass & 99.25 & 94.29 & 96.70 \\\\ palace & 91.91 & 89.29 & 90.58 \\\\ parking\\_lot & 99.28 & 98.57 & 98.92 \\\\ railway & 93.84 & 97.86 & 95.80 \\\\ railway\\_station & 96.30 & 92.86 & 94.55 \\\\ rectangular\\_farmland & 91.95 & 97.86 & 94.81 \\\\ river & 99.24 & 92.86 & 95.94 \\\\ roundabout & 99.29 & 100.00 & 99.64 \\\\ runway & 100.00 & 95.71 & 97.81 \\\\ sea\\_ice & 100.00 & 98.57 & 99.28 \\\\ ship & 97.22 & 100.00 & 98.59 \\\\ snowberg & 98.59 & 100.00 & 99.29 \\\\ sparse\\_residential & 96.43 & 96.43 & 96.43 \\\\ stadium & 97.90 & 100.00 & 98.94 \\\\ storage\\_lank & 98.56 & 97.86 & 98.21 \\\\ tennis\\_court & 98.54 & 96.43 & 97.47 \\\\ terrace & 96.21 & 90.71 & 93.38 \\\\ thermal\\_power\\_station & 96.45 & 97.14 & 96.80 \\\\ wetland & 97.01 & 92.86 & 94.89 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table D.19: Per class results for the pre-trained Vision Transformer model on the Resisc45 dataset,Figure D.18: Confusion matrix for the pre-trained Vision Transformer model on the Resisc45 dataset. 
\\begin{table} \\begin{tabular}{l c c c c c c c c c c} \\hline \\hline Model \\textbackslash{Metric} & \\multicolumn{1}{c}{\\(\\textbackslash{Metric}\\)} & \\multicolumn{1}{c}{\\(\\textbackslash{Metric}\\)} & \\multicolumn{1}{c}{\\(\\textbackslash{Metric}\\)} & \\multicolumn{1}{c}{\\(\\textbackslash{Metric}\\)} & \\multicolumn{1}{c}{\\(\\textbackslash{Metric}\\)} & \\multicolumn{1}{c}{\\(\\textbackslash{Metric}\\)} & \\multicolumn{1}{c}{\\(\\textbackslash{Metric}\\)} & \\multicolumn{1}{c}{\\(\\textbackslash{Metric}\\)} \\\\ \\hline AlexNet & 99.35 & 99.13 & 99.36 & 99.06 & 99.35 & 99.09 & 99.35 & 34.84 & 1568 & 35 \\\\ VGG16 & 99.05 & 98.93 & 99.07 & 98.75 & 99.05 & 98.83 & 99.05 & 34.04 & 885 & 16 \\\\ ResNet50 & 99.68 & 99.53 & 99.68 & 99.54 & 99.68 & 99.53 & 99.68 & 33.69 & 1078 & 22 \\\\ ResNet152 & **99.86** & 99.85 & 99.86 & 99.82 & 99.86 & 99.83 & 99.86 & 51.90 & 1609 & 21 \\\\ DenseNet161 & 99.74 & 99.68 & 99.74 & 99.64 & 99.74 & 99.66 & 99.74 & 56.60 & 2717 & 38 \\\\ EfficientNetB0 & 99.72 & 99.63 & 99.72 & 99.65 & 99.72 & 99.64 & 99.72 & 33.50 & 1340 & 30 \\\\ ConvNeXt & 99.60 & 99.50 & 99.60 & 99.55 & 99.60 & 99.52 & 99.60 & 40.35 & 1977 & 39 \\\\ Vision Transformer & 99.76 & 99.75 & 99.76 & 99.71 & 99.76 & 99.73 & 99.76 & 41.18 & 1400 & 24 \\\\ MLP Mixer & 99.66 & 99.54 & 99.66 & 99.61 & 99.66 & 99.57 & 99.66 & 35.29 & 1235 & 25 \\\\ Swin Transformer & 99.68 & 99.54 & 99.68 & 99.62 & 99.68 & 99.57 & 99.68 & 113.14 & 4752 & 32 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 20: Detailed results for pre-trained models on the RSI-CB256 dataset. Figure 20: Class distribution for the RSI-CB526 dataset. \\begin{table} \\begin{tabular}{l r r r} \\hline \\hline Label & Precision & Recall & F1 score \\\\ \\hline airplane & 100.00 & 100.00 & 100.00 \\\\ airport\\_runway & 100.00 & 100.00 & 100.00 \\\\ artificial\\_grassland & 100.00 & 100.00 & 100.00 \\\\ avenue & 100.00 & 99.08 & 99.54 \\\\ bare\\_land & 98.30 & 100.00 & 99.14 \\\\ bridge & 98.95 & 100.00 & 99.47 \\\\ city\\_building & 100.00 & 100.00 & 100.00 \\\\ coastline & 100.00 & 98.91 & 99.45 \\\\ container & 100.00 & 99.24 & 99.62 \\\\ crossroads & 99.11 & 100.00 & 99.55 \\\\ dam & 100.00 & 100.00 & 100.00 \\\\ desert & 100.00 & 98.62 & 99.31 \\\\ dry\\_farm & 100.00 & 100.00 & 100.00 \\\\ forest & 100.00 & 100.00 & 100.00 \\\\ green\\_farmland & 100.00 & 100.00 & 100.00 \\\\ highway & 100.00 & 97.73 & 98.85 \\\\ histr & 100.00 & 100.00 & 100.00 \\\\ lakeshoe & 100.00 & 100.00 & 100.00 \\\\ mangrove & 100.00 & 100.00 & 100.00 \\\\ marina & 100.00 & 100.00 & 100.00 \\\\ mountain & 100.00 & 100.00 & 100.00 \\\\ parkinglot & 98.94 & 100.00 & 99.47 \\\\ pipeline & 100.00 & 100.00 & 100.00 \\\\ residents & 100.00 & 100.00 & 100.00 \\\\ river & 100.00 & 100.00 & 100.00 \\\\ river\\_protection\\_forest & 100.00 & 100.00 & 100.00 \\\\ sandbeach & 100.00 & 100.00 & 100.00 \\\\ saplin & 100.00 & 100.00 & 100.00 \\\\ sen & 99.52 & 100.00 & 99.76 \\\\ shrubwood & 100.00 & 100.00 & 100.00 \\\\ snow\\_mountain & 100.00 & 100.00 & 100.00 \\\\ sparse\\_forest & 100.00 & 100.00 & 100.00 \\\\ storage\\_room & 100.00 & 100.00 & 100.00 \\\\ stream & 100.00 & 100.00 & 100.00 \\\\ town & 100.00 & 100.00 & 100.00 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 22: Per class results for the pre-trained ResNet152 model on the RSI-CB256 dataset. 
\\begin{table} \\begin{tabular}{l r r r r r r r r r} \\hline \\hline Model \\textbackslash{Metric} & & & & & & & & & & \\\\ \\hline AlexNet & 97.35 & 96.55 & 97.39 & 96.54 & 97.35 & 96.51 & 97.35 & 34.99 & 2414 & 54 \\\\ VGG16 & 98.83 & 98.51 & 98.84 & 98.36 & 98.83 & 98.43 & 98.83 & 34.90 & 2757 & 64 \\\\ ResNet50 & 98.83 & 98.51 & 98.84 & 98.36 & 98.83 & 98.43 & 98.83 & 36.39 & 3166 & 72 \\\\ ResNet152 & **99.15** & 98.98 & 99.15 & 98.81 & 99.15 & 98.89 & 99.15 & 51.86 & 4472 & 72 \\\\ DenseNet161 & 99.13 & 98.80 & 99.13 & 98.71 & 99.13 & 98.75 & 99.13 & 56.75 & 4029 & 56 \\\\ EfficientNetB0 & 99.11 & 98.85 & 99.12 & 98.91 & 99.11 & 98.87 & 99.11 & 26.50 & 2123 & 71 \\\\ ConvNeXt & 98.44 & 97.75 & 98.45 & 97.74 & 98.44 & 97.73 & 98.44 & 36.93 & 2622 & 56 \\\\ Vision Transformer & 98.12 & 97.52 & 98.13 & 97.12 & 98.12 & 97.31 & 98.12 & 41.08 & 3204 & 63 \\\\ MLP Mixer & 98.42 & 97.81 & 98.43 & 97.80 & 98.42 & 97.79 & 98.42 & 29.00 & 2900 & 86 \\\\ Swin Transformer & 99.09 & 98.83 & 99.09 & 98.70 & 99.09 & 98.76 & 99.09 & 113.60 & 7157 & 48 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 21: Detailed results for models trained from scratch on the RSI-CB256 dataset. Figure D.21: Confusion matrix for the pre-trained ResNet152 model on the RSI-CB256 dataset. \\begin{table} \\begin{tabular}{l c c c c c c c c c c c} \\hline \\hline Model \\textbackslash{Metric} & \\multicolumn{1}{c}{\\(\\textbackslash{Metric}\\)} & \\multicolumn{1}{c}{\\(\\textbackslash{Metric}\\)} & \\multicolumn{1}{c}{\\(\\textbackslash{Metric}\\)} & \\multicolumn{1}{c}{\\(\\textbackslash{Metric}\\)} & \\multicolumn{1}{c}{\\(\\textbackslash{Metric}\\)} & \\multicolumn{1}{c}{\\(\\textbackslash{Metric}\\)} & \\multicolumn{1}{c}{\\(\\textbackslash{Metric}\\)} & \\multicolumn{1}{c}{\\(\\textbackslash{Metric}\\)} & \\multicolumn{1}{c}{\\(\\textbackslash{Metric}\\)} & \\multicolumn{1}{c}{\\(\\textbackslash{Metric}\\)} \\\\ \\hline AlexNet & 91.96 & 92.05 & 92.05 & 91.96 & 91.96 & 91.92 & 91.92 & 3.19 & 118 & 27 \\\\ VGG16 & 93.93 & 93.95 & 93.95 & 93.93 & 93.93 & 93.90 & 93.90 & 4.68 & 159 & 24 \\\\ ResNet50 & 95.00 & 95.08 & 95.08 & 95.00 & 95.00 & 94.99 & 94.99 & 3.90 & 121 & 21 \\\\ ResNet152 & 95.00 & 95.07 & 95.07 & 95.00 & 95.00 & 95.01 & 95.01 & 7.09 & 241 & 24 \\\\ DenseNet161 & 94.82 & 94.83 & 94.83 & 94.82 & 94.82 & 94.82 & 94.82 & 7.59 & 220 & 19 \\\\ EfficientNetB0 & 95.54 & 95.56 & 95.56 & 95.54 & 95.54 & 95.54 & 95.54 & 3.79 & 163 & 33 \\\\ ConvNext & 94.64 & 94.76 & 94.76 & 94.64 & 94.64 & 94.61 & 94.61 & 5.23 & 183 & 25 \\\\ Vision Transformer & **95.89** & 95.95 & 95.95 & 95.89 & 95.89 & 95.91 & 95.91 & 5.54 & 227 & 31 \\\\ MLP Mixer & 95.18 & 95.23 & 95.23 & 95.18 & 95.18 & 95.17 & 95.17 & 4.30 & 86 & 10 \\\\ Swin Transformer & 95.18 & 95.23 & 95.23 & 95.18 & 95.18 & 95.18 & 95.18 & 13.42 & 416 & 21 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 24: Detailed results for models trained from scratch on the RSSCN7 dataset. 
\\begin{table} \\begin{tabular}{l c c c c c c c c c c} \\hline \\hline Model \\textbackslash{}Metric & & & & & & & & & & \\\\ \\hline AlexNet & 80.54 & 80.64 & 80.64 & 80.54 & 80.54 & 80.45 & 80.45 & 6.97 & 697 & 85 \\\\ VGG16 & 81.61 & 81.50 & 81.50 & 81.61 & 81.61 & 81.41 & 81.41 & 6.74 & 526 & 63 \\\\ ResNet50 & 82.68 & 82.65 & 82.65 & 82.68 & 82.68 & 82.41 & 82.41 & 3.76 & 316 & 69 \\\\ ResNet152 & 82.68 & 82.65 & 82.65 & 82.68 & 82.68 & 82.61 & 82.41 & 82.41 & 6.90 & 407 & 44 \\\\ DenseNet161 & **87.32** & 87.55 & 87.55 & 87.32 & 87.32 & 87.38 & 87.38 & 8.50 & 595 & 55 \\\\ EfficientNetB0 & 83.93 & 84.03 & 83.03 & 83.93 & 83.93 & 83.87 & 83.87 & 3.65 & 365 & 93 \\\\ ConvNeXt & 83.04 & 82.84 & 83.04 & 83.04 & 82.90 & 82.90 & 5.43 & 543 & 87 \\\\ Vision Transformer & 86.07 & 86.17 & 86.07 & 86.07 & 86.00 & 86.00 & 5.52 & 453 & 67 \\\\ MLP Mixer & 83.21 & 83.29 & 83.29 & 83.21 & 83.21 & 83.17 & 83.17 & 4.08 & 408 & 100 \\\\ Swin Transformer & 82.50 & 82.59 & 82.50 & 82.50 & 82.50 & 82.50 & 82.50 & 13.78 & 951 & 54 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table D.23: Detailed results for pre-trained models on the RSSCN7 dataset.
Figure D.23: Class distribution for the RSSCN7 dataset.
\\begin{table} \\begin{tabular}{l c c c} \\hline \\hline Label & Precision & Recall & F1 score \\\\ \\hline farm\\_land & 97.40 & 93.75 & 95.54 \\\\ forest & 100.00 & 98.75 & 99.37 \\\\ grass\\_land & 91.57 & 95.00 & 93.25 \\\\ industrial\\_region & 92.59 & 93.75 & 93.17 \\\\ parking\\_lot & 94.94 & 93.75 & 94.34 \\\\ residential\\_region & 100.00 & 98.75 & 99.37 \\\\ river\\_lake & 95.12 & 97.50 & 96.30 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table D.25: Per class results for the pre-trained Vision Transformer model on the RSSCN7 dataset.
Figure D.24: Confusion matrix for the pre-trained Vision Transformer model on the RSSCN7 dataset.
### SAT-6
SAT-6 [47] consists of a total of 405000 image patches, each of size 28x28 pixels, covering 6 land cover classes: barren land, trees, grassland, roads, buildings and water bodies (Figure D.25). The authors of the dataset selected 324000 images for the training dataset and 81000 as the testing dataset. Additionally, we have selected 20% of the images from the training dataset to create the validation split. The training and test datasets were selected from disjoint National Agriculture Imagery Program (NAIP) tiles. The specifications for the various land cover classes of SAT-6 were adopted from those used in the National Land Cover Data (NLCD) algorithm. The class distribution of the train, test and validation splits is presented on Figure D.26. Detailed results for all pre-trained models are shown on Table D.26 and for all the models learned from scratch are presented on Table D.27. All pre-trained models obtained excellent results on the dataset, with ResNet50, ResNet152, DenseNet161, ConvNeXt, Vision Transformer, MLP-Mixer and Swin Transformer achieving 100% accuracy.
The results on a class level are shown on Table D.28 along with a confusion matrix on Figure D.27 for the DenseNet161 model.
\\begin{table} \\begin{tabular}{l c c c c c c c c c c} \\hline \\hline Model \\textbackslash{}Metric & & & & & & & & & & \\\\ \\hline AlexNet & 99.98 & 99.98 & 99.98 & 99.97 & 99.98 & 99.97 & 99.98 & 92.48 & 5364 & 48 \\\\ VGG16 & 99.99 & 99.99 & 99.99 & 99.99 & 99.99 & 99.99 & 99.99 & 550.04 & 29702 & 44 \\\\ ResNet50 & 100.00 & 100.00 & 100.00 & 100.00 & 100.00 & 100.00 & 100.00 & 410.33 & 37340 & 81 \\\\ ResNet152 & 100.00 & 100.00 & 100.00 & 100.00 & 100.00 & 100.00 & 100.00 & 872.87 & 61974 & 61 \\\\ DenseNet161 & 100.00 & 100.00 & 100.00 & 100.00 & 100.00 & 100.00 & 100.00 & 970.39 & 55312 & 47 \\\\ EfficientNetB0 & 99.99 & 99.99 & 99.99 & 99.99 & 99.99 & 99.99 & 99.99 & 363.00 & 8712 & 14 \\\\ ConvNeXt & 100.00 & 100.00 & 100.00 & 99.99 & 100.00 & 100.00 & 100.00 & 630.78 & 42262 & 57 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table D.26: Detailed results for pre-trained models on the SAT6 dataset.
\\begin{table} \\begin{tabular}{l c c c} \\hline \\hline Label & Precision & Recall & F1 score \\\\ \\hline buildings & 100.00 & 100.00 & 100.00 \\\\ barren land & 100.00 & 100.00 & 100.00 \\\\ trees & 100.00 & 100.00 & 100.00 \\\\ grassland & 100.00 & 100.00 & 100.00 \\\\ roads & 100.00 & 100.00 & 100.00 \\\\ water bodies & 100.00 & 100.00 & 100.00 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table D.28: Per class results for the pre-trained DenseNet161 model on the SAT6 dataset.
Figure D.27: Confusion matrix for the pre-trained DenseNet161 model on the SAT6 dataset.
### SIRI-WHU
The SIRI-WHU [48] is a scene classification dataset comprising 2400 images organized into 12 classes. Each class contains 200 images with a 2m spatial resolution and a size of 200x200 pixels (Figure D.28). It was collected from Google Earth (Google Inc.) by the Intelligent Data Extraction and Analysis of Remote Sensing (RS_IDEA) Group at Wuhan University. The 12 land-use classes are agriculture, commercial, harbor, idle land, industrial, meadow, overpass, park, pond, residential, river, and water. This dataset mainly covers urban areas in China, which means it lacks diversity and is less challenging. The class distribution is presented on Figure D.29.
Detailed results for all pre-trained models are shown on Table D.29 and for all the models learned from scratch are presented on Table D.30.
\\begin{table} \\begin{tabular}{l c c c c c c c c c c} \\hline \\hline Model \\textbackslash{}Metric & & & & & & & & & & \\\\ \\hline AlexNet & 92.29 & 92.64 & 92.64 & 92.29 & 92.29 & 92.31 & 92.31 & 4.28 & 197 & 36 \\\\ VGG16 & 93.96 & 94.08 & 94.08 & 93.96 & 93.96 & 93.96 & 93.96 & 4.98 & 214 & 33 \\\\ ResNet50 & 95.00 & 95.12 & 95.12 & 95.00 & 95.00 & 95.01 & 95.01 & 4.66 & 191 & 31 \\\\ ResNet152 & **96.25** & 96.27 & 96.27 & 96.25 & 96.25 & 96.24 & 96.24 & 6.65 & 226 & 24 \\\\ DenseNet161 & 95.63 & 95.64 & 95.64 & 95.63 & 95.63 & 95.61 & 95.61 & 7.30 & 365 & 40 \\\\ EfficientNetB0 & 95.00 & 95.09 & 95.09 & 95.00 & 95.00 & 95.01 & 95.01 & 4.57 & 329 & 62 \\\\ ConvNeXt & 96.25 & 96.34 & 96.34 & 96.25 & 96.25 & 96.24 & 96.24 & 5.64 & 203 & 26 \\\\ Vision Transformer & 95.63 & 95.73 & 95.73 & 95.63 & 95.62 & 95.63 & 95.63 & 5.37 & 322 & 50 \\\\ MLP Mixer & 95.21 & 95.36 & 95.36 & 95.21 & 95.21 & 95.23 & 95.23 & 4.55 & 150 & 23 \\\\ Swin Transformer & 95.63 & 95.60 & 95.60 & 95.63 & 95.62 & 95.57 & 95.57 & 11.87 & 534 & 35 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table D.29: Detailed results for pre-trained models on the SIRI-WHU dataset.
Figure D.29: Class distribution for the SIRI-WHU dataset.
\\begin{table} \\begin{tabular}{l c c c} \\hline \\hline Label & Precision & Recall & F1 score \\\\ \\hline agriculture & 100.00 & 100.00 & 100.00 \\\\ commercial & 100.00 & 97.50 & 98.73 \\\\ harbor & 90.48 & 95.00 & 92.68 \\\\ idle\\_land & 97.50 & 97.50 & 97.50 \\\\ industrial & 100.00 & 97.50 & 98.73 \\\\ meadow & 92.11 & 87.50 & 89.74 \\\\ overpass & 95.24 & 100.00 & 97.56 \\\\ park & 92.31 & 90.00 & 91.14 \\\\ pond & 100.00 & 100.00 & 100.00 \\\\ residential & 97.56 & 100.00 & 98.77 \\\\ river & 92.50 & 92.50 & 92.50 \\\\ water & 97.50 & 97.50 & 97.50 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table D.31: Per class results for the pre-trained ResNet152 model on the SIRI-WHU dataset.
Figure D.32: Class distribution for the CLRS dataset.
\\begin{table} \\begin{tabular}{l c c c} \\hline \\hline Label & Precision & Recall & F1 score \\\\ \\hline airport & 97.48 & 96.67 & 97.07 \\\\ bare-land & 92.00 & 95.83 & 93.88 \\\\ beach & 99.15 & 97.50 & 98.32 \\\\ bridge & 90.91 & 91.67 & 91.29 \\\\ commercial & 79.84 & 85.83 & 82.73 \\\\ desert & 97.50 & 97.50 & 97.50 \\\\ farmland & 93.70 & 99.17 & 96.36 \\\\ forest & 100.00 & 100.00 & 100.00 \\\\ golf-course & 94.96 & 94.17 & 94.56 \\\\ highway & 92.11 & 87.50 & 89.74 \\\\ industrial & 88.79 & 85.83 & 87.29 \\\\ meadow & 96.72 & 98.33 & 97.52 \\\\ mountain & 99.15 & 97.50 & 98.32 \\\\ overpass & 89.68 & 94.17 & 91.87 \\\\ park & 85.60 & 89.17 & 87.35 \\\\ parking & 98.25 & 93.33 & 95.73 \\\\ playground & 95.04 & 95.83 & 95.44 \\\\ port & 94.74 & 90.00 & 92.31 \\\\ railway & 86.29 & 89.17 & 87.70 \\\\ railway-station & 88.79 & 85.83 & 87.29 \\\\ residential & 90.68 & 89.17 & 89.92 \\\\ river & 90.32 & 93.33 & 91.80 \\\\ runway & 98.33 & 98.33 & 98.33 \\\\ stadium & 95.61 & 90.83 & 93.16 \\\\ storage-tank & 96.55 & 93.33 & 94.92 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table D.34: Per class results for the pre-trained Vision Transformer model on the CLRS dataset.
\\begin{table} \\begin{tabular}{l c c c c c c c c c c} \\hline \\hline Model \\textbackslash{}Metric & & & & & & & & & & \\\\ \\hline AlexNet & 71.40 & 71.59 & 71.59 & 71.40 & 71.40 & 71.33 & 71.33 & 20.35 & 2035 & 92 \\\\ VGG16 & 76.07 & 76.20 & 76.20 & 76.07 & 76.07 & 76.00 & 76.00 & 19.33 & 1450 & 60 \\\\ ResNet50 & 85.57 & 85.72 & 85.72 & 85.57 & 85.57 & 85.57 & 19.43 & 1788 & 77 \\\\ ResNet152 & 82.30 & 82.47 & 82.47 & 82.30 & 82.30 & 82.19 & 82.19 & 32.05 & 2373 & 60 \\\\ DenseNet161 & **86.17** & 86.29 & 86.29 & 86.17 & 86.17 & 86.18 & 86.18 & 35.81 & 2757 & 62 \\\\ EfficientNetB0 & 82.27 & 82.55 & 82.55 & 82.27 & 82.27 & 82.31 & 82.31 & 20.71 & 1512 & 58 \\\\ ConvNeXt & 69.17 & 69.02 & 69.02 & 69.17 & 69.17 & 69.01 & 69.01 & 23.09 & 2309 & 96 \\\\ Vision Transformer & 65.47 & 66.41 & 66.41 & 65.47 & 65.47 & 65.49 & 65.49 & 24.96 & 1173 & 32 \\\\ MLP Mixer & 61.13 & 62.18 & 62.18 & 61.13 & 61.13 & 60.87 & 60.87 & 17.98 & 809 & 30 \\\\ Swin Transformer & 80.00 & 80.10 & 80.10 & 80.00 & 80.00 & 79.91 & 79.91 & 69.19 & 5535 & 65 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table D.33: Detailed results for models trained from scratch on the CLRS dataset.
Figure D.33: Confusion matrix for the pre-trained Vision Transformer model on the CLRS dataset.
#### D.12 RSD46-WHU
RSD46-WHU is a large-scale open dataset for scene classification in remote sensing images. The dataset is manually collected from Google Earth and Tianditu. The ground resolution of most classes is 0.5m, and the others are about 2m. There are 500-3000 images in each class. The RSD46-WHU dataset contains around 117000 images with 46 classes (Figure D.34). The images are not evenly distributed between classes, and each class contains between 428 and 3000 images. The dataset comes with predefined train and test splits.
For creating \\begin{table} \\begin{tabular}{l c c c c c c c c c c} Model \\(\\backslash\\)Metric & \\multicolumn{1}{c}{\\(\\backslash\\)Metric} & \\multicolumn{1}{c}{\\(\\backslash\\)Metric} & \\multicolumn{1}{c}{\\(\\backslash\\)Metric} & \\multicolumn{1}{c}{\\(\\backslash\\)Metric} & \\multicolumn{1}{c}{\\(\\backslash\\)Metric} & \\multicolumn{1}{c}{\\(\\backslash\\)Metric} & \\multicolumn{1}{c}{\\(\\backslash\\)Metric} & \\multicolumn{1}{c}{\\(\\backslash\\)Metric} \\\\ \\hline AlexNet & 90.65 & 90.43 & 90.61 & 90.35 & 90.65 & 90.36 & 90.61 & 58.03 & 2031 & 25 \\\\ VGG16 & 92.42 & 92.30 & 92.38 & 92.25 & 92.42 & 92.22 & 92.37 & 158.32 & 4433 & 18 \\\\ ResNet50 & 94.16 & 94.07 & 94.15 & 94.18 & 94.16 & 94.11 & 94.14 & 123.27 & 3205 & 16 \\\\ ResNet152 & 94.40 & 94.33 & 94.40 & 94.41 & 94.40 & 94.36 & 94.39 & 269.45 & 7814 & 19 \\\\ DenseNet161 & **94.51** & 94.36 & 94.49 & 94.41 & 94.51 & 94.36 & 94.48 & 297.07 & 6847 & 13 \\\\ EfficientNetB0 & 93.39 & 93.20 & 93.38 & 93.39 & 93.39 & 93.26 & 93.35 & 111.55 & 2231 & 10 \\\\ ConvNeXt & 93.63 & 93.61 & 93.67 & 93.47 & 93.63 & 93.48 & 93.60 & 196.20 & 3924 & 10 \\\\ Vision Transformer & 94.24 & 94.38 & 94.23 & 94.08 & 94.24 & 94.16 & 94.20 & 210.37 & 3997 & 9 \\\\ MLP Mixer & 93.67 & 93.77 & 93.69 & 93.47 & 93.67 & 93.55 & 93.65 & 148.25 & 3558 & 14 \\\\ Swin Transformer & 93.54 & 93.48 & 93.62 & 93.62 & 93.54 & 93.50 & 93.52 & 599.42 & 11389 & 9 \\\\ \\hline \\end{tabular} \\end{table} Table 35: Detailed results for pre-trained models on the RSD46-WHU dataset. Figure D.35: Class distribution for the RSD46-WHU dataset. \\begin{table} \\begin{tabular}{l c c c c} \\hline \\hline Label & Precision & Recall & F1 score \\\\ \\hline Airplane & 99.56 & 99.78 & 99.67 \\\\ Airport & 98.39 & 99.19 & 98.79 \\\\ Artificial dense forest land & 87.11 & 86.90 & 87.01 \\\\ Artificial sparse forest land & 87.06 & 82.55 & 84.75 \\\\ Bare land & 94.12 & 96.00 & 95.05 \\\\ Basketball court & 90.37 & 92.39 & 91.37 \\\\ Blue structured factory building & 96.57 & 97.83 & 97.19 \\\\ Building & 82.44 & 83.40 & 82.92 \\\\ Construction site & 82.11 & 79.43 & 80.75 \\\\ Cross river bridge & 99.70 & 99.70 & 99.70 \\\\ Crossroads & 97.74 & 98.70 & 98.22 \\\\ Dense tall building & 94.35 & 94.35 & 94.35 \\\\ Dock & 98.94 & 98.73 & 98.83 \\\\ Fish pond & 97.52 & 97.93 & 97.72 \\\\ Footbridge & 99.49 & 99.24 & 99.36 \\\\ Graff & 98.37 & 93.79 & 96.03 \\\\ Grassland & 95.07 & 95.52 & 95.29 \\\\ Low scattered building & 96.15 & 97.49 & 96.82 \\\\ Lreguelar farmland & 97.68 & 98.51 & 98.09 \\\\ Medium density scattered building & 76.98 & 68.15 & 72.30 \\\\ Medium density structured building & 89.58 & 92.11 & 90.82 \\\\ Natural dense forest land & 95.40 & 96.89 & 96.14 \\\\ Natural sparse forest land & 93.16 & 97.98 & 95.51 \\\\ Oiltank & 90.66 & 96.68 & 93.57 \\\\ Overpass & 99.19 & 98.13 & 98.66 \\\\ Parking lot & 96.49 & 96.07 & 96.28 \\\\ Plasticgreenhouse & 100.00 & 93.34 & 99.67 \\\\ Playground & 96.85 & 95.84 & 96.34 \\\\ Railway & 99.14 & 99.14 & 99.14 \\\\ Red structured factory building & 97.78 & 98.66 & 98.22 \\\\ Refinery & 92.84 & 87.72 & 90.21 \\\\ Regular farmland & 95.20 & 94.80 & 95.00 \\\\ Scattered blue roof factory building & 94.44 & 96.72 & 95.57 \\\\ Scattered red roof factory building & 93.28 & 97.73 & 95.45 \\\\ Sewage plant-type-one & 95.06 & 96.25 & 95.65 \\\\ Sewage plant-type-two & 88.73 & 98.44 & 93.33 \\\\ Ship & 99.56 & 99.33 & 99.45 \\\\ Solar power station & 99.78 & 99.78 & 99.78 \\\\ Sparse residential area & 91.42 & 88.14 & 89.75 \\\\ Square & 
94.52 & 97.38 & 95.93 \\\\ Steelsmelter & 90.48 & 90.89 & 90.68 \\\\ Storage land & 99.03 & 96.52 & 97.76 \\\\ Tennis court & 95.93 & 91.38 & 93.60 \\\\ Thermal power plant & 88.95 & 85.19 & 87.03 \\\\ Vegetable plot & 94.12 & 92.59 & 93.35 \\\\ Water & 99.02 & 99.51 & 99.26 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table D.36: Detailed results for models trained from scratch on the RSD46-WHU dataset. \\begin{table} \\begin{tabular}{l c c c c c c c c c} \\hline \\hline Model \\(\\backslash\\)Metric & \\(\\backslash\\)Metric & \\(\\backslash\\)Metric & \\(\\backslash\\)Metric & \\(\\backslash\\)Metric & \\(\\backslash\\)Metric & \\(\\backslash\\)Metric & \\(\\backslash\\)Metric & \\(\\backslash\\)Metric & \\(\\backslash\\)Metric \\\\ \\hline AlexNet & 86.03 & 85.83 & 86.03 & 85.67 & 86.03 & 85.71 & 85.99 & 58.84 & 3707 & 48 \\\\ VGG16 & 88.62 & 88.37 & 88.56 & 88.37 & 88.62 & 88.32 & 88.55 & 162.89 & 8796 & 39 \\\\ ResNet50 & 90.55 & 90.40 & 90.53 & 90.26 & 90.55 & 90.30 & 90.52 & 127.53 & 8672 & 53 \\\\ ResNet152 & 89.94 & 89.84 & 89.99 & 89.77 & 89.94 & 89.78 & 89.95 & 272.70 & 19907 & 58 \\\\ DenseNet161 & **92.21** & 92.11 & 92.23 & 92.03 & 92.21 & 92.06 & 92.21 & 301.16 & 15318 & 36 \\\\ EfficientWeb & 90.61 & 90.57 & 90.61 & 90.25 & 90.61 & 90.37 & 90.58 & 113.93 & 6446 & 40 \\\\ ConvNeXt & 88.69 & 88.66 & 88.67 & 88.33 & 88.69 & 88.46 & 88.66 & 194.93 & 11891 & 46 \\\\ Vision Transformer & 86.47 & 86.22 & 86.45 & 85.94 & 86.47 & 86.02 & 86.42 & 211.93 & 9325 & 29 \\\\ MLP Mixer & 81.25 & 81.56 & 81.59 & 80.11 & 81.25 & 80.51 & 81.19 & 148.42 & 4149 & 12 \\\\ Swin Transformer & 91.81 & 91.50 & 91.79 & 91.48 & 91.81 & 91.47 & 91.79 & 588.25 & 41766 & 56 \\\\ \\hline \\hline \\end{tabular} \\end{table} D.37: Per class results for the pre-trained DenseNet161 model on the RSD46-WHU dataset. Figure D.36: Confusion matrix for the pre-trained DenseNet161 model on the RSD46-WHU dataset. ### Brazilian Coffee Scenes dataset The Brazilian Coffee Scenes dataset [52] consists of only two classes: coffee and non-coffee class. Each class has 1438 images with 64x64 pixels cropped from SPOT satellite images over four counties in the state of Minas Gerais, Brazil: Arceburgo, Guaranesia, Guaxupe, and Monte Santo (Figure D.37). The images in the dataset are in green, red and near-infrared spectral bands, since these are most useful and representative for distinguishing vegetation areas. The dataset is manually annotated by agricultural researchers. Images which contain coffee pixels in at least 85% of the image were assigned to the coffee class. Image with less than 10% of coffee pixels are assigned to the non-coffee class. The number of classes and the degree to which the data is tailored, should make this less challenging dataset. The class distribution is presented on Figure D.38. Detailed results for all pre-trained models are shown on Table D.38 and for all the models learned from scratch are presented on Table D.39. \\begin{table} \\begin{tabular}{l c c c c c c c c c c} \\hline \\hline Model \\textbackslash{Metric} & \\multicolumn{1}{c}{\\multirow{2}{*}{ \\begin{tabular}{c} \\\\ \\end{tabular} }} \\begin{table} \\begin{tabular}{l r r r} \\hline \\hline Label & Precision & Recall & F1 score \\\\ \\hline coffee & 93.10 & 93.75 & 93.43 \\\\ noncoffee & 93.71 & 93.06 & 93.38 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table D.40: Per class results for Swin Transformer on the Brazilian Coffee Scenes dataset. 
Figure D.39: Confusion matrix for Swin Transformer on the Brazilian Coffee Scenes dataset. ### Optimal 31 The Optimal 31 dataset [51] is for remote sensing image scene classification. The dataset contains 31 classes, each class contains 60 images with a size of 256x256 pixels. Totaling 1860 aerial RGB images (Figure D.40). These classes include: airplane, airport, basketball court, baseball field, bridge, beach, bushes, crossroads, church, round farmland, business district, desert, harbor, dense houses, factory, forest, freeway, golf field, island, lake, meadow, medium houses, mountain, mobile house area, overpass, playground, parking lot, roundabout, runway, railway, and square farmland. It is considered challenging due to small number of images dispersed across many classes. We have generated train, test and validation spits for our study and their class distribution is presented on Figure D.41. Detailed results for all pre-trained models are shown on Table D.41 and for all the models learned from scratch are presented on Table D.42. The best performing model is the pre-trained Vision Transformer model. The results on a class level are show on Table D.4 Figure D.41: Class distribution for the Optimal 31 dataset. \\begin{table} \\begin{tabular}{l c c c c c c c c c c} \\hline \\hline Model \\textbackslash{}Metric & & & & & & & & & & & \\\\ \\hline AlexNet & 80.91 & 81.90 & 81.90 & 80.91 & 80.91 & 80.74 & 80.74 & 1.10 & 45 & 31 \\\\ VGG16 & 88.71 & 89.58 & 89.58 & 88.71 & 88.71 & 88.79 & 88.79 & 2.97 & 95 & 22 \\\\ ResNet50 & 92.20 & 92.85 & 92.85 & 92.20 & 92.20 & 92.25 & 92.25 & 2.58 & 129 & 40 \\\\ ResNet152 & 92.47 & 92.99 & 92.99 & 92.47 & 92.47 & 92.47 & 92.47 & 4.62 & 217 & 37 \\\\ DenseNet161 & 94.35 & 94.92 & 94.92 & 94.35 & 94.35 & 94.43 & 94.43 & 5.02 & 306 & 51 \\\\ EfficientNetB0 & 91.67 & 92.04 & 92.04 & 91.67 & 91.67 & 91.60 & 91.60 & 2.25 & 187 & 73 \\\\ ConvNeXt & 93.01 & 93.33 & 93.33 & 93.01 & 93.01 & 92.99 & 92.99 & 3.50 & 203 & 48 \\\\ Vision Transformer & **94.62** & 94.85 & 94.85 & 94.62 & 94.62 & 94.56 & 94.56 & 3.71 & 126 & 24 \\\\ MLP Mixer & 92.74 & 93.17 & 93.17 & 92.74 & 92.74 & 92.74 & 92.74 & 2.82 & 141 & 40 \\\\ Swin Transformer & 92.47 & 92.92 & 92.92 & 92.47 & 92.47 & 92.51 & 92.51 & 9.19 & 340 & 27 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table D.41: Detailed results for pre-trained models on the Optimal 31 dataset. 
\\begin{table} \\begin{tabular}{l c c c} \\hline \\hline Label & Precision & Recall & F1 score \\\\ \\hline airplane & 100.00 & 100.00 & 100.00 \\\\ airport & 100.00 & 100.00 & 100.00 \\\\ baseball\\_diamond & 92.31 & 100.00 & 96.00 \\\\ basketball\\_court & 100.00 & 100.00 & 100.00 \\\\ beach & 100.00 & 100.00 & 100.00 \\\\ bridge & 100.00 & 91.67 & 95.65 \\\\ chaparral & 100.00 & 100.00 & 100.00 \\\\ church & 100.00 & 91.67 & 95.65 \\\\ circular\\_farmland & 92.31 & 100.00 & 96.00 \\\\ commercial\\_area & 85.71 & 100.00 & 92.31 \\\\ dense\\_residential & 84.62 & 91.67 & 88.00 \\\\ desert & 100.00 & 91.67 & 95.65 \\\\ forest & 91.67 & 91.67 & 91.67 \\\\ freeway & 100.00 & 91.67 & 95.65 \\\\ golf\\_course & 91.67 & 91.67 & 91.67 \\\\ ground\\_track\\_field & 92.31 & 100.00 & 96.00 \\\\ harbor & 85.71 & 100.00 & 92.31 \\\\ industrial\\_area & 84.62 & 91.67 & 88.00 \\\\ intersection & 100.00 & 100.00 & 100.00 \\\\ island & 100.00 & 100.00 & 100.00 \\\\ lake & 91.67 & 91.67 & 91.67 \\\\ meadow & 83.33 & 83.33 & 83.33 \\\\ medium\\_residential & 88.89 & 66.67 & 76.19 \\\\ mobile\\_home\\_park & 90.91 & 83.33 & 86.96 \\\\ mountain & 100.00 & 100.00 & 100.00 \\\\ overpass & 92.31 & 100.00 & 96.00 \\\\ parking\\_lot & 100.00 & 100.00 & 100.00 \\\\ railway & 92.31 & 100.00 & 96.00 \\\\ rectangular\\_farmland & 100.00 & 83.33 & 90.91 \\\\ roundabout & 100.00 & 100.00 & 100.00 \\\\ runway & 100.00 & 91.67 & 95.65 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 42: Detailed results for models trained from scratch on the Optimal 31 dataset. Figure D.42: Confusion matrix for the pre-trained Vision Transformer model on the Optimal 31 dataset. ### So2Sat This dataset [53] consists of co-registered synthetic aperture radar and multispectral optical image patches acquired by the Sentinel-1 and Sentinel-2 remote sensing satellites, and the corresponding local climate zones (LCZ) label. So2Sat has a total of 400673 images of size 32x32 pixels organized into 17 classes. Sample images are shown on Figure D.40. The dataset is distributed over 42 cities across different continents and cultural regions of the world. The classes include: compact high rise, compact middle rise, compact low rise, open high rise, open middle rise, open low rise, lightweight low rise, large low rise, sparsely built, heavy industry, dense trees, scattered trees, bush scrub, low plants, bare rock or paved, bare soil or sand, and water. The creators of So2Sat have provided different versions for train, test and validation splits for the dataset. The class distribution of the splits is depicted on Figure D.44. We are using Version 25 with only Sentinel 2 data. Version 2 provides a training set covering 42 cities around the world, a validation set covering western half of 10 other cities covering 10 cultural zones and a test set containing the eastern half of the 10 other cities. Footnote 5: available at So2Sat-LCZ42 repo [https://github.com/zhu-xlab/So2Sat-LCZ42](https://github.com/zhu-xlab/So2Sat-LCZ42). Detailed results for all pre-trained models are shown on Table D.44 and for all the models learned from scratch are presented on Table D.45. The best performing model is the pre-trained Vision Transformer model. The results on a class level are show on Table D.46 along with a confusion matrix on Figure D.45. Figure D.44: Class distribution for the So2Sat dataset. Figure D.43: Example images with labels from the So2Sat dataset. 
\\begin{table} \\begin{tabular}{l c c c c} \\hline \\hline Label & Precision & Recall & F1 score \\\\ \\hline Compact high,rise & 62.37 & 21.80 & 32.31 \\\\ Compact middle\\_rise & 70.74 & 61.49 & 65.79 \\\\ Compact low\\_rise & 68.52 & 75.33 & 71.77 \\\\ Open high\\_rise & 76.54 & 59.39 & 66.89 \\\\ Open middle\\_rise & 56.12 & 59.50 & 57.76 \\\\ Open low\\_rise & 47.29 & 64.36 & 54.52 \\\\ Lightweight low\\_rise & 57.14 & 39.76 & 46.89 \\\\ Large low\\_rise & 87.11 & 84.87 & 85.98 \\\\ Sparsely built & 67.30 & 45.80 & 54.51 \\\\ Heavy industry & 39.39 & 69.49 & 50.28 \\\\ Dense trees & 97.11 & 73.86 & 83.91 \\\\ Scattered trees & 26.16 & 55.89 & 35.64 \\\\ Bush or scrub & 15.22 & 1.80 & 3.22 \\\\ Low plants & 60.68 & 90.55 & 72.66 \\\\ Bare rock or paved & 79.38 & 37.56 & 50.99 \\\\ Bare soil or sand & 62.05 & 32.87 & 42.97 \\\\ Water & 97.10 & 97.60 & 97.35 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table D.45: Detailed results for models trained from scratch on the So2Sat dataset. \\begin{table} \\begin{tabular}{l c c c c c c c c c} \\hline \\hline Model \\textbackslash{Metric} & & & & & & & & & & \\\\ \\hline AlexNet & 59.20 & 46.01 & 59.31 & 42.70 & 59.20 & 41.57 & 57.59 & 158.09 & 1790 & 1 \\\\ VGG16 & 65.38 & 57.30 & 64.34 & 50.00 & 65.38 & 49.64 & 63.00 & 716.09 & 7877 & 1 \\\\ ResNet50 & 61.90 & 51.01 & 60.88 & 48.45 & 61.90 & 48.35 & 60.41 & 565.55 & 6221 & 1 \\\\ ResNet152 & 65.17 & 56.66 & 64.48 & 53.34 & 65.17 & 52.93 & 63.75 & 1,200.64 & 13207 & 1 \\\\ DenseNet161 & 65.76 & 55.47 & 64.58 & 48.59 & 65.76 & 48.67 & 63.81 & 1,324.09 & 14784 & 1 \\\\ EfficientNetBo & 65.80 & 56.30 & 65.64 & 53.37 & 65.80 & 53.65 & 64.77 & 510.45 & 5615 & 1 \\\\ ConvNeXt & 66.17 & 59.11 & 66.87 & 54.87 & 66.17 & 54.71 & 65.56 & 853.91 & 9393 & 1 \\\\ Vision Transformer & **68.85** & 62.95 & 69.64 & 57.17 & 68.85 & 57.26 & 67.48 & 925.09 & 10176 & 1 \\\\ MLP Mixer & 67.07 & 63.74 & 68.25 & 51.34 & 67.07 & 51.94 & 65.66 & 643.91 & 7278 & 1 \\\\ Swin Transformer & 65.95 & 59.11 & 66.82 & 53.20 & 65.95 & 52.89 & 64.60 & 2,636.45 & 29001 & 1 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table D.45: Detailed results for models trained from scratch on the So2Sat dataset. \\begin{table} \\begin{tabular}{l c c c c c c c c c} \\hline \\hline Model \\textbackslash{Metric} & & & & & & & & & & \\\\ \\hline AlexNet & 56.51 & 41.86 & 54.97 & 40.70 & 56.51 & 39.72 & 54.65 & 174.74 & 3320 & 4 \\\\ VGG16 & 62.27 & 51.36 & 61.08 & 45.40 & 62.27 & 45.54 & 59.78 & 723.72 & 13027 & 3 \\\\ ResNet50 & 59.59 & 46.54 & 59.35 & 43.94 & 59.59 & 43.37 & 58.18 & 558.79 & 10617 & 4 \\\\ ResNet152 & 61.48 & 49.43 & 62.30 & 48.71 & 61.48 & 46.98 & 60.22 & 1,198.37 & 22769 & 4 \\\\ DenseNet161 & 55.43 & 48.87 & 60.98 & 42.53 & 55.43 & 40.76 & 54.11 & 1,325.67 & 23862 & 3 \\\\ EfficientNetBo & **65.17** & 53.75 & 64.00 & 50.34 & 65.17 & 50.36 & 63.88 & 499.21 & 11981 & 9 \\\\ ConvNeXt & 60.15 & 50.97 & 61.52 & 48.03 & 60.15 & 47.17 & 59.73 & 851.06 & 15319 & 3 \\\\ Vision Transformer & 55.33 & 43.56 & 55.31 & 37.42 & 55.33 & 37.01 & 52.20 & 926.50 & 14824 & 1 \\\\ MLP Mixer & 53.58 & 42.31 & 53.80 & 36.73 & 53.58 & 36.61 & 51.19 & 651.31 & 10421 & 1 \\\\ Swin Transformer & 57.13 & 47.93 & 56.48 & 36.29 & 57.13 & 35.29 & 52.28 & 2,631.44 & 42103 & 1 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table D.46: Per class results for the pre-trained ViT model on the So2Sat dataset. Figure D.46: Train and validation learning curves showing an over-fit of a ViT model on the So2Sat dataset. 
Figure D.45: Confusion matrix for the pre-trained Vision Transformer model on the So2Sat dataset. #### d.16 UC Merced multi-label The UC Merced dataset was extended in [54] for multi-label classification. The dataset still has the same number of 2100 images of 256x256 pixels size (Figure D.47). The difference is in the number of classes (labels) and the number of annotations (classes) an image belongs to. Each image in the dataset has been manually labeled with one or more (maximum seven) labels based on visual inspection in order to create the ground truth data (the multilabels are available at [http://bigearth.eu/datasets](http://bigearth.eu/datasets)). The total number of distinct class labels in the dataset is 17. The labels are: airplane, bare-soil, buildings, cars, chaparral, court, dock, field, grass, mobile-home, pavement, sand, sea, ship, tanks, trees, water. The average number of labels per image is 3.3. This dataset has no predefined train-test splits by the authors. For our study, we made appropriate splits and their distribution is presented on Figure D.48. Detailed results for all pre-trained models are shown on Table D.47 and for all the models learned from scratch are presented on Table D.48. The best performing model is the pre-trained Swin Transformer model. The results on a class level are show on Table D.49 along with a \\begin{table} \\begin{tabular}{l c c c c c c c c c c c c c c} \\hline \\hline & & & & & & & & & & & & & & & & & \\\\ \\hline AlexNet & 92.64 & 82.78 & 88.47 & 83.14 & 86.32 & 86.07 & 86.23 & 84.47 & 86.91 & 84.52 & 1.31 & 71 & 44 \\\\ VGG16 & 92.85 & 86.43 & 91.38 & 86.61 & 86.37 & 87.84 & 86.37 & 86.40 & 89.33 & 86.39 & 3.00 & 132 & 30 \\\\ ResNet50 & 95.66 & 86.19 & 92.37 & 86.53 & 87.71 & 88.84 & 87.71 & 86.94 & 90.23 & 86.95 & 2.76 & 124 & 35 \\\\ ResNet152 & 96.01 & 88.10 & 93.19 & 88.33 & 86.23 & 89.45 & 86.23 & 87.15 & 91.07 & 87.13 & 5.04 & 227 & 35 \\\\ DenseNet161 & 96.06 & 88.82 & 93.99 & 89.90 & 87.01 & 98.69 & 87.01 & 87.91 & 91.51 & 87.76 & 6.54 & 468 & 73 \\\\ EfficientNet10 & 95.38 & 87.98 & 93.22 & 88.23 & 87.36 & 89.19 & 87.36 & 87.67 & 90.92 & 87.65 & 2.54 & 254 & 98 \\\\ ConvNeX & 96.43 & 88.82 & 94.30 & 88.91 & 87.92 & 89.92 & 89.22 & 88.36 & 91.84 & 88.32 & 3.92 & 292 & 56 \\\\ Vision Transformer & 96.70 & 88.87 & 94.16 & 89.09 & 89.62 & 90.55 & 89.62 & 89.24 & 92.14 & 89.16 & 4.13 & 132 & 22 \\\\ MLP Mixer & 96.34 & 88.62 & 94.38 & 88.75 & 87.99 & 88.16 & 87.99 & 88.31 & 90.77 & 88.21 & 3.25 & 182 & 46 \\\\ Swin Transformer & **96.83** & 89.01 & 93.75 & 89.08 & 89.19 & 91.50 & 89.19 & 89.10 & 92.46 & 89.06 & 10.22 & 552 & 44 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table D.47: Detailed results for pre-trained models on the UC Merced multi-label dataset. 
\\begin{table} \\begin{tabular}{l c c c c c c c c c c c c} \\hline \\hline & & & & & & & & & & & & & \\\\ AlexNet & 75.52 & 72.54 & 67.64 & 70.50 & 73.87 & 63.95 & 73.87 & 73.20 & 64.95 & 71.73 & 1.03 & 103 & 91 \\\\ VGG16 & 76.80 & 74.33 & 72.59 & 73.65 & 78.53 & 70.75 & 78.53 & 76.37 & 71.14 & 75.77 & 3.24 & 324 & 99 \\\\ ResNet50 & 79.87 & 76.27 & 77.52 & 76.42 & 78.67 & 77.12 & 78.67 & 77.68 & 72.73 & 76.99 & 2.76 & 276 & 99 \\\\ ResNet152 & 73.66 & 76.89 & 69.85 & 74.78 & 73.80 & 65.05 & 73.80 & 75.32 & 66.81 & 73.92 & 5.06 & 506 & 86 \\\\ DenseNet161 & 85.41 & 81.30 & 84.62 & 81.61 & 79.52 & 76.19 & 79.52 & 80.04 & 79.63 & 80.26 & 5.60 & 487 & 72 \\\\ EfficientNet160 & 79.87 & 78.45 & 74.10 & 76.91 & 75.85 & 72.13 & 75.85 & 77.13 & 72.89 & 76.25 & 2.23 & 252 & 99 \\\\ ConvNeX & 72.27 & 72.40 & 69.27 & 71.19 & 74.65 & 62.31 & 74.65 & 73.50 & 63.50 & 71.89 & 3.81 & 381 & 100 \\\\ Vision Transformer & **87.14** & 81.02 & 85.66 & 81.10 & 79.31 & 75.95 & 79.31 & 80.16 & 79.29 & 79.69 & 4.12 & 412 & 95 \\\\ MLP Mixer & 75.68 & 75.29 & 73.64 & 74.60 & 73.38 & 64.54 & 73.38 & 74.32 & 67.44 & 73.43 & 3.11 & 311 & 99 \\\\ Swin Transformer & 81.07 & 76.88 & 75.54 & 76.02 & 79.38 & 72.02 & 79.38 & 78.11 & 72.50 & 77.27 & 10.12 & 1012 & 99 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table D.48: Detailed results for models trained from scratch on the UC Merced multi-label dataset. Figure D.48: Label distribution for the UC Merced multi-label dataset. \\begin{table} \\begin{tabular}{l r r r} \\hline \\hline Label & Precision & Recall & F1 score \\\\ \\hline airplane & 95.24 & 100.00 & 97.56 \\\\ bare-soil & 83.45 & 80.56 & 81.98 \\\\ buildings & 88.31 & 87.74 & 88.03 \\\\ cars & 85.89 & 80.00 & 82.84 \\\\ chaparral & 100.00 & 100.00 & 100.00 \\\\ court & 100.00 & 76.19 & 86.49 \\\\ dock & 100.00 & 100.00 & 100.00 \\\\ field & 100.00 & 95.24 & 97.56 \\\\ grass & 86.21 & 89.29 & 87.72 \\\\ mobile-home & 94.44 & 85.00 & 89.47 \\\\ pavement & 88.24 & 93.02 & 90.57 \\\\ sand & 83.08 & 93.10 & 87.80 \\\\ sea & 100.00 & 100.00 & 100.00 \\\\ ship & 100.00 & 95.24 & 97.56 \\\\ tanks & 100.00 & 90.00 & 94.74 \\\\ trees & 91.22 & 92.57 & 91.89 \\\\ water & 97.62 & 97.62 & 97.62 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 49: Per label results for the pre-trained Swin Transformer model on the UC Merced multi-label dataset. ### BigEarthNet BigEarthNet is a new large-scale multi-label Sentinel-2 benchmark archive [59][36]. The BigEarthNet consists of 590326 Sentinel-2 image patches, each of which is a section of: 120x120 pixels for 10m bands; 60x60 pixels for 20m bands; and 20x20 pixels for 60m bands. Each image patch is annotated by multiple land-cover classes (i.e., multi-labels) that are provided from the CORINE Land Cover database. It was constructed by selecting 125 Sentinel-2 tiles acquired between June 2017 and May 2018. Covering different countries and seasonal period. More precisely, the number of images acquired in autumn, winter, spring and summer seasons are 154943, 117156, 189276 and 128951 respectively. The image patches are geographically distributed across 10 countries (Austria, Belgium, Finland, Ireland, Kosovo, Lithuania, Luxembourg, Portugal, Serbia, Switzerland) of Europe. The images are stored in tiff format and accompanied with additional metadata in JSON format. The authors provide a predefined set of train-validation-test splits. Additionally, they proposed 2 versions of the labels in the dataset. 
The first version of the dataset contains 43 labels with an average of 3.0 labels per image (Figure D.51). The labels in this version are: Continuous urban fabric, Discontinuous urban fabric, Industrial or commercial units, Road and rail networks and associated land, Port areas, Airports, Mineral extraction sites, Dump sites, Construction sites, Green urban areas, Sport and leisure facilities, Non-irrigated arable land, Permanently irrigated land, Rice fields, Vineyards, Fruit trees and berry plantations, Olive groves, Pastures, Annual crops associated with permanent crops, Complex cultivation patterns, Land principally occupied by agriculture, with significant areas of natural vegetation, Agro-forestry areas, Broad-leaved forest, Coniferous forest, Mixed forest, Natural grassland, Moors and heathland, Sclerophyllous vegetation, Transitional woodland/shrub, Beaches, dunes, sands, Bare rock, Sparsely vegetated areas, Burnt areas, Inland marshes, Peatbogs, Salt marshes, Salines, Intertidal flats, Water courses, Water bodies, Coastal lagoons, Estuaries, Sea and ocean. The largest class (label), Mixed forest, appeared in 217119 images, whereas the label with fewest appearances, Burnt areas, appeared in 328 images. This high imbalance should make the dataset more challenging. Detailed results for all pre-trained models are shown on Table D.50 and for all the models learned from scratch are presented on Table D.51. The best performing model is the pre-trained Swin Transformer model. The results on a class level are shown on Table D.52 along with a confusion matrix on Figure D.52. The second version of the dataset contains 19 labels with 2.9 labels per image on average (Figure D.53). The labels contained here are: Urban fabric, Industrial or commercial units, Arable land, Permanent crops, Pastures, Complex cultivation patterns, Land principally occupied by agriculture, with significant areas of natural vegetation, Agro-forestry areas, Broad-leaved forest, Coniferous forest, Mixed forest, Natural grassland and sparsely vegetated areas, Moors, heathland and sclerophyllous vegetation, Transitional woodland, shrub, Beaches, dunes, sands, Inland wetlands, Coastal wetlands, Inland waters, Marine waters. The label Mixed forest is most commonly found and is present in 176546 images, whereas Beaches, dunes, sands appears in 1536 images and is the least frequently used label. Sample images are shown on Figure D.50. Detailed results for all pre-trained models are shown on Table D.53 and for all the models learned from scratch are presented on Table D.54. The best performing model is the pre-trained Swin Transformer model. The results on a class level are shown on Table D.55 along with a confusion matrix on Figure D.54. #### D.17.1 BigEarthNet 43 Figure D.51: Label distribution for the BigEarthNet 43 dataset. 
\\begin{table} \\begin{tabular}{l r r r} \\hline \\hline Label & Precision & Recall & F1 score \\\\ \\hline Continuous urban fabric & 85.67 & 79.44 & 82.44 \\\\ Discontinuous urban fabric & 83.49 & 70.72 & 76.57 \\\\ Industrial or commercial units & 77.21 & 41.13 & 53.67 \\\\ Road and rail networks and associated land & 47.17 & 50.59 & 48.82 \\\\ Port areas & 63.33 & 47.50 & 54.29 \\\\ Airports & 93.18 & 29.93 & 45.30 \\\\ Mineral extraction sites & 47.00 & 47.49 & 47.24 \\\\ Dump sites & 42.22 & 22.89 & 29.69 \\\\ Construction sites & 51.85 & 35.22 & 41.95 \\\\ Green urban areas & 51.18 & 37.24 & 43.11 \\\\ Sport and leisure facilities & 52.11 & 40.89 & 45.83 \\\\ Non-irrigated arable land & 88.42 & 82.99 & 85.62 \\\\ Permanently irrigated land & 82.30 & 53.94 & 65.17 \\\\ Rice fields & 71.38 & 58.99 & 64.59 \\\\ Vineyards & 70.33 & 53.73 & 60.92 \\\\ Fruit trees and berry plantations & 54.12 & 49.57 & 51.75 \\\\ Olive growes & 72.01 & 54.77 & 62.22 \\\\ Pastures & 79.66 & 75.73 & 77.65 \\\\ Annual crops associated with permanent crops & 60.34 & 40.11 & 48.19 \\\\ Complex cultivation patterns & 76.87 & 66.05 & 71.05 \\\\ Land principally occupied by agriculture, with significant areas of natural vegetation & 75.21 & 61.09 & 67.42 \\\\ Agro-forestry areas & 86.27 & 74.22 & 79.79 \\\\ Broad-leaved forest & 83.35 & 72.94 & 77.79 \\\\ Coniferous forest & 88.73 & 84.79 & 86.72 \\\\ Mixed forest & 81.94 & 84.25 & 83.08 \\\\ Natural grassland & 69.90 & 48.79 & 57.46 \\\\ Moors and healthland & 64.58 & 40.24 & 49.58 \\\\ Sclerophyllous vegetation & 75.64 & 71.15 & 73.33 \\\\ Transitional woodland/shrub & 73.16 & 64.26 & 68.42 \\\\ Beaches, dunes, sands & 58.87 & 61.54 & 60.18 \\\\ Bare rock & 58.41 & 75.90 & 66.01 \\\\ Sparsely vegetated areas & 45.88 & 53.94 & 49.58 \\\\ Burnt areas & 100.00 & 2.78 & 5.41 \\\\ Inland marshes & 67.74 & 31.21 & 42.73 \\\\ Peatbogs & 81.07 & 62.30 & 70.45 \\\\ Salt marshes & 60.62 & 60.62 & 60.62 \\\\ Salines & 80.00 & 57.14 & 66.67 \\\\ Intertidal flats & 70.73 & 52.10 & 60.00 \\\\ Water courses & 82.71 & 71.71 & 76.82 \\\\ Water bodies & 91.33 & 77.54 & 83.87 \\\\ Coastal lagoons & 88.16 & 81.56 & 84.73 \\\\ Estuaries & 78.32 & 70.52 & 74.21 \\\\ Sea and ocean & 99.20 & 97.77 & 98.48 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table D.52: Per label results for the pre-trained Swin Transformer model on the BigEarthNet 43 dataset. Figure D.52: Confusion matrix for the pre-trained Swin Transformer model on the BigEarthNet 43 dataset. Figure D.53: Label distribution for the BigEarthNet 19 dataset. 
\\begin{table} \\begin{tabular}{l c c c} \\hline \\hline Label & Precision & Recall & F1 score \\\\ \\hline Urban fabric & 82.21 & 76.09 & 79.03 \\\\ Industrial or commercial units & 65.47 & 53.81 & 59.07 \\\\ Arabic land & 88.07 & 84.53 & 86.26 \\\\ Permanent crops & 80.24 & 61.39 & 69.56 \\\\ Pastures & 82.29 & 72.91 & 77.32 \\\\ Complex cultivation patterns & 77.35 & 66.05 & 71.25 \\\\ Land principally occupied by agriculture, with significant areas of natural vegetation & 72.72 & 64.01 & 68.08 \\\\ Agro-forestry areas & 83.00 & 80.58 & 81.77 \\\\ Broad-leaved forest & 82.55 & 74.08 & 78.08 \\\\ Coniferous forest & 88.52 & 85.04 & 86.74 \\\\ Mixed forest & 80.71 & 85.67 & 83.12 \\\\ Natural grassland and sparsely vegetated areas & 72.20 & 47.05 & 56.98 \\\\ Moors, healthand and sclerophyllous vegetation & 69.63 & 69.45 & 69.54 \\\\ Transitional woodland, shrub & 71.88 & 66.79 & 69.24 \\\\ Beaches, dunes, sands & 49.82 & 63.35 & 55.78 \\\\ Inland wetlands & 76.50 & 59.39 & 66.87 \\\\ Coastal wetlands & 55.81 & 69.68 & 61.98 \\\\ Inland waters & 90.53 & 79.30 & 84.55 \\\\ Marine waters & 99.10 & 98.15 & 98.62 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 53: Detailed results for pre-trained models on the BigEarthNet 19 dataset. \\begin{table} \\begin{tabular}{l c c c c c c c c c c c c} \\hline \\hline AlexNet & 77.15 & 80.90 & 75.60 & 80.58 & 73.59 & 66.06 & 73.59 Figure D.54: Confusion matrix for the pre-trained Swin Transformer model on the BigEarthNet 19 dataset. ### MLRSNet MLRSNet [55] is a multi-label high spatial resolution remote sensing dataset for semantic scene understanding. It is composed of high-resolution optical satellite or aerial RGB images. MLRSNet contains a total of 109161 images (Figure D.55) within 46 scene categories, and each image has at least one of 60 predefined labels. The number of labels associated with each image varies between 1 and 13, but averages at 5.0 labels per image (Figure D.56). The labels annotating the images are: airplane, airport, bare soil, baseball diamond, basketball court, beach, bridge, buildings, cars, cloud, containers, crosswalk, dense residential area, desert, dock, factory, field, football field, forest, freeway, golf course, grass, greenhouse, gully, labor, intersection, island, lake, mobile home, mountain, overpass, park, parking lot, parkway, pavement, railway, railway station, river, road, roundabout, runway, sand, sea, ships, snow, snowberg, sparse residential area, stadium, swimming pool, tanks, tennis court, terrace, track, trail, transmission tower, trees, water, chaparral, wetland, wind turbine. The dataset does not have predefined train-tests splits. Detailed results for all pre-trained models are shown on Table D.56 and for all the models learned from scratch are presented on Table D.57. The best performing model is the pre-trained Swin Transformer model. The results on a class level are show on Table D.58 along with a confusion matrix on Figure D.57. 
\\begin{table} \\begin{tabular}{l c c c c c c c c c c c c c} \\hline \\hline & & & & & & & & & & & & & & & \\\\ AlexNet & 90.85 & 86.53 & 83.69 & 86.69 & 86.58 & 86.54 & 86.58 & 86.56 & 84.04 & 86.58 & 34.92 & 2549 & 58 \\\\ VGG16 & 91.52 & 86.63 & 83.24 & 87.00 & 87.98 & 88.23 & 87.98 & 87.30 & 85.33 & 87.41 & 132.22 & 7272 & 40 \\\\ ResNet50 & **95.26** & 90.65 & 90.76 & 90.68 & 89.42 & 90.33 & 89.42 & 90.03 & 90.37 & 90.00 & 102.26 & 6238 & 46 \\\\ ResNet152 & 93.98 & 89.47 & 88.92 & 92.94 & 88.45 & 88.55 & 84.58 & 89.96 & 85.51 & 88.92 & 214.47 & 14155 & 51 \\\\ DenseNet161 & 94.74 & 90.23 & 89.59 & 90.28 & 88.13 & 88.86 & 88.13 & 89.17 & 88.87 & 90.08 & 237.96 & 11422 & 33 \\\\ EfficientNet10 & 94.40 & 89.90 & 89.09 & 89.99 & 89.22 & 90.19 & 89.22 & 89.56 & 89.40 & 89.54 & 89.34 & 87 \\\\ ConvNe2v & 90.71 & 87.86 & 84.50 & 88.00 & 85.38 & 88.47 & 83.53 & 85.86 & 86.60 & 84.36 & 86.00 & 159.35 & 5896 & 22 \\\\ Vision Transformer & 87.25 & 85.78 & 82.28 & 85.81 & 88.44 & 89.00 & 84.64 & 85.20 & 81.06 & 85.03 & 170.71 & 5975 & 20 \\\\ MLP Mixer & 85.28 & 84.45 & 82.59 & 84.45 & 82.19 & 75.60 & 82.19 & 83.31 & 78.11 & 83.01 & 123.20 & 3080 & 10 \\\\ Swin Transformer & 94.10 & 89.56 & 88.56 & 89.65 & 89.93 & 90.17 & 89.93 & 89.74 & 89.11 & 89.73 & 482.72 & 42962 & 74 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 56: Detailed results for pre-trained models on the MLRSNet dataset. \\begin{table} \\begin{tabular}{l c c c c c c c c c c c c} \\hline \\hline & & & & & & & & & & & & & \\\\ AlexNet & 90.85 & 86.53 & 83.69 & 86.69 & 86.58 & 86.54 & 86.58 & 86.56 & 84.04 & 86.58 & 34.92 & 2549 & 58 \\\\ VGG16 & 91.52 & 86.63 & 83.24 & 87.00 & 87.98 & 88.23 & 87.98 & 87.30 & 85.33 & 87.41 & 132.22 & 7272 & 40 \\\\ ResNet50 & **95.26** & 90.65 & 90.76 & 90.68 & 89.42 & 90.33 & 89.42 & 90.03 & 90.37 & 90.00 & 102.26 & 623 \\begin{table} \\begin{tabular}{l r r r r} \\hline \\hline Label & Precision & Recall & F1 score & Label & F1 score \\\\ \\hline airplane & 87.84 & 92.21 & 89.97 & overpass & 93.96 & 95.99 & 94.96 \\\\ airport & 87.75 & 85.71 & 86.72 & park & 89.51 & 93.00 & 91.22 \\\\ bare soil & 83.04 & 85.55 & 84.28 & parking lot & 70.80 & 67.36 & 69.04 \\\\ baseball diamond & 99.59 & 99.39 & 99.49 & parkway & 89.33 & 91.50 & 90.40 \\\\ basketball court & 89.43 & 91.95 & 90.67 & pavement & 96.49 & 96.43 \\\\ beach & 99.80 & 99.80 & 99.80 & railway & 90.95 & 93.36 \\\\ bridge & 95.34 & 93.80 & 94.57 & railway station & 91.44 & 85.58 & 88.42 \\\\ buildings & 93.94 & 91.66 & 92.78 & river & 98.01 & 98.80 & 98.40 \\\\ cars & 85.20 & 91.03 & 88.02 & road & 92.32 & 91.71 & 92.01 \\\\ cloud & 98.63 & 100.00 & 99.31 & roundabout & 95.24 & 98.52 & 96.85 \\\\ containers & 99.40 & 100.00 & 99.70 & runway & 99.26 & 88.50 & 93.57 \\\\ crosswalk & 77.32 & 72.12 & 74.63 & sand & 98.37 & 98.68 & 98.53 \\\\ dense residential area & 99.49 & 97.55 & 98.51 & sea & 98.68 & 99.73 & 99.20 \\\\ desert & 97.88 & 100.00 & 98.93 & ships & 89.30 & 88.80 & 89.05 \\\\ dock & 99.03 & 99.24 & 99.14 & snow & 96.00 & 90.88 & 93.37 \\\\ factory & 91.58 & 84.47 & 87.89 & snowberg & 89.05 & 98.21 & 93.40 \\\\ field & 92.62 & 92.41 & 92.51 & sparse residential area & 97.03 & 97.86 & 97.45 \\\\ football field & 59.67 & 84.65 & 70.00 & stadium & 92.23 & 96.35 & 94.25 \\\\ forest & 85.13 & 93.97 & 89.33 & swimming pool & 89.72 & 85.67 & 87.65 \\\\ freeway & 99.18 & 99.30 & 99.24 & tanks & 95.05 & 98.97 & 96.97 \\\\ golf course & 98.29 & 97.46 & 97.87 & tennis court & 98.38 & 97.00 & 97.68 \\\\ grass & 88.78 & 88.97 & 88.87 & 
terrace & 92.88 & 96.31 & 94.56 \\\\ greenhouse & 99.81 & 98.65 & 99.23 & track & 94.47 & 94.39 & 94.43 \\\\ gully & 90.32 & 94.62 & 92.42 & trail & 80.84 & 83.16 & 81.99 \\\\ habor & 99.03 & 99.35 & 99.19 & transmission tower & 98.66 & 99.61 & 99.14 \\\\ intersection & 70.15 & 94.00 & 80.34 & trees & 92.32 & 93.72 & 93.01 \\\\ island & 98.82 & 99.21 & 99.02 & water & 95.30 & 91.56 & 93.39 \\\\ lake & 96.51 & 99.60 & 98.03 & chaparral & 96.90 & 95.26 & 96.07 \\\\ mobile home & 60.78 & 100.00 & 75.61 & wetland & 90.03 & 87.26 & 88.62 \\\\ mountain & 98.00 & 95.78 & 96.88 & wind turbine & 99.76 & 100.00 & 99.88 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table D.58: Per label results for the pre-trained Swin Transformer model on the MLRSNet dataset. Figure D.57: Confusion matrix for the pre-trained Swin Transformer model on the MLRSNet dataset. ### Dfc15 DFC15 [56] is a multi-label dataset created from the semantic segmentation dataset, DFC15 (IEEE GRSS data fusion contest, 2015), which was published and first used in 2015 IEEE GRSS Data Fusion Contest. The dataset is acquired over Zeebrugge with an airborne sensor, which is 300m off the ground. In total, 7 tiles are collected in DFC dataset, and each of them is pixels with a spatial resolution of 5cm. All tiles in DFC15 dataset are labeled in pixel-level, and each pixel is categorized into 8 distinct object classes: impervious, water, clutter, vegetation, building, tree, boat, and car. As a result of this process, the dataset contains 3342 images with a size of 600x600 pixels (Figure D.58). The images are annotated with one or more of the 8 labels in the dataset, with an average of 2.8 labels per image (Figure D.59). The most frequent labels is _impervious_ and it appears in 3133 image. The label _tree_ is least frequent and it appears in 258 images. \\begin{table} \\begin{tabular}{l c c c c c c c c c c c c c c} \\hline \\hline & & & & & & & & & & & & & & & & \\\\ \\hline AlexNet & 88.10 & 90.40 & 84.01 & 90.16 & 85.57 & 76.29 & 85.57 & 87.92 & 79.75 & 87.69 & 7.83 & 783 & 99 \\\\ VGG16 & 89.87 & 91.97 & 86.00 & 91.03 & 87.37 & 79.82 & 87.37 & 89.18 & 82.38 & 88.89 & 8.50 & 799 & 79 \\\\ ResNet50 & 94.67 & 92.88 & 89.33 & 92.84 & 91.75 & 87.01 & 91.75 & 92.32 & 88.11 & 92.26 & 8.92 & 464 & 37 \\\\ ResNet52 & 94.19 & 92.05 & 89.36 & 91.91 & 89.96 & 83.80 & 89.96 & 90.99 & 86.36 & 90.82 & 9.66 & 647 & 52 \\\\ DenseNet161 & **95.85** & 94.23 & 92.10 & 94.19 & 92.28 & 87.62 & 92.28 & 93.24 & 89.65 & 93.15 & 9.39 & 613 & 47 \\\\ EfficientNetB0 & 93.97 & 93.90 & 91.67 & 93.77 & 91.91 & 85.64 & 91.91 & 92.90 & 88.40 & 92.75 & 8.47 & 686 & 66 \\\\ Conv-Net & 89.56 & 91.08 & 87.12 & 90.85 & 87.47 & 79.56 & 87.47 & 92.84 & 82.99 & 89.03 & 8.80 & 800 & 91 \\\\ Vision Transformer & 91.66 & 92.45 & 89.36 & 92.34 & 89.96 & 84.84 & 89.96 & 91.19 & 87.00 & 91.10 & 8.35 & 743 & 69 \\\\ MLP Mixer & 91.66 & 90.43 & 86.00 & 90.27 & 88.80 & 82.91 & 88.90 & 89.66 & 84.40 & 89.56 & 8.31 & 831 & 100 \\\\ Swin Transformer & 94.35 & 93.47 & 91.02 & 93.43 & 90.80 & 84.66 & 90.80 & 92.12 & 87.56 & 91.97 & 170.9 & 1709 & 95 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 60: Detailed results for models trained from scratch on the DFC15 dataset. 
\\begin{table} \\begin{tabular}{l c c c c c c c c c c c c} \\hline \\hline & & & & & & & & & & & & & \\\\ \\hline AlexNet & 94.06 & 92.33 & 89.03 & 92.32 & 91.01 & 86.21 & 91.01 & 91.67 & 87.52 & 91.60 & 7.74 & 325 & 32 \\\\ VGG16 & 96.57 & 94.09 & 91.75 & 94.30 & 92.60 & 88.57 & 92.60 & 93.34 & 89.79 & 93.30 & 8.94 & 286 & 22 \\\\ ResNet50 & 97.66 & 95.21 & 94.19 & 95.19 & 93.50 & 91.54 & 93.50 & 94.35 & 92.81 & 94.31 & 8.49 & 312 \\\\ ResNet152 & 97.60 & 95.08 & 93.78 & 95.04 & 99.97 & 90.88 & 93.97 & 94.52 & 92.25 & 94.66 & 9.45 & 444 & 37 \\\\ DenseNet161 & 97.53 & 95.07 & 93.52 & 95.03 & 94.71 & 91.43 & 94.71 & 94.89 & 92.43 & 94.85 & 9.54 & 544 & 47 \\\\ EfficientNetB0 & 96.79 & 95.54 & 94.09 & 95.51 & 94.08 & 90.97 & 94.08 & 94.81 & 92.48 & 94.77 & 8.33 & 853 & 60 \\\\ Conv-Net & 97.99 & 94.99 & 93.84 & 94.98 & 94.24 & 91.99 & 94.24 & 94.61 & 92.55 & 94.58 & 8.72 & 471 & 44 \\\\ Vision Transformer & 97.62 & 96.04 & 94.75 & 96.33 & 93.34 & 89.45 & 93.34 & 94.84 & 91.96 & 94.77 & 8.76 & 219 & 15 \\\\ MLP Mixer & 97.94 & 95.23 & 94.29 & 95.20 & 93.92 & 90.82 & 93.92 & 94.57 & 92.48 & 94.53 & 8.18 & 239 & 18 \\\\ Swin Transformer & **98.11** & 95.54 & 93.90 & 95.50 & 93.97 & 91.18 & 93.97 & 94.75 & 92.49 & 94.71 & 17.59 & 686 & 29 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 59: Detailed results for pre-trained models on the DFC15 dataset. Figure 59: Label distribution for the DFC15 dataset. \\begin{table} \\begin{tabular}{l r r r} \\hline \\hline Label & Precision & Recall & F1 score \\\\ \\hline impervious & 97.28 & 98.54 & 97.91 \\\\ water & 96.52 & 93.72 & 95.10 \\\\ clutter & 96.78 & 94.26 & 95.50 \\\\ vegetation & 93.64 & 94.06 & 93.85 \\\\ building & 94.36 & 90.20 & 92.23 \\\\ tree & 90.38 & 82.46 & 86.24 \\\\ boat & 90.74 & 90.74 & 90.74 \\\\ car & 91.49 & 85.43 & 88.36 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 61: Per label results for the pre-trained Swin Transformer model on the DFC15 dataset. ### D.20 Planet UAS The Planet UAS dataset [58] was created by the company, Planet - designer and builder of the world's largest constellation of Earth-imaging satellites. The aim is to label satellite image chips with atmospheric conditions and various classes of land cover/land use. The dataset is available on Kaggle and is approximately 32 GB worth of data. The data contains 40479 satellite images organized in tiff and jpg files (Figure D.61). The jpg file show the natural light spectrum of the image, whereas the tiff files provide extra information about the infrared features of the satellite image, both with 256x256 pixels resolution. There are a total of 17 different labels with an average of 2.9 labels per image. The imagery has a ground-sample distance (GSD) of 3.7m and an orthorectified pixel size of 3m. The data comes from Planet's Flock 2 satellites in both sun-synchronous and ISS orbits and was collected between January 1, 2016 and February 1, 2017. All of the scenes come from the Amazon basin which includes Brazil, Peru, Uruguay, Colombia, Venezuela, Guyana, Bolivia, and Ecuador. There are a total of 17 different labels. Out of those, 4 labels correspond to weather: Clear, Cloudy, Partly Cloudy, Haze. The rest of the (13) labels correspond to land: Habitation, Bare Ground, Cultivation, Agriculture, Blow Down, Conventional Mine, Selective Logging, Slash Burn, Artisanal Mine, Blooming, Primary, Water, and None. The dataset only has the train set publicly available and we use that to generate train, test and validation splits (Figure D.62). 
Detailed results for all pre-trained models are shown on Table D.62 and for all the models learned from scratch are presented on Table D.63. The best performing model is the pre-trained Swin Transformer model. The results on a class level are show on Table D.64 along with a confusion matrix on Figure D.63. \\begin{table} \\begin{tabular}{l c c c c c c c c c c c c c c} \\hline \\hline & & & & & & & & & & & & & & & & & \\\\ \\hline AlexNet & 60.28 & 90.32 & 67.35 & 88.88 & 84.81 & 51.24 & 84.81 & 87.48 & 54.52 & 86.25 & 18.65 & 1865 & 87 \\\\ VGG16 & 60.68 & 90.39 & 60.11 & 88.74 & 84.97 & 50.56 & 84.97 & 87.60 & 53.21 & 86.44 & 50.68 & 2889 & 42 \\\\ ResNet50 & 64.19 & 92.16 & 67.02 & 90.86 & 86.52 & 54.31 & 86.52 & 89.25 & 58.47 & 88.24 & 37.57 & 2592 & 54 \\\\ ResNet52 & 64.96 & 91.57 & 69.94 & 90.42 & 86.97 & 55.02 & 86.97 & 89.21 & 90.96 & 88.28 & 80.86 & 6792 & 69 \\\\ DenseNet161 & 64.74 & 91.79 & 69.52 & 90.53 & 87.01 & 55.20 & 87.01 & 89.34 & 59.12 & 83.37 & 90.11 & 4866 & 39 \\\\ EfficientNet60 & 63.87 & 91.70 & 65.64 & 90.55 & 87.03 & 53.36 & 87.03 & 89.30 & 57.21 & 88.40 & 33.47 & 2711 & 66 \\\\ ConvNeX & 61.28 & 90.92 & 64.25 & 89.39 & 84.29 & 51.55 & 84.29 & 87.48 & 54.68 & 86.19 & 59.35 & 5935 & 90 \\\\ Vision Transformer & 59.41 & 90.35 & 60.32 & 88.16 & 83.12 & 47.68 & 83.12 & 86.58 & 51.94 & 84.94 & 65.52 & 412 & 48 \\\\ MLP-Sharar & 58.55 & 89.67 & 62.22 & 87.36 & 82.21 & 43.88 & 82.21 & 85.78 & 51.46 & 84.06 & 45.93 & 25.72 & 41 \\\\ Swin Transformer & **66.23** & 91.53 & 66.89 & 90.11 & 86.34 & 54.05 & 86.34 & 88.86 & 57.89 & 87.80 & 181.83 & 16365 & 75 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 63: Detailed results for models trained from scratch on the PlanetUAS dataset. \\begin{table} \\begin{tabular}{l c c c c c c c c c c c c} \\hline \\hline & & & & & & & & & & & & & & \\\\ \\hline AlexNet & 60.28 & 90.32 & 67.35 & 88.88 & 84.81 & 51.24 & 84.81 & 87.48 & 54.52 & 86.25 & 18.65 & 1865 & 87 \\\\ VGG16 & 60.68 & 90.39 & 60.11 & 88.74 & 84.97 & 50.56 & 84.97 & 87.60 & 53.21 & 86.44 & 50.68 & 2889 & 42 \\\\ ResNetFigure D.62: Label distribution for the PlanetUAS dataset. \\begin{table} \\begin{tabular}{l r r r} \\hline \\hline Label & Precision & Recall & F1 score \\\\ \\hline haze & 75.05 & 65.31 & 69.84 \\\\ primary & 97.86 & 98.28 & 98.07 \\\\ agriculture & 83.87 & 85.74 & 84.79 \\\\ clear & 96.10 & 97.25 & 96.67 \\\\ water & 87.72 & 73.90 & 80.22 \\\\ habitation & 77.07 & 72.36 & 74.64 \\\\ road & 83.22 & 85.55 & 84.37 \\\\ cultivation & 67.48 & 48.58 & 56.49 \\\\ slash\\_burn & 0.00 & 0.00 & 0.00 \\\\ cloudy & 87.53 & 77.27 & 82.08 \\\\ partly\\_cloudy & 91.07 & 92.12 & 91.60 \\\\ conventional\\_mine & 58.33 & 60.87 & 59.57 \\\\ bare\\_ground & 58.24 & 28.65 & 38.41 \\\\ artisinal\\_mine & 78.79 & 78.79 & 78.79 \\\\ blooming & 19.05 & 6.25 & 9.41 \\\\ selective\\_logging & 43.48 & 15.15 & 22.47 \\\\ blow\\_down & 50.00 & 5.00 & 9.09 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table D.64: Per label results for the pre-trained Swin Transformer model on the PlanetUAS dataset. Figure D.63: Confusion matrix for the pre-trained Swin Transformer model on the PlanetUAS dataset. ### _AID multi-label_ Hua et al. [57] extend the AID dataset for multi-label classification. They manually relabeled some images in the AID dataset. With extensive human visual inspections, 3000 aerial images from 30 scenes in the AID dataset were selected and assigned with multiple object labels. The dataset has 17 labels with 5.2 labels per image on average. 
The labels are: bare soil, airplane, building, car, chaparral, court, dock, field, grass, mobile home, pavement, sand, sea, ship, tank, tree and water. The authors provide a proposed train-test split. Figure D.64 shows some example images from the AID multi-label dataset. The distribution of the labels for the train, validation and test splits is shown in Figure D.65, from which we can observe an imbalanced distribution: some of the labels are heavily populated with images/samples, while others have only a few (for example, the label mobile-home has only one image in the respective train, validation and test splits). Detailed results for all pre-trained models are shown on Table D.65 and for all the models learned from scratch are presented on Table D.66. The best performing model is the pre-trained ConvNeXt model. The results on a class level are shown on Table D.67 along with a confusion matrix on Figure D.66. \\begin{table} \\begin{tabular}{l r r r} \\hline \\hline Label & Precision & Recall & F1 score \\\\ \\hline airplane & 100.00 & 25.00 & 40.00 \\\\ bare-soil & 77.30 & 77.30 & 77.30 \\\\ buildings & 93.72 & 96.64 & 95.16 \\\\ cars & 94.13 & 94.13 & 94.13 \\\\ chaparral & 100.00 & 2.70 & 5.26 \\\\ court & 80.43 & 49.33 & 61.16 \\\\ dock & 82.93 & 68.00 & 74.73 \\\\ field & 85.71 & 61.54 & 71.64 \\\\ grass & 95.30 & 95.71 & 95.50 \\\\ mobile-home & 0.00 & 0.00 & 0.00 \\\\ pavement & 97.82 & 97.82 & 97.82 \\\\ sand & 97.78 & 84.62 & 90.72 \\\\ sea & 100.00 & 90.91 & 95.24 \\\\ ship & 82.22 & 78.72 & 80.43 \\\\ tanks & 100.00 & 90.48 & 95.00 \\\\ trees & 95.42 & 94.82 & 95.12 \\\\ water & 80.99 & 64.61 & 71.88 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table D.67: Per label results for the pre-trained ConvNeXt model on the AID multi-label dataset.
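The per-label scores reported throughout these appendix tables are the usual precision, recall and F1 values computed independently for every label. For reference, the short sketch below shows one way to obtain such per-label metrics for a multi-label classifier with scikit-learn; the synthetic inputs, the 0.5 decision threshold and the variable names are illustrative assumptions, not the evaluation code actually used in this benchmark.

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

# Illustrative inputs (assumptions): y_true is an (N, L) binary indicator matrix of
# ground-truth labels, y_prob holds the sigmoid outputs of a multi-label classifier.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(1000, 17))
y_prob = np.clip(y_true * 0.7 + rng.random((1000, 17)) * 0.5, 0, 1)

# Binarize the predictions with a fixed threshold (0.5 is a common, but not universal, choice).
y_pred = (y_prob >= 0.5).astype(int)

# average=None returns one precision/recall/F1 value per label, as in the per-label tables.
precision, recall, f1, support = precision_recall_fscore_support(
    y_true, y_pred, average=None, zero_division=0)

labels = [f"label_{i}" for i in range(y_true.shape[1])]
for name, p, r, f in zip(labels, precision, recall, f1):
    print(f"{name}: precision={100 * p:.2f}, recall={100 * r:.2f}, F1={100 * f:.2f}")
```

Macro- and micro-averaged variants, as reported in the aggregate tables, follow by passing `average='macro'` or `average='micro'` instead of `average=None`.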
We present _AiTLAS: Benchmark Arena_ - an open-source benchmark suite for evaluating state-of-the-art deep learning approaches for image classification in Earth Observation (EO). To this end, we present a comprehensive comparative analysis of more than 500 models derived from ten different state-of-the-art architectures and compare them on a variety of multi-class and multi-label classification tasks from 22 datasets with different sizes and properties. In addition to models trained entirely on these datasets, we benchmark models trained in the context of transfer learning, leveraging pre-trained model variants, as is typically done in practice. All presented approaches are general and can be easily extended to many other remote sensing image classification tasks not considered in this study. To ensure reproducibility and facilitate better usability and further developments, _all of the experimental resources_ including the trained models, model configurations, and processing details of the datasets (with their corresponding splits used for training and evaluating the models) are _publicly available on the repository_: [https://github.com/biasvariancelabs/aitlas-arena](https://github.com/biasvariancelabs/aitlas-arena).

keywords: Deep learning (DL), Earth observation (EO), Image Classification, Benchmark study
StandardGAN: Multi-source Domain Adaptation for Semantic Segmentation of Very High Resolution Satellite Images by Data Standardization Onur Tasar\\({}^{1}\\) Yuliya Tarabalka\\({}^{2}\\) Alain Giros\\({}^{3}\\) Pierre Alliez\\({}^{1}\\) Sebastien Clerc\\({}^{4}\\) \\({}^{1}\\)Universite Cote d'Azur, Inria \\({}^{2}\\)LuxCarta \\({}^{3}\\)Centre National d'Etudes Spatiales \\({}^{4}\\)ACRI-ST [email protected] ## 1 Introduction Over the years, semantic segmentation of remote sensing data has become an important research topic, due to its wide range of applications such as navigation, autonomous driving, and automatic mapping. In the last decade, a significant progress has been made, especially after _convolutional neural networks (CNNs)_ had revolutionized the computer vision community. Among CNNs, _U-net_[26] has gained an increasing attention due to its capability to generate highly precise semantic segmentation from remote sensing data. Nonetheless, it is a known issue that the performance of U-net or other CNNs immensely depends on the representativeness of the training data [33]. However, in remote sensing, having data that are representative to classify the whole world is challenging, because various atmospheric effects, intra-class variations, and differences in acquisition usually cause the images collected over different locations to have largely different data distributions. Such differences induce CNNs to generate unsatisfactory segmentation. This problem is referred to as _domain adaptation_ in the literature [33]. One way to overcome this issue is to manually annotate a small portion of test data to _fine-tune_ the already trained classifier [20]. However, every time when new data are received, annotating even a small portion of them is labor-intensive. Oftentimes, it is a good practice to perform _data augmentation_[4] to enlarge the training data and to reduce the risk of over-fitting. For example, in remote sensing, color jittering with random gamma correction or random contrast change is commonly used [31]. However, common data augmentation methods are limited to perform complex data transformations, which would greatly help the classifiers to better generalize. A more powerful data augmentation method would be to use _generative adversarial networks (GANs)_[12] to generate fake source domains with the style Figure 1: Real cities and the standardized data generated by StandardGAN. of target domain. Here, the main drawback is that the generated samples are representative only for the target domain. However, in multi-source case, we want the generated samples to be representative for all the domains we have at hand. In addition, style transfer needs to be performed between the target and each source domain; therefore, it is inconvenient. In the field of remote sensing, each satellite image can be regarded as a domain. In our multi-source domain adaptation problem definition, we assume that each source and target domains have significantly different data distributions (see the real data in the first row of Fig. 1). Our method aims at finding a common representation for all the domains by _standardizing_ the samples belonging to each domain using GANs. As shown in Fig. 1, in a way, the standardized data could be considered as spectral interpolation across the domains. Adopting such a standardization strategy has two advantages. Firstly, in the training stage, it prevents the classifier from capturing the idiosyncrasies of each source domain. 
The classifier rather learns from the common representation. Secondly, since in the common representation the samples belonging to source domains and target domain have distributions close to each other, we expect the classifier trained on the standardized source domains to segment well the standardized target domain. Standardizing multiple domains using GANs raises several challenges. Firstly, when training GANs, one needs real data so that the generator can generate fake data with the distribution that is as close as possible to the distribution of the real data. However, in our case, the standardized data do not exist. In other words, we wish to generate data without showing samples drawn from a similar distribution. Secondly, all the standardized domains need to have similar data distributions. Otherwise, the advantages mentioned above would be lost. Thirdly, the standardized data and the real data themselves must be semantically consistent. For example, when generating the standardized data, the method should not replace some objects by the others, add artificial objects, or remove some objects existing in the real data. Otherwise, the standardized data and the ground-truth for the real data would not match, and we could not train a model. Finally, the method should be efficient. If the number of networks and their structures are not kept as small as possible, depending on the number of domains, we could face with issues in terms of memory occupation and computational time. In this work, we present novel StandardGAN, which overcomes all the aforementioned challenges. The main contributions are three fold. Firstly, we introduce the use of GANs in the context of data standardization. Secondly, we present a GAN that is able to generate data samples without providing it with data coming from the same or similar distribution. Finally, we propose to apply this multi-source domain adaptation solution to the semantic segmentation of Pleiades data collected over several geographic locations. ## 2 Related Work Adapting the classifier.These methods aim at adapting the classifier to target domain. A common approach is to perform multi-task learning, where one of the tasks is to train a classifier from the source domain via common supervised learning approaches, and the other one is to align the features extracted from both source and target domains by adversarial training [14, 32, 15]. A similar approach [7] has also been applied to remote sensing data (SpaceNet challenge [9]). Other approaches include self learning [35, 40], using task-specific decision boundaries [28], introducing new normalization [25, 22] or regularization methods [27], and adding specific loss functions for domain adaptation [39]. Adapting the inputs.These methods, in general, try to perform image-to-image translation (I2I) or style transfer between domains to generate target stylized fake source data. The fake data are then used to train or to fine-tune the classifier. For example, CyCADA [13] uses CycleGAN [38] to generate target stylized fake source data. CycleGAN has also been applied to aerial images [2]. For the style transfer between satellite images, Tasar _et al_. have recently introduced ColorMapGAN [29] that learns to map each color of the source image to another one, and SemI2I [30] that switches the styles of the source and the target domains. 
To accomplish the same task, one can also consider using other I2I approaches in the computer vision community such as UNIT [19], MUNIT [17], DRIT [18], or common approaches like histogram matching [11]. Multi-source domain adaptation (MDA).The most straightforward approach would be to perform I2I between each source and target domains to stylize all of the source domains as target domain. However, this method is extremely cumbersome, because the training must be performed for each source domain and the target domain pair. In addition, the data distribution of each source domain is made similar to the distribution of only one domain (i.e., target domain). Instead, finding a common representation that is representative for all the domains is desired. Recently, specifically for MDA, a few methods focusing on image classification have been proposed [36, 34, 23]. However, it may not be possible to extend these works to semantic segmentation, as precisely structured output is required. To address the issue of MDA for semantic segmentation, Zhao _et al_. have proposed MADAN [37], which is an extension of CyCADA, but it is extremely compute-intensive. JCPOT [24] investigates optimal transport for MDA problem. Elshamli _et al_. have recently proposed a method con sisting in patch based networks [8]. However, since the network architectures are not fully convolutional, the method may not be suitable for classes requiring high precision such as buildings and roads. **Data standardization.** In machine learning, one of the most commonly used data standardization approach is referred to as Z-score normalization and computed as: \\[x^{\\prime}=\\frac{x-\\mu}{\\sigma}, \\tag{1}\\] where \\(x\\), \\(\\mu\\), \\(\\sigma\\) correspond to original data, mean value, and standard deviation. In addition, histogram equalization [11] is also a common pre-processing step. However, these approaches do not take into account the contextual information, they just follow certain heuristics. One may also think of applying color constancy algorithms [1] such as gray-world [3] and gamut [10] approaches. These algorithms assume that colors of the objects are highly affected by the color of the illuminant and try to remove this effect. ## 3 Method In this section, we first explain how to perform style transfer between two domains. We then describe how StandardGAN standardizes two domains. Finally, we detail how we extend StandardGAN to multi-domain case. StandardGAN consists of one content encoder, one decoder, one discriminator, and \\(n\\) style encoders, where \\(n\\) is the number of domains. Fig. 2 illustrates the generator to perform style transfer between two domains. The discriminator performs multi-task learning as in StarGAN [5] by adding an auxiliary classifier on top of the discriminator of CycleGAN [38]. The first task allows the fake source and the target domains to have as similar data distributions as possible, whereas the other task helps the discriminator to understand between which fake and real data it is discriminating. We provide detailed explanations for both tasks in _style transfer_ and _classification loss_ parts of the following sub-section. ### Style Transfer Between Two Domains We denote both domains by A and B. In the following, we explain the main steps that are required for style transfer between two domains. **Style Transfer.** The goal of style transfer is to generate fake A with the style of B and fake B having a similar data distribution as real A. 
To perform style transfer, we use two types of encoders. One is domain agnostic content encoder, and the other one is domain specific style encoder. The content encoder is used to map the data into a common space, irrespective of which domain the data come from. On the other hand, the style encoder helps the decoder to generate Figure 3: Combining the content of one city with the style of another city. Figure 2: Style transfer between two cities. In this example, there exists 2 style encoders, 1 content encoder, 1 decoder, and 1 discriminator. output with the style of its specific domain. We use adaptive instance normalization (AdaIN) [16] to combine the content of A with the style of B (or vice versa). AdaIN is defined as: \\[\\text{AdaIN}(x,\\gamma,\\beta)=\\gamma\\left(\\frac{x-\\mu(x)}{\\sigma(x)}\\right)+\\beta, \\tag{2}\\] where \\(x\\) is the activation of the content encoder's final convolutional layer, and \\(\\gamma\\) and \\(\\beta\\) correspond to the parameters that are learned by the style encoder. As can be seen in Eq. 2, \\(\\gamma\\) and \\(\\beta\\) are used to scale and shift the activation, which results in changing the style of the output. After the activation is normalized by AdaIN, as depicted by Fig. 3, it is fed to the decoder to generate the fake data. In order to force real A and fake B, and real B and fake A to have as similar data distributions as possible, we compute and minimize an adversarial loss between them. We use the adversarial loss functions described in LSGAN [21]. The discriminator adversarial loss between real A and fake B (or real B and fake A) is defined as: \\[\\begin{split}\\mathcal{L}_{adv\\_D}=\\mathbb{E}_{x\\sim p(x)}[(D_{ adv}(x)-1)^{2}]\\ +\\\\ \\mathbb{E}_{y\\sim p(y)}[(D_{adv}(G(y)))^{2}]\\end{split} \\tag{3}\\] where \\(\\mathbb{E}\\) denotes the expected value, \\(G\\) and \\(D_{adv}\\) stand for the generator and the adversarial output of the discriminator (the first task), and \\(x\\) and \\(y\\) correspond to data for both domains drawn from the distributions of \\(p(x)\\) and \\(p(y)\\). The generator adversarial loss is computed as: \\[\\mathcal{L}_{adv\\_G}=\\mathbb{E}_{y\\sim p(y)}[(D_{adv}(y)-1)^{2}]. \\tag{4}\\] The overall generator adversarial loss \\(\\mathcal{L}_{adv\\_G}\\) and the discriminator adversarial loss \\(\\mathcal{L}_{adv\\_D}\\) are calculated by simply summing the adversarial losses between real A and fake B, and real B and fake A. Classification loss.To force real A and fake B, and real B and fake A to have similar styles, normally, we need two discriminators. One is used for discriminating between real A and fake B, and the other is responsible for distinguishing between real B and fake A. However, as mentioned in Sec. 1, we want to keep the number of networks as small as possible to easily extend StandardGAN to multi-domain case. In order to use only one discriminator, we adopt the strategy explained in StarGAN [5]. Let us assume that A is the source and B is the target domain. We suppose that the labels of A and B are indicated by \\(c\\_s\\) and \\(c\\_t\\) (e.g., \\(c\\_s=0\\) and \\(c\\_t=1\\)), and the image patch sampled from A is denoted by \\(x\\). On top of the discriminator, we add a classifier. Both the discriminator and the generator have a role on this classifier. On the one hand, the discriminator wants the classifier to predict the label of A correctly. On the other hand, the generator tries to generate fake A in a way that the classifier predicts it as B. 
The classification loss for the discriminator is defined as: \\[\\mathcal{L}_{cls\\_D}=\\mathbb{E}[-\\text{log}D_{cls}(c\\_s\\mid x)], \\tag{5}\\] where \\(D_{cls}(c\\_s\\mid x)\\) denotes the probability distribution over domain labels generated by \\(D\\). By minimizing this function, \\(D\\) learns from which domain \\(x\\) comes. The classification loss for the generator is computed as: \\[\\mathcal{L}_{cls\\_G}=\\mathbb{E}[-\\text{log}D_{cls}(c\\_t\\mid G(x))]. \\tag{6}\\] Minimizing this function forces fake A (\\(G(x)\\)) to be labeled as B by \\(D\\). We sum the classification losses between real A and fake B, and real B and fake A to compute the overall domain classification losses \\(\\mathcal{L}_{cls\\_D}\\) and \\(\\mathcal{L}_{cls\\_G}\\). In the training stage, minimizing Eqs. 5 and 6 allows the discriminator to understand whether it needs to distinguish between real A and fake B or between real B and fake A. As a result, the style transfer can be performed with only one discriminator. The classification loss is particularly useful when we extend StandardGAN to the multi-domain adaptation case.

**Semantic Consistency.** As mentioned in Sec. 1, it is crucial to perform the style transfer without spoiling the semantics of the real data. Otherwise, the fake data and the ground-truth for the real data would not overlap. Thus, they cannot be used to train a model. For this reason, our decoder is architecturally quite simple. It consists of only one convolution and two deconvolution blocks (see Fig. 3). After scaling and shifting the content embedding of one domain with the AdaIN parameters learned by the style encoder from another domain, we directly decode the embedding, instead of adding further residual blocks. Moreover, we have additional constraints enforcing semantic consistency. As shown in Fig. 2, after we generate fake A with the style of B and fake B with the style of real A, we switch the styles once again to obtain A\\({}^{\\prime\\prime}\\) and B\\({}^{\\prime\\prime}\\). In an ideal case, A and A\\({}^{\\prime\\prime}\\), and B and B\\({}^{\\prime\\prime}\\) must be the same. Hence, we minimize the cross reconstruction loss \\(\\mathcal{L}_{cross}\\), which is the sum of the L1 norms between A and A\\({}^{\\prime\\prime}\\), and between B and B\\({}^{\\prime\\prime}\\). Similarly, when we combine the content information of a domain with its own style information, we should reconstruct the domain itself (see A\\({}^{\\prime}\\) and B\\({}^{\\prime}\\) in Fig. 2). We also minimize the self reconstruction loss \\(\\mathcal{L}_{self}\\), which is computed by summing the L1 norms between A and A\\({}^{\\prime}\\), and between B and B\\({}^{\\prime}\\).

**Training.** The overall generator loss is calculated as: \\[\\mathcal{L}_{G}=\\lambda_{1}\\mathcal{L}_{cross}+\\lambda_{2}\\mathcal{L}_{self}+ \\lambda_{3}\\mathcal{L}_{cls\\_G}+\\lambda_{4}\\mathcal{L}_{adv\\_G}, \\tag{7}\\] where \\(\\lambda_{1},\\lambda_{2},\\lambda_{3}\\), and \\(\\lambda_{4}\\) denote the weights for the individual losses. The discriminator loss is defined as: \\[\\mathcal{L}_{D}=\\lambda_{3}\\mathcal{L}_{cls\\_D}+\\lambda_{4}\\mathcal{L}_{adv\\_D}. \\tag{8}\\] We minimize \\(\\mathcal{L}_{G}\\) and \\(\\mathcal{L}_{D}\\) simultaneously. As can be seen in Fig. 3, to generate fake data, the content encoder, the decoder, and the AdaIN parameters learned by the style encoder of the other domain are required. The issue is that the style encoder produces different AdaIN parameters for each image patch depending on the context of the patch.
For instance, we cannot expect patches from a forest and an industrial area to have similar parameters, because they have different styles. For each domain, to capture the global AdaIN parameters, we first initialize domain-specific \\(\\gamma\\) and \\(\\beta\\) parameters with zeros. We then propose to update them in each training iteration as: \\[p=0.95\\times p+0.05\\times p\\text{\\_current}, \\tag{9}\\] where \\(p\\) is the global domain specific AdaIN parameter (i.e., \\(\\gamma\\) or \\(\\beta\\)) and \\(p\\text{\\_current}\\) is the parameter from the current training patch. After a sufficiently long training process, Eq. 9 estimates the global AdaIN parameters for each domain. These estimates can then be used in the test stage.

### StandardGAN for Image Standardization

As mentioned previously, the domain agnostic content encoder learns to map domains into a common space. To generate target stylized fake source data, the content embedding extracted by the content encoder from the source domain is normalized with the global AdaIN parameters of the target domain. The normalized embedding is then given to the decoder to generate the fake data. We have discovered that instead of normalizing the embedding with the AdaIN parameters for one of the domains, if we normalize it with the arithmetic average of the global AdaIN parameters of both domains, StandardGAN learns to generate standardized data. The standardization process for two domains is depicted in Fig. 4. As shown in the figure, real A and real B have considerably different data distributions. On the other hand, standardized A and standardized B look quite similar, and their data distributions are somewhere between the data distributions of real A and real B. To standardize multiple domains, we propose Alg. 1. In the multi-domain case, \\(c\\text{\\_s}\\) and \\(c\\text{\\_t}\\) in Eqs. 5 and 6 can range between 0 and \\(n\\) - 1, where \\(n\\) is the number of domains. As shown in Fig. 5, we perform adaptation between each pair of domains. We then take the average of the global AdaIN parameters of each domain and use the average to normalize the embeddings extracted by the content encoder from all the domains. We finally decode the normalized embeddings via the decoder to generate the standardized data.

Figure 4: Standardizing two domains. Dashed lines correspond to arithmetic average.

Figure 5: Standardizing multiple domains. Solid arrows represent adaptation between two domains. Dashed lines correspond to arithmetic average. \\(\\gamma_{avg}\\) and \\(\\beta_{avg}\\) are used for standardization.

## 4 Experiments

In our experiments, we use Pleiades images captured from 5 cities in Austria, 2 cities in France, and 1 city in Liechtenstein. The spectral channels consist of red, green, and blue bands. The spatial resolution has been reduced to 1 m by the data set providers. The annotations for building, road, and tree classes have been provided\\({}^{1}\\). Table 1 reports, for each city, the name of the city, the percentage of pixels belonging to each class, and the total covered area.

Footnote 1: The authors would like to thank LuxCartA Technology for providing the annotated data that enabled us to conduct this research.

We have two experimental setups. In the first experiment, we use the images from Salzburg Stadt, Villach, Lianz, and Sankt Polten for training and the image from Bad Ischl for testing. In the second experiment, we choose Salzburg Stadt, Villach, Bourges, and Lille as the training cities and Vaduz as the test city.
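Before discussing the two experiments in detail, the short sketch below summarizes how the global AdaIN parameters of Eq. 9 can be tracked during training and then averaged across domains for the standardization of Sec. 3.2. The dictionary-based bookkeeping and the 128-dimensional parameter vectors are assumptions made for illustration.

```python
# Sketch of the running estimate of the global AdaIN parameters (Eq. 9) and of
# the cross-domain averaging used for standardization.
import torch

n_domains = 5
global_params = {d: {"gamma": torch.zeros(128), "beta": torch.zeros(128)}
                 for d in range(n_domains)}


def update_global(domain, gamma_current, beta_current):
    """Eq. 9: p = 0.95 * p + 0.05 * p_current, applied to gamma and beta."""
    g = global_params[domain]
    g["gamma"] = 0.95 * g["gamma"] + 0.05 * gamma_current.detach()
    g["beta"] = 0.95 * g["beta"] + 0.05 * beta_current.detach()


def standardization_params():
    """Arithmetic average of the global AdaIN parameters over all domains."""
    gamma_avg = torch.stack([global_params[d]["gamma"] for d in range(n_domains)]).mean(0)
    beta_avg = torch.stack([global_params[d]["beta"] for d in range(n_domains)]).mean(0)
    return gamma_avg, beta_avg
```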
In the first experiment, we want to observe how well our method generalizes to a new city from the same country. On the other hand, the goal of the second experiment is to investigate the generalization abilities of our approach when training and test data come from different countries. Let us also remark that, as confirmed by Table 1, classes in the test cities (i.e., Bad Ischl and Vaduz) are highly imbalanced, which makes the domain adaptation problem even more difficult. For example, in both cases, the number of pixels labeled as tree is significantly larger than the number of pixels labeled as building and road. In the pre-processing step, we split all the cities into 256\\(\\times\\)256 patches with 32 pixels of overlap. We set \\(\\lambda_{1},\\lambda_{2},\\lambda_{3}\\), and \\(\\lambda_{4}\\) in Eqs. 7 and 8 to 10, 10, 1, and 1, respectively. We train StandardGAN for 20 epochs with the Adam optimizer, where the initial learning rate is 0.0002 and the exponential decay rates for the moment estimates are 0.5 and 0.999, respectively. In each training iteration of StandardGAN, we randomly sample 1 patch from each domain. After the 10\\({}^{th}\\) epoch, we progressively reduce the learning rate in each epoch as: \\[\\text{learn. rate}=\\text{init\\_lr}\\times\\frac{\\text{num\\_epochs}-\\text{epoch\\_no }}{\\text{num\\_epochs}-\\text{decay\\_epoch}}, \\tag{10}\\] where init_lr, num_epochs, epoch_no, and decay_epoch correspond to the initial learning rate (0.0002 in our case), the total number of epochs (we set it to 20), the current epoch number, and the epoch number at which we start reducing the learning rate (set to 10). Table 2 reports the total number of training patches in both experiments and the training time of StandardGAN. We first standardize all the data. We then train a model on the standardized source data and classify the standardized target data.

\\begin{table} \\begin{tabular}{c|c|c|c|c} \\hline \\multirow{2}{*}{**City (Country)**} & \\multicolumn{3}{c|}{**Class percentages (\\(\\%\\))**} & \\multicolumn{1}{c}{**Area**} \\\\ \\cline{2-5} & **building** & **road** & **tree** & **(km\\({}^{2}\\))** \\\\ \\hline Bad Ischl (AT) & 5.51 & 6.0 & 35.38 & 27.71 \\\\ Salzburg Stadt (AT) & 9.44 & 8.69 & 23.88 & 134.71 \\\\ Villach (AT) & 9.26 & 10.63 & 19.91 & 43.59 \\\\ Lianz (AT) & 6.96 & 8.16 & 15.37 & 28.38 \\\\ Sankt Polten (AT) & 6.68 & 6.39 & 25.13 & 87.17 \\\\ Bourges (FR) & 9.81 & 10.52 & 14.83 & 72.20 \\\\ Lille (FR) & 18.36 & 12.71 & 15.40 & 117.58 \\\\ Vaduz (LI) & 3.57 & 4.30 & 33.69 & 96.08 \\\\ \\hline \\end{tabular} \\end{table} Table 1: The data set.

\\begin{table} \\begin{tabular}{c c c c c} \\hline **Method** & **building** & **road** & **tree** & **Overall** \\\\ \\hline U-net & 45.36 & 18.81 & **82.43** & 48.87 \\\\ Gray-world & 49.39 & 42.25 & 66.31 & 52.65 \\\\ Hist. Equaliz. & 45.33 & 39.07 & 73.03 & 52.48 \\\\ Z-score norm. & 51.22 & 46.56 & 77.62 & 58.47 \\\\ StandardGAN & **56.41** & **50.26** & 80.59 & **62.42** \\\\ \\hline \\end{tabular} \\end{table} Table 4: IoU scores for Vaduz (the second experiment).

\\begin{table} \\begin{tabular}{c c c c} \\hline **GPU** & **Exp.** & **\\# of patches** & **Tr. time (secs.)** \\\\ \\hline Nvidia Tesla V100 SMX2 & 1 & 5712 & 6077.82 \\\\ & 2 & 8226 & 9929.52 \\\\ \\hline \\end{tabular} \\end{table} Table 2: Training time of StandardGAN for both experiments.

We compare
our approach with the other standardization algorithms described in Sec. 2, namely gray-world [3], histogram equalization [11], and Z-score normalization (Eq. 1). We use U-net [26] as the classifier. We also provide the experimental results for the naive U-net without applying any domain adaptation methods. For each comparison, we train a U-net for 35 epochs via the Adam optimizer with a learning rate of 0.0001 and exponential decay rates of 0.9 and 0.999. In each training iteration of U-net, we use a mini-batch of 32 randomly sampled patches. We perform online data augmentation with random rotations and flips.

Figure 6: Histograms for the green band of the cities used in the first experiment. (a) Before standardization, (b) After standardization.

Figure 7: Real data used in the first experiment and the outputs generated by StandardGAN. Left column: the real data. Matrix on the right: the standardized data are highlighted by red bounding boxes. The rest of the cells depict the \\(i^{th}\\) domain with the style of the \\(j^{th}\\) domain. The domain ids are indicated inside parentheses.

Figure 8: Real cities used in the second experiment, and the standardized data generated by StandardGAN.

In Fig. 7, we depict close-ups from the cities used in the first experiment and the fake data generated by StandardGAN. Note that to train a model, we do not use the target stylized source data; we use only the standardized data that are highlighted by red bounding boxes in the figure. The style transfer between each pair of domains is the prior step to the standardization. We can clearly observe that there exists a substantial difference between the data distributions of the real data, whereas the standardized data look similar. Moreover, Fig. 6 verifies that the color histograms of the standardized data are considerably closer to each other than those of the real data. Fig. 8 shows close-ups from the cities in the second experiment and their standardized versions by StandardGAN. The standardized and the real data for Salzburg Stadt and Lille seem quite similar. The reason is that the data distributions of these two cities are already somewhere between the distributions of all five cities. However, the radiometry of Villach, Bourges, and Vaduz significantly changes after the standardization process. Besides, all the standardized data have similar data distributions. Tables 3 and 4 report the intersection over union (IoU) [6] values for both experiments. The training data acquired over a single country are usually more representative for a city from the same country than for a city from another country. For this reason, the quantitative results for the first experiment are generally higher. Besides, in some cases, the representativeness of the samples belonging to different classes may vary. For instance, in the first experiment, the traditional U-net already exhibits a relatively good performance for the tree class, as the tree samples from the source domains represent well the samples in the target data. For this class, the performance of our method is slightly worse. This is probably because of some artifacts generated by the proposed GAN architecture when standardizing the domains. On the other hand, for the other classes, our approach achieves a better performance than all the other methods. In the second experiment, unlike the first one, none of the class samples in the source domains are representative of the target domain. Hence, the performance of U-net is poor.
In addition, the common heuristic-based pre-processing methods do not help improve the results. However, StandardGAN allows the classifier to generalize significantly better to completely different geographic locations. Fig. 9 illustrates the improvement of our framework over the naive U-net in terms of predicted maps.

## 5 Concluding Remarks

In this study, we presented StandardGAN, a novel pre-processing approach for standardizing multiple domains. In our experiments, we verified that the standardized data generated by StandardGAN enable the classifier to generalize significantly better to new Pleiades data. Note that StandardGAN has only one content encoder, one discriminator, one decoder, and \\(n\\) style encoders. Although there are multiple style encoders, their architecture is fairly simple. Thus, it is feasible to use StandardGAN to standardize a larger number of domains than the number of cities in our experiments. As future work, we plan to use StandardGAN for adaptation of more domains and for other types of remote sensing data such as Sentinel, aerial, and hyper-spectral images. In addition, we plan to investigate whether StandardGAN could be used for other real-world applications such as change detection.

Figure 9: Comparison between the traditional U-net and our framework. Red, green, and white pixels represent building, road, and tree classes, respectively. The pixels in black do not belong to any class.

## References

* [1] V. Agarwal, B. R. Abidi, A. Koschan, and M. A. Abidi. An overview of color constancy algorithms. _Journal of Pattern Recognition Research_, 1(1):42-54, 2006. * [2] B. Benjdira, Y. Bazi, A. Koubaa, and K. Ouni. Unsupervised domain adaptation using generative adversarial networks for semantic segmentation of aerial images. _Remote Sensing_, 11(11):1369, 2019. * [3] G. Buchsbaum. A spatial processor model for object colour perception. _Journal of the Franklin Institute_, 1980. * [4] A. Buslaev, A. Parinov, E. Khvedchenya, V. I. Iglovikov, and A. A. Kalinin. Albumentations: fast and flexible image augmentations. _arXiv preprint arXiv:1809.06839_, 2018. * [5] Y. Choi, M. Choi, M. Kim, J.-W. Ha, S. Kim, and J. Choo. StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, pages 8789-8797, 2018. * [6] G. Csurka, D. Larlus, and F. Perronnin. What is a good evaluation measure for semantic segmentation? In _British Machine Vision Conference_, volume 27, page 2013, 2013. * [7] X. Deng, H. L. Yang, N. Makkar, and D. Lunga. Large scale unsupervised domain adaptation of segmentation networks with adversarial learning. In _IEEE International Geoscience and Remote Sensing Symposium_, pages 4955-4958, 2019. * [8] A. Elshamli, G. W. Taylor, and S. Areibi. Multisource domain adaptation for remote sensing using deep neural networks. _IEEE Transactions on Geoscience and Remote Sensing_, 2019. * [9] A. V. Etten, D. Lindenbaum, and T. Bacastow. SpaceNet: A remote sensing dataset and challenge series. _arXiv preprint arXiv:1807.01232_, 2018. * [10] D. A. Forsyth. A novel algorithm for color constancy. _International Journal of Computer Vision_, 5(1):5-35, 1990. * [11] R. C. Gonzalez and R. E. Woods. _Digital Image Processing (3rd Edition)_. Pearson International Edition, 2006. * [12] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets.
In _Advances in Neural Information Processing Systems_, pages 2672-2680, 2014. * [13] J. Hoffman, E. Tzeng, T. Park, J.-Y. Zhu, P. Isola, K. Saenko, A. A. Efros, and T. Darrell. CyCADA: Cycle-consistent adversarial domain adaptation. _arXiv preprint arXiv:1711.03213_, 2017. * [14] J. Hoffman, D. Wang, F. Yu, and T. Darrell. FCNs in the wild: Pixel-level adversarial and constraint-based adaptation. _arXiv preprint arXiv:1612.02649_, 2016. * [15] H. Huang, Q. Huang, and P. Krahenbuhl. Domain transfer through deep activation matching. In _Proceedings of the European Conference on Computer Vision_, pages 590-605, 2018. * [16] X. Huang and S. Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In _Proceedings of the IEEE International Conference on Computer Vision_, pages 1501-1510, 2017. * [17] X. Huang, M.-Y. Liu, S. Belongie, and J. Kautz. Multimodal unsupervised image-to-image translation. In _Proceedings of the European Conference on Computer Vision_, pages 172-189, 2018. * [18] H.-Y. Lee, H.-Y. Tseng, J.-B. Huang, M. Singh, and M.-H. Yang. Diverse image-to-image translation via disentangled representations. In _Proceedings of the European Conference on Computer Vision_, pages 35-51, 2018. * [19] M.-Y. Liu, T. Breuel, and J. Kautz. Unsupervised image-to-image translation networks. In _Advances in Neural Information Processing Systems_, pages 700-708, 2017. * [20] E. Maggiori, Y. Tarabalka, G. Charpiat, and P. Alliez. Convolutional neural networks for large-scale remote-sensing image classification. _IEEE Transactions on Geoscience and Remote Sensing_, 55(2):645-657, 2016. * [21] X. Mao, Q. Li, H. Xie, R. Y. K. Lau, Z. Wang, and S. P. Smolley. Least squares generative adversarial networks. In _Proceedings of the IEEE International Conference on Computer Vision_, pages 2794-2802, 2017. * [22] X. Pan, P. Luo, J. Shi, and X. Tang. Two at once: Enhancing learning and generalization capacities via ibn-net. In _Proceedings of the European Conference on Computer Vision_, pages 464-479, 2018. * [23] X. Peng, Q. Bai, X. Xia, Z. Huang, K. Saenko, and B. Wang. Moment matching for multi-source domain adaptation. In _Proceedings of the IEEE International Conference on Computer Vision_, pages 1406-1415, 2019. * [24] I. Redko, N. Courty, R. Flamary, and D. Tuia. Optimal transport for multi-source domain adaptation under target shift. _arXiv preprint arXiv:1803.04899_, 2018. * [25] R. Romijnders, P. Meletis, and G. Dubbelman. A domain agnostic normalization layer for unsupervised adversarial domain adaptation. In _Winter Conference on Applications of Computer Vision_, pages 1866-1875. IEEE, 2019. * [26] O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. In _International Conference on Medical Image Computing and Computer-assisted Intervention_, pages 234-241. Springer, 2015. * [27] K. Saito, Y. Ushiku, T. Harada, and K. Saenko. Adversarial dropout regularization. _arXiv preprint arXiv:1711.01575_, 2017. * [28] K. Saito, K. Watanabe, Y. Ushiku, and T. Harada. Maximum classifier discrepancy for unsupervised domain adaptation. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, pages 3723-3732, 2018. * [29] O. Tasar, S L Happy, Y. Tarabalka, and P. Alliez. ColorMapGAN: Unsupervised domain adaptation for semantic segmentation using color mapping generative adversarial networks. _arXiv preprint arXiv:1907.12859_, 2019. * [30] O. Tasar, S L Happy, Y. Tarabalka, and P. Alliez. 
SemI2I: Semantically consistent image-to-image translation for domain adaptation of remote sensing data. _arXiv preprint arXiv:2002.05925_, 2020. * [31] O. Tasar, Y. Tarabalka, and P. Alliez. Incremental learning for semantic segmentation of large-scale remote sensing data. _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, 12(9):3524-3537, 2019. * [32] Y.-H. Tsai, W.-C. Hung, S. Schulter, K. Sohn, M.-H. Yang, and M. Chandraker. Learning to adapt structured output space for semantic segmentation. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, pages 7472-7481, 2018. * [33] D. Tuia, C. Persello, and L. Bruzzone. Domain adaptation for the classification of remote sensing data: An overview of recent advances. _IEEE Geoscience and Remote Sensing Magazine_, 4(2):41-57, 2016. * [34] R. Xu, Z. Chen, W. Zuo, J. Yan, and L. Lin. Deep cocktail network: Multi-source unsupervised domain adaptation with category shift. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, pages 3964-3973, 2018. * [35] J. Zhang, C. Liang, and C.-C. J. Kuo. A fully convolutional tri-branch network (fctn) for domain adaptation. In _International Conference on Acoustics, Speech, and Signal Processing_, pages 3001-3005. IEEE, 2018. * [36] H. Zhao, S. Zhang, G. Wu, J. M. F. Moura, J. P. Costeira, and G. J. Gordon. Adversarial multiple source domain adaptation. In _Advances in neural information processing systems_, pages 8559-8570, 2018. * [37] S. Zhao, B. Li, X. Yue, Y. Gu, P. Xu, R. Hu, H. Chai, and K. Keutzer. Multi-source domain adaptation for semantic segmentation. In _Advances in Neural Information Processing Systems_, pages 7285-7298, 2019. * [38] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In _Proceedings of the IEEE International Conference on Computer Vision_, pages 2223-2232, 2017. * [39] X. Zhu, H. Zhou, C. Yang, J. Shi, and D. Lin. Penalizing top performers: Conservative loss for semantic segmentation adaptation. In _Proceedings of the European Conference on Computer Vision_, pages 568-583, 2018. * [40] Y. Zou, Z. Yu, B. V. K. Kumar, and J. Wang. Domain adaptation for semantic segmentation via class-balanced self-training. _arXiv preprint arXiv:1810.07911_, 2018.
Domain adaptation for semantic segmentation has recently been actively studied to increase the generalization capabilities of deep learning models. The vast majority of the domain adaptation methods tackle the single-source case, where the model trained on a single source domain is adapted to a target domain. However, these methods have limited practical real-world applicability, since usually one has multiple source domains with different data distributions. In this work, we deal with the multi-source domain adaptation problem. Our method, namely StandardGAN, standardizes each source domain and the target domain so that all the data have similar data distributions. We then use the standardized source domains to train a classifier and segment the standardized target domain. We conduct extensive experiments on two remote sensing data sets, in which the first one consists of multiple cities from a single country, and the other one contains multiple cities from different countries. Our experimental results show that the standardized data generated by StandardGAN allow the classifier to generate significantly better segmentations.
# Speckle Noise Analysis for Synthetic Aperture Radar (SAR) Space Data

Sanjjushri Varshini R ([email protected]), Rohith Mahadevan ([email protected]), Bagiya Lakshmi S ([email protected]), Mathivanan Periasamy ([email protected]), Raja CSP Raman ([email protected]), Lokesh M ([email protected])

## 1 Introduction

This research addresses the persistent issue of speckle noise in Synthetic Aperture Radar (SAR) space data. Speckle noise, an inherent granular noise that degrades the quality of SAR images, poses considerable challenges in accurately interpreting and analyzing these images. By effectively reducing speckle noise, the clarity and usability of SAR data can be greatly enhanced, fostering improved applications in various fields such as remote sensing, environmental monitoring, and geological surveying. This research contributes a comparative strategy, employing six distinct techniques for speckle noise reduction. Unlike approaches that rely on a single filtering technique, this study explores and compares multiple methods to determine the most effective strategy. Including various filters allows for a comprehensive analysis and offers a more robust solution to the problem of speckle noise, providing a broader methodological basis for SAR image processing. The techniques investigated in this study include Lee Filtering, Frost Filtering, Kuan Filtering, Gaussian Filtering, Median Filtering, and Bilateral Filtering. Each of these methods has its own set of characteristics and operational principles, providing diverse options for speckle noise reduction. Lee Filtering and Frost Filtering are known for their adaptive nature, while Kuan Filtering is valued for its statistical approach. Gaussian Filtering, with its smoothing properties, Median Filtering, known for edge preservation, and Bilateral Filtering, which combines both spatial and intensity domain filtering, add further dimensions to the analysis. The methodology adopted in this study involves the application of these six filters to SAR space data, followed by a comparative analysis of their performance. By systematically assessing the effectiveness of each filter, the study aims to identify the most appropriate technique or combination of methods for optimal speckle noise reduction. This comprehensive approach advances the understanding of speckle noise mitigation and provides practical insights for enhancing the quality of SAR space imagery.

## 2 Literature Review

A summary of the most cited articles on SAR data analysis found in the Mendeley database is presented here. Cloude and Pottier [1] present a method for parameterizing polarimetric scattering problems using eigenvalue analysis of the coherency matrix. This method applies a three-level Bernoulli statistical model to estimate average target scattering matrix parameters, emphasizing scattering entropy as a key factor in assessing polarimetric SAR data. The method is validated using POLSAR data from NASA/JPL AIRSAR and classical random media scattering problems. Freeman and Durden [2] developed a model combining three scattering mechanisms (canopy scatter, even/double-bounce scatter, and Bragg scatter) to describe polarimetric SAR backscatter from natural scatterers. This model effectively distinguishes between various forest conditions using data from NASA/JPL's AIRSAR system and serves as a predictive tool for estimating forest inundation and disturbance effects.
Ferretti and colleagues [3] present a procedure to identify and use stable natural reflectors or permanent scatterers from long series of interferometric SAR images. Their method, applied to ESA ERS data, achieves high-accuracy DEM and terrain motion detection by estimating and removing atmospheric phase contributions, demonstrated through motion measurements and DEM refinement in Ancona, Italy. Berardino and colleagues [4] introduce a differential SAR interferometry algorithm to monitor surface deformations over time. Using singular value decomposition, the technique links independent SAR data sets to increase temporal observation, filtering atmospheric phase artifacts. The approach, tested with European Remote Sensing satellite data from 1992 to 2000, tracks surface deformation dynamics in Campi Flegrei caldera and Naples, Italy. Combot and colleagues [5] use spaceborne SAR to improve descriptions of air-sea exchanges under tropical cyclones. Their database, constructed from RadarSat-2 and Sentinel-1, includes high-resolution wind fields for 161 cyclones. The methodology, validated against best track and SFMR measurements, effectively captures TC vortex structure, despite challenges in heavy precipitation. Abdel-Hamid and colleagues [6] assess drought stress on grasslands in Eastern Cape Province using Sentinel-1 SAR data. Their analysis shows a significant correlation between SAR backscattering coefficients and NDVI values. The study finds communal grasslands more affected by drought than commercial ones, highlighting the role of management in improving resilience and productivity. Sekertekin and colleagues [7] explore the potential of ALOS-2 and Sentinel-1 SAR data for soil moisture estimation. Their analysis shows Sentinel-1 outperforms ALOS-2 for bare soil surfaces, while both data sets provide satisfactory results in vegetated surfaces using the Water Cloud Model. The study highlights the higher accuracy of Sentinel-1 for soil moisture estimation. Mullissa and colleagues [8] propose a framework for preparing Sentinel-1 SAR backscatter data in Google Earth Engine, incorporating noise correction, speckle filtering, and radiometric terrain normalization. This framework generates Analysis-Ready-Data suitable for various land and water applications. Zhang and colleagues [9] review the SAR Ship Detection Dataset (SSDD), emphasizing its popularity and impact on SAR remote sensing. They address limitations in initial versions by introducing bounding box, rotatable bounding box, and polygon segmentation labels, along with strict usage standards to improve accuracy and academic exchanges. Gagliardi and colleagues [10] demonstrate the use of Sentinel-1A SAR data for monitoring runway displacements at Leonardo Da Vinci International Airport. Their geostatistical analysis compares SAR data with high-resolution COSMO-SkyMed and ground-based data, proving Sentinel-1A's effectiveness for long-term monitoring and maintenance strategies. Morishita and Kobayashi [11] propose a method to derive 3D deformation by integrating deformation data from different sources. Their approach, validated with ALOS-2 data, successfully retrieves 3D coseismic deformation with high accuracy. The method is applicable to other SAR datasets and beneficial for disaster recovery. Oveis and colleagues [12] review the application of convolutional neural networks (CNNs) in SAR data analysis, covering various subareas such as target recognition, land use classification, and change detection.
The review highlights practical techniques like data augmentation and transfer learning, and discusses future research directions and challenges. Tiampo and colleagues [13] compare methods for flood inundation mapping using SAR data and machine learning. They find amplitude thresholding the most effective technique, although machine learning also successfully reproduces inundation shapes. The study demonstrates high-resolution mapping's potential for emergency hazard response. Ferguson and Gunn [14] review radar polarimetric decomposition for freshwater ice systems, discussing lake ice, river ice, and glacial systems. They recommend further development of ice models and methods to improve environmental observables extraction, highlighting areas for future research. The reviewed literature demonstrates the diverse applications of SAR data and the challenges posed by speckle noise. Filter-based methods provide a practical and effective approach to address this issue. By carefully selecting and designing filters, significant improvements in image quality can be achieved, leading to more accurate and reliable results across a wide range of SAR applications.

## 3 Proposed Solution

### Data Collection

The initial step involves collecting Synthetic Aperture Radar (SAR) space data from the Alaska Satellite Facility (ASF). ASF provides a wealth of SAR data from various satellites, ensuring access to high-quality and diverse datasets. This stage is critical, as the accuracy and reliability of the subsequent analysis depend on the quality of the collected data. The selection of datasets is based on criteria such as resolution, coverage area, and the presence of speckle noise, which is inherent in SAR imagery.

### Data Extraction

Once the SAR data is collected, the next step is to convert the raw data files, typically in the NetCDF (.nc) format, into a more accessible image format such as PNG. This conversion is necessary for applying image processing techniques. The process involves extracting the relevant image data from the .nc files, ensuring that the spatial and radiometric integrity of the data is maintained. Specialized software tools and libraries, such as GDAL (Geospatial Data Abstraction Library), can be utilized for this purpose, facilitating efficient and accurate conversion.

### Data Storing

After conversion, the images are stored in a structured manner for easy access and processing. Proper metadata is also recorded to maintain the context of the images, which is essential for the accuracy of the subsequent analysis.

### Applying Speckle Noise Reduction Techniques

The core of this research lies in applying various speckle noise reduction techniques to the SAR space images. Each method has unique characteristics and offers different advantages:

**Lee Filtering:** Utilizes local statistics to smooth the image while adaptively preserving edges. It is particularly effective in reducing speckle noise without compromising significant image features.

**Frost Filtering:** Applies an exponential kernel that adapts to local variations in the image, effectively reducing noise while maintaining the integrity of edges and fine details.

**Kuan Filtering:** Uses a multiplicative noise model to analyze and reduce speckle noise statistically. It balances between smoothing homogeneous areas and preserving edges.

**Gaussian Filtering:** A linear filter that smooths the image based on a Gaussian function, effectively reducing noise at the cost of slight blurring of edges.
**Median Filtering:** A non-linear filter that replaces each pixel value with the median value of the neighboring pixels. It is effective in reducing impulsive noise while preserving edges.

**Bilateral Filtering:** Combines spatial and intensity domain filtering to smooth images while preserving edges and fine details. It is particularly effective at maintaining high image quality after processing.

### Result Analysis

The final step involves a thorough analysis of the results obtained from applying the different filtering techniques.

**PSNR (Peak Signal-to-Noise Ratio)**: Measures the ratio between the maximum possible signal power and the power of corrupting noise, indicating the quality of the reconstructed image.

**MSE (Mean Squared Error)**: Calculates the average of the squares of the differences between the original and denoised image pixels, reflecting the overall error.

**SSIM (Structural Similarity Index)**: Assesses the similarity between the original and denoised images by comparing luminance, contrast, and structure, providing a perceptual quality measure.

**ENL (Equivalent Number of Looks)**: Evaluates the extent of speckle noise reduction by measuring the homogeneity of uniform image areas.

**SSI (Speckle Suppression Index)**: Quantifies the effectiveness of speckle noise reduction, balancing noise suppression and detail preservation in the image. (A brief computational sketch illustrating these filters and metrics is given below.)

## 4 Result

In our research, we have evaluated various speckle noise reduction algorithms using multiple evaluation metrics. These metrics include Peak Signal-to-Noise Ratio (PSNR), Mean Squared Error (MSE), Structural Similarity Index (SSIM), Equivalent Number of Looks (ENL), and Speckle Suppression Index (SSI). The performance of five different filters (Lee, Kuan, Gaussian, Median, and Bilateral) was assessed using these metrics. The results are summarized in the table below:

\\begin{tabular}{l c c c c c} \\hline **Metric** & **Lee Filter** & **Kuan Filter** & **Gaussian Filter** & **Median Filter** & **Bilateral Filter** \\\\ \\hline PSNR & 40.206693 & 37.984304 & 29.072036 & 29.669396 & 29.907636 \\\\ MSE & 6.200277 & 10.343083 & 80.514994 & 70.168261 & 66.422740 \\\\ SSIM & 0.969557 & 0.986636 & 0.659307 & 0.804169 & 0.840699 \\\\ ENL & 0.633798 & 0.697551 & 0.872117 & 0.645966 & 0.689973 \\\\ SSI & 1.103162 & 1.121805 & 4.188730 & 3.502629 & 2.846183 \\\\ \\hline \\end{tabular}

**PSNR**: The Lee Filter achieved the highest PSNR value of 40.206693, indicating the best performance in terms of signal preservation. **MSE**: The Lee Filter also performed the best in terms of MSE, with the lowest value of 6.200277, suggesting minimal error. **SSIM**: The Kuan Filter showed the highest SSIM value of 0.986636, indicating superior structural similarity and image quality. **ENL**: The Gaussian Filter obtained the highest ENL value of 0.872117, suggesting it is the best at reducing speckle noise. **SSI**: The Lee and Kuan Filters both demonstrated strong performance in terms of SSI, with values of 1.103162 and 1.121805, respectively, indicating effective speckle suppression.

## 5 Future scope

The review of literature highlights several critical areas where Synthetic Aperture Radar (SAR) data analysis has made significant advancements, particularly in polarimetric scattering, surface deformation monitoring, and environmental applications. However, there remains substantial room for future research and development.
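To make the evaluation above concrete before turning to future directions, the sketch below shows one way the adaptive Lee filter and the reported metrics could be computed in Python. The window size, the global noise-variance estimate, the particular ENL/SSI formulas, and the scipy/scikit-image calls are illustrative assumptions, not the exact configuration used in this study.

```python
# Sketch of a Lee filter and of the evaluation metrics (PSNR, MSE, SSIM, ENL, SSI).
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.metrics import (mean_squared_error, peak_signal_noise_ratio,
                             structural_similarity)


def lee_filter(img, size=7):
    """Adaptive Lee filter: local statistics decide how much smoothing to apply."""
    img = img.astype(np.float64)
    mean = uniform_filter(img, size)
    sq_mean = uniform_filter(img ** 2, size)
    var = np.maximum(sq_mean - mean ** 2, 0.0)
    noise_var = var.mean()                       # crude global noise estimate
    weight = var / (var + noise_var)
    return mean + weight * (img - mean)


def evaluate(original, denoised, homogeneous_region=None):
    """Return the metrics reported in the results table (standard definitions)."""
    data_range = float(original.max() - original.min())
    scores = {
        "PSNR": peak_signal_noise_ratio(original, denoised, data_range=data_range),
        "MSE": mean_squared_error(original, denoised),
        "SSIM": structural_similarity(original, denoised, data_range=data_range),
        # SSI (one common form): speckle index after filtering / before filtering.
        "SSI": (denoised.std() / denoised.mean()) / (original.std() / original.mean()),
    }
    if homogeneous_region is not None:
        region = denoised[homogeneous_region]    # visually uniform area of the image
        scores["ENL"] = region.mean() ** 2 / region.var()
    return scores
```

The Frost, Kuan, Gaussian, median, and bilateral filters can be slotted into the same evaluation loop.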
With the growing application of convolutional neural networks (CNNs) in SAR data analysis, future research could focus on developing more sophisticated machine learning models, such as deep learning architectures, to enhance the accuracy and efficiency of tasks like target recognition, land use classification, and change detection. Exploring the potential of hybrid models combining machine learning with traditional SAR processing techniques could also yield promising results. The integration of SAR data with other remote sensing data, such as optical imagery or LiDAR, presents an exciting opportunity for enhanced environmental monitoring and disaster management. Future studies could focus on developing frameworks that effectively merge data from different sources to provide more comprehensive and accurate assessments of natural phenomena, such as flooding, landslides, and forest conditions. ## 6 Conclusion Overall, the Lee Filter exhibits the best performance across several metrics, making it a strong candidate for speckle noise reduction. However, the Kuan Filter emerges as a viable alternative, offering a good balance with high PSNR, low MSE, high SSIM, and reasonable ENL. While the Lee Filter provides slightly better image detail preservation, the Kuan Filter's performance in multiple metrics suggests it could be the preferred choice for effective speckle noise reduction. ## 7 References * [1] S. R. Cloude and E. Pottier, \"An entropy based classification scheme for land applications of polarimetric SAR,\" IEEE Transactions on Geoscience and Remote Sensing, vol. 35, no. 1, 1997, doi: 10.1109/36.551935. * [2] A. Freeman and S. L. Durden, \"A three-component scattering model for polarimetric SAR data,\" IEEE Transactions on Geoscience and Remote Sensing, vol. 36, no. 3, 1998, doi: 10.1109/36.673687. * [3] A. Ferretti, C. Prati, and F. Rocca, \"Permanent scatterers in SAR interferometry,\" IEEE Transactions on Geoscience and Remote Sensing, vol. 39, no. 1, 2001, doi: 10.1109/36.898661. * [4] P. Berardino, G. Fornaro, R. Lanari, and E. Sansosti, \"A new algorithm for surface deformation monitoring based on small baseline differential SAR interferograms,\" IEEE Transactions on Geoscience and Remote Sensing, vol. 40, no. 11, 2002, doi: 10.1109/TGRS.2002.803792. * [5] C. Combot et al., \"Extensive high-resolution synthetic aperture radar (SAR) data analysis of tropical cyclones: Comparisons with SFMR flights and best track,\" Mon Weather Rev, vol. 148, no. 11, 2020, doi: 10.1175/MWR-D-20-0005.1. * [6] A. Abdel-Hamid, O. Dubovyk, V. Graw, and K. Greve, \"Assessing the impact of drought stress on grasslands using multi-temporal SAR data of Sentinel-1: a case study in Eastern Cape, South Africa,\" Eur J Remote Sens, vol. 53, no. sup2, 2020, doi: 10.1080/22797254.2020.1762514. * [7] A. Sekertekin, A. M. Marangoz, and S. Abdikan, \"ALOS-2 and Sentinel-1 SAR data sensitivity analysis to surface soil moisture over bare and vegetated agricultural fields,\" Comput Electron Agric, vol. 171, 2020, doi: 10.1016/j.compag.2020.105303. * [8] A. Mullissa et al., \"Sentinel-1 sar backscatter analysis ready data preparation in google earth engine,\" Remote Sens (Basel), vol. 13, no. 10, 2021, doi: 10.3390/rs13101954. * [9] T. Zhang et al., \"SAR ship detection dataset (SSDD): Official release and comprehensive data analysis,\" Remote Sens (Basel), vol. 13, no. 18, 2021, doi: 10.3390/rs13183690. * [10] V. 
Gagliardi et al., \"Testing sentinel-1 sar interferometry data for airport runway monitoring: A geostatistical analysis,\" Sensors, vol. 21, no. 17, 2021, doi: 10.3390/s21175769. [11] Y. Morishita and T. Kobayashi, \"Three-dimensional deformation and its uncertainty derived by integrating multiple SAR data analysis methods,\" Earth, Planets and Space, vol. 74, no. 1, 2022, doi: 10.1186/s40623-022-01571-z. * [12] A. H. Oveis, E. Giusti, S. Ghio, and M. Martorella, \"A Survey on the Applications of Convolutional Neural Networks for Synthetic Aperture Radar: Recent Advances,\" IEEE Aerospace and Electronic Systems Magazine, vol. 37, no. 5, 2022, doi: 10.1109/MAES.2021.3117369. * [13] K. F. Tiampo, L. Huang, C. Simmons, C. Woods, and M. T. Glasscoe, \"Detection of Flood Extent Using Sentinel-1A/B Synthetic Aperture Radar: An Application for Hurricane Harvey, Houston, TX,\" Remote Sens (Basel), vol. 14, no. 9, 2022, doi: 10.3390/rs14092261. * [14] J. E. Ferguson and G. E. Gunn, \"Polarimetric decomposition of microwave-band freshwater ice SAR data: Review, analysis, and future directions,\" Remote Sens Environ, vol. 280, 2022, doi: 10.1016/j.rse.2022.113176.
This research tackles the challenge of speckle noise in Synthetic Aperture Radar (SAR) space data, a prevalent issue that hampers the clarity and utility of SAR images. The study presents a comparative analysis of six distinct speckle noise reduction techniques: Lee Filtering, Frost Filtering, Kuan Filtering, Gaussian Filtering, Median Filtering, and Bilateral Filtering. These methods, selected for their unique approaches to noise reduction and image preservation, were applied to SAR datasets sourced from the Alaska Satellite Facility (ASF). The performance of each technique was evaluated using a comprehensive set of metrics, including Peak Signal-to-Noise Ratio (PSNR), Mean Squared Error (MSE), Structural Similarity Index (SSIM), Equivalent Number of Looks (ENL), and Speckle Suppression Index (SSI). The study concludes that both the Lee and Kuan Filters are effective, with the choice of filter depending on the specific application requirements for image quality and noise suppression. This work provides valuable insights into optimizing SAR image processing, with significant implications for remote sensing, environmental monitoring, and geological surveying.

Keywords: Synthetic Aperture Radar (SAR), Speckle noise, Noise reduction techniques, Lee Filtering, Kuan Filtering, Structural Similarity Index (SSIM), Remote sensing
# Taylor bubble motion in stagnant and flowing liquids in vertical pipes. Part I: Steady-states

H. A. Abubakar\\({}^{1,2}\\) and O. K. Matar\\({}^{1}\\)

\\({}^{1}\\)Department of Chemical Engineering, Imperial College London, London SW7 2AZ, UK

\\({}^{2}\\)Department of Chemical Engineering, Ahmadu Bello University, Zaria 810107, Nigeria

## 1 Introduction

Slug flow is a regime observed in gas-liquid flows in pipes, which is of central importance to steam production in geothermal power plants, hydrocarbons production in oil wells and their transportation in pipelines, and emergency cooling of nuclear reactors (Capponi _et al._, 2016; Fabre & Line, 1992; Mao & Dukler, 1990; Taha & Cui, 2006). This flow regime also features in geological systems such as volcanic eruptions (Pering & McGonigle, 2018). In vertical pipes, slug flow exhibits the pseudo-periodic rise of large bullet-shaped _Taylor bubbles_ separated by liquid slugs. The starting point for understanding slug flow in vertical pipes is elucidating the behaviour of a single Taylor bubble rising through a liquid, which is governed by the interaction of gravitational, interfacial, viscous, and inertial forces parameterised by a number of dimensionless groups; these include the inverse viscosity, \\(Nf\\), Eötvös, \\(Eo\\), and Froude, \\(Fr\\), numbers, defined as \\[Nf=\\frac{\\rho\\left(gD^{3}\\right)^{\\frac{1}{2}}}{\\mu},\\qquad Eo=\\frac{\\rho gD^{ 2}}{\\gamma},\\qquad Fr=\\frac{u}{\\sqrt{gD}}, \\tag{1}\\] where \\(\\rho\\), \\(\\mu\\), and \\(u\\) denote the density, dynamic viscosity, and a characteristic liquid velocity, respectively. At large \\(Nf\\) and \\(Eo\\), viscous and interfacial forces have a negligible influence on the bubble rise speed, and the assumptions underlying the analytical solutions of Dumitrescu (1943) and Davies & Taylor (1950) are valid. Later experimental, theoretical and numerical studies (Brown, 1965; Goldsmith & Mason, 1962; Kang _et al._, 2010; Lu & Prosperetti, 2009; Nickens & Yannitel, 1987; Zukoski, 1966) have provided further insights into the role of surface tension and viscosity on the rise speed in both inertial and non-inertial regimes through their influence on the radius of curvature of the bubble nose. Using a large pool of experimental data for \\(U_{b}\\) in stagnant liquids, Viana _et al._ (2003) developed a correlation, recently modified by Lizarraga-Garcia _et al._ (2017), for the effect of \\(Eo\\) and \\(Nf\\) on the rise speed taking into account pipe inclination. For a Taylor bubble rising in a flowing liquid, Nicklin _et al._ (1962) proposed a correlation, corroborated by theoretical investigations (Bendiksen, 1985; Collins _et al._, 1978), for upward flowing liquid, which relates \\(U_{b}\\) to \\(\\bar{U}_{L}\\) \\[U_{b}=C_{1}\\bar{U}_{L}+C_{0}, \\tag{5}\\] with \\(C_{0}\\) and \\(\\bar{U}_{L}\\) retaining their earlier definitions and \\(C_{1}\\) representing a dimensionless constant whose value depends on the velocity profile of the flowing liquid and is equal to the ratio of the maximum to mean liquid velocity (Bendiksen, 1985; Clift _et al._, 1978; Collins _et al._, 1978; Nicklin _et al._, 1962). For turbulent flow, \\(C_{1}\\approx 1.2\\), increasing with decreasing \\(Re_{L}\\) and approaching \\(C_{1}\\approx 1.9\\) at \\(Re_{L}=100\\) (Nicklin _et al._, 1962).
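As a quick numerical illustration of the dimensionless groups in (1) and of the Nicklin-type correlation (5), the snippet below evaluates them for the olive-oil validation case whose properties are listed later in Table 1. The constants \\(C_{0}=0.35\\) and \\(C_{1}=1.2\\), and the convention that velocities are scaled by \\(\\sqrt{gD}\\), are the commonly quoted inertial-regime choices and are used here purely as an example; they are not results of this paper.

```python
# Dimensionless groups of Eq. (1) and the Nicklin-type correlation of Eq. (5),
# evaluated for the olive-oil case of Table 1 (C0, C1 are assumed example values).
import math

rho, mu, gamma = 911.0, 84e-3, 3.28e-2   # density (kg/m^3), viscosity (Pa s), surface tension (N/m)
D, g = 19e-3, 9.81                       # pipe diameter (m), gravitational acceleration (m/s^2)

Nf = rho * math.sqrt(g * D**3) / mu      # inverse viscosity number
Eo = rho * g * D**2 / gamma              # Eotvos number

U_L_bar = 0.0                            # dimensionless mean liquid velocity (stagnant liquid)
C0, C1 = 0.35, 1.2                       # assumed inertial-regime constants
U_b = C1 * U_L_bar + C0                  # dimensionless rise speed, Eq. (5)

print(f"Nf = {Nf:.2f}, Eo = {Eo:.2f}, U_b = {U_b:.2f}")
# compare with Table 1: Nf = 88.95, Eo = 98.33
```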
Other important features that have been studied experimentally, theoretically, and numerically are the film thickness and the length of the developing film (Araujo _et al._, 2012; Batchelor, 1967; Brown, 1965; Goldsmith & Mason, 1962; Kang _et al._, 2010; Llewellin _et al._, 2012; Nogueira _et al._, 2006_a_), the wake (Araujo _et al._, 2012; Campos & Guedes de Carvalho, 1988; Maxworthy, 1967; Moissis & Griffith, 1962; Nogueira _et al._, 2006\\(b\\); Pinto _et al._, 1998), and the wall stress features (Araujo _et al._, 2012; Feng, 2008; Nogueira _et al._, 2006_a_). Despite the volume of previous research, there is still a need for a systematic study of the influence of the fluid properties and flow conditions on the bubble behaviour. This is motivated by the experimental evidence for Taylor bubble feature transitions, such as a change in the flow pattern in the wake region and in the bubble shape from symmetric to asymmetric in downward liquid flow, as well as bubble breakup under certain conditions (most likely caused by fluctuations in a turbulent environment). The critical conditions at which these transitions occur, and their underlying mechanisms, can be understood by examining the stability of the axisymmetric steady-states for the corresponding parameter values. In the present work, we calculate the steady shape of axisymmetric Taylor bubbles, and their associated flow fields, moving in stagnant and downward-flowing liquids in vertical pipes, characterised by \\(Nf\\), \\(Eo\\), and \\(Fr\\). Plots showing the influence of these parameters on the Taylor bubble shape are presented, and the associated impact on the steady-state features characterising the three distinct bubble regions discussed above, namely the nose, body, and bottom, is examined. Comparisons are made between our numerical predictions and those based on theoretical analysis or empirical correlations; insights into the physical mechanisms governing the observed influence are provided. In a companion paper (Abubakar & Matar, 2021), Part II of this two-part study, the linear stability of these steady-state solutions is examined together with an energy analysis to pinpoint the destabilising mechanisms. The rest of this paper is organised as follows. Section 2 is devoted to details of the problem formulation and the numerical simulation strategy based on the use of a finite-element technique. The results of the steady-state simulations in stagnant and downward-flowing liquids are discussed in Sections 3 and 4, respectively. Finally, in Section 5, concluding remarks are provided.

## 2 Problem formulation

### Governing equations

We consider the motion of an axisymmetric Taylor bubble of volume, \\(v_{b}\\), moving at a velocity of magnitude \\(u_{b}\\) through an incompressible fluid of density \\(\\rho\\), viscosity \\(\\mu\\), and interfacial tension \\(\\gamma\\) in a vertically-oriented, circular pipe of diameter \\(D\\); \\(v_{b}\\), \\(u_{b}\\), and \\(\\gamma\\) are considered to be constants. In addition, we assume that the density, \\(\\rho_{g}\\), and viscosity, \\(\\mu_{g}\\), of the gas bubble are very small compared to those of the liquid, and that the pressure within the bubble, \\(p_{b}\\), is also a constant; hence, the influence of the gas phase is restricted to the interface separating the liquid and gas phases (Bae & Kim, 2007; Feng, 2008; Fraggedakis _et al._, 2016; Kang _et al._, 2010; Lu & Prosperetti, 2009; Tsamopoulos _et al._, 2008; Zhou & Dusek, 2017).
A cylindrical coordinate system, \\((r,\\theta,z)\\), is adopted so that the coordinates along and perpendicular to the axis of symmetry are \\(z\\) and \\(r\\), respectively, with the interface located at \\((r,z)=\\Gamma_{b}^{0}\\), and the \\(z\\) origin chosen to coincide with the bubble nose, as shown in Figure 1. The Navier-Stokes and continuity equations which govern the bubble motion are rendered dimensionless by scaling the length, velocity, and pressure on \\(D,\\sqrt{gD}\\) and \\(\\rho gD\\), respectively. These equations, expressed in a frame of reference translating with the velocity \\(\\mathbf{u}_{b}=-U_{b}\\mathbf{i}_{z}\\) of the bubble nose, wherein \\(U_{b}=u_{b}/\\sqrt{gD}\\), are written compactly in dimensionless forms as: \\[\\frac{\\partial\\mathbf{u}}{\\partial t}+\\left(\\mathbf{u}\\cdot\\nabla\\right) \\mathbf{u}-\\nabla\\cdot\\mathbf{T}=0, \\tag{1}\\] \\[\\nabla\\cdot\\mathbf{u}=0, \\tag{2}\\] where \\(\\mathbf{u}\\) is the fluid velocity vector in the moving frame of reference, \\(t\\) denotes time, \\(\\nabla\\) is the gradient operator, and \\(\\mathbf{T}\\) is the stress tensor: \\[\\mathbf{T}=-p\\mathbf{I}+Nf^{-1}\\left(\\nabla\\mathbf{u}+\\nabla\\mathbf{u}^{T} \\right), \\tag{3}\\] in which \\(p\\) represents the dynamic pressure, and \\(\\mathbf{I}\\) the unit tensor. In order to impose boundary conditions on the solutions of equations (1)-(3), the boundary of the domain, \\(\\Gamma^{0}\\), is divided into \\(\\Gamma_{\\mathrm{in}}^{0},\\ \\Gamma_{\\mathrm{out}}^{0},\\ \\Gamma_{\\mathrm{wall}}^{0}\\), \\(\\Gamma_{\\mathrm{sym}}^{0}\\), and \\(\\Gamma_{b}^{0}\\), as shown in Figure 1, which represent the domain inlet and outlet, the wall, the symmetry axis, and the bubble interface, respectively. At the wall, no-slip and no-penetration boundary conditions are imposed, \\[\\mathbf{u}=-\\mathbf{u}_{b},\\quad\\mathrm{on}\\quad\\Gamma_{\\mathrm{wall}}^{0}, \\tag{4}\\] while at the inlet, prescribed values, \\(\\mathbf{u}_{in}\\), are specified for the velocity: \\[\\mathbf{u}=\\mathbf{u}_{in}-\\mathbf{u}_{b}\\quad\\mathrm{on}\\quad\\Gamma_{ \\mathrm{in}}^{0}. \\tag{5}\\] Along \\(\\Gamma_{\\mathrm{out}}^{0}\\), we impose an outlet condition: \\[\\mathbf{n}\\cdot\\mathbf{T}=0. \\tag{6}\\] Finally, at the interface, we impose the normal stress, tangential stress, and kinematic boundary conditions, expressed respectively by \\[\\mathbf{n}\\cdot\\mathbf{T}\\cdot\\mathbf{n}+P_{b}-z-Eo^{-1}\\kappa=0, \\tag{7}\\] \\[\\mathbf{n}\\cdot\\mathbf{T}\\times\\mathbf{n}=\\mathbf{0}, \\tag{8}\\] \\[\\frac{d\\mathbf{r}_{b}}{dt}\\cdot\\mathbf{n}-\\mathbf{u}\\cdot\\mathbf{n}=0, \\tag{9}\\] where \\(\\kappa\\) is the curvature of the interface, \\(P_{b}=p_{b}/\\rho gD\\) denotes the dimensionless bubble pressure, \\(\\mathbf{r}_{b}(t)\\) represents the position vector for the location of the interface \\(\\Gamma_{b}^{0}\\), and \\(\\mathbf{n}\\) and \\(\\mathbf{t}\\) correspond to the outward-pointing unit normal and the tangent vectors to the interface, respectively. The \\(z\\) term in the normal stress condition, given by equation (7), corresponds to the hydrostatic pressure. In order to determine the dimensionless bubble pressure, \\(P_{b}\\), a constraint of constant dimensionless bubble volume, \\(V_{b}=v_{b}/D^{3}\\), is imposed: \\[V_{b}+\\frac{1}{3}\\oint_{\\Gamma_{b}^{0}}\\left[\\mathbf{r}_{b}\\cdot\\mathbf{n} \\right]d\\Gamma_{b}^{0}=0.
\\tag{10}\\] In order to obtain a solution for the shape of the bubble of volume \\(V_{b}\\), speed \\(U_{b}\\), and pressure \\(P_{b}\\) associated with its steady motion through a liquid of dimensionless velocity \\(U_{m}\\), for given \\(Nf\\) and \\(Eo\\), we implemented a technique based on the kinematic update of the interface shape with an implicit treatment of the curvature (Slikkeveer & Van Loohuizen, 1996); the numerical procedure is described next.

### Numerical method

The steady-state versions of the governing equations and boundary conditions given by (1)-(10) are solved using a consistent penalty Galerkin finite-element method implemented within FreeFem++ (Hecht, 2012) based on the standard Taylor-Hood element and piecewise quadratic element approximations for the flow field variables and interface deformation magnitude, respectively. The system of partial differential equations (1)-(2) subject to the boundary conditions (4)-(10) is transformed into its weak form, with the dependent variables in the equations approximated using suitable basis functions. The computational domain is divided into subdomains around which the approximated variables are defined to obtain a set of nonlinear algebraic relations among the unknown parameters of the approximations.

Figure 1: Schematic of the domain used to model the steady motion of an axisymmetric Taylor bubble.

Due to the system nonlinearity, the set of equations was solved using Newton's method. In the determination of the interface shape, a kinematic update is used based on a pseudo-time-step technique, allowing for the gradual satisfaction of the no-penetration condition on the interface. The numerical solution begins by providing an initial guess for the bubble steady speed, \\(U_{b}\\), the flow field variables, \\((\\mathbf{u},p)\\), and the position vector of the interface, \\(\\mathbf{r}_{b}\\). For the first simulation carried out, \\(Nf=40\\), \\(Eo=60\\), and \\(U_{m}=0\\), corresponding to bubble rise in a stagnant liquid, \\(U_{b}\\) was initially taken to be 0.35 and the bubble interface position was assumed to be described by a quarter-circle top, a cylindrical body, and a quarter-circle bottom. The initial guess for the flow field was then obtained by solving the Stokes equation in the domain formed by the assumed bubble interface. For subsequent simulations, the previous steady-state solution for the condition closest to the new condition was used as an initial guess. With a known initial guess, the solution proceeded in three stages: solution for the variables, steady bubble speed determination, and then domain deformation. In the variable solution stage, the resulting system of linear equations in the Newton method is solved using MUltifrontal Massively Parallel sparse direct Solver (MUMPS) to obtain updated values for the velocity, pressure, and interface deformation magnitudes. The updated velocity field is then transformed from a moving to a fixed frame of reference from which the axial velocity at the bubble nose is extracted and set as the steady bubble speed. Using the interface deformation magnitude obtained in the variable solution stage, the magnitude of the deformation of the domain is then determined.
For all other nodes in the domain, the size of their deformations is adapted to that of the interface in a way that ensures that the mesh quality does not degrade rapidly, by assuming that the computational mesh is an elastic body whose interior deforms in response to the boundary deformation. This assumption forms the basis of the Elastic Mesh Update Method of treating interior nodes, which involves solving a linear elasticity equation for the mesh deformation subject to boundary conditions that equal the desired deformation on the boundaries (Ganesan & Tobiska, 2008; Johnson & Tezduyar, 1994). The iterative process is halted when the interface position vector and the values of the flow field variables, steady bubble speed, and pressure no longer change, and the no-penetration condition is satisfied. The implementation details are described in Abubakar (2019).

The numerical method was validated by simulating the experiment of Bugg & Saad (2002), in which the velocity field around a Taylor bubble rising in a stagnant olive oil in a pipe of diameter 19 mm was measured using Particle Image Velocimetry (PIV) at five different positions. The fluid properties used in the experiment and the corresponding dimensionless parameters are given in Table 1. In this table, \\(H_{b}\\) denotes the dimensionless height of a cylinder of the same diameter as that of the pipe used in the experiment that has the same volume as the gas phase, which is the aspect ratio for the bubble.

\\begin{table} \\begin{tabular}{c c c c c c c c} \\multicolumn{4}{c}{Fluid properties} & \\multicolumn{4}{c}{Dimensionless parameters} \\\\ \\hline \\(\\rho\\left(\\mathrm{kgm}^{-3}\\right)\\) & \\(\\mu\\left(\\mathrm{Nsm}^{-2}\\right)\\) & \\(\\gamma\\left(\\mathrm{Jm}^{-2}\\right)\\) & \\(v_{b}\\left(\\mathrm{m}^{3}\\right)\\) & \\(Nf\\) & \\(Eo\\) & \\(U_{m}\\) & \\(H_{b}\\) \\\\ 911 & 84 \\(\\times 10^{-3}\\) & 3.28 \\(\\times 10^{-2}\\) & 10 \\(\\times 10^{-6}\\) & 88.95 & 98.33 & 0 & 2.00 \\\\ \\hline \\end{tabular} \\end{table} Table 1: Dimensionless parameters corresponding to the fluid properties used to validate the numerical predictions against the experimental work of Bugg & Saad (2002).

Provided that the inlet and outlet boundaries have no influence on the steady-state results, periodic conditions can be imposed in place of boundary conditions (5) and (6). This approach was used by Lu & Prosperetti (2009) in their numerical study of Taylor bubble dynamics and can easily be implemented within FreeFem++. The predicted dimensionless bubble rise speed is \\(0.2928\\), corresponding to a deviation of \\(3.4\\%\\) from the experimentally measured value of \\(0.303\\). Further comparisons with the experiment were carried out using the flow field results at five measurement positions around the bubble. Ahead of the bubble, velocity measurements were taken along the pipe axis and in the radial direction at an axial distance of \\(0.111D\\). Figures 3a and 3b show the velocity profiles for these two locations, which are well predicted by our simulation. Figure 3c compares the velocity measurement taken at an axial distance of \\(0.504D\\) below the bubble nose. At this point, the magnitude of the radial velocity component is still developing. When the velocity in the film is fully-developed, the magnitude of the radial velocity at all points in the radial direction is approximately zero. By progressively plotting the radial velocity profile at various points below the bubble nose, a point is reached at which the radial velocity becomes zero.
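The criterion just described, locating the station below the nose at which the radial velocity has effectively vanished, amounts to a simple threshold search. A sketch with hypothetical numbers (the decay profile and tolerance below are illustrative, not taken from the simulations) is:

```python
import numpy as np

# maximum |u_r| sampled at stations below the bubble nose (illustrative decay profile)
z_stations = np.linspace(0.1, 2.0, 40)
max_abs_ur = 0.12 * np.exp(-3.0 * z_stations)   # hypothetical values, for illustration only

tol = 1e-3                                      # threshold for "radial velocity becomes zero"
idx = int(np.argmax(max_abs_ur < tol))          # index of the first station meeting the criterion
print(f"film treated as fully developed beyond z/D ~ {z_stations[idx]:.2f}")
```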
The axial velocity profile at this location is shown in Figure 3d and the dimensionless film thickness was measured to be \\(0.1235\\). Although no experimental measurement of the film thickness was reported in Bugg & Saad (2002), the deviation of the numerical simulation results from the theoretical estimated value of Brown (1965) using (3), which predicts the film thickness to be \\(0.1193\\), is \\(3.52\\%\\). As the liquid emerges from the falling film region into the wake of the bubble, the radial component of its velocity reappears in order to redirect the liquid from the film back towards the center of the pipe. Figure 3e shows the velocity profile in the wake of the Taylor bubble at an axial distance of \\(0.2D\\) below the bubble bottom. While the radial component of the experimental velocity profile is reasonably well predicted by the numerical simulation, it is obvious that there are larger discrepancies associated with the prediction of the axial velocity. We note that a similarly large deviation of the axial velocity was observed by Bugg & Saad (2002); Lu & Prosperetti (2009) in their numerical simulations of the same experiment. We therefore agree with Lu & Prosperetti (2009) that it is possible that the error bars associated with the experimental data for the wake region may be relatively large. \\begin{table} \\begin{tabular}{c c c c} Boundary region(s) & Triangle edge length & Boundary length & Number of triangles \\\\ \\hline 1 and 3 & 0.5 & 0.042 & 12 \\\\ 2a and 6b & varies & 0.042 & varies \\\\ 2b and 6a & 1.0 & 0.004 & 250 \\\\ 2c & varies & 0.004 & varies \\\\ 2d and 4b & 0.45 & 0.007 & 64 \\\\ 2e and 4a & 0.55 & 0.042 & 13 \\\\ 5 & varies & varies & 700-800 \\\\ \\hline \\end{tabular} \\end{table} Table 2: Number and length of the edge of triangle elements at different sections of the domain boundaries used in mesh generation (see Figure 2a)Figure 3: Validation of the numerical predictions (lines) for the velocity profiles for the positions indicated in Fig. 2 against the PIV measurements (symbols) of Bugg & Saad (2002); (a) axial velocity component, \\(u_{z}\\), along the pipe axis (position 1); (b) axial, \\(u_{z}\\), and radial, \\(u_{r}\\), velocity components at \\(\\frac{z}{D}=0.111\\) ahead of the bubble nose (position 2); (c) axial and radial velocity components in the developing film at \\(\\frac{z}{D}=0.504\\) below the bubble nose (position 3) and (d) axial velocity component in the fully-developed film (position 4); (e) axial and radial components of velocity at distance \\(\\frac{z}{D}=0.20\\) below the bubble bottom (position 5). ## 3 Steady-state bubble rise in stagnant liquids (\\(U_{m}=0\\)) In this section, we present a discussion of our parametric study of a Taylor bubble of dimensionless volume \\(V_{B}=0.3389\\pi\\), equivalent to aspect ratio \\(H_{B}=1.3556\\), in a stagnant liquid (\\(U_{m}=0\\)). The effects of \\(Nf\\) and \\(Eo\\) on the hydrodynamic features of a steadily rising Taylor bubble depicted in Figure 2b are examined. ### Qualitative analysis of steady-state shapes and flow field Inspired by Kang _et al._ (2010), for each Taylor bubble, the steady-state shape is presented as a sectional plane through the center of its three-dimensional axisymmetric shape, coloured using the velocity magnitude, with streamlines and vector fields superimposed on the left and right sides of the axis of symmetry, respectively. 
The inverse viscosity number \\(Nf\\) is a measure of the relative importance of the magnitude of gravity to the viscous force. At constant \\(Eo\\) and \\(U_{m}\\), an increase in \\(Nf\\) is associated with a decrease in liquid viscosity and its influence on the bubble shape and the surrounding flow field is shown in Figure 4a for \\(Eo=220\\) and \\(U_{m}=0.00\\). It is seen that by increasing \\(Nf\\), the viscous drag on the bubble is reduced as reflected by an increase in the rise speed, \\(U_{b}\\), whose value saturates for large \\(Nf\\); this is in agreement with experimental observations (Llewellin _et al._, 2012; Nogueira _et al._, 2006\\(a\\); White & Beardmore, 1962) It is also discernible from Figure 4a that the thickness of the film between the bubble and the pipe wall decreases with \\(Nf\\) due to the decrease in viscous normal stress in this region, as expected. The decrease in the magnitude of the normal viscous stress component with increasing \\(Nf\\) is also accompanied by a decrease in the bubble length as well as its pressure \\(P_{b}\\). It can also be seen from Figure 4a that the size and intensity of the counter-rotating vortices in the wake region increase with \\(Nf\\). This is related to the adverse pressure drop that accompanies the jetting of the liquid in the film into the bottom of the bubble, leading to flow separation. The magnitude of the jetting velocity, highlighted by the colour map in this figure, increases with \\(Nf\\), resulting in increased wake length and volume. Another effect of the increase in the intensity of the recirculation in the wake region with \\(Nf\\) is the more pronounced dimpling of the bubble bottom. It is anticipated that as \\(Nf\\) is increased further, the bubble bottom will eventually form a skirted tail and ultimately undergo breakup into small bubbles. Therefore, it is expected that at very high \\(Nf\\) (and \\(Eo\\)), a topological transition is approached, and reaching a converged steady-state solution becomes increasingly difficult. For a fixed value of \\(Nf\\) and \\(U_{m}\\), changes in \\(Eo\\) are related to variations in the relative influence of buoyancy to surface tension forces. To assess the effect of \\(Eo\\) on the steady-state shape and flow field around a Taylor bubble, four simulation cases with \\(Nf=100\\) are shown in Figure 4b. Under the influence of \\(Eo\\), changes in the concavity of the bubble bottom are most noticeable. As \\(Eo\\) increases, the bubble bottom becomes more deformed with the tails of the Taylor bubbles becoming elongated due to the decrease in the tendency of the interface to resist deformation. Unlike the case of varying \\(Nf\\), changes in \\(Eo\\) result in a marginal influence on the pressure inside the bubble, and bubble length, particularly beyond \\(Eo=100\\), as shown in Figure 4b. In Figure 5, we focus on the region in parameter space wherein \\(Eo<20\\), which has been highlighted by White & Beardmore (1962) as being the one in which surface tension effects are expected to be significant; here, we show the effect of \\(Nf\\) on the bubble steady-state shapes and flow fields at \\(Eo=10\\) and \\(20\\). 
In contrast to what was observed at higher values of \\(Eo\\) in Figure 4a, an increase in the value of \\(Nf\\) has little influence (and this influence decreases with decreasing \\(Eo\\)) on the bubble length and deformation of the bubble bottom. What is seen instead is the emergence of a bulge in the film region close to the bubble bottom, which becomes more pronounced and appears to propagate towards the nose in the form of a capillary wave as \\(Nf\\) and \\(Eo\\) are increased and decreased, respectively. We now turn our attention to examining the principal regions of the bubble, starting with the nose region, which is discussed next.

Figure 4: Steady shapes, streamlines, and flow fields associated with bubble rise in stagnant liquids: (a) effect of \\(Nf\\) for \\(Eo=220\\); (b) effect of \\(Eo\\) for \\(Nf=100\\). In each panel, we show the streamlines and vector fields superimposed on the velocity magnitude pseudocolour plot on the right and left sides of the symmetry axis, respectively. For each case, we provide numerical predictions of the bubble rise speed, \\(U_{b}\\), and pressure, \\(P_{b}\\).

Figure 5: Effect of variation of \\(Nf\\) on the steady Taylor bubble shapes at low \\(Eo\\): (a) \\(Eo=20\\) and (b) \\(Eo=10\\).

### The nose region

The hydrodynamic features around the nose region (a precise definition of the spatial extent of this region is provided below) are the rise speed \\(U_{b}\\), the distance ahead of the nose \\(L_{n}\\) (in a moving frame of reference) at which the flow becomes fully-developed, and the nose curvature. In Figure 6a, the numerical results for \\(U_{b}\\) are compared with the predictions from the empirical correlation of Viana _et al._ (2003) given by \\[U_{b}=\\frac{0.34\\left[1+\\left(14.793/Eo\\right)^{3.06}\\right]^{-0.58}}{\\left[1+\\left(Nf\\left[31.08\\left(1+\\left(29.868/Eo\\right)^{1.96}\\right)^{0.49}\\right]^{-1}\\right)^{\\Theta}\\right]^{-1.0295\\Theta^{-1}}}, \\tag{1}\\] where the parameter \\(\\Theta\\) is expressed by \\[\\Theta=-1.45\\left[1+\\left(24.867/Eo\\right)^{9.93}\\right]^{0.094}.\\] The overall agreement between the numerical predictions and those obtained from equation (1) is satisfactory and improves with increasing \\(Nf\\). This is because a large proportion of the data used in generating the correlation are based on experiments conducted in the inertia regime (Viana _et al._, 2003). It is also seen clearly from Figure 6a that for all \\(Nf\\) values investigated, the magnitude of \\(U_{b}\\) increases steeply with \\(Eo\\) at low \\(Eo\\), then gradually with rising \\(Eo\\), before reaching a plateau at large \\(Eo\\). Saturation of \\(U_{b}\\) with \\(Nf\\) is also observed at high \\(Nf\\). For conditions in which \\(U_{b}\\) is essentially independent of \\(Eo\\), which can be deduced from Figure 6a to be around \\(Eo=100\\), the limiting value of \\(Nf\\) and the corresponding \\(U_{b}\\), as established by numerous previous studies (Brown 1965; Dumitrescu 1943; Griffith & Wallis 1961; Kang _et al._ 2010; Lu & Prosperetti 2009; Viana _et al._ 2003; White & Beardmore 1962; Zukoski 1966), are 300 and 0.35, respectively, also in agreement with the numerical results shown in Figure 6a.
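For reference, correlation (1) is simple to evaluate directly, as in the sketch below; note that the \\(24.867/Eo\\) factor in \\(\\Theta\\) follows the form printed above (the slash is assumed, by analogy with the other \\(Eo\\)-dependent factors), and the routine recovers values close to the 0.34-0.35 plateau quoted above at large \\(Nf\\) and \\(Eo\\).

```python
import numpy as np

def viana_Ub(Nf, Eo):
    """Dimensionless rise speed from the Viana et al. (2003) correlation, equation (1)."""
    A = 0.34 * (1.0 + (14.793 / Eo) ** 3.06) ** (-0.58)
    B = 31.08 * (1.0 + (29.868 / Eo) ** 1.96) ** 0.49
    theta = -1.45 * (1.0 + (24.867 / Eo) ** 9.93) ** 0.094   # slash assumed, see text
    return A / (1.0 + (Nf / B) ** theta) ** (-1.0295 / theta)

for Nf in (40, 100, 300):
    print(Nf, [round(viana_Ub(Nf, Eo), 3) for Eo in (20, 100, 220)])
```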
Figure 6b shows a typical profile of the radial component of the velocity along the interface of a Taylor bubble generated with \\(Nf=80\\) and \\(Eo=140\\). Starting from the nose of the bubble, which is a stagnation point in a frame of reference that moves with the bubble rise speed, the general observation is that the radial velocity component increases until it peaks before gradually diminishing, approaching zero in the fully-developed film. The region starting from the nose and ending at the point at which the radial velocity on the interface attains its maximum value is referred to as the 'nose region'.

Figure 6: Flow characteristics associated with the nose region for bubbles rising in stagnant liquids: (a) effect of \\(Nf\\) and \\(Eo\\) on steady-state bubble rise speed showing a comparison between numerical results (coloured marker symbols) and analytical prediction of equation (1) (coloured continuous line); (b) typical radial velocity profile (blue) along the interface (red) for \\(Nf=80\\) and \\(Eo=140\\); (c) frontal radius \\(R_{F}\\) normalised by the maximum Taylor bubble radius \\(r_{max}\\) for the respective \\((Nf,Eo)\\) pairing, showing convergence towards a constant value of 0.815 for \\(Nf\\geq 80\\), with the inset displaying an enlarged view of the \\(10\\leq Eo\\leq 60\\) range; (d) effect of \\(Nf\\) and \\(Eo\\) on the stabilisation length ahead of the bubble nose.

For all points in this region, the mean radius of curvature \\(R_{m}\\) is related to the total curvature \\(\\kappa\\) by \\[\\frac{2}{R_{m}}=2\\kappa_{m}=\\kappa_{a}+\\kappa_{b}=\\kappa, \\tag{2}\\] where \\(\\kappa_{m}\\) denotes the mean curvature while \\(\\kappa_{a}\\) and \\(\\kappa_{b}\\) are the principal components of \\(\\kappa\\) in the \\(r-z\\) and \\(r-\\theta\\) planes, respectively. The average of the mean radius of curvature is computed and reported as the frontal radius, \\(R_{F}\\). The effects of \\(Nf\\) and \\(Eo\\) on \\(R_{F}\\) normalised by the maximum bubble radius \\(r_{\\rm max}\\) are shown in Figure 6c, from which it is seen that for \\(Eo<100\\), \\(R_{F}/r_{\\rm max}\\) is a non-monotonic function of \\(Eo\\): it decreases with \\(Eo\\) before increasing again beyond a certain \\(Eo\\) value. This value of \\(Eo\\), at the turning point of \\(R_{F}\\), decreases with increasing \\(Nf\\), approaching a constant that lies between \\(Eo=20\\) and \\(Eo=30\\), probably related to the emergence of the bulge around the lower part of the film region. For \\(Eo>100\\), the frontal radius is weakly-dependent on \\(Eo\\) and increases with \\(Nf\\), becoming essentially independent of \\(Nf\\) at high \\(Nf\\). These trends are consistent with those associated with the effects of \\(Nf\\) and \\(Eo\\) on \\(U_{b}\\), confirming the fact that the rise speed is related to the curvature of the bubble nose. We also find that for \\(Eo>100\\) and \\(Nf=(40,60,80,100,120,140,160)\\), the frontal radius is \\(R_{F}=(0.2818,0.2951,0.3043,0.3108,0.3155,0.3188,0.3216)\\), respectively, in agreement with previous studies (Bugg _et al._ 1998; Fabre & Line 1992; Feng 2008; Funada _et al._ 2005); these results suggest that the bubble nose is prolate-like rather than spherical in shape, for which \\(R_{F}\\approx 0.4\\). Under inertial conditions, Brown (1965) demonstrated that the frontal radius of the Taylor bubbles normalised by its respective maximum bubble radius \\(r_{\\rm max}\\) is the same for all liquids and takes a value of 0.75.
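As an illustration of how the nose-region averaging implied by equation (2) can be carried out on a discretised interface, the sketch below computes the two principal curvatures and the mean radius of curvature from sampled \\((r,z)\\) coordinates; it is a simplified stand-in for the actual post-processing and is verified on a spherical cap, for which \\(R_{m}\\) equals the sphere radius.

```python
import numpy as np

def mean_radius(r, z):
    """Mean radius of curvature R_m = 2/|kappa_a + kappa_b| along an axisymmetric
    interface sampled as (r(s), z(s)); points on the axis (r ~ 0) should be excluded."""
    s = np.concatenate(([0.0], np.cumsum(np.hypot(np.diff(r), np.diff(z)))))
    dr, dz = np.gradient(r, s), np.gradient(z, s)
    d2r, d2z = np.gradient(dr, s), np.gradient(dz, s)
    kappa_a = dr * d2z - dz * d2r    # curvature of the generating curve (r-z plane)
    kappa_b = dz / r                 # azimuthal (r-theta) curvature
    return 2.0 / np.abs(kappa_a + kappa_b)

# check on a spherical cap of radius 0.4 (axis point excluded): R_m should be ~0.4 everywhere
phi = np.linspace(0.05, 0.5 * np.pi, 200)
Rm = mean_radius(0.4 * np.sin(phi), -0.4 * (1.0 - np.cos(phi)))
print(Rm[2:-2].min(), Rm[2:-2].max())    # both close to 0.4
# the frontal radius R_F would then be the average of R_m over the nose region
```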
The results shown in Figure 6c indicate that the normalised \\(R_{F}\\) approaches a value of 0.815 for \\(Nf>80\\) which demarcates the limit in \\(Nf\\) at which viscosity has a strong influence on the curvature of the bubble nose. Beyond a certain axial distance along the axis of symmetry, commonly known in the Taylor bubble literature as the'stabilisation length', the stagnant nature of the liquid into which the bubble is rising is attained. In this study, in a frame of reference moving with the bubble velocity, we define \\(L_{n}\\) as the distance at which the axial velocity equals 99% of the magnitude of the axial velocity far ahead of the bubble nose. The influence of \\(Nf\\) and \\(Eo\\) on \\(L_{n}\\) is shown in Figure 6d. Just like the bubble rise speed, \\(L_{n}\\) initially increases with \\(Eo\\) before plateauing beyond \\(Eo=100\\) for all \\(Nf\\); at constant \\(Eo\\), \\(L_{n}\\) increases with \\(Nf\\) becoming weakly-dependent on it for sufficiently large \\(Nf\\) values. The reason for this can be attributed to the increase in the momentum imparted on the liquid ahead of the bubble nose in a fixed frame of reference as the bubble rise speed increases with \\(Nf\\) and some \\(Eo\\) ranges. ### The film region The features that define the hydrodynamics of the film region are the stabilisation length \\(L_{f}\\), the equilibrium film thickness \\(\\Delta_{f}\\), and the velocity profiles in the fully-developed film. The first two features are crucial parameters as it is expected that the flow pattern in the wake of a Taylor bubble becomes independent of the bubble length for bubbles of lengths greater than \\(L_{f}\\) and heavily-dependent on \\(\\Delta_{f}\\) (Nogueira _et al._ 2006_b_). The stabilisation length \\(L_{f}\\) is determined to be the point at which the radial velocity component, and the rate of change in the axial velocity component along the interface are less than 1% of their maximum interfacial values. Figure 7a shows that \\(L_{f}\\) increases steeply with \\(Eo\\) before plateauing at high \\(Eo\\) for all values of \\(Nf\\) studied. For a fixed \\(Eo\\) value, \\(L_{f}\\) increases with \\(Nf\\) indicating that the film needs to travel a longer distance below the bubble nose before it becomes fully-developed. However, unlike the dependence on \\(Nf\\) of the bubble rise speed, or the nose stabilisation length, \\(L_{f}\\) does not appear to saturate with increasing \\(Nf\\). The results, therefore, indicate that as the viscosity is decreased, it becomes increasingly difficult to obtain a truly fully-developed film around Taylor bubbles that are not extremely long. Below the developing length in the film region, the liquid film is deemed to have attained equilibrium, and the thickness is from there onward constant until the Taylor bubble tail region is approached. The film thickness at the point where the equilibrium film thickness is first attained is measured from our numerical predictions and the result is compared with the theoretical prediction of the liquid film thickness. 

Figure 7: Flow characteristics associated with the film region for bubbles rising in stagnant liquids: stabilisation length \\(L_{f}\\) and equilibrium film thickness \\(\\Delta_{f}\\), depicted in (a) and (b), respectively, showing a comparison between numerical simulations (coloured markers) and theoretical prediction using (3.3) and (3.1) (coloured continuous solid line) for different \\(Nf\\) and \\(Eo\\); effect of \\(Eo\\) on the axial velocity in the fully-developed film region \\(u_{z}\\) normalised by \\(U_{b}\\) with \\(Nf=40,100,160\\) shown in (c)-(e), respectively; effect of \\(Nf\\) on \\(u_{z}/U_{b}\\) with \\(Eo=20,140,260\\) shown in (f)-(h). In (c)-(h), the numerical simulations are represented by the coloured markers and the theoretical predictions of (3.5) by coloured solid lines.

From Brown (1965), the equation that relates the equilibrium film thickness to the bubble rise speed, in dimensionless form, can be written as \\[\\frac{4Nf}{3U_{b}}\\Delta_{f}^{3}+2\\Delta_{f}-1=0. \\tag{10}\\] Using equation (10) together with the rise-speed correlation (1), \\(\\Delta_{f}\\) is computed for different \\(Nf\\) and \\(Eo\\), and the results are compared with our numerical prediction in Figure 7b. The numerical and theoretical predictions are in good agreement, particularly at higher \\(Nf\\), as expected, since the thin liquid film assumption becomes more valid with increasing inverse viscosity number. The decline in the equilibrium film thickness with \\(Nf\\) is due to the decrease in the magnitude of the normal stress exerted on the interface as the fluid viscosity is decreased. It is noteworthy that despite the apparent dependence of \\(L_{f}\\) on \\(Eo\\) with increasing \\(Nf\\), \\(\\Delta_{f}\\) remains almost constant beyond \\(Eo=100\\). In order to obtain an approximation of the axial velocity component in the fully-developed film, \\(u_{z}\\), the following reduced version of the dimensionless form of the axial momentum equation in this region is considered (Brown 1965): \\[\\frac{1}{r}\\frac{d}{dr}\\left[r\\frac{du_{z}}{dr}\\right]=-Nf; \\tag{11}\\] the solution of equation (11) is expressed by \\[u_{z}=-Nf\\left[\\left(\\frac{0.25-r^{2}}{4}\\right)-\\frac{1}{2}\\left(0.5-\\Delta_{f}\\right)^{2}\\ln\\left(\\frac{0.5}{r}\\right)\\right]. \\tag{12}\\] The predictions from equation (12), scaled using the bubble rise speed and compared to our numerical results, are shown in Figures 7c-7e and 7f-7h, which highlight the effect of \\(Eo\\) and \\(Nf\\) on \\(u_{z}/U_{b}\\), respectively. The improvement in the agreement between the numerical results and the theoretical predictions is noticeable with increasing \\(Eo\\), particularly at high \\(Nf\\).

### Hydrodynamic features at the wall and interface

#### 3.4.1 Wall shear stress

From equation (8), the shear stress at any boundary is defined as \\[\\boldsymbol{\\tau}=\\mathbf{n}\\cdot\\mathbf{T}\\times\\mathbf{n}. \\tag{13}\\] For an axisymmetric boundary, the nonzero component of equation (13) simplifies to \\[\\tau=Nf^{-1}\\left[\\mathbf{n}\\cdot\\frac{d\\mathbf{u}}{ds}+\\mathbf{t}\\cdot\\frac{d\\mathbf{u}}{dn}\\right], \\tag{14}\\] which, when evaluated at the wall, gives \\[\\tau_{w}=-Nf^{-1}\\frac{du_{z}}{dr}, \\tag{15}\\] where \\(\\tau_{w}\\) denotes the dimensionless wall shear stress.
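Before turning to the wall shear stress results, the film-region relations above are simple to evaluate: given \\(Nf\\) and a rise speed \\(U_{b}\\) (from a simulation or from correlation (1)), the cubic (10) has a single physical root in \\((0,0.5)\\), which then fixes the velocity profile (12). A minimal sketch, using the validation case of Table 1 purely as an example, is:

```python
import numpy as np
from scipy.optimize import brentq

def film_thickness(Nf, Ub):
    """Equilibrium film thickness from Brown's relation (10): (4Nf/3Ub) D^3 + 2D - 1 = 0."""
    return brentq(lambda d: (4.0 * Nf / (3.0 * Ub)) * d**3 + 2.0 * d - 1.0, 0.0, 0.5)

def film_velocity(r, Nf, delta):
    """Axial velocity in the fully-developed film, equation (12)."""
    return -Nf * ((0.25 - r**2) / 4.0 - 0.5 * (0.5 - delta) ** 2 * np.log(0.5 / r))

Nf, Ub = 88.95, 0.2928                     # validation case (Table 1 and the computed rise speed)
delta = film_thickness(Nf, Ub)             # ~0.12, of the order of the measured value 0.1235
r = np.linspace(0.5 - delta, 0.5, 5)
print(delta, film_velocity(r, Nf, delta))  # roughly -0.6 at the interface, zero at the wall
```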
In the fully-developed film region, using equation (12), \\(\\tau_{w}\\) reads \\[\\tau_{w}=0.25-\\left(0.5-\\Delta_{f}\\right)^{2}, \\tag{16}\\] which is a constant whose dependence on \\(Nf\\) and \\(Eo\\) enters equation (16) through the variation of \\(\\Delta_{f}\\) with these parameters via equations (1) and (10). A comparison of the predictions of equation (16) with the numerically computed results for \\(\\tau_{w}\\) using (15) is shown in Figures 8a-8f. Beyond the limit at which \\(Eo\\) exerts a strong influence on the dynamics of the bubble, i.e., for \\(Eo\\gtrsim 100\\), equation (16) adequately predicts the effect of \\(Nf\\) and \\(Eo\\) on \\(\\tau_{w}\\) in the developed film region. While an increase in \\(Nf\\) leads to a reduction in \\(\\tau_{w}\\), \\(Eo\\) has no significant impact on it beyond \\(Eo\\sim 100\\). Both effects can be related to that of the parameters on the equilibrium film thickness and its velocity profiles, shown in Figures 7b, 7c-7e, and 7f-7h, respectively. The apparent peaks observed in Figures 8a-8c and 8d-8f when surface tension effects are strong for small \\(Eo\\) can be related to the undulation that appears towards the end of the liquid film, with the influence becoming more pronounced as \\(Nf\\) is increased and \\(Eo\\) decreased. Lastly, the maximum wall shear stress, \\(\\tau_{w}^{m}\\), for the combined effect of \\(Nf\\) and \\(Eo\\), is plotted in Figure 8g.

Figure 8: Shear stress at the wall boundary: effect of \\(Nf\\) with \\(Eo=20,140,260\\) shown in (a)-(c), respectively; effect of \\(Eo\\) with \\(Nf=40,100,160\\) shown in (d)-(f), respectively; (g) effects of \\(Nf\\) and \\(Eo\\) on the maximum wall shear stress. In (a)-(f), our numerical results are shown using broken lines and the predictions of equation (16) in the fully-developed film region using solid lines.

#### 3.4.2 Interface normal stress

From equation (7), the normal stress at the interface in the direction of the unit normal to the interface is defined as \\[\\sigma_{n}=-\\mathbf{n}\\cdot\\mathbf{T}\\cdot\\mathbf{n}=-\\left[-p+2Nf^{-1}\\mathbf{n}\\cdot\\frac{d\\mathbf{u}}{dn}\\right]. \\tag{17}\\] Expressing the normal stress in terms of the total pressure by adding the gravity term to the hydrodynamic pressure, (7) becomes \\[\\sigma_{n}^{*}=-\\left[-p_{T}+2Nf^{-1}\\mathbf{n}\\cdot\\frac{d\\mathbf{u}}{dn}\\right]=P_{b}-Eo^{-1}\\kappa, \\tag{18}\\] where \\(p_{T}=p+z\\). Figures 9a-9c and 9d-9f show the effects of \\(Nf\\) and \\(Eo\\) on the interface normal stress and total pressure. It is apparent that the normal stress decreases with \\(Nf\\) and it becomes weakly-dependent on \\(Eo\\) for \\(Eo\\gtrsim 100\\). In the fully-developed liquid film region, both the pressure and the normal stress match in order to satisfy (18). This is because, in this region, the interface has approximately zero curvature and \\(u_{r}=du_{r}/dn=0\\), so that the contributions of the viscous stress and of the stress due to curvature in the \\(r-z\\) plane are zero. Thus, equation (18) reduces to \\(\\sigma_{n}^{*}=p_{T}=P_{b}-Eo^{-1}\\kappa_{b}\\approx P_{b}\\). Since the bubble pressure is a constant, the implication of this is that the viscous and curvature forces are only important in the nose and bottom of the bubble and it is the interplay between them that determines the shape of these regions.
Regarding the sharp peaks observed in the interface normal stress around the bubble bottom, particularly for higher \\(Eo\\) and \\(Nf\\), such as the ones shown in Figures 9c, 9e, and 9f, it is clear from Figures 10a-10d, and from the insets shown in these figures, that the bubble bottom and tail regions are well resolved. In Figure 9g, the maximum normal stress, \\(\\sigma_{n}^{m}\\), exerted on the interface was extracted to highlight its dependence on \\(Nf\\) and \\(Eo\\).

### Hydrodynamic features of the bottom region

The features discussed here encompass those that define the bottom of the bubble, which are the shape of the bottom and the developing length below it, and those that define the wake, if present, which are the length of the wake and the position vector of the vortex eye.

#### 3.5.1 Curvature radius of bubble bottom and shape

The effects of varying flow conditions on the Taylor bubble bottom shape are quantitatively examined using the sign of the radius of curvature. Because of the varying shapes that are associated with the bubble bottom, it is more convenient and sufficient to define the shape of the bubble bottom based on the curvature evaluated at the bottom along the axis of symmetry. Essentially, a positive (negative) radius of curvature signifies a convex (concave) bottom shape with respect to the liquid phase. Figure 10e shows the mean radius of curvature \\(R_{b}\\) for different \\(Nf\\) and varying \\(Eo\\). It is clear that \\(R_{b}\\) becomes independent of \\(Eo\\) for \\(Eo\\gtrsim 100\\). For \\(Eo<100\\), it is seen that \\(R_{b}\\) exhibits a non-monotonic dependence on \\(Eo\\), which becomes particularly pronounced for increasing \\(Nf\\). The behaviour depicted in Figure 10e is reflected in the shape of the bubble bottom and its dependence on \\(Eo\\) and \\(Nf\\), as illustrated in Figures 10f and 10g, respectively. Inspection of these figures reveals that with increasing \\(Nf\\) and \\(Eo\\) the bubble tail becomes more pointed. It is possible that for larger values of \\(Nf\\) and \\(Eo\\) a skirted bubble may form, followed by the eventual breakup of the protruding tail structure into smaller bubbles.

#### 3.5.2 Wake structure below bubble bottom

The wake structure is characterised by its length and the position vector of the eye of the vortex, with reference to the position vector of the bubble bottom along the axis of symmetry (Araujo _et al._, 2012; Nogueira _et al._, 2006_b_). The wake length \\(L_{w}\\) is defined as the distance between the bottom of the bubble, along the axis of symmetry of the pipe, and the stagnation point, which is the point of flow separation behind the bubble (Nogueira _et al._, 2006_b_). Thus, \\(L_{w}\\) is calculated by taking the difference between the axial position of the bubble rear and the stagnation point, and the results are shown in Figure 11a. As expected, \\(L_{w}\\) increases with \\(Nf\\) for a fixed \\(Eo\\), and at constant \\(Nf\\) remains zero-valued over a range of \\(Eo\\) before increasing at sufficiently large \\(Eo\\). It is noticeable that the \\(Eo\\) value at which the wake emerges depends on \\(Nf\\), decreasing as \\(Nf\\) is increased.

Figure 9: Normal stress (solid lines) and total pressure (broken lines) at the interface: effect of \\(Nf\\) with \\(Eo=20,140,260\\) shown in (a)-(c), respectively; effect of \\(Eo\\) with \\(Nf=40,100,160\\) shown in (d)-(f), respectively; (g) effects of \\(Nf\\) and \\(Eo\\) on the maximum interface normal stress.
Panels (c), (e), and (f) show an enlarged view of the curves for \\(Nf=160\\), \\(Eo=300\\), and \\(Eo=300\\), respectively, for \\(2.5\\leqslant s\\leqslant 3\\).

Figure 10: Flow characteristics of the bottom region for a bubble rising in a stagnant liquid: shape, (a), and mesh structure, (b), for \\(Nf=160\\) and \\(Eo=300\\); enlarged views of the bottom, (c), and tail tip mesh structures, (d); (e) influence of \\(Nf\\) and \\(Eo\\) on the Taylor bubble bottom radius of curvature \\(R_{b}\\); bottom deformation: influence of \\(Eo\\) with \\(Nf=160\\), (f), and influence of \\(Nf\\) for \\(Eo=300\\), (g).

For all \\(Nf\\), \\(L_{w}\\) becomes progressively more weakly-dependent on \\(Eo\\) at high \\(Eo\\). The dependence of \\(L_{w}\\) on \\(Nf\\) is explained by considering the fact that with increasing \\(Nf\\) the velocity of the liquid jet emanating from the liquid film into the region behind the bubble increases, making the liquid travel a longer distance before flow separation occurs. The location of the vortex centre was extracted from the streamline images, generated using the open-source visualisation tool VisIt 2.10.3 (Childs _et al._, 2012). For conditions where the wake structure exists, the numerical results for the dimensionless radial, \\(R_{v}\\), and axial, \\(Z_{v}\\), coordinates of the vortex eye are plotted as a function of \\(Eo\\) in Figures 11b and 11c, respectively. The trend for all simulation sets is similar and may be closely described by a function in which the values for both \\(R_{v}\\) and \\(Z_{v}\\) eventually plateau. For a given \\(Nf\\), these results indicate that an increase in \\(Eo\\) shifts the overall vortex centre towards the tip of the tail, until no further axial or radial movement occurs. Overall, when juxtaposed with the increasing length of the wake, shown in Figure 11a, and the deformation of the bubble bottom, shown in Figures 10f and 10g, it appears that the combined effect of increasing \\(Nf\\) and \\(Eo\\) is to stretch the wake structure in the axial direction about the vortex eye. Utilising the information from the results of Figures 10e, 11c, and 11b, following Araujo _et al._ (2012), a map that demarcates the boundaries where the bubble bottom shape is convex or concave, and indicates whether or not the shape is associated with the presence of a wake as a function of \\(Nf\\) and \\(Eo\\), is shown in Figure 12.

Figure 11: Characteristics of the wake region for bubble rise in stagnant liquids showing the influence of \\(Nf\\) and \\(Eo\\) on the wake length \\(L_{w}\\), (a), the radial and axial locations of the vortex eye with reference to the bubble bottom, (b) and (c), and the stabilisation length below the bubble bottom \\(L_{b}\\), (d), respectively.

#### 3.5.3 Developing length below bubble bottom

The dimensionless stabilisation length below the bubble bottom, \\(L_{b}\\), similar to the stabilisation length ahead of the bubble, \\(L_{n}\\), refers to the distance below the bottom of the bubble in a fixed frame of reference at which the flow field far behind the bubble bottom is attained. This length, in the context of two consecutive rising bubbles, is the minimum distance below the leading bubble bottom, beyond which there is no interaction with the trailing bubble.
Numerically, in a moving frame of reference, \\(L_{b}\\) is determined as the difference between the axial locations of the bubble bottom and the point where the magnitude of the axial velocity along the symmetry axis, starting from the far end of the bubble, is less than 99% of its magnitude at the far end. The computed length as a function of the model dimensionless parameters is plotted in Figure 11d, displaying similar trends to those associated with the wake length \\(L_{w}\\) discussed above.

Figure 12: Map showing the regions in \\(Eo\\)-\\(Nf\\) space where the bubble bottom takes on a concave or convex shape and whether or not this is accompanied by wake formation.

## 4 Steady-state bubble motion in flowing liquids (\\(U_{m}\\neq 0\\))

In this section, we focus on situations wherein the bubble rises in flowing liquids in a fixed frame of reference. The flow in the liquid is characterised using a Froude number based on the maximum liquid velocity, which corresponds to that at the pipe centre. The focus in the literature has been on the dynamics of Taylor bubbles rising in upwardly-flowing liquids characterised by a steady rise speed. In contrast, there is a relative dearth of studies concerning Taylor bubble motion in downward liquid flow, which is known to be accompanied by a transition to asymmetric bubble shapes (Fabre & Figueroa-Espinoza 2014; Fershtman _et al._ 2017; Figueroa-Espinoza & Fabre 2011; Lu & Prosperetti 2006; Martin 1976; Nicklin _et al._ 1962).

### Bubble rise speed in upward liquid flow

In Figure 13a, the numerical simulation results for upward liquid flow are compared with predictions based on the correlation of Nicklin _et al._ (1962) given by equation (5), with expressions for \\(C_{0}\\) and \\(C_{1}\\) provided by Bendiksen (1985) taking into consideration the effect of \\(Eo\\) as \\[C_{0}=\\frac{0.486}{\\sqrt{2}}\\sqrt{1+20\\left(1-\\frac{6.8}{Eo}\\right)}\\left\\{\\frac{1-0.96e^{-0.0165Eo}}{1-0.52e^{-0.0165Eo}}\\right\\}, \\tag{10}\\] \\[C_{1}=1.145\\left[1-\\frac{20}{Eo}\\left(1-e^{-0.0125Eo}\\right)\\right]. \\tag{11}\\] It is evident that equation (5) with (10) and (11) over-predicts the bubble rise speed. This is because the expressions for \\(C_{0}\\) and \\(C_{1}\\) were derived for cases in which the flow due to the bubble motion was considered to be inviscid, an assumption that gains validity with increasing \\(Nf\\). The agreement with the numerical results improves significantly when the correlation of Viana _et al._ (2003) is used to calculate \\(C_{0}\\); this correlation accounts for the effects of viscosity and surface tension, and the agreement improves further with increasing \\(Nf\\).

Figure 13: Effect of imposed upward liquid flow speed \\(U_{m}\\) on the bubble rise speed \\(U_{b}\\) for varying \\(Nf\\), (a): comparison between the numerical results (coloured markers), predictions based on the Nicklin _et al._ (1962) correlation (5) (black solid line) with the Bendiksen (1985) relations (10) and (11) used for coefficients \\(C_{0}\\) and \\(C_{1}\\), and predictions using the Viana _et al._ (2003) correlation for \\(C_{0}\\) given by equation (1) and the Bendiksen (1985) relation for \\(C_{1}\\) expressed by (11) (coloured dashed lines); effect of \\(Nf\\) and \\(Eo\\) on the numerically-generated \\(C_{0}\\) (normalised by \\(U_{b}\\)), (b), and \\(C_{1}\\), (c).

We can estimate values for \\(C_{0}\\) and \\(C_{1}\\) from our numerical simulations for various \\(Nf\\) and \\(Eo\\), and the results are shown in Figures 13b and 13c, respectively.
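A sketch of how such estimates can be extracted is given below: assuming a Nicklin-type linear dependence \\(U_{b}=C_{0}+C_{1}U_{m}\\) (the precise form of the correlation is not reproduced in this section), \\(C_{1}\\) and \\(C_{0}\\) follow from a least-squares fit of the simulated \\((U_{m},U_{b})\\) pairs at fixed \\(Nf\\) and \\(Eo\\); the numbers below are illustrative placeholders rather than simulation data.

```python
import numpy as np

# illustrative (U_m, U_b) pairs at fixed Nf and Eo -- placeholders, not simulation output
Um = np.array([-0.2, -0.1, 0.0, 0.1, 0.2, 0.3])
Ub = np.array([0.05, 0.17, 0.29, 0.41, 0.52, 0.64])

C1, C0 = np.polyfit(Um, Ub, 1)    # slope multiplies U_m, intercept is the drift contribution
print(f"C0 ~ {C0:.3f}, C1 ~ {C1:.3f}")
```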
It is seen that \\(C_{0}/U_{b}\\) remains approximately equal to unity over the range of \\(Nf\\) and \\(Eo\\) studied, while \\(C_{1}\\) increases monotonically with \\(Eo\\) for all \\(Nf\\) considered reaching a plateau at high \\(Eo\\). ### Steady bubble shapes and flow fields in flowing liquids For a constant \\(Nf=80\\) and \\(Eo=140\\), the effect of imposed upward and downward liquid flow is shown in Figure 14a. It is seen clearly that a decrease (increase) in the intensity of the wake flow, accompanied by a decrease (increase) of the concavity of the bubble bottom, is observed with an increase in the magnitude of the downward (upward) liquid flow. This, as noted earlier when discussing the stagnant liquid case, can be linked to the decrease (increase) in the magnitude of the liquid emerging from the film into the liquid slug, which is a manifestation of the decrease (increase) in the bubble rise speed, as the downward (upward) liquid velocity is increased. Quantitatively, the effect of \\(U_{m}\\) on \\(U_{b}\\) is shown in Figure 14b whence we deduce the existence of a critical \\(U_{m}\\) value for downward flow that leads to bubble arrest characterised by \\(U_{b}=0\\), which increases with \\(Nf\\) and decreases (increases) with \\(Eo\\) for \\(Eo\\geqslant 100\\) (\\(Eo<100\\)), respectively. It is also noticeable from Figure 14a that there is an increase (decrease) in the radius of curvature of the bubble nose with increasing magnitude of the downward (upward) liquid flow (see also Figures 15a and 15b). This flattening (sharpening) of the bubble nose can be attributed to the increase (decrease) in the normal stress exerted on the bubble nose relative to that in stagnant liquid as a result of the increased opposing (reinforcing) inertial force in the downward (upward) liquid flow. It is clear from Figure 15c that the interface normal stress is an increasing (decreasing) function of the increased liquid velocity in the downward (upward) liquid flow. As explained in the previous section for stagnant liquids, within the equilibrium film, the normal stress, total pressure, and the bubble pressure are approximately equal, which is responsible for the observed increase (decrease) in bubble pressure with increasing downward (upward) liquid flow (see Figure 14a). Also, outside the equilibrium film region, we had stated that it is the interplay between the viscous stress and curvature that determines the shape of the regions. To buttress this claim, the normal stress is again modified by choosing the reference pressure to be the bubble pressure such that \\[\\sigma_{n}^{**}=-\\left[-p_{T}*+2Nf^{-1}\\mathbf{n}\\cdot\\frac{d\\mathbf{u}}{dn} \\right]=-Eo^{-1}\\kappa, \\tag{10}\\] where \\(p_{T}*=p_{T}-P_{b}\\), thereby making the normal stress in the equilibrium film region approximately zero as the stress due to interfacial curvature \\(\\kappa_{b}\\) is negligibly small. As the nose region is approached, the net effect of the viscous stress on the normal stress in downward (upward) liquid flow is to increase (decrease) the normal stress relative to that in a stagnant liquid, which in order to satisfy the normal stress balance at the interface, the curvature stress has to decrease (increase), leading to the observed increase (decrease) in the radius of curvature of the nose. We have also carried out a full parametric study of the effect of \\(U_{m}\\) on the steady bubble shape and associated flow field for a wide range of \\(Nf\\) and \\(Eo\\). 
As shown in Figures 16-20, a transition from downward to upward flow, characterised by a change in the sign of \\(U_{m}\\), has a similar effect to an increase in \\(Eo\\) for constant \\(Nf\\) or a rise in \\(Nf\\) with \\(Eo\\) held fixed; this transition results in longer bubbles with more pointed noses and concave tails, accompanied by wake formation for sufficiently large \\(Eo\\) and/or \\(Nf\\). For the lowest values of \\(Eo\\) investigated, the bubbles develop bulges in the zone connecting the thin film and the bottom regions of the bubble, which become more pronounced with increasingly negative \\(U_{m}\\) values (see Figure 17a). For sufficiently large and negative \\(U_{m}\\), we see the emergence of bubbles with dimpled tops and/or bottoms, an indication of a steadily falling bubble, which is confirmed by the negative value of their rise velocity (see Figure 17b).

Figure 14: Effect of \\(U_{m}\\) on the steady-state bubble shape and the surrounding flow field with \\(Nf=80\\) and \\(Eo=140\\), (a); here, the streamlines and vector fields are superimposed on the velocity magnitude pseudocolour plot on the right and left sides of the symmetry axis, respectively; variation of the steady bubble rise speed \\(U_{b}\\) with \\(U_{m}\\): (b) for different \\(Nf\\) and with \\(Eo=140\\); (c) for different \\(Eo\\) and with \\(Nf=80\\).

Figure 15: Effect of \\(U_{m}\\) on the steady-state bubble interface features: (a) variation of the frontal radius \\(R_{F}\\) with \\(U_{m}\\) and \\(Eo\\) for \\(Nf=80\\); (b) variation of \\(R_{F}\\) with \\(U_{m}\\) and \\(Nf\\) for \\(Eo=140\\); (c) spatial variation of the steady, modified interface normal stress \\(\\sigma_{n}^{**}\\) for different \\(U_{m}\\) and with \\(Nf=80\\) and \\(Eo=140\\); the inset shows an enlarged view of \\(\\sigma_{n}^{**}\\) for \\(2.5\\leqslant s\\leqslant 3.1\\) for \\(U_{m}=0.2\\), which demonstrates that this quantity is well-resolved in this boundary-like region of rapid variation.

## 5 Summary and conclusions

Numerical solutions of an axisymmetric Taylor bubble moving steadily in stagnant and flowing liquids are computed by solving the steady-state Navier-Stokes equations using a Galerkin finite-element method based on a kinematic update of the interface. Our validation of the numerical simulation strategy using the experimental data of Bugg & Saad (2002) shows a good agreement between the numerical results and the experiment. Utilising the strategy, we computed the steady-state shapes and evaluated the hydrodynamic features characterising the nose, film, interface, and bottom regions around the bubble for different dimensionless inverse viscosity numbers, Eotvos numbers, and Froude numbers based on the liquid centreline velocity. The results show that above \\(Eo\\sim 100\\), surface tension has an insignificant influence on the hydrodynamic features studied. For the interval \\(Eo=(10,30]\\), analysis of the results indicates that the influence of increased \\(Nf\\) results in a distinct feature that is not observed at higher \\(Eo\\): the emergence of a bulge in the film region close to the bubble bottom, which becomes more pronounced and appears to propagate towards the nose as \\(Eo\\) is decreased. Thus the interval \\(Eo=(20,30]\\) is considered as the limit below which surface tension has a strong influence on Taylor bubble dynamics. Similarly, from the normalised frontal radius, we show that the interval \\(Nf=(60,80]\\) can be considered as the limit below which viscous effects are significant.
Based on our analysis of the normal stress at the interface, we deduced that it is the interaction between the stresses due to curvature and viscosity that modifies the shape of the nose and bottom regions. In the bottom region, we made use of our results for the dependence of the bubble bottom shape and existence of the wake on \\(Nf\\) and \\(Eo\\) to produce a flow pattern map depicting regions of dimensionless parameter space that are associated with the presence or absence of wake formation together with the prevailing bubble bottom shape.

Qualitative analysis of the effect of imposed liquid flow on the steady-state solution shows that the influence is more pronounced in the features that characterise the nose and bottom regions. For upward liquid flow, the nose becomes increasingly pointed and the bottom more concave as the liquid speed is increased. In contrast, increased downward liquid flow leads to the flattening of the bubble nose and increased convexity of the bubble bottom relative to the liquid. For sufficiently large speeds of downward-flowing liquids, it becomes difficult to distinguish the bubble nose and bottom regions, which acquire very similar shapes as the bubble falls steadily. Although we have obtained axisymmetric solutions for the parameter space investigated, it is uncertain whether some of the solutions, particularly the ones associated with the downward-flowing liquid cases, are physically observable in experiments. In fact, experimental observations have shown that for certain downward liquid flow conditions, the shape of Taylor bubbles becomes asymmetric. In a companion paper, Part II of this two-part study (Abubakar & Matar 2021), we examine the linear stability of the axisymmetric steady-state solutions obtained here and determine the influence of \\(Nf\\), \\(Eo\\), and \\(U_{m}\\) on the transition to asymmetry. In addition, we carry out an energy analysis in order to pinpoint the dominant destabilising mechanisms depending on the choice of parameter values.

Figure 16: The effect of \\(U_{m}\\) and \\(Eo\\) on the steady bubble shapes and flow fields with \\(Nf=40\\). In each panel, the streamlines and vector fields are superimposed on the velocity magnitude pseudocolour plot on the right and left sides of the symmetry axis, respectively.

Figure 17: Steady-state bubble shapes in flowing liquids: (a) effect of \\(U_{m}\\) for \\(Nf=80\\) and \\(Eo=20\\); (b) effect of \\(U_{m}\\) for \\(Nf=60\\) and \\(Eo=220\\). In each panel, we show the streamlines and vector fields superimposed on the velocity magnitude pseudocolour plot on the right and left sides of the symmetry axis, respectively. For each case, we provide numerical predictions of the bubble rise speed, \\(U_{b}\\).

Figure 18: The effect of \\(U_{m}\\) and \\(Eo\\) on the steady bubble shapes and flow fields with \\(Nf=60\\). In each panel, the streamlines and vector fields are superimposed on the velocity magnitude pseudocolour plot on the right and left sides of the symmetry axis, respectively.

Figure 19: The effect of \\(U_{m}\\) and \\(Eo\\) on the steady bubble shapes and flow fields with \\(Nf=80\\). In each panel, the streamlines and vector fields are superimposed on the velocity magnitude pseudocolour plot on the right and left sides of the symmetry axis, respectively.

Figure 20: The effect of \\(U_{m}\\) and \\(Eo\\) on the steady bubble shapes and flow fields with \\(Nf=100\\).
In each panel, the streamlines and vector fields are superimposed on velocity magnitude pseudocolour plot on the right and left sides of the symmetry axis, respectively. ### Acknowledgements This work is supported by a Petroleum Technology Development Fund scholarship for HAA, and the Engineering & Physical Sciences Research Council, United Kingdom, through the EPSRC MEMPHIS (EP/K003976/1) and PREMIERE (EP/T000414/1) Programme Grants. OKM also acknowledges funding via the PETRONAS/Royal Academy of Engineering Research Chair in Multiphase Fluid Dynamics. We also acknowledge HPC facilities provided by the Research Computing Service (RCS) of Imperial College London for the computing time. **Declaration of interests:** The authors report no conflict of interest. ## References * Abubakar (2019)Abubakar, H. A. 2019 Taylor bubble rise in circular tubes: steady-states and linear stability analysis. PhD thesis, Imperial College London. * Abubakar & Matar (2021)Abubakar, H. A. & Matar, O. K. 2021 Taylor bubble motion in stagnant and flowing liquids in vertical pipes. part ii: Linear stability analysis. _Submitted to J. Fluid Mech._. * Anjos _et al._ (2014)Anjos, G., Mangiavacchi, N., Borhani, N. & Thome, J. R. 2014 3D ALE finite-element method for two-phase flows with phase change. _Heat Transfer Engineering_**35** (5), 537-547. * Araujo _et al._ (2012)Araujo, J. D. P., Miranda, J. M., Pinto, A. M. F. R. & Campos, J. B. L. M. 2012 Wide-ranging survey on the laminar flow of individual Taylor bubbles rising through stagnant Newtonian liquids. _Int. J. Multiph. Flow_**43**, 131-148. * Bae & Kim (2007)Bae, S.H. & Kim, D.H. 2007 Computational study of the axial instability of rimming flow using Arnoldi method. _Int. J. Numer. Meth. Fluids_**53**, 691-711. * Batchelor (1967)Batchelor, G.K. 1967 _An introduction to fluid dynamics._ UK: Cambridge University Press. * Bendiksen (1985)Bendixsen, K. 1985 On the motion of long bubbles in vertical tubes. _Int. J. Multiphase Flow_**11**, 797-812. * Brown (1965)Brown, R.A.S. 1965 The mechanics of large gas bubbles in tubes I. Bubble velocities in stagnant liquids. _Can. J. Chem. Eng_**43**, 217-223. * Bugg _et al._ (1998)Bugg, J.D., Mack, K. & Rezkallah, K.S. 1998 A numerical model of Taylor bubbles rising through stagnant liquids in vertical tubes. _Int. J. Multiphase Flow_**24**, 271-281. * Bugg & Saad (2002)Bugg, J. D. & Saad, G. A. 2002 The velocity field around a Taylor bubble rising in a stagnant viscous fluid: Numerical and experimental results. _Int. J. Multiphase Flow_**28**, 791-803. * Campos & Guedes de Carvalho (1988)Campos, J.B.L.M. & Guedes de Carvalho, J.R.F. 1988 An experimental study of the wake of gas slugs rising in liquids. _J. Fluid Mech._**196**, 27-37. * Capponi _et al._ (2016)Capponi, A., James, M.R. & Lane, S.J. 2016 Gas slug ascent in a stratified magma: Implications of flow organisation and instability for Strombolian eruption dynamics. _Earth Planet. Sci. Lett._**435**, 159-170. * Childs _et al._ (2012)Childs, Hank, Brugger, Eric, Whitlock, Brad, Meredith, Jeremy, Ahern, Sean, Pugmire, David, Biagas, Kathleen, Miller, Mark, Harrison, Cyrus, Weber, Gunther H., Krishnan, Hari, Fogal, Thomas, Sanderson, Allen, Garth, Christoph, Bethel, E. Wes, Camp, David, Rubel, Oliver, Durant, Marc, Favre, Jean M. & Navratil, Paul 2012 VisIt: An End-User Tool For Visualizing and Analyzing Very Large Data. In _High Performance Visualization-Enabling Extreme-Scale Scientific Insight_, pp. 357-372. * Cliff _et al._ (1978)Clift, R., Grace, J.R. & Weber, M.E. 
Taylor bubbles are a feature of the slug flow regime in gas-liquid flows in vertical pipes. Their dynamics exhibits a number of transitions, such as symmetry-breaking in the bubble shape and wake when rising in downward-flowing and stagnant liquids, respectively, as well as breakup in sufficiently turbulent environments. Motivated by the need to examine the stability of Taylor bubbles in liquids, a systematic numerical study of a steadily-moving Taylor bubble in stagnant and flowing liquids is carried out, characterised by the dimensionless inverse viscosity (\\(Nf\\)), Eötvös (\\(Eo\\)), and Froude (\\(Fr\\)) numbers, the latter based on the centreline liquid velocity, using a Galerkin finite-element method. A boundary-fitted domain is used to examine the dependence of the steady bubble shape on a wide range of \\(Nf\\) and \\(Eo\\). Our analysis of the bubble nose and bottom curvatures shows that the intervals \\(Eo=[20,30)\\) and \\(Nf=[60,80)\\) are the limits below which surface tension and viscosity, respectively, have a strong influence on the bubble shape. In the interval \\(Eo=(60,100]\\), all bubble features studied are weakly dependent on surface tension. This is Part I of a two-part publication; its companion paper (Abubakar & Matar 2021) reports the results of a linear stability analysis of the steady states discussed herein.
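For readers who wish to evaluate these dimensionless groups from dimensional quantities, the sketch below uses the standard Taylor-bubble definitions (inverse viscosity number, Eötvös number, and a Froude number based on the centreline liquid velocity); these textbook forms are stated here as assumptions, and the paper's own normalisations should be taken as definitive.

```python
import math

def taylor_bubble_groups(rho, mu, sigma, D, U_c, g=9.81):
    """Standard dimensionless groups for a Taylor bubble in a tube of diameter D.

    rho   : liquid density [kg/m^3]
    mu    : liquid dynamic viscosity [Pa s]
    sigma : surface tension [N/m]
    U_c   : centreline liquid velocity [m/s]
    These are the usual textbook definitions; the paper's own normalisation
    may differ slightly, so treat them as illustrative assumptions.
    """
    Nf = rho * math.sqrt(g * D**3) / mu   # inverse viscosity (buoyancy vs. viscous forces)
    Eo = rho * g * D**2 / sigma           # Eotvos number (buoyancy vs. surface tension)
    Fr = U_c / math.sqrt(g * D)           # Froude number based on centreline velocity
    return Nf, Eo, Fr

# Example: water-like liquid in a 20 mm tube with stagnant liquid (U_c = 0)
print(taylor_bubble_groups(rho=1000.0, mu=1e-3, sigma=0.072, D=0.02, U_c=0.0))
```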
# LADR: LiDAR-4DRadar Fusion for Weather-Robust 3D Object Detection Xun Huang1, Ziyu Xu1, Hai Wu1, Jinlong Wang1, Qiming Xia1, **Yan Xia2, Jonathan Li3, Kyle Gao3, Chenglu Wen1\\({}^{*}\\) Cheng Wang1** Corresponding author.Copyright © 2025, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. ## Introduction 3D object detection is a fundamental vision task of unmanned platforms, extensively utilized in applications such as intelligent robot navigation [14, 15] and autonomous driving [1]. For example, Full Driving Automation (FDA, Level 5) relies on weather-robust 3D object detection, which provides precise 3D bounding boxes even under various challenging adverse weather conditions [1]. Owing to the high resolution and strong interference resistance of LiDAR sensors, LiDAR-based 3D object detection has emerged as a mainstream area of research [16, 15, 17]. However, LiDAR sensors exhibit considerable sensitivity to weather conditions. In adverse scenarios, the scanned LiDAR point clouds suffer from substantial degradation and increased noise [14, 1]. This degradation negatively impacts 3D detectors, compromising the reliability of autonomous perception systems. Aside from LiDAR, 4D (range, azimuth, Doppler, and elevation) millimeter-wave radar is gaining recognition as a vital perception sensor [16, 15]. As shown in Fig.1 (a), 4D radar outperforms LiDAR in weather robustness, velocity measurement, and detection range. The millimeter-wave signals of 4D radar have wavelengths much larger than the tiny particles in fog, rain, and snow [18, 19], exhibiting reduced susceptibility to weather disturbances. As shown in Fig.1 (b), the performance gap between LiDAR and 4D radar decreases as the severity of the weather rises. Hence, the 4D radar sensor is more suitable for adverse weather conditions than the LiDAR sensor. However, LiDAR has advantages in terms of higher distance and angular point resolution. These circumstances make it feasible to promote the full fusion of 4D radar and LiDAR data to improve 3D object detection. Pioneering approaches such as InterFusion [17], M\\({}^{2}\\)Fusion [17], and 3D-LRF [1] conducted initial attempts to fuse LiDAR and 4D radar data, have demonstrated significant performance improvements over single-sensor models. Despite progress, the significant data quality disparities and varying sensor degradation in adverse weather conditions remain largely unaddressed. As shown in Fig.2 (a), a primary challenge arises from the significant quality disparity Figure 1: (a) The radar chart illustrates the complementary nature of LiDAR and 4D radar sensors. (b) AP gaps between LiDAR and 4D radar in real different weather (the extent to which LiDAR is superior to 4D radar). between the LiDAR and 4D radar sensors. A second challenge pertains to varying sensor degradation under adverse weather conditions. Fig.2 (b) shows LiDAR sensors undergo severe data degradation in adverse weather. Conversely, the data quality decrease in 4D radar is significantly lower [11, 12] (Details are given in supplementary material). The varying degradation of sensors leads to fluctuations in the fused features, resulting in difficulty for the model to maintain high performance under adverse weather conditions. Therefore, developing a weather-robust 3D object detector requires overcoming the challenges of significant quality disparities and varying sensor degradation. 
To address the above challenges, we propose L4DR, a novel two-stage fusion framework for LiDAR and 4D radar (shown in Fig. 2 (c)). The first-stage fusion is performed by the 3D Data Fusion Encoder, which contains the Foreground-Aware Denoising (FAD) module and the Multi-Modal Encoder (MME) module. This first-stage data fusion augments LiDAR and 4D radar with each other to tackle the challenge of significant data quality disparities between LiDAR and 4D radar. The second-stage Feature Fusion is to fuse features between 2D Backbones to address the second challenge of the varying sensor degradation. It consists of the Inter-Modal and Intra-Modal (\\(\\{\\)IM\\(\\}^{2}\\)) backbone and the Multi-Scale Gated Fusion (MSGF) module. Different from traditional methods that fuse features before extraction on the left side of Fig.2 (c), L4DR integrates a continuous fusion process throughout the feature extraction, adaptively focusing on the significant modal features in different weather conditions. In Fig. 3, our comprehensive experimental results showcase the superior performance of L4DR across various simulated and real-world adverse weather disturbances. Our main contributions are as follows: * We introduce the innovative Multi-Modal Encoder (MME) module, which achieves LiDAR and 4D radar data early fusion without resorting to error-prone processes (e.g., depth estimation). It effectively bridges the substantial LiDAR and 4D radar data quality disparities. * We designed an \\(\\{\\)IM\\(\\}^{2}\\) backbone with a Multi-Scale Gated Fusion (MSGF) module, adaptively extracting salient features from LiDAR and 4D radar in different weather conditions. This enables the model to adapt to varying levels of sensor degradation under adverse weather conditions. * Extensive experiments on the two benchmarks, VoD and K-Radar, demonstrate the effectiveness of our L4DR under various levels and types of adverse weather, achieving new state-of-the-art performances on both datasets. ## Related work LiDAR-based 3D object detection.Researchers have developed single-stage and two-stage methods to tackle challenges for 3D object detection. Single-stage detectors such Figure 3: Performance comparison of our L4DR and LiDAR-only in (a) various simulated fog levels (FL denotes fog level) and (b) real-world adverse weather. Figure 2: (a) Significant quality disparity between the LiDAR and the 4D radar. (b) Severe degradation of LiDAR data quality in adverse weather. (c) Comparison of previous LiDAR-4DRadar fusion and our fusion, highlighting our innovative framework designs to address challenges (a) and (b). as VoxelNet [22], PointPillars [14], 3DSSD [23], DSVT [24] utilize PointNet++ [25], sparse convolution, or other point feature encoder to extract features from point clouds and perform detection in the Bird's Eye View (BEV) space. Conversely, methods such as PV-RCNN [26], PV-RCNN++ [27], Voxel-RCNN [13], and VirConv [28] focus on two-stage object detection, integrating RCNN networks into 3D detectors. Even though these mainstream methods have gained excellent performance in normal weather, they still lack robustness under various adverse weather conditions. **LiDAR-based 3D object detection in adverse weather.** LiDAR sensors may undergo degradation under adverse weather conditions. Physics-based simulations [29, 10, 11, 12] have been explored to reproduce point clouds under adverse weather to alleviate the issue of data scarcity. 
[10, 11] utilized the DROR, DSOR, or convolutional neural networks (CNNs) to classify and filter LiDAR noise points. [23] designed a general completion framework that addresses the problem of domain adaptation across different weather conditions. [11] designed a general knowledge distillation framework that transfers sunny performance to rainy performance. However, these methods primarily rely on single-LiDAR modal data, which will be constrained by the decline in the quality of LiDAR under adverse weather conditions. **LiDAR-radar fusion-based 3D object detection.** MVDNet [12] has designed a framework for fusing LiDAR and radar. ST-MVDNet [13] and ST-MVDNet++ [12] incorporate a self-training teacher-student to MVDNet to enhance the model. Bi-LRFusion [24] framework employs a bidirectional fusion strategy to improve dynamic object detection performance. However, these studies only focus on 3D radar and LiDAR. As research progresses, the newest studies continue to drive the development of LiDAR-4D radar fusion. M\\({}^{2}\\)-Fusion [24], InterFusion [24], and 3D-LRF [14] explore LiDAR and 4D radar fusion. However, these methods have not considered and overcome the challenges of fusing 4D radar and LiDAR under adverse weather conditions. ## Methodology ### Problem Statement and Overall Design **LiDAR-4D radar fusion-based 3D object detection.** For an outdoor scene, we denote LiDAR point cloud as \\(\\mathcal{P}^{l}=\\{p_{i}^{l}\\}_{i=1}^{N_{l}}\\) and 4D radar point cloud as \\(\\mathcal{P}^{r}=\\{p_{i}^{r}\\}_{i=1}^{N_{r}}\\), where \\(p\\) denotes 3D points. Subsequently, a multi-modal model \\(\\mathcal{M}\\) extract deep features \\(\\mathcal{F}^{m}\\) from \\(\\mathcal{P}^{m}\\), written as \\(\\mathcal{F}^{m}=g(f_{\\mathcal{M}}(\\mathcal{P}^{m};\\Theta))\\), where \\(m\\in\\{l,r\\}\\). The fusion features are then obtained by \\(\\mathcal{F}^{f}=\\phi(\\mathcal{F}^{l},\\mathcal{F}^{r})\\), where \\(\\phi\\) donates fusion method. The objective of 3D object detection is to regress the 3D bounding boxes \\(B=\\{b_{i}\\}_{i=1}^{N_{b}}\\), \\(B\\in\\mathbb{R}^{N_{b}\\times\\mathcal{T}}\\). **Significant data quality disparity.** As mentioned before, there is a dramatic difference between \\(\\mathcal{P}^{l}\\) and \\(\\mathcal{P}^{r}\\) in the same scene. To fully fuse two modalities, we can use \\(P^{l}\\) to enhance the highly sparse \\(P^{r}\\) that lacks discriminative details. Therefore, our L4DR includes a **M**ulti-**M**odal **E**ncoder (**MME**, Figure 4 (b)), which performs data early-fusion complementarity at the encoder. However, we found that direct data fusion would also cause noises in \\(\\mathcal{P}^{r}\\) to spread to \\(\\mathcal{P}^{l}\\). Therefore, we integrated **F**oreground-A**ware **D**enoising (**FAD**, Figure 4 (a)) into L4DR before MME to filter out most of the noise in \\(\\mathcal{P}^{r}\\). **Varying sensor degradation in adverse weather.** Compared to 4D radar, the quality of LiDAR point cloud \\(P^{l}\\) is Figure 4: L4DR framework. (a) Foreground-**A**ware **D**enoising (FAD) performs denoising by segmenting foreground semantics per 4D radar point. Next, (b) **M**ulti-**M**odal **E**ncoder (MME) fuses bi-directional data for both LiDAR and 4D radar modalities at the Encoder stage to obtain higher quality BEV features. 
Finally, (c) **I**nter-**M**odal and **I**ntra-**M**odal (\\(\\{\\text{IM}\\}^{2}\\)) backbone coupled with **M**ulti-**Scale **G**ated **F**usion (MSGF) uses a gating strategy to filter features to avoid redundant information while extracting inter-modal and intra-modal features in parallel. more easily affected by adverse weather conditions, leading to varying feature presentations \\(\\mathcal{F}^{l}\\). Previous backbones focusing solely on fused inter-modal features \\(\\mathcal{F}^{f}\\) overlook the weather robustness of 4D radar, leading to challenges in addressing the frequent fluctuation of \\(\\mathcal{F}^{l}\\). To ensure robust fusion across diverse weather conditions, we introduce the **I**nter-**M**odal and **I**ntra-**M**odal (\\(\\{\\)IM\\(\\}^{2}\\), Figure 4 (c)) backbone. This design simultaneously focuses on inter-modal and intra-modal features, enhancing model adaptability. However, redundancy between these features arises. Inspired by gated fusion techniques [14, 19], we propose the **M**ulti-**S**cale **G**ated **F**usion (MSGF, Figure 4 (d)) module. MSGF utilizes inter-modal features \\(\\mathcal{F}^{f}\\) to filter intra-modal features \\(\\mathcal{F}^{l}\\) and \\(\\mathcal{F}^{r}\\), effectively reducing feature redundancy. ### Foreground-Aware Denoising (FAD) Due to multipath effects, 4D radar contains significant noise points. Despite applying the Constant False Alarm Rate (CFAR) algorithm to filter out noise during the data acquisition process, the noise level remains substantial. It is imperative to further reduce clutter noise in 4D radar data before early data fusion to avoid spreading noise. Considering the minimal contribution of background points to object detection, this work introduces point-level foreground semantic segmentation to 4D radar denoising, performing a Foreground-Aware Denoising. Specifically, we first utilize PointNet++ [12] combined with a segmentation head as \\(\\chi\\), to predict the foreground semantic probability \\(\\mathcal{S}=\\chi(\\mathcal{P}^{r})\\) for each point in the 4D radar. Subsequently, points with a foreground probability below a predefined threshold \\(\\tau\\) are filtered out, that is \\(\\mathcal{P}^{r}_{new}=\\{p^{r}_{i}|\\mathcal{S}_{i}\\geq\\tau\\}\\). FAD effectively filters out as many noise points as possible while preventing the loss of foreground points. ### Multi-Modal Encoder (MME) Even following denoising using FAD, there remains a significant quality disparity between LiDAR and 4D radar due to limitations in resolution. We thus design a Multi-Modal Encoder module that fuses LiDAR and radar points at an early stage to extract richer features. As illustrated in Figure 5, we innovate the traditional unimodal Pillars coding into multimodal Pillars coding to perform initial fusion at the data level, extracting richer information for subsequent feature processing. Firstly, referring [10], we encode the LiDAR point cloud into a pillar set \\(P^{l}=\\{p^{l}_{i}\\}_{i=1}^{N}\\). 
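To make the FAD step above concrete before the pillar-level features are written out, the following minimal sketch thresholds per-point foreground probabilities, with the PointNet++ segmentation head abstracted as an externally supplied score; array shapes and names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def foreground_aware_denoise(radar_points, foreground_scores, tau=0.2):
    """Keep only 4D radar points whose foreground probability is >= tau.

    radar_points      : (N, C) array of radar point features
    foreground_scores : (N,) per-point foreground probabilities, e.g. from a
                        PointNet++-style segmentation head (abstracted here)
    tau               : threshold (the paper reports 0.3 for training, 0.2 for inference)
    """
    keep = foreground_scores >= tau
    return radar_points[keep]

# Toy usage with random scores standing in for the segmentation head output
pts = np.random.rand(100, 7).astype(np.float32)    # x, y, z, offsets, Doppler, RCS (illustrative layout)
scores = np.random.rand(100).astype(np.float32)
print(foreground_aware_denoise(pts, scores, tau=0.2).shape)
```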
Each LiDAR point \\(p^{l}_{(i,j)}\\) in pillar \\(p^{l}_{i}\\) encoded using encoding feature \\(\\mathbf{f}^{l}_{(i,j)}\\) as \\[\\mathbf{f}^{l}_{(i,j)}=[\\mathbf{\\mathcal{X}}^{l},\\mathbf{\\mathcal{Y}}^{l}_{cl},\\mathbf{ \\mathcal{Z}}^{l},\\lambda], \\tag{1}\\] where \\(\\mathbf{\\mathcal{X}}^{l}=[x^{l},y^{l},z^{l}]\\) is the coordinate of the LiDAR point, \\(\\mathbf{\\mathcal{Y}}^{l}_{cl}\\) denotes the distance from the LiDAR point to the arithmetic mean of all LiDAR points in the pillar, \\(\\mathbf{\\mathcal{Z}}^{l}\\) denotes the (horizontal) offset from the pillar center in \\(x,y\\) coordinates, and \\(\\lambda\\) is the reflectance. Similarly, each 4D radar point \\(p^{r}_{(i,j)}\\) in pillar \\(p^{r}_{i}\\) encoded using encoding feature \\(\\mathbf{f}^{r}_{(i,j)}\\) as \\[\\mathbf{f}^{r}_{(i,j)}=[\\mathbf{\\mathcal{X}}^{r},\\mathbf{\\mathcal{Y}}^{r}_{cr},\\mathbf{ \\mathcal{Z}}^{r},\\mathbf{\\mathcal{V}},\\Omega], \\tag{2}\\] where \\(\\mathbf{\\mathcal{X}}^{r}\\), \\(\\mathbf{\\mathcal{Y}}^{r}_{cr}\\), and \\(\\mathbf{\\mathcal{Z}}^{r}\\) are similar in meaning to those in Eq.1. \\(\\mathbf{\\mathcal{V}}=[\\mathcal{V}_{x},\\mathcal{V}_{y}]\\) is Doppler information along each axis, and \\(\\Omega\\) is Radar Cross-Section (RCS). We then perform cross-modal feature propagation for LiDAR pillar encoding features \\(\\mathbf{f}^{l}_{(i,j)}\\) and radar pillar encoding features \\(\\mathbf{f}^{r}_{(i,j)}\\) that occupy the same coordinates. The fused LiDAR pillar encoding features \\(\\widehat{\\mathbf{f}}^{l}_{(i,j)}\\) and 4D radar points \\(\\widehat{\\mathbf{f}}^{r}_{(i,j)}\\) are obtained by fusing \\(\\mathbf{f}^{l}_{(i,j)}\\) and \\(\\mathbf{f}^{r}_{(i,j)}\\) as follows: \\[\\widehat{\\mathbf{f}}^{l}_{(i,j)}=[\\mathbf{\\mathcal{X}}^{l},\\mathbf{\\mathcal{ Y}}^{l}_{cl},\\mathbf{\\mathcal{Y}}^{l}_{cr},\\mathbf{\\mathcal{Z}}^{l},\\lambda,\\overline{ \\mathbf{\\mathcal{V}}},\\overline{\\Omega}], \\tag{3}\\] \\[\\mathrm{and}\\;\\;\\widehat{\\mathbf{f}}^{r}_{(i,j)}=[\\mathbf{\\mathcal{X}}^{r },\\mathbf{\\mathcal{Y}}^{r}_{cl},\\mathbf{\\mathcal{Y}}^{r}_{cr},\\mathbf{\\mathcal{Z}}^{r}, \\overline{\\lambda},\\mathbf{\\mathcal{V}},\\Omega],\\] where the overline denotes the average of all point features of another modality in that pillar. The feature propagation is beneficial because \\(\\lambda\\) and \\(\\Omega\\) are helpful for object classification, while Doppler information \\(\\mathcal{V}\\) is crucial for distinguishing dynamic objects [19]. Cross-modal feature sharing makes comprehensive use of these advantages and cross-modal offsets \\([\\mathbf{\\mathcal{Y}}^{m}_{cl},\\mathbf{\\mathcal{Y}}^{m}_{cr}]\\), \\(m\\in\\{l,r\\}\\) also further enrich the geometric information. This MME method compensates for the data quality of 4D radar under normal weather conditions and can also enhance the quality of LiDAR in adverse weather. Subsequently, we applied a linear layer and max pooling operations to the fused pillar encoding features \\(\\widehat{\\mathbf{f}}\\) to obtain the corresponding modal BEV features \\(\\mathcal{F}\\). ### \\(\\{\\)IM\\(\\}^{2}\\) Backbone and MSGF Block To take full advantage of the respective advantages of LiDAR and 4D radar, it is necessary to focus on both inter-modal features and intra-modal features. We introduce the **I**nter-**M**odal and **I**ntra-**M**odal backbone (\\(\\{\\)IM\\(\\}^{2}\\)). 
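As a rough illustration of the cross-modal propagation in Eqs. (1)-(3), before turning to the \\(\\{\\)IM\\(\\}^{2}\\) backbone, the sketch below augments each modality's point features with the per-pillar mean of the other modality's complementary attributes (reflectance for LiDAR; Doppler and RCS for radar). Pillar assignment, feature layout, and the subsequent linear layer and max pooling are simplified, and all names are assumptions rather than the authors' code.

```python
import numpy as np

def mme_bidirectional_fusion(lidar_feat, lidar_pillar, radar_feat, radar_pillar, n_pillars):
    """Propagate averaged cross-modal attributes between co-located pillars (cf. Eq. 3).

    lidar_feat  : (Nl, Dl) LiDAR point features, last column assumed to be reflectance
    radar_feat  : (Nr, Dr) radar point features, last 3 columns assumed to be (Vx, Vy, RCS)
    *_pillar    : integer pillar index of each point
    Returns LiDAR/radar features augmented with the other modality's pillar means.
    """
    def pillar_mean(values, pillar_ids):
        sums = np.zeros((n_pillars, values.shape[1]))
        counts = np.zeros(n_pillars)
        np.add.at(sums, pillar_ids, values)       # unbuffered scatter-add per pillar
        np.add.at(counts, pillar_ids, 1.0)
        counts = np.maximum(counts, 1.0)          # empty pillars keep a zero mean
        return sums / counts[:, None]

    radar_mean = pillar_mean(radar_feat[:, -3:], radar_pillar)   # mean (Vx, Vy, RCS) per pillar
    lidar_mean = pillar_mean(lidar_feat[:, -1:], lidar_pillar)   # mean reflectance per pillar

    lidar_aug = np.concatenate([lidar_feat, radar_mean[lidar_pillar]], axis=1)
    radar_aug = np.concatenate([radar_feat, lidar_mean[radar_pillar]], axis=1)
    return lidar_aug, radar_aug

# Toy usage: 5 pillars, random pillar assignments (shapes are illustrative)
lidar = np.random.rand(50, 4); lid_pid = np.random.randint(0, 5, 50)
radar = np.random.rand(10, 7); rad_pid = np.random.randint(0, 5, 10)
l_aug, r_aug = mme_bidirectional_fusion(lidar, lid_pid, radar, rad_pid, n_pillars=5)
print(l_aug.shape, r_aug.shape)
```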
\\(\\{\\)IM\\(\\}^{2}\\) serves as a multi-modal branch feature extraction module that concurrently extracts the inter-modal feature (\\(\\mathcal{F}^{f}\\)) and the intra-modal features (\\(\\mathcal{F}^{l},\\mathcal{F}^{r}\\)). Specifically, we fuse the two intra-modal features to form an inter-modal feature, \\[\\mathcal{F}^{f}=\\phi(\\mathcal{F}^{l},\\mathcal{F}^{r}), \\tag{4}\\] where \\(\\phi\\) denotes the fusion approach (we use concatenation). Figure 5: Bidirectional Data Fusion in MME. LiDAR-specific point features (blue) and radar-specific point features (red) located in the same pillar are averaged (Avg.) for feature propagation. And the cross-modal offsets (CMO) are computed to enrich the geometric features. Subsequently, we apply a convolutional block to each modal branch \\(\\mathcal{F}^{l}\\), \\(\\mathcal{F}^{r}\\), and \\(\\mathcal{F}^{f}\\) independently, \\[\\mathcal{F}^{m}_{\\mathcal{D}}=\\kappa(\\mathcal{F}^{m}_{\\mathcal{D}-1}), \\tag{5}\\] where \\(\\mathcal{D}\\in[1,3]\\) denotes the layer index and \\(m\\in\\{l,r,f\\}\\) indicates the modality. \\(\\kappa\\) represents a convolutional layer with batch normalization and ReLU activation. However, while {IM}\\({}^{2}\\) addresses some deficiencies in feature representation, this naive approach inevitably introduces redundant features. Inspired by Song et al. (2024), we design MSGF to adaptively filter each modal feature, performing gated fusion on each LiDAR and 4D radar feature-map scale. As depicted in Fig. 6, the gated network \\(\\mathcal{G}\\) in MSGF processes input feature maps from LiDAR \\(\\mathcal{F}^{l}\\), 4D radar \\(\\mathcal{F}^{r}\\), and their fused counterpart \\(\\mathcal{F}^{f}\\). Subsequently, on the LiDAR and 4D radar branches, the adaptive gating weights \\(\\mathcal{W}^{l}\\) for \\(\\mathcal{F}^{l}\\) and \\(\\mathcal{W}^{r}\\) for \\(\\mathcal{F}^{r}\\) are obtained by a convolution block and a sigmoid activation function, respectively. These weights are applied to the initial features via element-wise multiplication, enabling the gating mechanism to filter \\(\\mathcal{F}^{l}\\) and \\(\\mathcal{F}^{r}\\). Formally, the gated network \\(\\mathcal{G}\\) guides \\(\\mathcal{F}^{l}\\) and \\(\\mathcal{F}^{r}\\) at convolution layer index \\(\\mathcal{D}\\) to filter out redundant information as follows: \\[\\mathcal{F}^{m}_{\\mathcal{D}}=\\mathcal{G}_{\\mathcal{D}}(\\mathcal{F}^{m}_{\\mathcal{D}},\\mathcal{F}^{f}_{\\mathcal{D}})=\\mathcal{F}^{m}_{\\mathcal{D}}\\odot\\delta(\\kappa(\\mathcal{F}^{f}_{\\mathcal{D}})),\\quad m\\in\\{l,r\\}, \\tag{6}\\] where \\(\\kappa\\) is a 3x3 convolution block, \\(\\delta\\) is a sigmoid function, and \\(\\odot\\) denotes element-wise multiplication. \\(\\mathcal{F}^{f}\\) is the fused feature with information about the interactions between modalities. It discerns whether features in \\(\\mathcal{F}^{l}\\) and \\(\\mathcal{F}^{r}\\) are helpful or redundant. Using \\(\\mathcal{F}^{f}\\) for gated filtering can flexibly weight and extract features from \\(\\mathcal{F}^{l}\\) and \\(\\mathcal{F}^{r}\\) while significantly reducing feature redundancy. ### Loss Function and Training Strategy. We train our L4DR with the following loss: \\[\\mathcal{L}=\\beta_{cls}\\mathcal{L}_{cls}+\\beta_{loc}\\mathcal{L}_{loc}+\\beta_{fad}\\mathcal{L}_{fad}, \\tag{7}\\] where \\(\\beta_{cls}\\) = 1, \\(\\beta_{loc}\\) = 2, \\(\\beta_{fad}\\) = 0.5, \\(\\mathcal{L}_{cls}\\) is the object classification focal loss, \\(\\mathcal{L}_{loc}\\) is the object localization regression loss, and \\(\\mathcal{L}_{fad}\\) is the focal classification loss of the 4D radar foreground points in the FAD module.
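To make the gated filtering of Eq. (6) concrete before the training details, a minimal PyTorch-style sketch of one gated fusion stage is given below: the fused inter-modal map produces a sigmoid gate per branch that reweights the LiDAR and radar intra-modal maps. Channel and kernel sizes, and the surrounding multi-scale backbone, are illustrative assumptions and not the authors' implementation.

```python
import torch
import torch.nn as nn

class GatedFusionStage(nn.Module):
    """One MSGF-style stage: gate intra-modal BEV maps with the inter-modal map."""

    def __init__(self, channels):
        super().__init__()
        # 3x3 conv blocks that turn the fused map into per-branch gating weights
        self.gate_lidar = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.gate_radar = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, f_lidar, f_radar, f_fused):
        w_l = torch.sigmoid(self.gate_lidar(f_fused))   # adaptive weights for the LiDAR branch
        w_r = torch.sigmoid(self.gate_radar(f_fused))   # adaptive weights for the radar branch
        return f_lidar * w_l, f_radar * w_r             # element-wise gating (cf. Eq. 6)

# Toy usage: the fused map would normally come from concatenating the two branches
stage = GatedFusionStage(channels=64)
f_l, f_r, f_f = (torch.randn(1, 64, 128, 128) for _ in range(3))
g_l, g_r = stage(f_l, f_r, f_f)
print(g_l.shape, g_r.shape)
```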
We use Adam optimizer with lr = 1e-3, \\(\\beta_{1}\\) = 0.9, \\(\\beta_{2}\\) = 0.999. ## Experiments ### Implement Details. We implement L4DR with PointPillars Lang et al. (2019), the most commonly used base architecture in radar-based, LiDAR and 4D radar fusion-based 3D object detection. This can effectively verify the effectiveness of our L4DR and avoid unfair comparisons caused by inherent improvements in the base architecture. We set \\(\\tau\\) in section 3.2 as 0.3 while training and 0.2 while inferring. We conduct all experiments with a batch size of 16 on 2 RTX 3090 GPUs. Other parameter settings refer to the default official configuration in the OpenPCDet Team et al. (2020) tool. ### Dataset and Evaluation Metrics K-Radar dataset.The K-Radar dataset Paek et al. (2022) contains 58 sequences with 34944 frames of 64-line LiDAR, camera, and 4D radar data in various weather conditions. According to the official K-Radar split, we used 17458 frames for training and 17536 frames for testing. We adopt two evaluation metrics for 3D object detection: \\(AP_{3D}\\) and \\(AP_{BEV}\\) of the class \"Sedan\" at IoU = 0.5. View-of-Delft (VoD) dataset.The VoD dataset Palffy et al. (2022) contains 8693 frames of 64-line LiDAR, camera, and 4D radar data. Following the official partition, we divided the dataset into a training and validation set with 5139 and 1296 frames. All of the methods used the official radar with 5 scans accumulation and single frame LiDAR. Meanwhile, to explore the performance under different fog intensities, following a series of previous work Qian et al. (2021); Li et al. (2022), we similarly performed fog simulations Hanner et al. (2021) (with fog level \\(\\mathcal{L}\\) from 0 to 4, fog density (\\(\\alpha\\)) = [0.00, 0.03, 0.06, 0.10, 0.20]) on the VoD dataset and kept 4D Radar unchanged to simulate various weather conditions. We named it the **Vod-Fog** dataset in the following. _Noteworthy, we used two evaluation metrics groups on the VoD dataset. The VoD official metrics are used to compare the results reported by previous state-of-the-art methods. The KITTI official metrics are used to demonstrate and analyze the performance of \"easy\", \"moderate\" and \"hard\" objects with different difficulties under foggy weather._ ### Results on K-Radar Adverse Weather Dataset Following 3D-LRF Chae et al. (2024), we compare our L4DR with different modality-based 3D object detection methods: PointPillars Lang et al. (2019), RTNH Paek et al. (2022), InterFusion Wang et al. (2022b) Figure 6: Gated fusion block in our MSGF. The gated block processes input features from LiDAR and 4D radar by using the fused inter-modal features. It generates adaptive gating weights from fused features and applies these weights via element-wise multiplication to filter out redundant information in intra-modal features. and 3D-LRF [30]. The results in Table 1 highlight the superior performance of our L4DR model on the K-Radar dataset. Our L4DR model surpasses 3D-LRF by 8.3% in total \\(AP_{3D}\\). This demonstrates that compared to previous fusion, our method utilizes the advantages of LiDAR and 4D Radar more effectively. We observed that all methods perform better in many adverse weather conditions (e.g., Overcast, Fog, etc.) than in normal weather. A possible reason is the distribution differences of labeled objects in different weather conditions. 
We discuss this counter-intuitive phenomenon in detail in the supplementary material, as well as other valuable results, such as different IoU thresholds and new version labels, for a more comprehensive comparison. _Note that we only compare 3D-LRF on the K-Radar dataset because its code is not open-sourced, the results of 3D-LRF are available on the K-Radar dataset only_. ### Results on Vod-Fog Simulated Dataset We evaluated our L4DR model in comparison with LiDAR and 4D radar fusion methods using the Vod-Fog dataset using the KITTI metrics across varying levels of fog. Table 2 demonstrates that our L4DR model outperforms LiDAR-only PointPillars in different difficulty categories and fog intensities. Particularly in the most severe fog conditions (fog level = 4), our L4DR model achieves performance improvements of 17.43%, 17.8%, and 24.81% mAP in moderate difficulty categories, surpassing the gains obtained by InterFusion. Furthermore, our approach consistently exhibits superior performance compared to InterFusion across various scenarios, showcasing the adaptability of our L4DR fusion under adverse weather conditions. ### Results on VoD Dataset We compared our L4DR fusion performance with different state-of-the-art methods of different modalities on the VoD dataset with _VoD metric_. As shown in Table 3, our L4DR fusion outperforms the existing LiDAR and 4D radar fusion method InterFusion [23] in all categories. We outperformed by 6.8% in the Cyc. class in the Driving Area. Meanwhile, our L4DR also significantly outperforms other modality-based state-of-the-art methods such as LXL [21]. These experimental results demonstrate that our method can comprehensively fuse the two modalities of LiDAR and 4D radar. As a consequence, our L4DR method \\begin{table} \\begin{tabular}{c c|c c c c c c c c c} \\hline \\hline Methods & Modality & Metric & Total & Normal & Overcast & Fog & Rain & Sleet & Lightsnow & Heavywsony \\\\ \\hline \\hline RTNH & \\multirow{2}{*}{4DR} & \\(AP_{BEV}\\) & 41.1 & 41.0 & 44.6 & 45.4 & 32.9 & 50.6 & 81.5 & 56.3 \\\\ (NeurIPS 2022) & & \\(AP_{3D}\\) & 37.4 & 37.6 & 42.0 & 41.2 & 29.2 & 49.1 & 63.9 & 43.1 \\\\ \\hline PointPillars & \\multirow{2}{*}{L} & \\(AP_{BEV}\\) & 49.1 & 48.2 & 53.0 & 45.4 & 44.2 & 45.9 & 74.5 & 53.8 \\\\ (CVPR 2019) & & \\(AP_{3D}\\) & 22.4 & 21.8 & 28.0 & 28.2 & 27.2 & 22.6 & 23.2 & 12.9 \\\\ \\hline RTNH & \\multirow{2}{*}{L} & \\(AP_{BEV}\\) & 66.3 & 65.4 & 87.4 & 83.8 & 73.7 & 48.8 & 78.5 & 48.1 \\\\ (NeurIPS 2022) & & \\(AP_{3D}\\) & 37.8 & 39.8 & 46.3 & 59.8 & 28.2 & 31.4 & 50.7 & 24.6 \\\\ \\hline InterFusion & \\multirow{2}{*}{L+4DR} & \\(AP_{BEV}\\) & 52.9 & 50.0 & 59.0 & 80.3 & 50.0 & 22.7 & 72.2 & 53.3 \\\\ (IROS 2023) & & \\(AP_{3D}\\) & 17.5 & 15.3 & 20.5 & 47.6 & 12.9 & 9.33 & 56.8 & 25.7 \\\\ \\hline 3D-LRF & \\multirow{2}{*}{L+4DR} & \\(AP_{BEV}\\) & 73.6 & 72.3 & 88.4 & 86.6 & 76.6 & 47.5 & 79.6 & **64.1** \\\\ (CVPR 2024) & & \\(AP_{3D}\\) & 45.2 & 45.3 & 55.8 & 51.8 & 38.3 & 23.4 & **60.2** & 36.9 \\\\ \\hline L4DR & \\multirow{2}{*}{L+4DR} & \\(AP_{BEV}\\) & **77.5** & **76.8** & **88.6** & **89.7** & **78.2** & **59.3** & **80.9** & 53.8 \\\\ (Ours) & & \\(AP_{3D}\\) & **53.5** & **53.0** & **64.1** & **73.2** & **53.8** & **46.2** & 52.4 & **37.0** \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 1: Quantitative results of different 3D object detection methods on K-Radar dataset. We present the modality of each method (L: LiDAR, 4DR: 4D radar) and detailed performance for each weather condition. 
Best in **bold**, second in underline. \\begin{table} \\begin{tabular}{c c c|c c c|c c c|c c c} \\hline \\hline \\multirow{2}{*}{Fog Level} & \\multirow{2}{*}{Methods} & \\multirow{2}{*}{Modality} & \\multicolumn{3}{c|}{Car (IoU = 0.5)} & \\multicolumn{3}{c|}{Pedestrian (IoU = 0.25)} & \\multicolumn{3}{c}{Cyclist (IoU = 0.25)} \\\\ & & & Easy & Mod. & Hard & Easy & Mod. & Hard & Easy & Mod. & Hard \\\\ \\hline \\hline \\multirow{3}{*}{0} & PointPillars & L & 84.9 & 73.5 & 67.5 & 62.7 & 58.4 & 53.4 & 85.5 & 79.0 & 72.7 \\\\ & InterFusion & L+4DR & 67.6 & 65.8 & 58.8 & 73.7 & 70.1 & 64.7 & 90.3 & 87.0 & 81.2 \\\\ & L4DR (Ours) & L+4DR & **85.0** & **76.6** & **69.4** & **74.4** & **72.3** & **65.7** & **93.4** & **90.4** & **83.0** \\\\ \\hline \\multirow{3}{*}{1} & PointPillars & L & 79.9 & 72.7 & 67.0 & 59.9 & 55.6 & 50.5 & 85.5 & 78.2 & 72.0 \\\\ & InterFusion & L+4DR & 66.1 & 64.0 & 56.9 & 74.0 & 70.6 & 64.5 & 91.6 & 87.4 & 82.0 \\\\ & L4DR (Ours) & L+4DR & **77.9** & **73.2** & **67.8** & **75.4** & **72.1** & **66.7** & **93.8** & **91.0** & **83.2** \\\\ \\hline \\multirow{3}{*}{2} & PointPillars & L & 67.0 & 51.4 & 44.4 & 53.1 & 47.2 & 42.7 & 69.6 & 62.7 & 57.2 \\\\ & InterFusion & L+4DR & 56.0 & 48.5 & 41.5 & **63.2** & 57.8 & 52.9 & 77.3 & 71.1 & 66.2 \\\\ & L4DR (Ours) & L+4DR & **68.5** & **56.4** & **49.3** & 63.1 & **59.9** & **55.1** & **82.7** & **70.8** & **70.7** \\\\ \\hline \\multirow{3}{*}{3} & PointPillars & L & 44.5 & 31.9 & 27.0 & 40.2 & 37.7 & 34.0 & 53.2 & 46.7 & 41.8 \\\\ & InterFusion & L+4DR & 41.2 & 33.1 & 27.0 & 52.9 & 49.2 & 44.8 & 59.9 & 57.7 & 53.1 \\\\ \\cline{1-1} & L4DR (Ours) & L+4DR & **46.2** & **41.4** & **34.6** & **53.5** & **50.6** & **46.2** & **72.2** & **67.7** & **60.9** \\\\ \\hline \\multirow{3}{*}{4} & PointPillars & L & 13.0 & 8.77 & 7.19 & 10.6 & 12.9 & 11.3 & 6.15 & 4.89 & 4.57 \\\\ \\cline{1-1} & InterFusion & L+4DR & 15.2 & 10.8 & 8.40 & 25.7 & 25.1 & 22.6 & 6.68 & 7.95 & 6.99 \\\\ \\cline{1-1} & L4DR (Ours) & L+4DR & **26.9** & **26.2** & **21.6** & **33.1** & **30.7** & **27.9** & **30.3** & **29.7** & **26.3** \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 2: Quantitative results of different methods on Vod-Fog dataset with the KITTI metric under various fog levels. also shows superior performance even in clear weather. ### Ablation study Effect of each component.We systematically evaluated each component, with the results summarized in Table 4. The \\(1^{st}\\) row represents the performance of the LiDAR-only baseline model. Subsequent \\(2^{nd}\\) row and \\(3^{rd}\\) row are fused by directly concatenating the BEV features from LiDAR and 4D radar modality. The results showing the enhancements were observed with the addition of MME and FAD respectively, highlighting that our fusion method fully utilizes the weather robustness of the 4D radar while excellently handling the noise problem of the 4D radar. The \\(4^{th}\\) row indicates that the performance boost from incorporating the \\(\\{\\text{IM}\\}^{2}\\) model alone was not substantial, primarily due to feature redundancy introduced by the \\(\\{\\text{IM}\\}^{2}\\) backbone. This issue was effectively addressed by utilizing the MSGF module in the \\(5^{th}\\) row, leading to the most optimal performance. Comparison with other feature fusion.We compared different multi-modal feature fusion blocks, including basic concatenation (Concat.) and various attention-based methods such as Transformer-based [21] Cross-Modal Attention (Cross-Att.) 
and Self-Attention (Self-Attn.), SE Block [14], and CBAM Block [15], see supplementary material for detailed fusion implements. Experimental results (Table 5) show that while attention mechanisms outperform concatenation to some extent, they do not effectively address the challenge of fluctuating features under varying weather conditions. In contrast, our proposed MSGF method, focusing on significant features of LiDAR and 4D radar, achieves superior performance and robustness under different weather. ## References * N. Balal, G. Pinhasi, and Y. Pinhasi (2016)Atmospheric and Fog Effects on Ultra-Wide Band Radar Operating at Extremely High Frequencies. Sensors16 (5), pp. 751. Cited by: SS1. * M. Bijelic, T. Gruber, F. Mannan, F. Kraus, W. Ritter, K. Dietmayer, and F. Heide (2020)Seeing Through Fog Without Seeing Fog: Deep Multimodal Sensor Fusion in Unseen Adverse Weather. In CVPR, Cited by: SS1. * Y. Chae, H. Kim, and K. Yoon (2024)Towards Robust 3D Object Detection with LiDAR and 4D Radar Fusion in Various Weather Conditions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 15162-15172. Cited by: SS1. * N. Charron, S. Phillips, and S. L. Waslander (2018)Designing of Lidar Point Clouds Corrupted by Snowfall. In CRV, Cited by: SS1. * J. Deng, S. Shi, P. Li, W. Zhou, Y. Zhang, and H. Li (2021)Voxel R-CNN: Towards High Performance Voxel-based 3D Object Detection. AAAI35. Cited by: SS1. * A. Ghita, B. Antoniussen, W. Zimmer, R. Greer, C. Cress, A. Mogelmose, M. M. Trivedi, and A. C. Knoll (2024)ActiveAnno3D-An Active Learning Framework for Multi-Modal 3D Object Detection. arXiv preprint arXiv:2402.03235. Cited by: SS1. * Y. Golovachev, A. Etinger, G. A. Pinhasi, and Y. Pinhasi (2018)Millimeter wave high resolution radar accuracy in fog conditions--theory and experimental verification. Sensors18 (7), pp. 2148. Cited by: SS1. * M. Hahner, C. Sakaridis, M. Bijelic, F. Heide, F. Yu, D. Dai, and L. Van Gool (2022)LiDAR Snowfall Simulation for Robust 3D Object Detection. In CVPR, Cited by: SS1. * M. Hahner, C. Sakaridis, D. Dai, and L. Van Gool (2021)Fog Simulation on Real LiDAR Point Clouds for 3D Object Detection in Adverse Weather. In ICCV, Cited by: SS1. * Z. Han, J. Wang, Z. Xu, S. Yang, L. He, S. Xu, and J. Wang (2023)4D Millimeter-Wave Radar in Autonomous Driving: A Survey. arXiv. External Links: 2306.04242 Cited by: SS1. * R. Heinzler, F. Piewak, P. Schindler, and W. Stork (2020)CNN-Based Lidar Point Cloud De-Noising in Adverse Weather. IEEE Robotics and Automation Letters5. Cited by: SS1. * H. Hosseinpour, F. Samadzadegan, and F. D. Javan (2022)CMGFNet: A deep cross-modal gated fusion network for building extraction from very high-resolution remote sensing images. ISPRS Journal of Photogrammetry and Remote Sensing184, pp. 96-115. Cited by: SS1. * J. Hu, L. Shen, and G. Sun (2018)Squeeze-and-excitation networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 7132-7141. Cited by: SS1. * X. Huang, H. Wu, X. Li, X. Fan, C. Wen, and C. Wang (2024)Sunshine to rainstorm: cross-weather knowledge distillation for robust 3d object detection. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 38, pp. 2409-2416. Cited by: SS1. * V. Kilic, D. Hegde, V. A. Sindagi, A. Cooper, M. Foster, and V. M. Patel (2021)Lidar Light Scattering Augmentation (LISA): Physics-based Simulation of Adverse Weather Conditions for 3D Object Detection. ArXiv. Cited by: SS1. * A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. 
Yang, and O. Beijbom (2019)PointPillars: Fast Encoders for Object Detection From Point Clouds. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 12689-12697. Cited by: SS1. * Y. Li, J. Park, M. O'Toole, and K. Kitani (2022)Modality-Agnostic Learning for Radar-Lidar Fusion in Vehicle Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 918-927. Cited by: SS1. * D. Paek, S. Kong, and K. T. Wijaya (2022)K-radar: 4d radar object detection for autonomous driving in various weather conditions. Advances in Neural Information Processing Systems35, pp. 3819-3829. Cited by: SS1. * D. Paek, S. KONG, and K. T. Wijaya (2022)K-Radar: 4D Radar Object Detection for Autonomous Driving in Various Weather Conditions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3819-3829. Cited by: SS1. * A. Paffry, E. Pool, S. Baratam, J. F. Kooij, and D. M. Gavrila (2022)Multi-class road user detection with 3+ 1D radar in the View-of-Delft dataset. IEEE Robotics and Automation Letters7 (2), pp. 4961-4968. Cited by: SS1. * C. R. Qi, L. Yi, H. Su, and L. J. Guibas (2017)PointNet++: deep hierarchical feature learning on point sets in a metric space. arXiv:1706.02413. Cited by: SS1. * K. Qian, S. Zhu, X. Zhang, and L. E. Li (2021)Robust Multimodal Vehicle Detection in Foggy Weather Using Complementary Lidar and Radar Signals. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 444-453. Cited by: SS1. * S. Shi, C. Guo, L. Jiang, Z. Wang, J. Shi, X. Wang, and H. Li (2020)PV-RCNN: Point-Voxel Feature Set Abstraction for 3D Object Detection. In CVPR, Cited by: SS1. * S. Shi, L. Jiang, J. Deng, Z. Wang, C. Guo, J. Shi, X. Wang, and H. Li (2022)PV-RCNN++: Point-Voxel Feature Set Abstraction With Local Vector Representation for 3D Object Detection. Int. J. Comput. Vision131. Cited by: SS1. * J. Song, L. Zhao, and K. A. Skinner (2024)LiRaFusion: Deep Adaptive LiDAR-Radar Fusion for 3D Object Detection. arXiv preprint arXiv:2402.11735. Cited by: SS1. * J. Song, L. Zhao, and K. A. Skinner (2024)LiRaFusion: Deep Adaptive LiDAR-Radar Fusion for 3D Object Detection. arXiv preprint arXiv:2402.11735. Cited by: SS1. * S. Sun and Y. D. Zhang (2021)4D automotive radar sensing for autonomous vehicles: a sparsity-oriented approach. IEEE Journal of Selected Topics in Signal Processing15 (4), pp. 879-891. Cited by: SS1. * O. Team et al. (2020)OpenpCdet: An open-source toolbox for 3d object detection from point clouds. Cited by: SS1. * O. * [Teufel et al.2022] Teufel, S.; Volk, G.; Von Bernuth, A.; and Bringmann, O. 2022. Simulating Realistic Rain, Snow, and Fog Variations For Comprehensive Performance Characterization of LiDAR Perception. In _2022 IEEE 95th Vehicular Technology Conference: (VTC2022-Spring)_. * [Vaswani et al.2017] Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, L.; and Polosukhin, I. 2017. Attention is all you need. _Advances in neural information processing systems_, 30. * [Wang et al.2022a] Wang, H.; Shi, C.; Shi, S.; Lei, M.; Wang, S.; He, D.; Schiele, B.; and Wang, L. 2023a. DSVT: Dynamic Sparse Voxel Transformer With Rotated Sets. In _CVPR_. * [Wang et al.2022a] Wang, L.; Zhang, X.; Li, J.; Xv, B.; Fu, R.; Chen, H.; Yang, L.; Jin, D.; and Zhao, L. 2022a. Multi-modal and multi-scale fusion 3D object detection of 4D radar and LiDAR for autonomous driving. _IEEE Transactions on Vehicular Technology_. 
* [Wang et al.2022b] Wang, L.; Zhang, X.; Xu, B.; Zhang, J.; Fu, R.; Wang, X.; Zhu, L.; Ren, H.; Lu, P.; Li, J.; and Liu, H. 2022b. InterFusion: Interaction-based 4D Radar and LiDAR Fusion for 3D Object Detection. In _2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_, 12247-12253. * [Wang et al.2023b] Wang, Y.; Deng, J.; Li, Y.; Hu, J.; Liu, C.; Zhang, Y.; Ji, J.; Ouyang, W.; and Zhang, Y. 2023b. Bi-LRFusion: Bi-Directional LiDAR-Radar Fusion for 3D Dynamic Object Detection. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, 13394-13403. * [Woo et al.2018] Woo, S.; Park, J.; Lee, J.-Y.; and Kweon, I. S. 2018. Cbam: Convolutional block attention module. In _Proceedings of the European conference on computer vision (ECCV)_, 3-19. * [Wu et al.2023] Wu, H.; Wen, C.; Shi, S.; Li, X.; and Wang, C. 2023. Virtual Sparse Convolution for Multimodal 3D Object Detection. In _CVPR_. * [Wu et al.2024] Wu, H.; Zhao, S.; Huang, X.; Wen, C.; Li, X.; and Wang, C. 2024. Commonsense Prototype for Outdoor Unsupervised 3D Object Detection. _arXiv preprint arXiv:2404.16493_. * [Xia et al.2023] Xia, Q.; Deng, J.; Wen, C.; Wu, H.; Shi, S.; Li, X.; and Wang, C. 2023. CoIn: Contrastive Instance Feature Mining for Outdoor 3D Object Detection with Very Limited Annotations. In _Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)_, 6254-6263. * [Xiong et al.2024] Xiong, W.; Liu, J.; Huang, T.; Han, Q.-L.; Xia, Y.; and Zhu, B. 2024. LXL: LiDAR Excluded Lean 3D Object Detection With 4D Imaging Radar and Camera Fusion. _IEEE Transactions on Intelligent Vehicles_, 9(1): 79-92. * [Xu et al.2022] Xu, G.; Khan, A.; Moshayedi, A. J.; Zhang, X.; and Shuxin, Y. 2022. The Object Detection, Perspective and Obstacles In Robotic: A Review. _EAI Endorsed Transactions on AI and Robotics_, 1: 7-15. * [Xu et al.2021] Xu, Q.; Zhou, Y.; Wang, W.; Qi, C. R.; and Anguelov, D. 2021. SPG: Unsupervised Domain Adaptation for 3D Object Detection via Semantic Point Generation. In _ICCV_. * [Yan et al.2023] Yan, J.; Liu, Y.; Sun, J.; Jia, F.; Li, S.; Wang, T.; and Zhang, X. 2023. Cross modal transformer: Towards fast and robust 3d object detection. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 18268-18278. * [Yan, Mao, and Li2018] Yan, Y.; Mao, Y.; and Li, B. 2018. SECOND: Sparsely Embedded Convolutional Detection. _Sensors_, 18. * [Yang et al.2020] Yang, Z.; Sun, Y.; Liu, S.; and Jia, J. 2020. 3DSSD: Point-Based 3D Single Stage Object Detector. In _CVPR_. ## Appendix / Supplemental Material ### Analysis of Point Distribution of LiDAR and 4D Radar under Different Weather Conditions Although the weather robustness advantage of 4D radar sensors has been mentioned as a priori knowledge in existing work [13, 16], this aspect remains less studied. Here, we utilize a variety of real-world adverse weather datasets from K-Radar to examine and corroborate this phenomenon. As depicted in Figure 7, we have compiled plots of the point counts averaged across various types of real-world adverse weather conditions for both LiDAR and 4D radar. It is observed that under different categories of adverse weather conditions, the point counts of LiDAR at different distances from the sensor location (a) exhibit a pronounced decreasing trend, reflecting the significant degradation of LiDAR data quality in adverse weather. 
In contrast, the point counts of 4D radar at different distances from the sensor location (b) do not show a clear correlation with weather conditions. It is important to note that the large differences in data scenes and dynamic object distributions, and the sensitivity of 4D radar to dynamic objects result in greater fluctuations in point count distribution. However, the lack of correlation between point counts and weather conditions still demonstrates to a certain extent the weather robustness advantage of 4D radar. ### Discussion about the Counter-Intuitive Results on K-Radar dataset As shown by the results in the main paper: K-Radar inverse weather dataset experiment, all methods perform better in many severe weather conditions (e.g., cloudy, foggy, etc.) than in normal weather. After analyzing the data and experimental results, a possible main reason is the distribution differences of labeled objects in different weather conditions.We will verify this in two respects. #### The performance of 4D Radar-based method in different weather condition. Benefiting from the sensor weather robustness of 4D radar, the 4D Radar-based method receives little performance impact from weather. Therefore, the performance of the 4D Radar-based method under each weather can be used to determine the difference in difficulty caused by the distribution of labeled objects for different weather on the K-radar dataset. As shown in the results demonstrated by the 4D Radar-based method (RTNH, PointPillars) in Tables 8 and 9, it can be found that the performance of the 4D Radar-based method is significantly worse than that of most of the bad weather in normal weather. This result illustrates that labeled objects in bad weather scenarios are much easier to detect in the K-Radar dataset. This makes it possible that even when LiDAR is degraded in bad weather, the performance is still higher than in normal weather due to the fact that the labeled targets in the scene are easier to detect. #### Labeled Object Distance-Number Distribution of K-Radar Dataset. We counted the number of labeled objects at different distances by weather category (normal, severe), as shown in Figure 8. It can be found that under normal weather, the proportion of the number of labeled objects (26.80%) at long distances (50-75m) is greater than that under severe weather (22.66%), while the proportion of the number of labeled objects (36.78%) at closed distances (0-25m) is smaller than that under severe weather (40.0%). This data reflects the difficulty of the scenarios in different weather, explaining the counter-intuitive results on K-Radar dataset. ### Experiments of the Hyperparameter \\(\\lambda\\) in FAD We have conducted sufficient experimental discussion on the hyperparameter \\(\\lambda\\) in FAD both in the training and testing stages. The experimental results are shown in Table 7, which shows that too small \\(\\lambda\\) will lead to too much noise residue and an insignificant denoising effect, while too large \\(\\lambda\\) will lose a large number of foreground points affecting the detection of the object. Moreover, the performance degree of different \\(\\lambda\\) under different fog levels is also different, which is due to the different importance of 4D radar under different fog levels. In the end, we chose the setting with the best overall performance with \\(\\lambda\\) = 0.3 for training and \\(\\lambda\\) = 0.2 for testing, which is also in line with our expectations. 
Firstly, \\(\\lambda\\) cannot be used with the 0.5 threshold for conventional binary classification, which needs to be appropriately lowered. Secondly, the threshold \\(\\lambda\\) for training should be slightly higher than that for testing due to the increased number of foreground points caused by the data augmentations, such as Ground Truth Sampling. Figure 8: Distribution of labeled bounding boxes in normal (left) and severe (right) weather in the K-Radar dataset. Figure 7: Average number of point distribution of (a) LiDAR and (b) 4D radar with distance under different weather conditions (Normal, Rain, Sleet, Overcast, Fog, Light Snow, and Heavy Snow). ### More Implement Details For the training strategy, we train the entire network with the loss of 30 epochs. We use Adam optimizer with lr= 1e-3, \\(\\beta\\)1 = 0.9, \\(\\beta\\)2 = 0.999. For the K-Radar dataset, we preprocess the 4D radar sparse tensor by selecting only the top 10240 points with high power measurement. We present the set the point cloud range as [0m, 72m] for the X axis, [6.4m, 6.4m] for the Y axis, and [-2m, 6m] for the Z axis setting the same environment with version 1.0 K-Radar. And [0m, 72m] for the X axis, [-16m, 16m] for the Y axis, and [-2m, 7.6m] for the Z axis setting the same environment with version 2.1 K-Radar. The voxel size is set to (0.4m, 0.4m, 0.4m). For the VoD dataset, following KITTI (?), we calculate the 3D Average Precision (3D AP) across 40 recall thresholds (R40) for different classes. Also, following VoD's (Palffy et al. 2022) evaluation metrics, we calculate class-wise AP and mAP averaged over classes. The calculation encompasses the entire annotated region (camera FoV up to 50 meters) and the \"Driving Corridor\" region ([-4 m! x! +4 m, z! 25 m]). For both KITTI metrics and VoD metrics, for AP calculations, we used an IoU threshold specified in VoD, requiring a 50% overlap for car class and 25% overlap for pedestrian and cyclist classes. ### More Performance on K-Radar Dataset The main text is bound by space constraints and only results using IoU=0.5 and v1.0 labeling are shown on K-Radar. Here we additionally show results using IoU=0.3 with v1.0 labels as in Table. 8 and results using IoU=0.3 with v2.0 labels as in Table. 9. The experimental results all demonstrate the superior performance of our L4DR. ### More Fusion Details Below we present the implementation details of the individual fusion methods compared in Table 6 of the main text, all of which are implemented on the PointPillars baseline. Concat.We directly concatenate the pseudo-images of LiDAR with 4DRadar in the channel dimension after PointPillar coding. Cross-Attn.We used a 32-dimensional sin/cos position-encoded 4-head attention layer to calculate the Cross-Modal Pillar feature added to the 4DRadar Pillar feature from the LiDAR Pillar feature to the 4DRadar Pillar feature, and also to calculate the 4DRadar Pillar feature to the LiDAR Pillar feature's Cross-Modal Pillar feature and added to the LiDAR Pillar feature. Self-Attn.We use a 32-dimensional sin/cos position-encoded 4-head attentional layer to compute self-attentional features on the last two BEV features of the 2D BackBone and add them to the original features. SE Block.We use 2x Squeeze's SEBlock to compute SE features for each BEV feature of the 2D BackBone and add them to the original feature. CBAM Block.We use CBAM Block to compute SE features for each BEV feature of the 2D BackBone and add them to the original feature. 
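As one concrete example of the compared baselines, the SE-style reweighting described above (reduction ratio 2, applied to a BEV feature map and added back to the original feature) might be implemented as follows; this is a generic sketch under those assumptions rather than the exact implementation used in the experiments.

```python
import torch
import torch.nn as nn

class SEBaseline(nn.Module):
    """SE-style channel reweighting of a BEV feature map, with a residual add."""

    def __init__(self, channels, reduction=2):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                    # squeeze: global average pooling
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)  # excitation: channel weights
        return x + x * w                                       # reweight and add to the original

print(SEBaseline(64)(torch.randn(2, 64, 100, 100)).shape)
```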
### Experimental Visualization Results To better visualize how our method improves detection performance, we compare our L4DR with InterFusion (Wang et al. 2022b) under different simulated fog levels, as shown in Figure 9. Our L4DR effectively filters out a substantial amount \\begin{table} \\begin{tabular}{c c c c c c} \\hline \\hline \\multirow{2}{*}{\\(\\lambda\\) in training} & \\multirow{2}{*}{\\(\\lambda\\) in testing} & 3D mAP & 3D mAP & 3D mAP & 3D mAP \\\\ & & (fog level = 0) & (fog level = 1) & (fog level = 2) & (fog level = 3) & (fog level = 4) \\\\ \\hline \\multirow{4}{*}{0.1} & 0.1 & 77.56 & 77.03 & 62.40 & 50.11 & 25.27 \\\\ & 0.2 & 75.92 & 75.79 & 61.09 & 49.00 & 25.28 \\\\ & 0.3 & 75.46 & 74.46 & 60.84 & 48.84 & 25.72 \\\\ & 0.5 & 73.57 & 71.79 & 59.10 & 47.15 & 24.99 \\\\ \\hline \\hline \\multirow{4}{*}{0.2} & 0.1 & 77.59 & 77.43 & 64.06 & 52.57 & 23.90 \\\\ & 0.2 & 77.21 & 76.63 & 62.94 & 51.10 & 23.99 \\\\ & 0.3 & 75.84 & 75.23 & 62.15 & 50.44 & 23.97 \\\\ & 0.5 & 73.73 & 72.61 & 59.41 & 48.87 & 22.40 \\\\ \\hline \\hline \\multirow{4}{*}{0.3} & 0.1 & 79.51 & 78.77 & 63.78 & 52.95 & 25.94 \\\\ & 0.2 & **79.80** & **78.84** & **64.73** & 53.26 & **28.87** \\\\ \\cline{1-1} & 0.3 & 79.67 & 77.91 & 63.33 & 52.02 & 26.28 \\\\ \\cline{1-1} & 0.5 & 76.71 & 75.75 & 61.28 & 51.56 & 26.46 \\\\ \\hline \\hline \\multirow{4}{*}{0.5} & 0.1 & 77.18 & 76.67 & 62.35 & 51.45 & 24.30 \\\\ \\cline{1-1} & 0.2 & 78.91 & 77.87 & 63.61 & **53.49** & 28.49 \\\\ \\cline{1-1} & 0.3 & 79.47 & 78.35 & 63.57 & 52.22 & 27.19 \\\\ \\cline{1-1} & 0.5 & 78.57 & 77.12 & 62.47 & 51.18 & 26.29 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 7: Performance with different hyperparameters \\(\\lambda\\) in FAD both in the training and testing stages. of noise in 4D radar points (depicted as colored points). Furthermore, our L4DR achieves an effective fusion of LiDAR and 4D radar to increase the precise recall of hard-to-detect objects and reduce false detections. 
\\begin{table} \\begin{tabular}{c c|c c c c c c c c c} \\hline \\hline Methods & Modality & IoU & Metric & Total & Normal & Overcast & Fog & Rain & Sleet & Lightsnow & Heavysnow \\\\ \\hline \\hline \\multirow{3}{*}{RTNH (NeurIPS 2022)} & \\multirow{3}{*}{4DR} & 0.5 & \\(AP_{BEV}\\) & 41.1 & 41.0 & 44.6 & 45.4 & 32.9 & 50.6 & 81.5 & 56.3 \\\\ & & & \\(AP_{3D}\\) & 37.4 & 37.6 & 42.0 & 41.2 & 29.2 & 49.1 & 63.9 & 43.1 \\\\ \\cline{3-10} & & \\(AP_{BEV}\\) & 36.0 & 35.8 & 41.9 & 44.8 & 30.2 & 34.5 & 63.9 & 55.1 \\\\ & & & \\(AP_{3D}\\) & 14.1 & 19.7 & 20.5 & 15.9 & 13.0 & 13.5 & 21.0 & 6.36 \\\\ \\hline \\multirow{3}{*}{PointPillars (CVPR 2019)} & \\multirow{3}{*}{L} & 0.5 & \\(AP_{BEV}\\) & 49.1 & 48.2 & 53.0 & 45.4 & 44.2 & 45.9 & 74.5 & 53.8 \\\\ & & & \\(AP_{3D}\\) & 22.4 & 21.8 & 28.0 & 28.2 & 27.2 & 22.6 & 23.2 & 12.9 \\\\ \\cline{3-10} & & & \\(AP_{BEV}\\) & 51.9 & 51.6 & 53.5 & 45.4 & 44.7 & 54.3 & 81.2 & 55.2 \\\\ & & & \\(AP_{3D}\\) & 47.3 & 46.7 & 51.9 & 44.8 & 42.4 & 45.5 & 59.2 & 55.2 \\\\ \\hline \\multirow{3}{*}{RTNH (NeurIPS 2022)} & \\multirow{3}{*}{L} & 0.5 & \\(AP_{BEV}\\) & 66.3 & 65.4 & 87.4 & 83.8 & 73.7 & 48.8 & 78.5 & 48.1 \\\\ & & & \\(AP_{3D}\\) & 37.8 & 39.8 & 46.3 & 59.8 & 28.2 & 31.4 & 50.7 & 24.6 \\\\ \\cline{3-10} & & & \\(AP_{BEV}\\) & 76.5 & 76.5 & 88.2 & 86.3 & 77.3 & 55.3 & 81.1 & 59.5 \\\\ & & & \\(AP_{3D}\\) & 72.7 & 73.1 & 76.5 & 84.8 & 64.5 & 53.4 & 80.3 & 52.9 \\\\ \\hline \\multirow{3}{*}{InterFusion (IROS 2023)} & \\multirow{3}{*}{L+4DR} & 0.5 & \\(AP_{BEV}\\) & 52.9 & 50.0 & 59.0 & 80.3 & 50.0 & 22.7 & 72.2 & 53.3 \\\\ & & & \\(AP_{3D}\\) & 17.5 & 15.3 & 20.5 & 47.6 & 12.9 & 9.33 & 56.8 & 25.7 \\\\ \\cline{3-10} & & & \\(AP_{BEV}\\) & 57.5 & 57.2 & 60.8 & 81.2 & 52.8 & 27.5 & 72.6 & 57.2 \\\\ & & & \\(AP_{3D}\\) & 53.0 & 51.1 & 58.1 & 80.9 & 40.4 & 23.0 & 71.0 & 55.2 \\\\ \\hline \\multirow{3}{*}{3D-LRF (CVPR 2024)} & \\multirow{3}{*}{L+4DR} & 0.5 & \\(AP_{BEV}\\) & 73.6 & 72.3 & 88.4 & 86.6 & 76.6 & 47.5 & 79.6 & **64.1** \\\\ & & & \\(AP_{3D}\\) & 45.2 & 45.3 & 55.8 & 51.8 & 38.3 & 23.4 & **60.2** & 36.9 \\\\ \\cline{3-10} & & & \\(AP_{BEV}\\) & **84.0** & 83.7 & 89.2 & **95.4** & 78.3 & 60.7 & 88.9 & **74.9** \\\\ \\cline{3-10} & & & \\(AP_{3D}\\) & 74.8 & **81.2** & **87.2** & 86.1 & 73.8 & 49.5 & **87.9** & **67.2** \\\\ \\hline \\multirow{3}{*}{L4DR (Ours)} & \\multirow{3}{*}{L+4DR} & 0.5 & \\(AP_{BEV}\\) & **77.5** & **76.8** & **88.6** & **89.7** & **78.2** & **59.3** & **80.9** & 53.8 \\\\ & & & \\(AP_{3D}\\) & **53.5** & **53.0** & **64.1** & **73.2** & **53.8** & **46.2** & 52.4 & **37.0** \\\\ \\cline{1-1} \\cline{3-10} & & & \\(AP_{BEV}\\) & 79.5 & **86.0** & **89.6** & 89.9 & **81.1** & **62.3** & **89.1** & 61.3 \\\\ \\cline{1-1} \\cline{3-10} & & & \\(AP_{3D}\\) & **78.0** & 77.7 & 80.0 & **88.6** & **79.2** & **60.1** & 78.9 & 51.9 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 8: Quantitative results of different 3D object detection methods on K-Radar dataset. We present the modality of each method (L: LiDAR, 4DR: 4D radar) and detailed performance for each weather condition. Best in **bold**, second in underline. \\begin{table} \\begin{tabular}{c c c c c c c c c c} \\hline \\hline **Class** & **Method** & **Modality** & **Total** & **Normal** & **Li. Snow** & **He. 
Snow** & **Rain** & **Sleet** & **Overcast** & **Fog** \\\\ \\hline \\multirow{3}{*}{Sedan} & Pointpillars* (CVPR2019) & 4DR & 42.8 & 35.0 & 53.6 & 48.3 & 37.4 & 37.5 & 53.9 & 77.3 \\\\ & RTNH(NIPS2022) & 4DR & 48.2 & 35.5 & 65.6 & 52.6 & 40.3 & 48.1 & 58.8 & 79.3 \\\\ & Pointpillars* (CVPR2019) & L & 69.7 & 68.1 & 79.0 & 51.5 & 77.7 & 59.1 & 79.0 & 89.2 \\\\ & InterFusion* (IROS2022) & L+4DR & 69.9 & 69.0 & 79.1 & 51.7 & 77.1 & 58.9 & 77.9 & **89.5** \\\\ \\cline{2-10} & L4DR (Ours) & L+4DR & **75.8** & **74.6** & **87.5** & **58.4** & **77.8** & **61.4** & **79.2** & 89.3 \\\\ \\hline \\hline \\multirow{3}{*}{Bus or Truck} & Pointpillars* (CVPR2019) & 4DR & 29.4 & 25.8 & 64.1 & 34.9 & 0.0 & 18.0 & 21.5 & - \\\\ & RTNH(NIPS2022) & 4DR & 34.4 & 25.3 & 78.2 & 46.3 & 0.0 & 28.5 & 31.1 & - \\\\ \\cline{1-1} & Pointpillars* (CVPR2019) & L & 53.8 & 52.9 & 84.1 & 50.7 & 3.7 & 61.8 & 77.3 & - \\\\ \\cline{1-1} & InterFusion* (IROS2022) & L+4DR & 56.9 & 56.2 & **85.7** & 40.5 & 6.4 & **70.6** & 80.5 & - \\\\ \\cline{1-1} \\cline{3-10} & L4DR (Ours) & L+4DR & 59.7 & 59.4 & 84.4 & **51.9** & **8.1** & 66.1 & **86.4** & - \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 9: Performance Comparing L4DR under each type of real-world weather condition. The best performances are highlighted in **bold**. * indicates our reproduction using open-source code from original authors. - indicates that there is no object or the original author does not report performance. L indicates LiDAR and 4DR indicates 4D radar. Figure 9: Visualization performance comparison. We visualize object detection results with LiDAR-only (left), InterFusion (middle), and our L4DR (right) on the VoD-fog dataset with different simulated fog levels (0-4, from up to bottom). The red and blue 3D bounding boxes indicate groundtruths and model predictions, respectively, the grey points are LiDAR point clouds and the colored points are 4D radar point clouds. Our L4DR shows the 4D radar point cloud after denoising by the FAD module.
LiDAR-based 3D object detection is crucial for autonomous driving. However, because the quality of LiDAR point clouds deteriorates in adverse weather, detection performance degrades accordingly. Fusing LiDAR with the weather-robust 4D radar sensor is expected to solve this problem; however, the two sensors differ significantly in data quality and in how severely they degrade under adverse weather. To address these issues, we introduce L4DR, a weather-robust 3D object detection method that effectively fuses LiDAR and 4D radar. Our L4DR proposes **M**ulti-**M**odal **E**ncoding (MME) and **F**oreground-**A**ware **D**enoising (FAD) modules to reconcile the sensor gap, constituting the first exploration of the complementarity of early fusion between LiDAR and 4D radar. Additionally, we design an **I**nter-**M**odal and **I**ntra-**M**odal (\\(\\{\\mathrm{IM}\\}^{2}\\)) parallel feature extraction backbone coupled with a **M**ulti-**S**cale **G**ated **F**usion (MSGF) module to counteract the varying degrees of sensor degradation under adverse weather conditions. Experimental evaluation on the VoD dataset with simulated fog shows that L4DR is more adaptable to changing weather conditions: it delivers a significant performance increase under different fog levels, improving 3D mAP by up to 20.0% over the traditional LiDAR-only approach. Moreover, the results on the K-Radar dataset validate the consistent performance improvement of L4DR in real-world adverse weather conditions.
1 Xiamen University, 2 Technische Universität München, 3 University of Waterloo
# Topology Change and Tensor Forces for the EoS of Dense Baryonic Matter

Hyun Kyu Lee1, _Department of Physics, Hanyang University, Seoul 133-791, Korea_

Mannque Rho2, _Institut de Physique Theorique, CEA Saclay, 91191 Gif-sur-Yvette cedex, France & Department of Physics, Hanyang University, Seoul 133-791, Korea_

Footnote 1: e-mail: [email protected]

Footnote 2: e-mail: [email protected]

November 6, 2021

## Dedication

Long before effective field theory anchored on chiral perturbation theory began to play a predominant role in nuclear physics - and with great success - Gerry Brown and one of the authors (MR) started to ask in what way chiral symmetry figured in nuclear physics, in particular (Gerry) in nuclear forces and (MR) in exchange currents. This part of the story is recounted in the Gerry Brown Festschrift volume [1]. One of the early observations among many that we made then was that it could figure particularly importantly in the structure of nuclear tensor forces [2]. And this is what led us to the proposal of what is now referred to as \"Brown-Rho scaling\" (or \"BR scaling\" for short). In the 23 years that have elapsed since then, a surprising, totally unanticipated twist in the structure of the tensor forces was discovered which, if confirmed to be correct, promises to have a novel and profound implication for dense nuclear matter, particularly for the EoS for compact stars. Gerry did not participate in this new development, but we are certain that what is described in this note would have pleased him immensely. We dedicate this note - prepared for a contribution to the \"EPJA Special Volume on Nuclear Symmetry Energy\" - to Gerry Brown.

## 1 Introduction

In constructing the EoS for compact stars, it is commonly assumed, in going beyond normal nuclear matter density \\(n_{0}=0.16\\) fm\\({}^{-3}\\), that one is dealing with nucleonic matter in the Fermi liquid state. This is the assumption that underlies a variety of nuclear models employed in the literature, generically anchored on density functionals, among which is the popular Skyrme potential model [3]. Both Walecka-type relativistic mean field theory (RMFT) [4], involving, in addition to nucleons (or baryons in general), vector mesons and scalar mesons [5], _and_ chiral Lagrangians containing four-fermi field operators taken at mean field [6], belong to the same class of models. They turn out to be fairly successful near nuclear matter density because they are equivalent to Landau Fermi liquid fixed point theory [7, 8], with the parameters of the effective quasiparticle Lagrangian \"marginal\" with vanishing beta functions in the \"large N\" limit1. Away from the fixed point but in its vicinity, one can then endow the parameters with smooth density dependence and _limit_ to two-body interactions. This can describe nuclear matter around \\(n_{0}\\) fairly well while possessing thermodynamic consistency [9].

Footnote 1: Here \\(N\\propto k_{F}\\), where \\(k_{F}\\) is the Fermi momentum.

However, it is not at all obvious how to go far beyond \\(n_{0}\\), as is required if one wants to describe what happens at densities relevant to the interior of compact stars, such as neutron stars, quark stars, hybrid stars, etc.
One currently popular approach is to write an effective Lagrangian with appropriate symmetries in terms of what are considered to be the relevant degrees of freedom in the given density regime, i.e, pions, vector mesons, scalar mesons, and baryons (nucleons and hyperons), including multi-dimension field operators and then to apply the mean field approximation. This approach has been surprisingly successful for finite nuclei as well as for infinite nuclearmatter [10]. In accessing higher densities, this mean-field approximation is simply extended without justification. Applying the mean-field approximation to densities higher than that of nuclear matter might be justified provided the Fermi-liquid picture continued to hold up to the density one drives the system to. However, there is no reason to expect that it will not break down as density increases: It could in fact do so if there were phase changes at increasing densities. For instance, new degrees of freedom such as condensation kaons or equivalently hyperons could emerge at higher density [11]. This could involve certain order parameters signally the change of the symmetries involved. But there can also be changes of matter that cannot be characterized by local order parameters, a possibility that has currently attracted a great deal of attention in condensed matter physics but thus far not in nuclear/hadron physics. In this paper, we address the state of matter that arises due to the change of topology as density increases above the normal nuclear matter density \\(n_{0}\\). We suggest that when baryons are considered as solitons, that is, skyrmions, an ubiquitous concept [12], there can be a phase change that does not belong to the standard Ginzburg-Landau-Wilson paradigm, with no identifiable local order parameters, and that could signal changeover from a Fermi liquid structure to a non-Fermi liquid structure triggered by a topology change. The question we will address is whether topology can provide a qualitatively different information, not evident in topology-less formulations, that can be implemented in an effective field theory framework and be used for quantitative calculations. The procedure we take is then to \"translate\" what is given in the skyrmion formulation _with_ topology - on crystal - to the structure of an effective Lagrangian _without_ topology, with which one can do systematic calculations. The assumption we make here is that topology change can be \"translated\" to change in the parameters of the Lagrangian, in a spirit perhaps analogous to what's being discussed in the context of quantum mechanics [13]. ## 2 Hidden Local Symmetry We consider baryons described as solitons. The soliton we are dealing with, skyrmion, is a topological object in a theory with meson fields only. The Lagrangian that gives rise to the skymion should be an effective one that is as close as possible to QCD with its chiral symmetry manifested appropriately in the energy regime we are interested in. The Lagrangian commonly taken up to date, such as the Skyrme Lagrangian, is a highly truncated one, anchored, for instance, on the large number-of-color (\\(N_{c}\\)) limit and other approximations, the validity of which is poorly justified from first principles. Even limited to pion fields only, there can be an infinite number of derivative terms valid in the large \\(N_{c}\\) limit. 
Even worse, there is no reason why vector mesons, at least the lowest-lying ones \\(\\rho\\) and \\(\\omega\\), not to mention the infinite tower, are ignored (or even integrated out) if the vector meson masses can be counted at the same chiral order as the pion mass as is the case when one is considering matter at high temperature and/or at high density [14]. But the major stumbling block is the proliferation of the number of uncontrollable parameters in the effective Lagrangian. For instance, when only the lowest-lying vector mesons \\(\\rho\\) and \\(\\omega\\) are incorporated, limited up to \\({\\cal O}(p^{4})\\) in derivative counting - to which the Skyrme quartic term belongs, there are more than fourteen independent terms. It is clearly meaningless to pick a few terms - not to mention only one, namely, the Skyrme term - out of so many without any guidance from theory and/or experiments. Fortunately a recent development from holographic QCD can improve the situation a lot better although one must admit, it is still far from realistic. Starting from the Sakai-Sugimoto model [15] of holographic QCD in 5D which is found to give a fairly good description of the nucleon in terms of an instanton [16], one can integrate out of the infinite tower in the Sakai-Sugimoto action the higher-lying vector mesons, leaving only the lowest vector mesons \\(\\rho\\) and \\(\\omega\\) as hidden local gauge fields 2. Keeping up to \\({\\cal O}(p^{4})\\) in the derivative expansion, one obtains an HLS Lagrangian with _all_ parameters of the Lagrangian fixed by two physical quantities, the pion decay constant \\(f_{\\pi}\\) and the \\(\\rho\\)-meson mass \\(m_{\\rho}\\)[17]. This miraculous simplification results thanks to a \"master formula\" that gives in terms of the two constants all the coefficients of the \\({\\cal O}(p^{4})\\) Lagrangian. It is significant that the master formula holds even for the 5D YM action in curved space arrived at bottom-up with the hidden gauge fields \"emerging\" from low energy chiral dynamics [18]. Footnote 2: Whether this integrating-out procedure is correct in the bulk gravity sector is not clear. There are a few serious caveats in this procedure that we should point out. The Lagrangian so obtained is valid only in the limit that both the \\(N_{c}\\) (number of colors) and the \\(\\lambda=g_{c}^{2}N_{c}\\) (t' Hooft constant) tend to infinity. Neither is infinite in any sense in Nature: Numerically \\(N_{c}\\) is 3 and for phenomenology, \\(\\lambda\\) is of order 10. The instanton mass or equivalently the skyrmion mass resulting from this Lagrangian goes like \\(\\sim N_{c}\\lambda\\). If one were to work with the leading order in both \\(N_{c}\\) and \\(\\lambda\\), the space would then be flat and the \\(\\omega\\) meson would be decoupled. The instanton size would then shrink to a point. In this limit, the standard collective-quantization of the soliton which gives the \\(1/N_{c}\\) correction goes haywire, leading to a totally absurd splitting between the nucleon and its rotational excitation \\(\\Delta\\)[17]. It is only at the next order in \\(\\lambda\\), i.e., \\({\\cal O}(\\lambda^{0})\\), that the \\(\\omega\\) meson couples to give a finite size to the soliton and make the collective quantization sensible. Thus \\(1/\\lambda\\) corrections make qualitatively important effects for baryon structure. What about higher order \\(1/\\lambda\\) corrections? We do not know. There is also the problem of \\(1/N_{c}\\) corrections. 
The HLS Lagrangian we have, valid to the chiral order \\({\\cal O}(p^{4})\\), is of leading order in \\(N_{c}\\). There must be loop corrections that are of the same chiral order but subleading in \\(N_{c}\\). Such corrections are, however, expected to be quantitatively important for certain hadron structure. For instance in the Skyrme model with pions only, the Casimir energy which contributes at subleading order, \\({\\cal O}(N_{c}^{0})\\), comes out to be \\(\\sim 1/3\\) of the leading order mass in magnitude. Unfortunately it is not known how to compute loop corrections in the bulk (gravity) sector in which the HLS Lagrangian we are working with is defined.3 Footnote 3: Elegant and sometimes powerful though it may be, the large \\(N_{c}\\) consideration fails in providing the properties of nuclear matter. In fact, in the large \\(N_{c}\\) limit, nuclear matter as it is known does not exist. There have been a flurry of papers discussing “cold nuclear matter” in the large \\(N_{c}\\) and large \\(\\lambda\\) limits based on gravity-gauge duality, none of which, as far as we know, have any resemblance to Nature. When it is pointed out that the theory does not agree with Nature, an answer offered is that “it is Nature’s problem, not the theory’s.” Given these caveats which seem to present serious obstacles, one may wonder, how can one exploit such a theory? What we are suggesting is that it is the topological structure that can be exploited independently of dynamical details. Half-Skyrmions The key observation that led to our reasoning is that when skyrmions are put on crystal lattice to simulate many-nucleon systems, the energetically most favored state is one in which the skyrmions fractionize into half-skyrmions [19, 20]. This feature is more or less independent of the crystal symmetry involved. Our basic premise in what follows is that this can be taken as established. Now when averaged over the single shell, the chiral condensate \\(\\sigma\\propto\\langle\\bar{q}q\\rangle\\) (where \\(q\\) is the chiral quark massless in the chiral limit) is found to go to zero although it is not locally zero. This property is also generic, independently of the detailed structure, symmetry and dynamics, of the Lagrangian used for the skyrmion. This is a topology change, later identified as a sort of phase transition. The resulting half-skyrmion matter possesses an enhanced symmetry that arises at certain high baryon density [20, 21]. Where that density is located depends on dynamical details of the Lagrangian and will constitute one of the most crucial practical issues for phenomenology considered in this paper. It is remarkable that the presence of such a topology change in the scheme we are adopting - which could be free of dynamical complications - is found to have a big impact on the properties of the EoS of dense matter. ### The skyrmion-half-skyrmion transition density \\(n_{1/2}\\) In order for such a topology change to be relevant to the structure of baryonic matter, it must not be too high in density, for if it were too high, then first of all it could not be accessed experimentally and secondly the HLS Lagrangian we are dealing with could not be trusted. It should not be too low relative to normal nuclear matter density either, for if it were too low, then it would be in conflict with experimental data that are accurately known. It should therefore be above \\(n_{0}\\) but not too far above [22]. 
There is a hint from the \\(A=4\\) nucleus described as the \\(B=4\\) skyrmion that the classical configuration is of 8 half-skyrmions [23], which could imply that the classical half-skyrmion structure could already be present in the alpha particle. To locate theoretically the transition density \\(n_{1/2}\\) in dense matter, we take the baryon HLS Lagrangian (BHLS for short) described above [24]. It has the right degrees of freedom to control the location of \\(n_{1/2}\\). Both vector mesons are found to be equally important. First of all, without them, the density \\(n_{1/2}\\) comes much too low to be realistic. With the \\(\\rho\\) but without the \\(\\omega\\), however, it lies much too high. On the other hand, scalar mesons, the only degrees of freedom that are not controlled by hidden local symmetry, have very little influence [21]. With both the \\(\\rho\\) and \\(\\omega\\) included, the reasonable density range comes out to be \\[1.5\\mathrel{\\hbox to 0.0pt{\\lower 4.0pt\\hbox{ $\\sim$}}\\raise 1.0pt\\hbox{$<$} }n_{1/2}/n_{0}\\mathrel{\\hbox to 0.0pt{\\lower 4.0pt\\hbox{ $\\sim$}}\\raise 1.0pt\\hbox{$<$} }2.0. \\tag{1}\\] Within this range the results we obtain are robust. ### Intrinsic parameter changes at \\(n_{1/2}\\) We now look at what takes place to baryonic matter at \\(n_{1/2}\\). As noted, the changeover at that density has no apparent local order parameter. We will see that it can be associated with qualitative changes in the EoS, that will be attributed to a changeover from Fermi liquid to non-Fermi liquid. Here we shall discuss the effects on the parameters of the hadrons propagating on the skyrmion background provided by the HLS Lagrangian. Skipping the details, we summarize the results from [24]: 1. The in-medium pion decay constant \\(f_{\\pi}^{*}\\) figuring in the _effective_ HLS Lagrangian is found to drop proportionally to4 Footnote 4: The constant \\(c\\) could be determined in precision experiments with pions. What is quoted here is a rough indication. \\[f_{\\pi}^{*}/f_{\\pi} \\approx 1/(1+cn/n_{0}),\\ \\ c\\approx 0.2\\ \\ {\\rm for}\\ \\ n<n_{1/2}\\,,\\] (2) \\[\\approx 0.8\\ \\ {\\rm for}\\ \\ n\\geq n_{1/2}.\\] What is noteworthy here is that while the pion decay constant falls with density according to what is expected in chiral perturbation theory up to \\(n_{1/2}\\) - and consistent with experiments up to \\(n_{0}\\), it stops dropping at the skyrmion-half-skyrmion changeover density and stays constant up to possible chiral restoration point \\(n_{c}\\)5. Footnote 5: It is found on the ground of RG properties of HLS theory in medium that a fixed point called “dialton limit fixed point” (DLFP) intervenes before reaching the chiral transition point [25]. Thus the theory must break down before reaching \\(n_{c}\\). This should be understood when we say “toward chiral restoration.” 2. The in-medium nucleon mass (or rather the in-medium \\(B=1\\) soliton mass) drops proportionally to the pion decay constant as predicted in the large \\(N_{c}\\) limit. Hence it has the behavior \\[m_{N}^{*}/m_{N} \\approx 1/(1+cn/n_{0}),\\ \\ c\\approx 0.2\\ \\ {\\rm for}\\ \\ n<n_{1/2}\\,,\\] (3) \\[\\approx 0.8\\ \\ {\\rm for}\\ \\ n\\geq n_{1/2}.\\] Again the remarkable feature in this prediction is that the nucleon mass stops dropping at \\(n_{1/2}\\). This means that the nucleon mass has a major portion \\(\\sim 0.8m_{N}\\) that does not disappear as the quark condensate tends toward zero. 
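For orientation, the piecewise parameterization of Eqs. (2)-(3) can be tabulated directly. The short sketch below (plain Python; it is not part of the original crystal calculation) assumes \\(n_{1/2}=2n_{0}\\) for concreteness - the analysis above only constrains \\(n_{1/2}\\) to the range in Eq. (1) - and uses the rough value \\(c\\approx 0.2\\) quoted in Eq. (2); the two branches need not join smoothly at \\(n_{1/2}\\), since both numbers are only indicative.

```python
# Numerical sketch of Eqs. (2)-(3): f_pi*/f_pi ~ m_N*/m_N fall like 1/(1 + c n/n0)
# below n_1/2 and stay frozen at ~0.8 in the half-skyrmion phase.
# Assumptions: n_1/2 = 2.0 n0 (the text only gives 1.5 <~ n_1/2/n0 <~ 2.0) and c = 0.2.

C = 0.2          # rough slope parameter quoted in Eq. (2)
N_HALF = 2.0     # assumed skyrmion/half-skyrmion changeover density in units of n0

def in_medium_ratio(n_over_n0):
    # approximate common ratio f_pi*/f_pi ~ m_N*/m_N as a function of n/n0
    if n_over_n0 < N_HALF:
        return 1.0 / (1.0 + C * n_over_n0)
    return 0.8   # the ratio stops dropping at n_1/2 and stays ~0.8 toward n_c

for n in (0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 5.0):
    print(f'n = {n:3.1f} n0 :  f_pi*/f_pi ~ m_N*/m_N ~ {in_medium_ratio(n):.3f}')
```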
In order to extract the properties of the vector mesons, it turns out that the skyrmion crystal as worked out in [21, 24] is not sufficient. One expects on general ground the vector meson masses to scale as \\[m_{\\rho}^{*}/m_{\\rho} \\approx (f_{\\pi}^{*}/f_{\\pi})\\ \\ {\\rm for}\\ \\ n<n_{1/2}\\,, \\tag{4}\\] \\[\\approx (f_{\\pi}^{*}/f_{\\pi})(g_{\\rho}^{*}/g_{\\rho})\\ \\ {\\rm for}\\ \\ n\\geq n_{1/2},\\] and \\[m_{\\omega}^{*}/m_{\\omega} \\approx (f_{\\pi}^{*}/f_{\\pi})\\ \\ {\\rm for}\\ \\ n<n_{1/2}\\,, \\tag{5}\\] \\[\\approx (f_{\\pi}^{*}/f_{\\pi})(g_{\\omega}^{*}/g_{\\omega})\\ \\ {\\rm for}\\ \\ n\\geq n_{1/2}.\\] Here \\(g_{\\rho}\\) is the \\(SU(2)\\) hidden gauge coupling for \\(\\rho\\) and \\(g_{\\omega}\\) is defined similarly for \\(U(1)\\) gauge field for the \\(\\omega\\) meson. \\(U(2)\\) symmetry is not assumed for \\(\\rho\\) and \\(\\omega\\). The skyrmion crystal calculation does not provide information on the couplings \\(g_{\\rho,\\omega}\\). In (hidden) gauge theory, they are given by loop corrections which involve \\(1/N_{c}\\) corrections. Such loop effects are evidently not captured in the crystal calculation. In fact it is not clear how to incorporate them within the skyrmion crystal calculation. For this we have to resort to other approaches available within the HLS framework. This we do as follows. From the work of Harada and Yamawaki using the RG flow of HLS theory [14], we have, as density approaches the chiral restoration density \\(n_{c}\\), the vector manifestation (VM for short) fixed point \\[m_{\\rho}^{*}/m_{\\rho}\\sim g_{\\rho}^{*}/g_{\\rho}\\sim\\langle\\bar{q }q\\rangle^{*}/\\langle\\bar{q}q\\rangle\\to 0\\ \\ {\\rm as} \\tag{6}\\] \\[n\\to n_{c}. \\tag{7}\\] There is however no information on the \\(U(1)\\) coupling \\(g_{\\omega}\\) from the RG analysis [14], so we do not know the behavior of the \\(\\omega\\) meson for \\(n\\mathrel{\\hbox to 0.0pt{\\lower 4.0pt\\hbox{ $\\sim$}}\\raise 1.0pt\\hbox{$>$}}n_{1/2}\\). Consider now how mesons couple to the nucleon. Here hidden local symmetry brings in something hitherto unexpected. The effective coupling of the vector meson \\(V=(\\rho,\\omega)\\) has the form \\[g_{VNN}^{*}=g_{V}^{*}F_{V}^{*}. \\tag{8}\\] The coupling to the nucleon picks up a density dependent function \\(F_{V}\\). This function is not transparent in the crystal calculation but it seems to be implicit in it as will be shown below in connection with the symmetry energy. This function runs with density as one can see in the one-loop RG analysis in baryon-implemented HLS [25]. As density approaches the \"dilaton-limit fixed point\" (DLFP) \\(n_{dl}\\), \\(F_{\\rho}\\) tends to zero: \\[F_{\\rho}^{*}\\to 0\\ \\ {\\rm as}\\ \\ n\\to n_{dl}<n_{c} \\tag{9}\\] whereas \\(F_{\\omega}^{*}\\) does not run at one-loop order. It could possibly run at higher loops, but must scale slowly. As a first approximation, we will take it to be unscaling. In short, as density goes above \\(n_{1/2}\\), the \\(\\rho\\)-NN coupling tends to zero because of the dropping \\(F_{\\rho}\\), accentuated in the vicinity of \\(n_{c}\\) by the gauge coupling \\(g_{\\rho}\\) approaching the VM fixed point. On the other hand, the \\(\\omega\\)-NN coupling may stay un-dropping until very near \\(n_{c}\\) although it is not known how the gauge coupling \\(g_{\\omega}\\) scales. In fact the phenomenological analysis in [22] requires that the \\(\\omega\\)-NN coupling scale little, if at all. 
### New \"BLPR\" scaling We now incorporate the above scaling parameters into the HLS Lagrangian that can be used to compute the properties of dense matter. This amounts to transferring the topology change to changes in the parameters of the HLS Lagrangian. This brings major modifications at high density to the scaling Lagrangian of 1991 [26] referred to as \"old BR\". The modifications come only for density \\(n\\geq n_{1/2}\\), with the properties below \\(n_{1/2}\\) being essentially the same as in [26]. Since the new BR we have here is gotten from recent developments that involve also Byung-Yoon Park (skyrmion crystal) and Hyun Kyu Lee (dilaton), following [27], we refer to this new BR as \"BLPR scaling.\" We will argue that the translation of the topology change into parameter change of the Lagrangian has a support in the structure of the symmetry energy. Effects on Tensor Forces We shall now apply the above scaling properties to the symmetry energy of asymmetric nuclear matter and show that the translation is at least qualitatively validated. Since the in-medium nucleon remains heavy within the density range involved, we can use the NR approximation for the nucleon coupling to the pion and the \\(\\rho\\) and write down the tensor forces implied by the HLS Lagrangian. They take the form \\[V_{M}^{T}(r) =S_{M}\\frac{f_{NM}^{2}}{4\\pi}m_{M}\\tau_{1}\\cdot\\tau_{2}S_{12} \\tag{10}\\] \\[\\left(\\left[\\frac{1}{(m_{M}r)^{3}}+\\frac{1}{(m_{M}r)^{2}}+\\frac{1 }{3m_{M}r}\\right]e^{-m_{M}r}\\right),\\] where \\(M=\\pi,\\rho\\), \\(S_{\\rho(\\pi)}=+1(-1)\\). Note that the tensor forces come with an opposite sign between the pion and \\(\\rho\\) tensors. It is found to be a good approximation to have the pion tensor unaffected by the density for the range of density we are concerned with. As for the \\(\\rho\\) tensor, apart from the scaling mass \\(m_{\\rho}^{*}\\), it is the scaling of \\(f_{N\\rho}\\) that is crucial. Written in terms of the parameters of the baryon HLS Lagrangian (BHLS), we have the ratio \\[R\\equiv\\frac{f_{N\\rho}^{*}}{f_{N\\rho}}\\approx\\frac{g_{\\rho NN}^{*}}{g_{\\rho NN }}\\frac{m_{\\rho}^{*}}{m_{\\rho}}\\frac{m_{N}}{m_{N}^{*}}. \\tag{11}\\] It follows from the old and BLPR scaling relations that \\[R \\approx 1\\quad\\mbox{for}\\quad 0\\mathrel{\\hbox to 0.0pt{\\lower 4.0pt\\hbox{ $\\sim$}}\\raise 1.0pt\\hbox{$<$}}n\\mathrel{\\hbox to 0.0pt{ \\lower 4.0pt\\hbox{ $\\sim$}}\\raise 1.0pt\\hbox{$<$}}n_{1/2} \\tag{12}\\] \\[\\approx \\left(\\frac{F_{\\rho}^{*}}{F_{\\rho}}\\right)\\Phi^{2}\\ \\ \\mbox{for}\\ \\ n_{1/2}\\mathrel{\\hbox to 0.0pt{ \\lower 4.0pt\\hbox{ $\\sim$}}\\raise 1.0pt\\hbox{$<$}}n\\mathrel{\\hbox to 0.0pt{ \\lower 4.0pt\\hbox{ $\\sim$}}\\raise 1.0pt\\hbox{$<$}}n_{c} \\tag{13}\\] where we have defined a scaling function that has - in the chiral limit - the VM fixed point, \\[\\Phi\\approx g_{\\rho}^{*}/g_{\\rho}\\to 0\\ \\ \\mbox{as}\\ \\ n\\to n_{c}. \\tag{14}\\] We note that the ratio \\(R\\) will be strongly suppressed for \\(n>n_{1/2}\\): In addition to the suppression by the factor \\(\\Phi^{2}\\) that drops in going to the vector manifestation fixed point, the approach to the dilaton-limit fixed point \\(F_{\\rho}^{*}=0\\) (see Eq. (16)) would make \\(R\\) drop _faster_ approaching the VM fixed point. This would make the \\(\\rho\\) tensor killed extremely rapidly in the region \\(n>n_{1/2}\\). The drastic change in the tensor forces going over to the half-skyrmion phase is illustrated in Fig. 1. 
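As a crude numerical illustration of the mechanism displayed in Fig. 1, the sketch below (plain Python; not the calculation used for the figure) evaluates only the radial factor of Eq. (10) for the pion and the \\(\\rho\\) at a single representative internucleon distance and multiplies the \\(\\rho\\) piece by \\(R^{2}\\), since the tensor strength goes as \\(f_{N\\rho}^{2}\\). The free-space masses, the distance \\(r=1\\) fm, and the sample values of \\(R\\) are illustrative assumptions; the couplings, the spin-isospin operators, and the dropping in-medium \\(\\rho\\) mass are left out.

```python
import math

HBARC = 197.327   # MeV fm, converts meson masses to inverse fm

def radial_tensor(m_mev, r_fm):
    # radial factor of Eq. (10): m * [1/(mr)^3 + 1/(mr)^2 + 1/(3 mr)] * exp(-mr)
    x = (m_mev / HBARC) * r_fm
    return (m_mev / HBARC) * (1.0 / x**3 + 1.0 / x**2 + 1.0 / (3.0 * x)) * math.exp(-x)

M_PI, M_RHO = 138.0, 770.0   # free-space masses in MeV (illustrative values)
r = 1.0                      # fm, a single representative internucleon distance

pion = radial_tensor(M_PI, r)                 # S_pi = -1 in Eq. (10); taken as unscaled
for R in (1.0, 0.5, 0.1):                     # R = f*_Nrho / f_Nrho of Eqs. (11)-(13)
    rho = R**2 * radial_tensor(M_RHO, r)      # S_rho = +1; strength enters as f_Nrho^2
    print(f'R = {R:3.1f} :  pion factor = {pion:.3f},  rho factor = {rho:.5f}')
```

Even at this crude level one sees the point: the \\(\\rho\\) contribution, which enters with the opposite sign and cancels part of the pion tensor below \\(n_{1/2}\\), becomes negligible once \\(R\\) is suppressed, leaving only the pion tensor. Fig. 1 shows the corresponding full calculation.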
Here scaling parameters that are considered to be reasonable are used. The general feature will not be modified for small variation of parameters in the vicinity of what's picked. It is seen that with the old BR where the skyrmion-half-skyrmion phase change is not taken into account, the net tensor force decreases continuously in density with the attraction vanishing at \\(n\\mathrel{\\hbox to 0.0pt{\\lower 4.0pt\\hbox{ $\\sim$}}\\raise 1.0pt\\hbox{$<$}}3n_{0}\\) for the given parameters. In a stark contrast, with the BLPR, the \\(\\rho\\) tensor force nearly disappears at \\(n\\sim 3n_{0}\\) leaving only the pion tensor active, making the strength of the tensor force increase at \\(n_{1/2}\\). Taking into account the factor \\(F_{\\rho}^{*}/F_{\\rho}\\) with its fixed point would make the suppression of the \\(\\rho\\) tensor even more dramatic than what's given in Fig. 1. ## 5 Breakdown of Mean Field Theory in the Half-Skyrmion Phase: A Conjecture One of the most direct and immediate consequences of the change in tensor forces is on the symmetry energy \\(S\\), the coefficient multiplying the factor \\(\\alpha^{2}=[(N-P)/(N+P)]^{2}\\) (where \\(N(P)\\) is the number of neutrons(protons)) in the expansion of the energy per particle \\(E\\) of asymmetric nuclear system. Assuming with others [28, 29, 30] that near the equilibrium nuclear matter density, the symmetry energy is dominated by the tensor forces, we have, using closure approximation [28], \\[S\\sim\\frac{12}{\\bar{E}}\\langle V_{T}^{2}(r)\\rangle \\tag{15}\\] where \\(\\bar{E}\\approx 200\\) MeV is the average energy typical of the tensor force excitation and \\(V_{T}\\) is the radial part - with the scaling factor \\(R\\) taken into account - of the net tensor force. One can see from Fig. 1 that \\(S\\) decreases until \\(n_{1/2}\\) but then turns over and increases as the pion tensor takes over. This predicts a cusp at \\(n_{1/2}\\). This cusp structure is precisely reproduced in the skyrmion description as shown in [31]. In the skyrmion crystal, the symmetry energy arises as a \\(1/N_{C}\\) term coming from the collective quantization. It is subleading in \\(N_{c}\\), but it is well-defined and expected to be robust, independent of dynamical details. We now argue that the above cusp structure is lost if we apply the mean-field approximation to the Lagrangian BHLS we are using. In order to do the mean field which is justified near \\(n_{0}\\) as mentioned, we need to implement a scalar field to the baryon HLS Lagrangian. Otherwise we cannot have nuclear saturation. With the dilaton scalar field \\(\\chi\\) introduced via trace anomaly as pseudo-Goldstone field to encode spontaneously broken conformal symmetry, one can do the mean field calculation to describe matter at the equilibrium density \\(n_{0}\\). With a suitable scaling in the Lagrangian parameters that satisfy BR scaling, the modelworks fairly well at near \\(n_{0}\\)[9]. Here the relevant scalar is a chiral singlet or in a more realistic description, dominantly chiral singlet with small non-singlet admixtures. However as density increases toward the dilaton-limit fixed point, in order to exhibit correct chiral symmetry, the same baryonic HLS should flow to a Gell-Mann-Levy-type Lagrangian in which the scalar field becomes the fourth component of the chiral four vector of \\(SU(2)_{L}\\times SU(2)_{R}\\). This means that at some density, say, above \\(n_{1/2}\\), the dilaton limit fixed point is approached [25]: \\[F_{\\rho}^{*}\\to 0,\\ \\ n\\to n_{dl}<n_{c}. 
\\tag{16}\\] Doing mean field with this Lagrangian gives \\[S\\sim(F_{\\rho}^{*}/F_{\\pi}^{*})^{2}n. \\tag{17}\\] Since \\(F_{\\pi}^{*}\\) stays more or less constant up to the would-be critical density \\(n_{c}\\), the scaling of the symmetry will be dictated by \\(F_{\\rho}^{*}\\). As \\(F_{\\rho}^{*}\\) drops to zero approaching the DLFP \\(n_{dl}\\), \\(S/n\\) will continue to decrease after \\(n_{1/2}\\). This is in disagreement with both the skyrmion prediction [31] and the BLPR-scaling-implemented effective field theory prediction (15) [25], both of which in some ways, seem to capture many-body correlation effects that go beyond the mean field. Given that the dilaton-implemented BHLS Lagrangian is to _describe simultaneously_ both \\(n\\sim n_{0}\\) and \\(n\\sim n_{dl}\\), it must be that the mean field approximation breaks down as one enters into the half-skyrmion phase. This leads to our conjecture:The skyrmion-half-skyrmion transition that involves a topology change is tantamount to a Fermi-liquid-non-Fermi liquid transition. There are similar cases in condensed matter physics where the change from Fermi-liquid to non-Fermi liquid can be precisely formulated in terms of thermodynamic properties. For instance in [32], in a high pressure study of certain metallic state in the presence of topologically distinct spin textures (i.e., hedgehogs) it is seen that spin correlations with non-trivial topology can drive a breakdown of Fermi liquid structure. In our case, although not evident, the topology associated with skyrmions describes, near the equilibrium nuclear matter, that Fermi-liquid structure is destroyed going into the half-skyrmion phase. The deviation from canonical Fermi-liquid structure manifests itself in that the quasiparticle structure of dominant single-particle propagator is destroyed. As will be elaborated below, this non-quasiparticle structure is seen in the EoS for dense baryonic matter in a formalism in which higher-order correlations are taken into account [22]. ## 6 Prediction for the EoS of Compact-Star Matter In order to confront nature, we need to perform quantum calculations, going beyond the mean field. The features discussed above are quasi-classical and qualitative, so what we have obtained have to be incorporated into a more quantitative formalism. Our procedure is to inject the parameters scaling according to the BLPR into the baryon hidden local symmetric Lagrangian and since the quasiparticle approximation is no longer reliable, to do higher-order many-body calculations for densities \\(n>n_{1/2}\\). How to go about doing this is not precisely formulated up to date. In [22], this strategy is implemented in the \\(V_{lowk}\\)-EFT approach, by relying on available experiments for guidance. In the density regime \\(n\\leq n_{0}\\), the parameters are controlled by available data. It seems reasonable to assume - that we do - that this procedure holds (at least) up to \\(n_{1/2}\\) which is not far above \\(n_{0}\\). This density regime will be referred to as Region I. It is in the density regime \\(n>n_{1/2}\\) - that we refer to as Region II - that the topology change brings in significant modifications in the \\(V_{lowk}\\)-implemented EFT approach. The predictions made [22] with the scalings in Region II as sketched above suitably6 taken into account in the \\(V_{lowk}\\)-EFT are summarized in the figures given below. Footnote 6: There is no model-independent information on the precise scaling behavior. 
It is largely guided with the limited source from experiments. However what has been used in [22] has been recently given a support from different approaches anchored on hidden local symmetry [25]. In Fig. 2 are given the EoS for symmetric nuclear matter and the symmetry energy \\(S\\). It is noteworthy that the onset of half-skyrmion phase makes the EoS of symmetric nuclear matter _softer_ whereas the symmetry energy becomes _stiffer_. This feature can be easily understood in terms of the drastic change in the tensor forces associated with the phase change and how nuclear forces enter in EoS. In condensed matter, the changeover from Fermi liquid to non-Fermi liquid linked to a topology change modifies the temperature dependence in the resistivity, for instance [32]. It would be interesting in the case we are dealing with to establish more precise scaling properties in density as one goes from one phase to the other. One can already see a hint in a polytrope fit for \\(E_{0}/A\\)[22]. In Fig. 2 is seen that the polytrope fit requires two different forms with quite different density dependence changing over at \\(n_{1/2}\\). To give an idea as to how this model can accommodate massive neutron stars, we show the results from [22] without going into details of the choice of scaling etc. Fig. 3 depicts the mass-radius trajectories and maximum densities of neutron stars for two values of \\(n_{1/2}\\) that give the rough range predicted by HLS skyrmions of [24]. The results are more or less insensitive to the precise value of \\(n_{1/2}\\). What is striking is the effect of topology change for both the maximum mass and radius, with the massive star of \\(\\sim 2M_{\\odot}\\) having a bigger radius predicted with the topology change than a less massive star of \\(\\sim 1.6M_{\\odot}\\) without. There is also a marked difference in the maximum density in the interior of the star. Without the topology change the EoS is soft and hence packs a bigger - by a factor of 2 - density. Figure 2: EoS for symmetric nuclear matter \\(E_{0}/A\\) with polytrope fits of the \\(n_{1/2}=2n_{0}\\) and symmetry energy \\(S\\). ## 7 Origin of the Nucleon Mass In confronting the EoS compatible with the observed properties of the \\(\\sim 2\\) solar neutron star, it was crucial in [22] to keep the nucleon mass non-scaling up to the maximum density relevant, \\(\\sim 5.5n_{0}\\). This feature is, a posteriori, supported by the skyrmion crystal calculation [24] as seen in Figure 4. This feature is also reproduced in an RG-analysis with baryon HLS Lagrangian [25]. If the nucleon mass were to remain unchanged as density approaches first the DL fixed point and then VM fixed point, this would imply that there is a large part of the nucleon mass that remains \"unmelted\" when the quark condensate melts. In terms of the nucleon mass expressed schematically as \\(m_{N}=m_{0}+\\Delta m(\\langle\\bar{q}q\\rangle)\\), one can state that \\(\\Delta m\\to 0\\) but \\(m_{0}\\) remains unaffected when \\(\\langle\\bar{q}q\\rangle\\to 0\\). This is reminiscent of the parity-doublet nucleon model with a constant \\(m_{0}\\)[33] that remains when chiral symmetry is restored. In this model, \\(m_{0}\\) is a chiral-invariant and could be considered as an intrinsic quantity of QCD. The constant \\(m_{0}\\) found in the skyrmion crystal, however, appears to reflect an emergent rather than intrinsic symmetry. 
In fact, the half-skyrmion crystal phase has a higher spatial symmetry than the skyrmion phase and it seems likely that \\(m_{0}\\) arises from collective in-medium correlations. As pointed out in [22], the presence of \\(m_{0}\\) in the nucleon mass brings tension in the application of the constituent quark model (CQM) to in-medium hadron masses. The CQM has a strong support by the large \\(N_{c}\\) consideration of QCD [34]. However while it can be applied to, say, the \\(\\rho\\) meson, with the constituent quark mass subject to the vector manifestation fixed point, it cannot be to the baryon mass with a large \\(m_{0}\\). This dichotomy of the meson and baryon in-medium masses is evident when the baryon is considered as a chiral bag [35] and could perhaps be understood in terms of large \\(N_{c}\\) QCD recently formulated by Kaplan [36]. Figure 3: Mass-radius trajectories (left panel) and central densities (right panel) of neutron stars calculated with topology change using \\(n_{1/2}=2.0\\) (A) and \\(1.5n_{0}\\) (B) and without topology change (C). ## 8 Experimental Tests Since the effect on the tensor forces is the most striking, the measurement of the symmetry energy will be one of the primary targets for experimental tests of the theory developed in this note. The BLPR-implemented-EFT approach [22] backed by the skyrmion crystal calculation [24] and the RG analysis [25] indicates that a slope change at \\(n_{1/2}\\) in the symmetry energy could be measured in forthcoming experiments at RIB-type accelerators as well as at FAIR/GSI and other accelerators. Another direction is to zero-in on the structure of tensor forces in neutron-rich nuclei. In an elegant analysis, Otsuka and collaborators showed that the \"bare\" tensor forces are left un-renormalized in certain channels by both short-range correlations and core polarizations [37]. It is found in particular in the monopole matrix elements that the intrinsic tensor forces remain unscathed by nuclear correlations whereas other components can be massively renormalized. This strongly suggests that the bare tensor force strength can be pinned down in nuclear medium by looking at processes sensitive to the monopole matrix element such as single-particle shell evolution [38] and if there is any medium effect reflecting the _fundamental_ change in the chiral condensate due to density-induced vacuum change, it should show up in a pristine way. If \\(n_{1/2}\\) is close to \\(n_{0}\\), RIB-type accelerators could provide a hint to this phase change associated with a topology change. ### Acknowledgments We are grateful for valuable discussions with Tom Kuo, Won-Gi Paeng, Byung-Yoon Park and Chihiro Sasaki with whom important part of the research discussed here was performed. This work was partially supported by the WCU project of the Korean Ministry of Educational Science and Technology (R33-2008-000-10087-0). Figure 4: Crystal size \\(L\\) dependence of soliton mass (large \\(N_{c}\\) proton mass) with the FCC crystal background. Density increases to the right. The solid curve is the result of the model adopted in [24]. Others represent results when different parameterizations than what’s given by the master formula are used. ## References * [1]_From Nuclei to Stars: Festschrift in Honor of Gerald E. Brown_ (World Scientific, Singapore, 2011) ed. S. Lee. * isospin interaction,\" Phys. Lett. B **237**, 3 (1990). * [3] T. Skyrme, \"The effective nuclear potential,\" Nucl. Phys. **9**, 615 (1959). * [4] B. D. Serot and J. D. 
Walecka, \"Recent progress in quantum hadrodynamics,\" Int. J. Mod. Phys. E **6**, 515 (1997) [nucl-th/9701058]. * field theory,\" Nucl. Phys. A **370**, 365 (1981). * [6] G. Gelmini and B. Ritzi, \"Chiral effective Lagrangian description of bulk nuclear matter,\" Phys. Lett. B **357**, 431 (1995) [hep-ph/9503480]; T. -S. Park, D. -P. Min and M. Rho, \"Chiral Lagrangian approach to exchange vector currents in nuclei,\" Nucl. Phys. A **596**, 515 (1996) [nucl-th/9505017]. * [7] R. Shankar, \"Renormalization group approach to interacting fermions,\" Rev. Mod. Phys. **66**, 129 (1994). * NSF-ITP-92-132 (92,rec.Nov.) 39 p. (220633) Texas Univ. Austin - UTTG-92-20 (92,rec.Nov.) 39 p [hep-th/9210046]. * [9] C. Song, G. E. Brown, D. -P. Min and M. Rho, \"Fluctuations in 'BR scaled' chiral Lagrangians,\" Phys. Rev. C **56**, 2244 (1997) [hep-ph/9705255]; C. Song, \"Dense nuclear matter: Landau Fermi liquid theory and chiral Lagrangian with scaling,\" Phys. Rept. **347**, 289 (2001) [nucl-th/0006030]. * [10] T. Niksic, D. Vretenar and P. Ring, \"Relativistic nuclear energy density functionals: Mean-field and beyond,\" Prog. Part. Nucl. Phys. **66**, 519 (2011) [arXiv:1102.4193 [nucl-th]]. * [11] H.K. Lee and M. Rho, \"Hyperons and condensed kaons in compact stars,\" arXiv:1301.0067 [nucl-th]. * [12]_The Multifaceted Skyrmion_ (World Scientific, Singapre 2011) ed. G.E. Brown and M. Rho. * [13] A. D. Shapere, F. Wilczek and Z. Xiong, \"Models of topology change,\" arXiv:1210.3545 [hep-th]. * [14] M. Harada and K. Yamawaki, \"Hidden local symmetry at loop: A New perspective of composite gauge boson and chiral phase transition,\" Phys. Rept. **381**, 1 (2003) [hep-ph/0302103]. * [15] T. Sakai and S. Sugimoto, \"Low energy hadron physics in holographic QCD,\" Prog. Theor. Phys. **113**, 843 (2005); \"More on a holographic dual of QCD,\" Prog. Theor. Phys. **114**, 1083 (2005). * [16] D. K. Hong, M. Rho, H.-U. Yee, and P. Yi, \"Chiral dynamics of baryons from string theory,\" Phys. Rev. D **76**, 061901 (2007); \"Dynamics of baryons from string theory and vector dominance,\" J. High Energy Phys. **0709**, 063 (2007); K. Hashimoto, T. Sakai, and S. Sugimoto, \"Holographic baryons: Static properties and form factors from gauge/string duality,\" Prog. Theor. Phys. **120**, 1093 (2008). * [17] Y.-L. Ma, Y. Oh, G.-S. Yang, M. Harada, H. K. Lee, B.-Y. Park, and M. Rho, \"Hidden local symmetry and infinite tower of vector mesons for baryons,\" Phys. Rev. D **82**, 074025 (2012); Y.-L. Ma, G.-S. Yang, Y. Oh, and M. Harada, \"Skyrmions with vector mesons in the hidden local symmetry approach,\" Phys. Rev. D **87**, 034023 (2013). * [18] D. T. Son and M. A. Stephanov, \"QCD and dimensional deconstruction,\" Phys. Rev. D **69**, 065020 (2004) [hep-ph/0304182]. * [19] A. S. Goldhaber and N. S. Manton, Maximal symmetry of the Skyrme crystal, Phys. Lett. B198, 231 (1987). * [20] N. Manton and P. Sutcliffe, _Topological Solitons_ (Cambridge University Press, 2004) * [21] B.-Y. Park and V. Vento, in [12]. * [22] H. Dong, T. T. S. Kuo, H. K. Lee, R. Machleidt and M. Rho, \"Half-skyrmions and the equation of state for compact-star matter,\" arXiv:1207.0429 [nucl-th]. * [23] N. Manton and P. Sutcliffe, in [12]. * [24] Y. -L. Ma, M. Harada, H. K. Lee, Y. Oh, B. -Y. Park and M. Rho, \"Dense baryonic matter in hidden local symmetry spproach: Half-skyrmions and nucleon mass,\" arXiv:1304.5638 [hep-ph]. * [25] W.-G. Paeng, H.K. Lee, M. Rho and C. 
Sasaki, \"Interplay between \\(\\omega\\)-nucleon interaction and nucleon mass in dense matter,\" arXiv:1303.2898 [nucl-th]. * [26] G. E. Brown and M. Rho, \"Scaling effective Lagrangians in a dense medium,\" Phys. Rev. Lett. **66**, 2720 (1991). * [27] H. K. Lee and M. Rho, \"Flavor symmetry and topology change in nuclear symmetry energy for compact stars,\" Int. J. Mod. Phys. E **22**, 1330005 (2013) [arXiv:1201.6486 [nucl-th]]. * [28] G. E. Brown and R. Machleidt, \"Strength of the \\(\\rho\\) meson coupling to nucleons,\" Phys. Rev. C **50**, 1731 (1994). * [29] C. Xu and B. -A. Li, \"Tensor force induced isospin-dependence of short-range nucleon-nucleon correlation and high-density behavior of nuclear symmetry energy,\" arXiv:1104.2075 [nucl-th]. * [30] I. Vidana, A. Polls and C. Providencia, \"Nuclear symmetry energy and the role of the tensor force,\" arXiv:1107.5412 [nucl-th]. * [31] H. K. Lee, B. -Y. Park and M. Rho, \"Half-skyrmions, tensor forces and symmetry energy in cold dense matter,\" Phys. Rev. C **83**, 025206 (2011) [Erratum-ibid. C **84**, 059902 (2011)] [arXiv:1005.0255 [nucl-th]]. * [32] R. Ritz et al, \"Formation of a topological non-Fermi liquid in MnSi,\" Nature 497, 231 (2013). * [33] C. E. DeTar and T. Kunihiro, \"Linear sigma model with parity doubling,\" Phys. Rev. D **39**, 2805 (1989); W. -G. Paeng, H. K. Lee, M. Rho and C. Sasaki, \"Dilaton-limit fixed point in hidden local symmetric parity doublet model,\" Phys. Rev. D **85**, 054022 (2012) [arXiv:1109.5431 [hep-ph]]. * [34] S. Weinberg, \"Pions in large-\\(N\\) quantum chromodynamics,\" Phys. Rev. Lett. **105**, 261601 (2010) [arXiv:1009.1537 [hep-ph]]. * [35] G. E. Brown and M. Rho, \"The little bag,\" Phys. Lett. B **82**, 177 (1979); G. E. Brown, M. Rho and V. Vento, \"Little bag dynamics,\" Phys. Lett. B **84**, 383 (1979); M. Rho, A. S. Goldhaber and G. E. Brown, \"Topological soliton bag model for baryons,\" Phys. Rev. Lett. **51**, 747 (1983). * [36] D. B. Kaplan, \"Extended QCD,\" arXiv:1306.5818 [nucl-th]. * [37] N. Tsunoda, T. Otsuka, K. Tsukiyama and M. Hjorth-Jensen, \"Renormalization persistency of tensor force in nuclei,\" Phys. Rev. C **84**, 044322 (2011) [arXiv:1108.4147 [nucl-th]]. * [38] T. Otsuka, T. Suzuki, R. Fujimoto, H. Grawe and Y. Akaishi, \"Evolution of nuclear shells due to the tensor force,\" Phys. Rev. Lett. **95**, 232502 (2005).
When skyrmions representing nucleons are put on crystal lattice and compressed to simulate high density, there is a transition above the normal nuclear matter density (\\(n_{0}\\)) from a matter consisting of skyrmions with integer baryon charge to a state of half-skyrmions with half-integer baryon charge. We exploit this observation in an effective field theory framework to access dense baryonic system. We find that the topology change involved in the transition implies changeover from a Fermi liquid structure to a non-Fermi liquid with the chiral condensate in the nucleon \"melted off.\" The \\(\\sim 80\\%\\) of the nucleon mass that remains \"unmelted,\" invariant under chiral transformation, points to the possible origin of the (bulk of) proton mass that is not encoded in the standard mechanism of spontaneously broken chiral symmetry. The topology change engenders a drastic modification of the nuclear tensor forces, thereby nontrivially affecting the EoS, in particular, the symmetry energy, for compact star matter. It brings in stiffening of the EoS needed to accommodate a neutron star of \\(\\sim 2\\) solar mass. The strong effect on the EoS in general and in the tensor force structure in particular will also have impact on processes that could be measured at RIB-type accelerators.
# Long-term Time Series Forecasting based on Decomposition and Neural Ordinary Differential Equations Seonkyu Lim124, Jaehyeon Park14, Seojin Kim14, Hyowon Wi1, Haksoo Lim1, Jinsung Jeon1, Jeongwhan Choi1 and Noseong Park1 _Yonsei University1, Seoul, South Korea Korea Financial Telecommunications & Clearings Institute2, Seoul, South Korea_ [email protected], {jaehyun9907, bwnebs1, wihyowon, limhaksoo96, jijjs9092, jeongwhan.choi, noseong}@yonsei.ac.kr1 ## I Introduction Time series data is continuously generated from various real-world applications. To utilize this data, numerous approaches have been developed in the fields such as forecasting [1, 2, 3, 4, 5, 6, 7, 8, 9], classification [10, 11, 12, 5, 6, 13], and generation [14, 15, 16, 17]. Among them, time series forecasting is one of the most important research topics in deep learning. For time series forecasting, recurrent neural network (RNN)-based models were used, such as LSTM [18] and GRU [19]. These models performed well in time series forecasting but suffered from error accumulation due to their iterative multi-step forecasting approach, especially when dealing with long-term time series forecasting (LTSF). To overcome this challenge, there have been various attempts in the past few years. Among them, the Transformer-based approaches [2], which enable direct multi-step forecasting, show significant performance improvement. Transformer [20] has demonstrated remarkable performance in various natural language processing tasks, and its ability to effectively capture long-range dependencies and interactions in sequential data makes it suitable for application in LTSF. Despite the impressive achievements of Transformer-based models in LTSF, they have struggled with temporal information loss caused by the self-attention mechanism as a result of permutation invariant and anti-order properties [21]. Furthermore, it was demonstrated that the error upper bound of Transformer-based models, which is one of the non-linear deep learning approaches, is higher than linear regression [22]. Recently, simple Linear-based methods [21, 22], as non-Transformer-based models, show better performance compared to complex Transformer-based models. These approaches are novel attempts in LTSF where Transformer architecture has recently taken the lead. However, they have limits to comprehensively exploit the intricate characteristics of the time series dataset as the models are too simple. Inspired by these insights, we aim to design a model that takes into account the characteristics of each dataset by introducing some sophistication to the model architecture. Our proposed model, LTSF-DNODE, adeptly harnesses temporal information by employing time series decomposition and the neural ordinary differential equation (NODE) framework. The NODE framework transforms a single linear layer into time-derivative modeling. It uses a structure composed of the same single linear layer, but can better capture the complex dynamics of time series data, providing various advantages within time series processing. Our contributions can be summarized as follows: * We analyze the temporal information of each real-world time series datasets with exploratory data analysis. This investigation helps us detect the presence of seasonality, guiding us to perform decomposition appropriately. 
* Based on the NODE framework, we demonstrate the following benefits of time-derivative modeling: i) It is suitable for time series tasks by interpreting discrete linear layers as continuous linear layers, and ii) more advanced regularization is available. * Through various empirical experiments, we demonstrate that the NODE framework and decomposition according to data characteristics are effective in LTSF. ## II Preliminaries In this section, we review the decomposition method and neural ordinary differential equations. Following that, we conduct empirical explorations to investigate the effectiveness of these techniques on LTSF. ### _Problem Formulation_ The objective of LTSF is to forecast from an input sequence of historical time series data to a corresponding future sequence. Given the input historical data \\(\\mathbf{X}=[\\mathbf{x}_{1},\\dots,\\mathbf{x}_{L}]^{\\mathsf{T}}\\in\\mathbb{R}^{L \\times F}\\), LTSF models forecast \\(\\mathbf{Y}=[\\mathbf{y}_{1},\\dots,\\mathbf{y}_{H}]^{\\mathsf{T}}\\in\\mathbb{R}^{ H\\times F}\\), where \\(L\\) is the look-back window size, \\(H\\) is the forecasting horizon and \\(F\\) is the feature dimension. The LTSF problem deals with cases where \\(H\\) is longer than 1, and \\(F\\) is not restricted to univariate cases. ### _Time Series Decomposition_ Data preprocessing is used to enhance data quality as an input to the LTSF method, enabling it to deliver better outcomes. According to [23], suitable preprocessing methods for non-stationary time series increased forecasting accuracy by more than 10% on over 95% of the temporal data. Time series decomposition is a pivotal technique in time series preprocessing [24]. The decomposition methods (e.g., STL [25] and SEAT [26]) typically decompose the time series into three components: trend, seasonality, and residual component. The decomposition can be represented as follows: \\[\\mathbf{X}=\\mathbf{T}+\\mathbf{S}+\\mathbf{R}, \\tag{1}\\] where \\(\\mathbf{T},\\mathbf{S},\\mathbf{R}\\in\\mathbb{R}^{L\\times F}\\) respectively represent the trend, seasonality, and residual component. The trend is a general systematic linear or nonlinear component that changes over time and does not repeat within the given timeframe, usually identified by the two-sided simple moving average [27]. Seasonality means a pattern with a particular cycle in a time series. To extract seasonality, the classic additive decomposition method averages the detrended series \\((\\mathbf{X}-\\mathbf{T})\\) across a predetermined period. Residual is the remaining value after removing the trend and seasonality from a series. We use this method to provide an overall understanding of time series datasets with exploratory data analysis. ### _Neural Ordinary Differential Equations_ Neural ordinary differential equations (NODEs) [28] can handle time series data in a continuous manner, using the differential equation as follows: \\[\\mathbf{z}(T)=\\mathbf{z}(0)+\\int_{0}^{T}f(\\mathbf{z}(t),t;\\boldsymbol{\\theta }_{f})dt, \\tag{2}\\] where \\(f(\\mathbf{z}(t),t;\\boldsymbol{\\theta}_{f})\\), called an ODE function, is a neural network to approximate the derivative of \\(\\mathbf{z}(t)\\) with respect to \\(t\\) (denoted as \\(\\frac{d\\mathbf{z}(t)}{dt}\\)). To solve the integral problem, the NODEs use the ODE solver. The ODE solvers divide the integral time domain \\([0,T]\\) in Eq. (2) into small steps and convert the integral into many steps of additions. 
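To make this discretization concrete, the following minimal sketch (plain Python/NumPy; it is not the implementation used for LTSF-DNODE later in the paper) integrates Eq. (2) by a fixed number of small additive updates for a toy linear ODE function \\(f(\\mathbf{z}(t),t)=\\mathbf{A}\\mathbf{z}(t)\\); the matrix, the integration interval, and the number of steps are arbitrary illustrative choices.

```python
import numpy as np

def odeint_fixed_step(f, z0, t0, t1, n_steps):
    # integrate dz/dt = f(z, t) from t0 to t1 by n_steps small additive updates,
    # i.e. the integral in Eq. (2) replaced by a sum of increments
    z, t = np.asarray(z0, dtype=float), t0
    dt = (t1 - t0) / n_steps
    for _ in range(n_steps):
        z = z + dt * f(z, t)   # one additive step of size dt
        t = t + dt
    return z

A = np.array([[0.0, 1.0], [-1.0, 0.0]])        # toy ODE function f(z, t) = A z
z_T = odeint_fixed_step(lambda z, t: A @ z, [1.0, 0.0], 0.0, np.pi / 2, 1000)
print(z_T)                                     # close to [0, -1], the exact solution
```

Each pass through the loop adds \\(s\\cdot f(\\mathbf{z}(t),t)\\) to the running state, which is precisely the explicit Euler update written out next.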
For example, one step of the explicit Euler method, a typical ODE solver, is as follows: \\[\\mathbf{z}(t+s)=\\mathbf{z}(t)+s\\cdot f(\\mathbf{z}(t),t;\\boldsymbol{\\theta}_{f}), \\tag{3}\\] where \\(s\\in(0,1]\\) is step size of the Euler method. A more sophisticated method, such as the 4-th order Runge-Kutta (RK4) method, is as follows: \\[\\mathbf{z}(t+s)=\\mathbf{z}(t)+\\frac{s}{6}\\Big{(}f_{1}+2f_{2}+2f_{3}+f_{4}\\Big{)}, \\tag{4}\\] where \\(f_{1}=f(\\mathbf{z}(t),t;\\boldsymbol{\\theta}_{f})\\), \\(f_{2}=f(\\mathbf{z}(t)+\\frac{s}{2}f_{1},t+\\frac{s}{2};\\boldsymbol{\\theta}_{f})\\), \\(f_{3}=f(\\mathbf{z}(t)+\\frac{s}{2}f_{2},t+\\frac{s}{2};\\boldsymbol{\\theta}_{f})\\), and \\(f_{4}=f(\\mathbf{z}(t)+sf_{3},t+s;\\boldsymbol{\\theta}_{f})\\). Fig. 1 denotes the Euler and RK4 methods. These are some of the explicit methods that have a fixed step size. The RK4 method requires four times as much work as the Euler method in a single step. When the step size is 1, the Euler method is equivalent to the residual connection. Similarly, when \\(f\\) represents a neural network layer, the RK4 method is analogous to the dense connection. On the other hand, one of the most advanced methods, Dormand-Prince (DOPRI) [29], uses an adaptive step size. Recently, the Memory-efficient ALF Integrator [30] guaranteeing constant memory cost shows good performance. However, when NODE learns a complex dataset, the step size of the ODE solver often becomes extremely small. Consequently, this results in dynamics equivalent to a substantial number of layers, thereby increasing the training time significantly. To address this issue, Jacobian and kinetic regularizations [31] are introduced, simplifying the dynamics, increasing the step size, and reducing the training time. The NODEs demonstrated superior performance in various tasks, including time series and others, by employing continuous modeling [32, 33, 34]. Specifically, the neural rough differential equations applied to the NODEs using rough-path theory showed good performance in the long-term time series task [35, 36]. In our model, we use a single linear layer as \\(f\\) with various ODE solvers and regularizers to model times series dynamics effectively. ### _Empirical Explorations on Decomposition and NODEs_ We conduct ablation studies to explore the efficacy of decomposition and NODE in LTSF. We implement simple variants of a single linear layer model by applying the decomposition method and NODE framework: * \"Linear\" is a single linear layer identical to the one introduced in [21]. This model predicts future values based on Fig. 1: The (a) explicit Euler method and (b) RK4 method in a step. We note that the Euler method creates the residual connection and RK4 makes the dense connection given the ODE function \\(f\\) parameterized by \\(\\theta_{f}\\). past values via a weighted summation. The single linear layer is mathematically expressed as \\(\\widehat{\\mathbf{Y}}=\\mathbf{W}\\mathbf{X}\\), where \\(\\mathbf{W}\\in\\mathbb{R}^{P\\times L}\\) is a weight matrix. * \"Linear with T/R\" decomposes time series into trend and residual components and then individually learns them using single linear layers, as proposed in [21]. * \"Linear with T/S/R\" decomposes time series into trend, seasonality, and residual components, and then individually learns them through single linear layers. * \"Linear with NODE\" also uses a single linear layer with the same structure as \"Linear\" to build the ODE function in the NODE framework. 
It employs the Euler method as the ODE solver. The mathematical expression of \"Linear\" can also be depicted using the time variable \\(t\\) as follows: \\[\\widehat{\\mathbf{x}}(t_{1})=\\mathbf{W}\\mathbf{x}(t_{0}), \\tag{5}\\] where \\(\\widehat{\\mathbf{x}}(t_{1})\\) is the future sequence \\(\\widehat{\\mathbf{Y}}\\), \\(\\mathbf{x}(t_{0})\\) is the historical sequence \\(\\mathbf{X}\\). \\(\\mathbf{W}\\) is a single linear layer matrix. \"Linear with NODE\" formulation based on Eq. (5) is as follows: \\[\\frac{d\\mathbf{x}(t)}{dt} =\\frac{1}{t_{1}-t_{0}}(\\log\\mathbf{W})\\mathbf{x}(t), \\tag{6}\\] \\[\\mathbf{x}(t_{1}) =\\mathbf{x}(t_{0})+\\int_{t_{0}}^{t_{1}}\\frac{1}{t_{1}-t_{0}}(\\log\\mathbf{ W})\\mathbf{x}(t)dt. \\tag{7}\\] The parameters are trained using the adjoint sensitivity method of the NODE framework. Table I shows the LTSF results for each variant model. Based on these observations, we infer the following findings: 1. Applying time series decomposition methods enhances the performance of LTSF. This assertion is corroborated by the empirical observation that \"Linear with T/R\" shows better performance compared to the \"Linear\". 2. In the ETTh2 dataset, \"Linear with T/S/R\" demonstrates a discernible advantage over \"Linear with T/R\". This suggests that the merits of a finer-grained decomposition, particularly the extraction of seasonality, might vary depending on the characteristics of the datasets. 3. The better performance of \"Linear with NODE\" as compared to \"Linear\" demonstrates the benefits of incorporating the NODE framework in the domain of LTSF. From these observations, we can infer that time-derivate modeling based on the NODE framework and more refined time series decomposition have a positive impact on LTSF. ## III Proposed Method We explain the detailed information about our proposed model, LTSF-DNODE, in this section. We first describe an overview of how our model works, followed by detailed components and how they contribute to time series forecasting. ### _Overall Workflow_ Fig. 2 illustrates the forecasting procedure of LTSF-DNODE. It consists of three main blocks: the decomposition block (DCMP), the normalizing and denormalizing blocks (NORM & DENORM), and the NODE block (NODE). These blocks are sequentially applied to make predictions as follows: 1. Find data characteristics with exploratory data analysis. 2. An observed series \\(\\mathbf{X}\\) is given as input. The DCMP block decomposes \\(\\mathbf{X}\\) depending on data characteristics. 3. For datasets with distribution discrepancy problems, we apply the NORM block to address them. 4. Then, the NODE block forecasts the future patterns of each decomposed component. 5. In the case of the dataset normalized in (2), denormalization is performed in the DENORM block. 6. Finally, the forecasting series \\(\\widehat{\\mathbf{Y}}\\) is reconstructed by addition of the future decomposed components. ### _Exploratory Data Analysis_ #### Iii-B1 Properties To obtain insights from the datasets, we analyze their characteristics. The results of this analysis are presented in Table IV. The fundamental structure and methodology of this analysis are derived from [37]. The detailed methods used to acquire statistical properties are as follows: * \"Forecastability\" [38] refers to a measure calculated by subtracting the entropy of the Fourier decomposition of the time series from one. * \"Trend\" is the slope of the linear regression applied to the time series, adjusted according to its magnitude. 
* \"Seasonality\" is the ratio of noticeable patterns quantified by the ACF test [39]. * \"Stationarity\" is measured by the ADF test on the residual component after removing trend and seasonality components from the time series. #### Iii-B2 Decomposition DCMP block requires a pre-defined kernel size and seasonality period during the decomposition process. In order to achieve a suitable decomposition, we need to find the optimal parameters depending on the dataset. We consider various parameter combinations as candidates and conduct two tests, the auto-correlation function (ACF) test [39] and the augmented Dickey-Fuller (ADF) test, to find the optimal values among them. The ACF test examines if a sequence follows a repeating seasonal pattern with a pre-defined cycle, while the ADF test is used to check whether the sequence is stationary or not. We first divide the time series into non-overlapping window-sized sequences and then decompose them. The ACF test is conducted for analyzing the seasonality components, while the ADF test is utilized to assess the residual components. Following the tests, we can quantify both seasonality and stationarity by calculating the proportion of sequences meeting the criteria relative to the total number of sequences. This procedure is applied to all the datasets. An overview of the experimental parameters selected for each dataset and a summary of the results derived from the exploratory analysis are presented in Table IV. #### Iii-B3 Normalization We investigate the distribution of trend components in both training and testing data. If there exists a noticeable discrepancy between these distributions, we employ the NORM block to perform instance normalization. The same procedure is also applied to the residual components. The instance normalization method aids in mitigating distribution disparities and aligning the overall trend. Additional findings can be found in Section IV. ### _DCMP Block_ The DCMP block decomposes the time series into three, trend \\(\\mathbf{T}\\), seasonality \\(\\mathbf{S}\\), and residual \\(\\mathbf{R}\\) components, or two, without seasonality depending on the characteristics of the observed series \\(\\mathbf{X}\\). In the DCMP block, in order to extract the trend from the \\(\\mathbf{X}\\), we use the moving average method. To align the length of the derived trend with \\(\\mathbf{X}\\), we first apply \\(padding(\\cdot)\\), which involves pre-padding with the first value and post-padding with the last value of the input. After that, we perform an average pooling operation to extract the trend as follows: \\[\\mathbf{T}=AvgPool(padding(\\mathbf{X})). \\tag{8}\\] If the time series has a low significance of seasonality, extracting seasonality is omitted. In this case, the detrended (\\(\\mathbf{X}-\\mathbf{T}\\)) is regarded as the residual component, and the decomposition process is completed. However, if the series exhibits significant seasonality, we extract the seasonality from \\((\\mathbf{X}-\\mathbf{T})\\). We first obtain seasonal fragments by averaging \\((\\mathbf{X}-\\mathbf{T})\\) across the pre-defined period \\(P\\). 
In the fragments, each element is calculated as follows: \\[\\mathbf{S}_{i}=\\frac{1}{m}\\sum_{k=0}^{m-1}(\\mathbf{X}_{i+kP}-\\mathbf{T}_{i+kP}), \\tag{9}\\] where \\(m\\) is the smallest integer satisfying \\(L<i+mP\\) for \\(0\\leq i<P\\) and sequence length \\(L\\), \\(\\mathbf{S}_{i}\\), \\(\\mathbf{T}_{i+kP}\\) and \\(\\mathbf{X}_{i+kP}\\) denote the \\(i\\)-th element of the seasonal fragments and the (\\(i+kP\\))-th element of \\(\\mathbf{T}\\) and \\(\\mathbf{X}\\). We produce seasonal fragments with a period \\(P\\) using Eq. (9). Then we construct the overall seasonality component \\(\\mathbf{S}\\) of length \\(L\\) by tiling the obtained seasonal fragments. The residual component \\(\\mathbf{R}\\) is the remaining part of the observed series \\(\\mathbf{X}\\), obtained by subtracting the trend (and seasonality, depending on the dataset). We effectively capture the temporal information from the time series using the decomposition method, which allows us to enhance the forecasting capabilities of our model. The analysis about the significance of seasonality for the datasets we use is in Section IV-C. Fig. 2: The overall workflow of LTSF-DNODE. LTSF-DNODE consists of three blocks: the decomposition block (DCMP), the normalizing and denormalizing blocks (NORM & DENORM), and the neural ODE block (NODE). DCMP block decomposes the observed time series into trend, seasonality, and residual components. NORM block normalizes the observed series based on its mean and variance. NODE block forecasts of the decomposed component based on each ODE. The processes indicated by dotted lines are optionally applied, considering the characteristics of the dataset. ### _NODE Block_ For the decomposition of the time series into trend, seasonality and residual, the NODEs can be written as follows and solved by the ODE solvers: \\[\\mathbf{T}(T) =\\mathbf{T}(0)+\\int_{0}^{T}f_{\\mathbf{T}}(\\mathbf{T}(t);\\mathbf{\\theta _{\\mathbf{T}}})dt, \\tag{10}\\] \\[\\mathbf{S}(T) =\\mathbf{S}(0)+\\int_{0}^{T}f_{\\mathbf{S}}(\\mathbf{S}(t);\\mathbf{ \\theta_{\\mathbf{S}}})dt,\\] \\[\\mathbf{R}(T) =\\mathbf{R}(0)+\\int_{0}^{T}f_{\\mathbf{R}}(\\mathbf{R}(t);\\mathbf{ \\theta_{\\mathbf{R}}})dt,\\] where the function \\(f_{\\mathbf{T}}:\\mathbb{R}^{L}\\rightarrow\\mathbb{R}^{L}\\) is a neural network with parameters \\(\\mathbf{\\theta_{\\mathbf{T}}}\\) that are learned from the data and captures the dynamics of the data with the trend. The function \\(f_{\\mathbf{T}}\\) is defined as \\(f_{\\mathbf{T}}:=\\frac{d\\mathbf{T}(t)}{dt}=\\mathbf{W_{\\mathbf{T}}}\\mathbf{T}(t)\\), where \\(\\mathbf{W_{\\mathbf{T}}}\\in\\mathbb{R}^{L\\times L}\\) is a weight matrix associated with the trend component, modeled as a single linear layer. Starting with an initial value, \\(\\mathbf{T}(0)=\\mathbf{T}\\), the NODE block computes the output, \\(\\mathbf{T}(T)\\), at the terminal time \\(T\\) by solving the initial value problems. The \\(f_{\\mathbf{S}}\\) and \\(f_{\\mathbf{R}}\\) are defined and parameterized in the same manner as \\(f_{\\mathbf{T}}\\). Note that we use various ODE solvers with the Jacobian and kinetic regularization [31]. Leveraging these regularizers allows for more accurate regularization of the learned dynamics at each time point, which exhibits diverse patterns. 
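As a concrete illustration, the following is a minimal PyTorch sketch of the trend branch in Eq. (10): a single linear layer serves as the ODE function and a fixed-step Euler loop plays the role of the ODE solver. The module and function names, the tensor layout, and the four-step solve are illustrative assumptions rather than the exact implementation, which is trained with the adjoint sensitivity method and the regularized solvers discussed above; the seasonality and residual branches are built in exactly the same way with their own weight matrices.

```python
import torch
import torch.nn as nn

class LinearODEFunc(nn.Module):
    """ODE function f(z) = W z realized by a single linear layer (no bias, no activation)."""
    def __init__(self, seq_len: int):
        super().__init__()
        self.weight = nn.Linear(seq_len, seq_len, bias=False)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, features, seq_len); the linear map acts along the temporal dimension.
        return self.weight(z)

def euler_solve(func: nn.Module, z0: torch.Tensor,
                terminal_time: float = 1.0, n_steps: int = 4) -> torch.Tensor:
    """Fixed-step explicit Euler solution of dz/dt = func(z) on [0, terminal_time] (cf. Eq. 3)."""
    z, step = z0, terminal_time / n_steps
    for _ in range(n_steps):
        z = z + step * func(z)
    return z

# One NODE block per decomposed component; only the trend branch is shown here.
L, F, batch = 336, 7, 32                  # look-back window, feature dimension, batch size
f_trend = LinearODEFunc(L)
trend_0 = torch.randn(batch, F, L)        # T(0): a batch of trend components
trend_T = euler_solve(f_trend, trend_0)   # T(T): same shape (batch, F, L) as the input
```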
Since NODE requires the same dimension size of input and output, the results of the ODE solvers are decoded into the forecasting horizon through the decoding layer, as follows: \\[\\widehat{\\mathbf{T}} =\\texttt{FC}^{\\mathbf{T}}_{L\\to H}(\\mathbf{T}(T)), \\tag{11}\\] \\[\\widehat{\\mathbf{S}} =\\texttt{FC}^{\\mathbf{S}}_{L\\to H}(\\mathbf{S}(T)),\\] \\[\\widehat{\\mathbf{R}} =\\texttt{FC}^{\\mathbf{R}}_{L\\to H}(\\mathbf{R}(T)),\\] where each \\(\\texttt{FC}_{L\\to H}\\) means a fully-connected layer whose input size is \\(L\\) and output size is \\(H\\). The decomposed components \\(\\mathbf{T}\\), \\(\\mathbf{S}\\), and \\(\\mathbf{R}\\) of the observed sequence yield the future decomposed components \\(\\widehat{\\mathbf{T}}\\), \\(\\widehat{\\mathbf{S}}\\), and \\(\\widehat{\\mathbf{R}}\\) after passing through the NODE block. ### _NORM & DENORM Blocks_ If the dataset has distribution discrepancy problems, we apply instance normalization to the trend and residual components using the NORM and DENORM blocks. In Section IV-C, it will be discussed why instance normalization is only done on the trend and residual components. Instance normalization is performed on feature dimensions. Specifically, given the original component \\(\\mathbf{C}\\), which could be a trend or a residual, represented as \\(\\mathbf{C}_{ij}\\in\\mathbb{R}^{L\\times F}\\), the normalization procedure can be expressed as follows: \\[\\mathbf{\\mu}_{i}=\\frac{1}{F}\\sum_{j=1}^{F}\\mathbf{C}_{ij}\\,,\\quad\\mathbf{ \\sigma}_{i}{}^{2}=\\frac{1}{F}\\sum_{j=1}^{F}(\\mathbf{C}_{ij}-\\mathbf{\\mu}_{i})^{2}, \\tag{12}\\] \\[\\widetilde{\\mathbf{C}}_{ij}=\\frac{\\mathbf{C}_{ij}-\\mathbf{\\mu}_{i}}{ \\mathbf{\\sigma}_{i}}, \\tag{13}\\] where \\(L\\) is the length of the sequence, \\(F\\) is the number of features, \\(\\mathbf{\\mu}_{i}\\) and \\(\\mathbf{\\sigma}_{i}{}^{2}\\) are the \\(i\\)-th mean and variance of the original data, and \\(\\widetilde{\\mathbf{C}}_{ij}\\) is normalized component. DENORM block conducts the inverse process of normalization using the preserved mean (\\(\\mathbf{\\mu}_{i}\\)) and standard deviation (\\(\\mathbf{\\sigma}_{i}\\)) obtained during the normalization process. When normalized component \\(\\widetilde{\\mathbf{C}}\\) is given, the denormalization procedure performs the reverse of normalization as follows: \\[\\mathbf{C}_{ij}=\\sigma_{i}\\widetilde{\\mathbf{C}}_{ij}+\\mathbf{\\mu}_{i}, \\tag{14}\\] ### _Forecasting_ To forecast future series, if the dataset is normalized, we apply denormalization to restore the original distribution, and we add up \\(\\widehat{\\mathbf{T}}\\), \\(\\widehat{\\mathbf{S}}\\), and \\(\\widehat{\\mathbf{R}}\\) to obtain the final prediction \\(\\widehat{\\mathbf{Y}}\\): \\[\\widehat{\\mathbf{Y}}=\\widehat{\\mathbf{T}}+\\widehat{\\mathbf{S}}+\\widehat{ \\mathbf{R}}. \\tag{15}\\] The values used as Trend regularizers are defined as follows: \\[\\dot{E}_{\\mathbf{T}}(t) =||f_{\\mathbf{T}}(\\mathbf{T}(t);\\theta)||^{2}, \\tag{16}\\] \\[\\dot{J}_{\\mathbf{T}}(t) =||\\epsilon^{\\top}\ abla f_{\\mathbf{T}}(\\mathbf{T}(t);\\theta)||, \\tag{17}\\] where '\\(\\cdot\\)' denotes derivative, \\(\\epsilon\\) is sampled from standard normal distribution, and \\(f_{\\mathbf{T}}\\) refers to the function defined in Eq. (10). The same definitions are applied to the seasonality and residual components as well. 
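Both regularizers can be estimated with a single vector-Jacobian product, without forming the full Jacobian. The following is a minimal sketch of how Eqs. (16)-(17) could be evaluated at one state, assuming the `f_trend` module and `trend_0` tensor from the previous sketch; in practice the regularized solvers of [31] accumulate these terms along the whole solver trajectory, so evaluating them at a single state is a simplifying assumption.

```python
import torch

def kinetic_and_jacobian_terms(func, z):
    """Monte-Carlo estimates of the kinetic term ||f(z)||^2 (Eq. 16) and the
    Jacobian term ||eps^T grad f(z)|| (Eq. 17), evaluated at a single state z."""
    if not z.requires_grad:
        z = z.detach().requires_grad_(True)
    f_out = func(z)
    kinetic = (f_out ** 2).sum(dim=-1).mean()            # squared speed of the dynamics
    eps = torch.randn_like(f_out)                        # random probe vector
    vjp, = torch.autograd.grad(f_out, z, grad_outputs=eps, create_graph=True)
    jacobian = vjp.norm(dim=-1).mean()                   # norm of eps^T times the Jacobian of f
    return kinetic, jacobian

# Example with the trend branch from the previous sketch (cf. Eq. 18):
# E_T, J_T = kinetic_and_jacobian_terms(f_trend, trend_0)
# loss = mse + lambda_K * E_T + lambda_J * J_T
```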
The proposed model is trained using the mean square error (MSE) loss function with regularization terms as follows: \\[\\begin{split} Loss&=\\frac{\\sum_{i=1}^{n}(\\mathbf{Y}_{i }-\\widehat{\\mathbf{Y}}_{i})^{2}}{n}\\\\ &+\\sum_{j\\in\\{\\mathbf{T},\\mathbf{S},\\mathbf{R}\\}}(\\lambda_{K}E_{j }(t)+\\lambda_{J}J_{j}(t))\\end{split} \\tag{18}\\] where \\(\\mathbf{Y}_{i}\\) is the \\(i\\)-th element of the ground truth \\(\\mathbf{Y}\\), \\(n\\) is the number of elements, and \\(\\{\\mathbf{T},\\mathbf{S},\\mathbf{R}\\}\\) each represent the Trend, Seasonality, and Residual in the summation, respectively. The coefficients \\(\\lambda_{K}\\) and \\(\\lambda_{J}\\) are used as hyperparameters. ### _Comparison with Existing Models_ Table II shows the main differences between our model and the baselines. In contrast to the baseline models using single linear and Transformer, our model employs NODE as the main architecture. If we remove the NODE framework from our proposed model, it closely resembles a combination of the NLinear and DLinear. However, our approach employsboth data characteristic-dependent decomposition and normalization methods, as opposed to the consistent method used in other baselines. The normalization and decomposition methods of each model can be found in Table II. ## IV Experimental Evaluations In this section, we compare the performance of LTSF-DNODE for multivariate LTSF on real-world datasets against baselines and provide an analysis of the model's architecture with respect to data characteristics. ### _Experimental Environments_ All experiments are conducted in the following software and hardware environments: Ubuntu 18.04.4 LTS, Python 3.8.13, Pytorch 1.11.0, CUDA V10.0.130, NVIDIA Quadro RTX 6000/8000. We select several cases for each dataset by varying hyper-parameters. The learning rates are in {0.05, 0.01, 0.005, 0.001, 0.0005, 0.0001} and the batch sizes are in {8, 16, 32, 64}. The number of training epochs is set to a maximum of 100. With the validation dataset, an early-stop approach with a patience of 10 iterations is applied. We implement the ODE function of the NODE framework using a single linear layer without an activation function and use Euler, RK4, and DOPRI as the ODE solvers. The coefficients of the Jacobian and kinetic regularizers are in {0.1, 0.2, , 1.0}. The terminal time, denoted as \\(T\\), is set to 1. #### Iv-A1 Datasets In order to evaluate LTSF-DNODE, we conduct experiments on real-world datasets with diverse characteristics collected from various domains such as energy, economics, and more. These datasets possess a wide range of features (see Table IV for its detailed descriptions). * Electricity records the amount of electricity consumption of 321 customers from 2012 to 2014. * Exchange Rate [42] consists of the daily records of the exchange rates of eight countries from 1990 to 2016. * Weather consists of data from 21 weather-related features (e.g. air temperature, humidity), recorded in Germany with 10-min interval during 2020. * ILI (Influenza-like Illness) is compiled by the Centers for Disease Control and Prevention of the United States from 2002 to 2021. * ETT (Electricity Transformer Temperature) [43] consists of two datasets with hourly granularity and two datasets with 15-minute granularity. Each data shows seven oil and load related properties of transformer. The data was aggregated from July 2016 to July 2018. #### Iv-A2 Baselines We consider the following 7 baselines for long-term time series forecasting. 
These baselines consist of five Transformer-based models and two Linear-based models, the previous state-of-the-art models: * Linear-based models: NLinear and DLinear [21] * Transformer-based models: Informer [43], Pyraformer [44], and LogTrans [45] * Transformer-based models with decomposition method: FEDformer [40] and Autoformer [41] #### Iv-A3 Metrics To evaluate our model, we use MSE and mean absolute error (MAE). These metrics are as follows: \[\text{MSE}=\frac{1}{n}\sum_{i=1}^{n}(\mathbf{Y}_{i}-\widehat{\mathbf{Y}}_{i})^{2},\quad\text{MAE}=\frac{1}{n}\sum_{i=1}^{n}\Bigl|\mathbf{Y}_{i}-\widehat{\mathbf{Y}}_{i}\Bigr|, \tag{19}\] where \(\mathbf{Y}_{i}\) is the \(i\)-th element of the ground truth \(\mathbf{Y}\), and \(n\) is the number of elements. ### _Main Results_ In Table III, we list the results of multivariate long-term time series forecasting. The forecasting horizon refers to the length of the sequence that the model aims to predict. LTSF-DNODE shows better performance than the previous models in most cases. This result indicates that LTSF-DNODE effectively utilizes the essential temporal information emphasized in [21] for LTSF, leveraging precise preprocessing methods and forecasting modeling based on NODE. There are performance improvements (e.g., up to 15.08% on MSE and up to 6.28% on MAE) observed in the Exchange, ILI, ETTh1, ETTh2, and ETTm2 datasets. In the Electricity, Weather, and ETTm1 datasets, our model demonstrates performance similar to that of Linear-based models. These improvements are attributed to decomposition and normalization methods tailored to the data characteristics. In addition, it can be argued that formulating the relationship between past and future values as in Eq. (6) contributed positively to LTSF by accurately capturing the dynamics of the time series. The subsequent section discusses the analysis results of data characteristics and how each component of our proposed model contributed to performance enhancement. ### _Data Analysis Results_ The result of exploratory analysis on each dataset can be found in Table IV. "Forecastability" and "Trend" metrics are evaluated on the full sequence, averaged across the features. "Seasonality" is determined based on sub-sequences of length 104 for ILI and 336 for the others, using kernel sizes of {10, 25, 50} and periods that are determined based on the "Granularity" (e.g., the period is 24 for an hour-based dataset to determine the seasonality of a day, 48 for two days). In Table IV, "Seasonality" is the highest ratio obtained among combinations of kernel sizes and periods. Similarly, we calculate the proportion of sequences exhibiting stationarity for each feature and take the average over sequences sliced to length 720 (104 for ILI). Based on the results of the ACF test, we select three candidates with the highest seasonality ratio for each kernel size. Then, using the outcomes of the ADF test, we compare the stationarity ratio of the residuals to identify the best parameter combination. Since the residual should show stationarity, we select the parameter combination that shows a higher ratio of stationarity in the residual. If it is ambiguous to differentiate the candidates by stationarity ratios, we compare the p-values obtained from the ADF test for each sequence.

Fig. 3: Distribution of original, trend, and residual for feature OT in (a) ILI and (b) ETTh1. There is a clear discrepancy in the distribution between the train and test datasets.
Since a lower p-value indicates a stronger indication of stationarity, we select the parameter combination with a greater number of sequences having lower p-values as the optimal choice. Additionally, we determine whether to extract seasonality during the decomposition process based on the seasonality ratios. The Exchange, ILI, and Weather datasets have lower seasonality ratios compared to other datasets, indicating that these datasets have a lower significance of seasonality. We decompose these datasets without extracting seasonality. Fig. 3 shows a clear distribution discrepancy between trend components of the training and testing datasets. It was observed in the ETTh1, ETTh2, ETTm2, and ILI datasets. For these datasets, we adopt instance normalization to mitigate distribution discrepancies. There are less significant changes in the distributions of residual components. Nevertheless, we have observed different variances between the training and testing datasets, and to mitigate these, we apply instance normalization to residual components. For the seasonality component, instance normalization is not applied since seasonality repeats values periodically, as defined. ### _Ablation Studies_ In order to understand the effectiveness of each block of LTSF-DNODE, we conduct ablation studies on the datasets. #### Iv-D1 Decomposition To assess the effectiveness of the DCMP block (utilizing the classical decomposition method), we compare the performance of the LTSF-DNODE model with the \"\\(w/o\\). DCMP\", where the DCMP block is omitted. LTSF-DNODE performs better compared to \"\\(w/o\\). DCMP\", with a maximum improvement of 11.26% on MSE and 3.01% on MAE in the Exchange dataset. As a result, we can infer that time series decomposition has a positive impact on forecasting. Additionally, we conduct an ablation study to investigate alternative decomposition methods, specifically frequency-based approaches, as follows: * \"FTLinear\" employs Fourier transform [46] to decompose the time series. It uses low or high-pass filters with frequency criteria of \\(1.00\\times 10^{-4}\\). The filtered time series passes through individual single linear layers. * \"WTLinear\" employs Wavelet transform [46] to decompose the time series. This model has a structure similar to \"FTLinear\" and uses a low or high-pass filter. * \"TSRLinear\" decomposes time series into trend, seasonality, and residual components using classical methods. This model is identical to the LTSF-DNODE, excluding the NORM and NODE blocks. Table VI shows the forecasting results of these models. In various benchmark datasets, the classical decomposition method is the better option than other methods. #### Iv-D2 Normalization To assess how NORM and DENORM blocks impact a dataset with distribution disparities, we compare the performance of LTSF-DNODE with \"\\(w/o\\). NORM\", which excludes instance normalization. LTSF-DNODE improves performance by up to 14.54% in MSE and 10.76% in MAE compared to \"\\(w/o\\). NORM\" in the ETTh2 dataset. Therefore, it can be inferred that instance normalization is effective when there exists a distribution discrepancy between the training and testing datasets. #### Iv-D3 Neural ODEs Finally, we investigate the effectiveness of modeling the relationship between the past and the future in LTSF using linear ODEs, as shown in Eq. (6). We compare two variants: \"\\(w/o\\). NODE\" and LTSF-DNODE. To investigate the effectiveness of NODE, we compare \"\\(w/o\\). NODE\" and LTSF-DNODE. 
Both models utilize decomposition and normalization methods. However \"\\(w/o\\). NODE\" has a single linear layer, while LTSF-DNODE adopts the NODE framework. The experimental results show that LTSF-DNODE outperforms \"\\(w/o\\). NODE\" with a maximum performance improvement of 5.28% on MSE and 2.50% on MAE in the ETTh2 dataset. Also, we explore the effect of various learning techniques, such as ODE solver and regularizer, within the NODE framework. The ODE solver is one of the crucial factors in the NODE framework. Table VII shows the MSE and MAE against various ODE solvers. Additionally, the Jacobian and kinetic regularizers enable smoother learning. It transforms unstable learning into stable learning, leading to an improvement in forecasting accuracy. In Fig. 4 (a), we can see the loss decreases as the Jacobian and kinetic coefficients increase, respectively; however, in Fig. 4 (b), it does not. These findings indicate that the effectiveness of the regularizer depends on the dataset. As a result, the incorporation of a regularizer suggestsits potential to enhance the learning process of our proposed model. #### Iv-B4 Model Efficiency Analyses Fig. 5 shows the number of parameters and the MSE of models. Our LTSF-DNODE model employs fewer parameters compared to Transformer-based models and slightly more parameters compared to Linear-based models. However, LTSF-DNODE outperforms these models on various datasets. ## V Related Work Recently, various Transformer-based models have shown encouraging results in the field of LTSF. Transformer-based LTSF models overcome the limitations of RNNs in LTSF by enabling direct multi-step forecasting, but difficulties remain due to quadratic time complexity, high memory usage, and the inherent limitations of the encoder-decoder architecture. Transformer-based LTSF models have addressed these issues in various ways. LogTrans [45] utilizes a convolutional self-attention mechanism to address challenges related to locality-agnostic property and memory limitations. Pyraformer [44] uses an interscale tree structure, reflecting a multi-resolution representation of a time series dataset. Informer [43] introduces a Probsparse self-attention to minimize time complexity and reduce the processing time. Also, it uses a generative decoder to improve performance. On the other hand, other transformer-based models have attempted to learn the characteristics of time series by combining self-attention structures and decomposition methods. Autodformer [41] aims to enhance forecasting accuracy by capturing auto-correlation and employing the decomposition method, which extracts trend and seasonality. This structure enables progressive decomposition capabilities for intricate time series. FEDformer [40] also utilizes the Fast Fourier transform with a low-rank approximation for preprocessing the temporal data. Moreover, it employs a mixture of expert decomposition, which extracts trend and seasonality, to manage the distribution discrepancy problem. Due to the issues with Transformers, non-Transformer-based models have also emerged. N-BEATS [47] incorporates fully-connected networks with trend and seasonality decomposition to enhance interpretability. Furthermore, DEPTS [48] advances N-BEATS to propose a more effective model that specializes in learning periodic time series. SNaive [22] uses simple linear regression and demonstrates that a simple statistical model using linear regression could achieve better performance. 
NLinear and DLinear [21] are Linear-based models that use a single linear layer with simple preprocessing. NLinear utilizes a normalization method that subtracts the last value of the sequence from the input. DLinear utilizes the classical decomposition method that decomposes the input into a trend and a remainder. ## VI Conclusion and Future Work Long-term time series forecasting is an important research topic in deep learning, and simple models such as Linear-based approaches are showing good performance. However, these models are too simple to represent the dynamics of the time series. Our proposed method, LTSF-DNODE, uses a neural ODE (NODE) framework with a simple architecture and utilizes decomposition depending on data characteristics. This allows the model to not only use temporal information appropriately but also capture the dynamics of the time series. We experimentally demonstrated that LTSF-DNODE outperforms the existing baselines on real-world benchmark datasets. In ablation studies, we analyzed how the main components of LTSF-DNODE affect performance. In future work, we will refine the modeling of the residual component. Our analysis results demonstrate the evident stationarity of the residual component within the optimal settings we have identified. Therefore, we hypothesize that the performance of LTSF can be further improved through more elaborate modeling of the residual component, potentially via an approach such as stochastic differential equations (SDEs).

Fig. 4: Analysis of the impact of the Jacobian and kinetic regularization on performance (MSE). The forecasting horizon is 60 in (a) and 96 in (b).

Fig. 5: Model efficiency, measured on the Exchange dataset. The bottom left corner is preferred.

## Acknowledgement Noseong Park is the corresponding author. This work was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korean government (MSIT) (No. 2020-0-01361, Artificial Intelligence Graduate School Program at Yonsei University, 10%), and (No. 2022-0-00857, Development of AI/data-based financial/economic digital twin platform, 90%).

## References * [1] A. Alaa, A. J. Chan, and M. van der Schaar, "Generative time-series modeling with Fourier flows," in _ICLR_, 2021. * [47] B. N. Oreshkin, D. Carpov, N. Chapados, and Y. Bengio, "N-BEATS: Neural basis expansion analysis for interpretable time series forecasting," _arXiv preprint arXiv:1905.10437_, 2019. * [48] W. Fan, S. Zheng, X. Yi, W. Cao, Y. Fu, J. Bian, and T.-Y. Liu, "DEPTS: Deep expansion learning for periodic time series forecasting," _arXiv preprint arXiv:2203.07681_, 2022.
Long-term time series forecasting (LTSF) is a challenging task that has been investigated in various domains such as financial investment, health care, traffic, and weather forecasting. In recent years, Linear-based LTSF models have shown better performance, pointing out the problem of temporal information loss in Transformer-based approaches. However, the Linear-based approach also has limitations: the model is too simple to comprehensively exploit the characteristics of the dataset. To address these limitations, we propose LTSF-DNODE, which applies a model based on linear ordinary differential equations (ODEs) and a time series decomposition method according to data statistical characteristics. We show that LTSF-DNODE outperforms the baselines on various real-world datasets. In addition, for each dataset, we explore the impacts of regularization in the neural ordinary differential equation (NODE) framework.

long-term time series forecasting, time series decomposition, instance normalization, neural ordinary differential equations

+ Footnote †: These authors contributed equally to this research.
arxiv-format/2407_11682v1.md
MapDistill: Boosting Efficient Camera-based HD Map Construction via Camera-LiDAR Fusion Model Distillation

Xiaoshuai Hao, Ruikai Li, Hui Zhang, Dingzhe Li, Rong Yin, Sangil Jung, ByungIn Yoo, Haimei Zhao, Jing Zhang

Xiaoshuai Hao, Ruikai Li, Hui Zhang, and Dingzhe Li are with Samsung R&D Institute China-Beijing; Rong Yin is with the Institute of Information Engineering, Chinese Academy of Sciences; Sangil Jung and ByungIn Yoo are with Computer Vision TU, SAIT, SEC, Korea; Haimei Zhao and Jing Zhang are with The University of Sydney. The first two authors contributed equally to this work. Jing Zhang is a corresponding author.

## 1 Introduction Online high-definition (HD) map provides abundant and precise static environmental information about the driving scenes, which is fundamental for planning and navigation in autonomous driving systems. Recently, multi-view camera-based [7, 23, 34] HD map construction has gained increasing attention thanks to the significant progress of Bird's-Eye-View (BEV) perception. Compared with LiDAR-based [13, 39] and Fusion-based methods [19, 23, 25], multi-view camera-based methods can be deployed at low cost, while the lack of depth information forces current approaches to adopt large models for effective feature extraction and good performance. Therefore, it is crucial to trade off the performance and efficiency of the camera-based model for practical deployment. To achieve this goal, Knowledge Distillation (KD) [8] has drawn great attention in related fields since it is one of the most practical techniques for training efficient yet accurate models. KD-based methods usually transfer knowledge from a large well-trained model (teacher) to a small model (student) [14], which has made remarkable progress in many fields, such as image classification [31], 2D object detection [3], semantic segmentation [43] and 3D object detection [5, 51, 53]. Previous methods follow the well-known teacher-student paradigm [14], which forces the logits of the student network to match those of the teacher network. Recently, BEV-based KD methods have advanced the field of 3D object detection; they unify the image and LiDAR features in the Bird's-Eye-View (BEV) space and adaptively transfer knowledge across non-homogeneous representations in a teacher-student paradigm. Existing works use a strong LiDAR teacher model to distill a camera student model, such as BEVDistill [5], UVTR [20], BEV-LGKD [18], TiG-BEV [15] and DistillBEV [41]. Furthermore, the latest work UniDistill [53] proposes a universal cross-modality knowledge distillation framework for 3D object detection.

Figure 1: Comparison of different methods on the nuScenes val dataset. We benchmark the inference speed on a single NVIDIA RTX 3090 GPU. Our method can achieve a better trade-off in both speed (FPS) and accuracy (mAP).

Compared to these methods, KD for BEV-based HD map construction differs in two crucial aspects: Firstly, the detection head (DetHead) produces the output of classification and localization for objects, while the output of the map head (MapHead) from a vectorized map construction model, _e.g._, MapTR [23], is the classification and point regression result.
Secondly, existing BEV-based KD methods for 3D object detection typically focus on aligning foreground objects' features to mitigate the background environment's adverse impact, which is obviously unsuitable for HD map construction. Therefore, directly applying the BEV-based KD method for 3D object detection to HD map construction fails to achieve satisfying results (see the experiment results in Tab. 1) due to the inherent dissimilarity between the two tasks. To the best of our knowledge, BEV-based KD methods for HD map construction are still under exploration. To fill this gap, we propose a novel KD-based method named MapDistill to transfer the knowledge from a high-performance teacher model to an efficient student model. First, we adopt the teacher-student architecture, _i.e_., a camera-LiDAR fusion model as the teacher and a lightweight camera model as the student, and devise a dual BEV transform module to facilitate cross-modal knowledge distillation while maintaining cost-effective camera-only deployment. Building upon this architecture, we propose a comprehensive distillation scheme encompassing cross-modal relation distillation, dual-level feature distillation, and map head distillation, to mitigate the knowledge transfer challenges between modalities and help the student model learn improved feature representations for HD map construction. Specifically, we first introduce the cross-modal relation distillation loss for the student model to learn better cross-modal representations from the fusion teacher model. Second, to achieve better semantic knowledge transfer, we employ the dual-level feature distillation loss on both the low-level and high-level feature representations in the unified BEV space. Last but not least, we specifically introduce a map head distillation loss tailored for the HD map construction task, including classification loss and point-to-point loss, which can make the final predictions of the student closely resemble those of the teacher. Extensive experiments on the challenging nuScenes dataset [2] demonstrate the effectiveness of MapDistill, surpassing existing competitors by over 7.7 mAP or 4.5\\(\\times\\) speedup as shown in Fig. 1. The contributions of this paper are mainly three-fold: * We present an effective model architecture for distillation-based HD map construction, including a camera-LiDAR fusion teacher model, a lightweight camera-only student model, and a dual BEV transform module, which facilitates knowledge transfer within and between different modalities while enjoying cost-effective camera-only deployment. * We introduce a comprehensive distillation scheme that supports cross-modal relation distillation, dual-level feature distillation, and map head distillation simultaneously. By mitigating the knowledge transfer challenges between modalities, this distillation scheme helps the student model learn better feature representation for HD map construction. * MapDistill achieves superior performance than state-of-the-art (SOTA) methods, which could serve as a strong baseline for KD-based HD map construction research. ## 2 Related Work **Camera-based HD Map Construction.** HD map construction is a prominent and extensively researched area within the field of autonomous driving. Recently, camera-based methods [7, 9, 10, 16, 23, 25, 34, 47] have increasingly employed the Bird's-eye view (BEV) representation as an ideal feature space for multi-view perception due to its remarkable ability to mitigate scale-ambiguity and occlusion challenges. 
Various techniques have been proposed and utilized to project perspective view (PV) features into the BEV space by leveraging geometric priors, such as LSS [33], Deformable Attention [21] and GKT [4]. Furthermore, camera-based methods have come to rely on higher resolution images and larger backbone models to achieve enhanced accuracy [21, 26, 27, 40, 42, 45, 21], a practice that introduces substantial challenges for practical deployment. For example, HDMapNet [19] and VectorMapnet [25] employ the Efficient-B0 model [37] and ResNet50 model, respectively, as backbones for feature extraction. Additionally, MapTR [23] explores the impact of various backbones, including the Swin Transformer [27], ResNet50 [12], and Resnet18 [12]. Experimental results demonstrate a direct correlation between the backbone's representation capability and model performance, _i.e._, larger models generally yield better results. Yet, using larger models leads to slower inference, compromising the cost advantage of camera-based methods. In this paper, we introduce an effective yet efficient camera-based method tailored for practical deployment via knowledge distillation. **Fusion-based HD Map Construction.** LiDAR-based methods [11, 13, 19, 25, 39] provide precise spatial data for creating the BEV feature representation. Recently, camera-LiDAR fusion methods [1, 28, 22, 23, 24, 35, 38, 19] leverage the semantic richness of camera data and the geometric information from LiDAR in a collaborative manner. This fusion at the BEV level incorporates distinct streams, encoding camera and LiDAR inputs into shared BEV features, surpassing unimodal input approaches in performance. However, this integration may impose significant computational and cost burdens in practical deployment. To address this issue, we leverage KD techniques for efficient HD map construction and introduce a novel approach called MapDistill to transfer knowledge from a high-performance camera-LiDAR fusion model to a lightweight camera-only model, yielding a cost-effective yet accurate solution. **Knowledge Distillation.** KD refers to transferring knowledge from a well-trained, larger teacher model to a smaller student [14], which has been widely applied across diverse tasks, such as image classification [31, 48, 49], 2D object detection [3, 52], semantic segmentation [36, 39, 43, 46] and 3D object detection [5, 53, 6, 51]. Recently, BEV-based KD methods have gained increasing attention in the field of 3D object detection. Several existing works have adopted cross-modality knowledge distillation frameworks for 3D object detection, includingBEVDistill [5], UVTR [20], BEV-LGKD [18], TiG-BEV [15], DistillBEV [41], and UniDistill [53]. Despite the numerous KD methods for 3D object detection, KD-based HD map construction remains relatively under-explored. In this paper, we fill this gap by proposing a novel KD-based approach called MapDistill to boost efficient camera-based HD map construction via camera-LiDAR fusion model distillation. ## 3 Methodology In this section, we describe our proposed MapDistill in detail. We first give an overview of the whole framework in Fig. 2 and clarify the model designs of the teacher and student models in Sec. 3.1. Then, we elaborate details of MapDistill objectives in Sec. 3.2, such as the cross-modal relation distillation, the dual-level feature distillation, and the map head distillation. Finally, we present the overall training procedure in Sec. 3.3. 
### Model Overview **Fusion-based Model (Teacher).** To enable the knowledge transfer from the camera-LiDAR fusion teacher model to the student model, we first establish a baseline of fusion-based HD map construction based on the state-of-the-art MapTR [23] model. The fused MapTR model has two branches, as depicted in Figure 2: **The overview of our proposed MapDistill. It consists of a fusion-based teacher model (top) and a lightweight camera-based student model (bottom). In addition, three distillation losses are employed to enable the teacher model to transfer knowledge to the student, _i.e._, by instructing the student model to produce similar features and predictions, which are cross-modal relation distillation (\\(\\mathcal{L}_{relation}\\)), dual-level feature distillation (\\(\\mathcal{L}_{feature}\\)), and map head distillation (\\(\\mathcal{L}_{head}\\)). Note that only the student model is needed for inference.** the top part of Fig. 2. For the camera branch, it firstly utilizes **Resnet50**[12] as the backbone to extract multi-view features. Next, it uses GKT [4] as the 2D-to-BEV transformation module to convert the multi-view features into the BEV space. The generated camera BEV features can be denoted as \\(\\mathbf{F}_{C_{bev}}^{T}\\in\\mathbb{R}^{H\\times W\\times C}\\), where \\(H,W,C\\) represents the height, width and the number of channels of BEV features respectively, and the superscript \\(T\\) is short for \"teacher\". For the LiDAR branch, it adopts **SECOND**[44] for point cloud voxelization and LiDAR feature encoding. The LiDAR features are projected to BEV space using a flattening operation as in [28], to obtain the LiDAR BEV representation \\(\\mathbf{F}_{L_{bev}}^{T}\\in\\mathbb{R}^{H\\times W\\times C}\\). Then, MapTR concatenates \\(\\mathbf{F}_{C_{bev}}^{T}\\) and \\(\\mathbf{F}_{L_{bev}}^{T}\\) and processes the features with the fully convolutional network to produce the fused BEV features \\(\\mathbf{F}_{fused}^{T}\\in\\mathbb{R}^{H\\times W\\times C}\\). The following step is to use a Map Encoder (MapEnc), which takes the fused BEV features \\(\\mathbf{F}_{fused}^{T}\\) as input, to further generate the high-level feature \\(\\mathbf{F}_{high}^{T}\\): \\[\\mathbf{F}_{high}^{T}=\\text{MapEnc}(\\mathbf{F}_{fused}^{T}), \\tag{1}\\] Then, the teacher Map head (MapHead) employs the classification and point branches to produce the final predictions of map elements categories \\(\\mathbf{F}_{cls}^{T}\\) and point positions \\(\\mathbf{F}_{point}^{T}\\): \\[\\mathbf{F}_{cls}^{T},\\mathbf{F}_{point}^{T}=\\text{MapHead}(\\mathbf{F}_{high} ^{T}). \\tag{2}\\] During the overall training procedure, the teacher model will continuously produce diverse features \\(\\mathbf{F}_{C_{bev}}^{T}\\), \\(\\mathbf{F}_{L_{bev}}^{T}\\), \\(\\mathbf{F}_{fused}^{T}\\), \\(\\mathbf{F}_{high}^{T}\\), \\(\\mathbf{F}_{cls}^{T}\\) and \\(\\mathbf{F}_{point}^{T}\\). **Camera-based Model (Student).** To realize real-time inference speed for practical deployment, we adopt MapTR's camera branch as the base for the student model. Note that we employ **Resnet18**[12] as the backbone to extract the multi-view features, which can make the network lightweight and easy to deploy. On the base from MapTR, to mimic the multimodal fusion pipeline of the teacher model, we propose a Dual BEV Transform module to convert the multi-view features into two distinct BEV subspaces, whose effect will be verified in the ablation experiments. 
Specifically, we firstly use GKT [4] to generate BEV features in the first subspace \\(\\mathbf{F}_{C_{sub1}}^{S}\\in\\mathbb{R}^{H\\times W\\times C}\\), where the superscript \\(S\\) is short for \"student\". Then, we utilize LSS [33] to generate BEV features in the second subspace \\(\\mathbf{F}_{C_{sub2}}^{S}\\in\\mathbb{R}^{H\\times W\\times C}\\). Then, we concatenate \\(\\mathbf{F}_{C_{sub1}}^{S}\\) and \\(\\mathbf{F}_{C_{sub2}}^{S}\\) and process the features with the fully convolutional network to produce the fused BEV features \\(\\mathbf{F}_{fused}^{S}\\in\\mathbb{R}^{H\\times W\\times C}\\). Then, employing the same process as the teacher model, we can generate \\(\\mathbf{F}_{high}^{S}\\), \\(\\mathbf{F}_{cls}^{S}\\) and \\(\\mathbf{F}_{point}^{S}\\) from \\(\\mathbf{F}_{fused}^{S}\\) with Eq. 1 and Eq. 2. Therefore, the student model will consistently produce \\(\\mathbf{F}_{C_{sub1}}^{S}\\), \\(\\mathbf{F}_{C_{sub2}}^{S}\\), \\(\\mathbf{F}_{fused}^{S}\\), \\(\\mathbf{F}_{high}^{S}\\), \\(\\mathbf{F}_{cls}^{S}\\) and \\(\\mathbf{F}_{point}^{S}\\) during the procedure of map construction. ### MapDistill Objectives #### 3.2.1 Cross-modal Relation Distillation. The teacher model, a camera-LiDAR fusion model, combines semantic-rich information from camera data with explicit geometric data from LiDAR. In contrast, the student model, a camera-based model, focuses mainly on capturing semantic information from the camera. The essential factor contributing to the teacher model's superior performance is cross-modal interaction, which the student model lacks. Therefore, we encourage the student model to develop this cross-modal interaction capability through imitation. To this end, we introduce a cross-modal attention distillation objective. The core idea is to let the student model imitate the cross-modal attention of the teacher model during training. More specifically, for the teacher model, we begin by reshaping the camera BEV features \\(\\mathbf{F}_{C_{bxv}}^{T}\\in\\mathbb{R}^{H\\times W\\times C}\\) and the LiDAR BEV features \\(\\mathbf{F}_{L_{bev}}^{T}\\in\\mathbb{R}^{H\\times W\\times C}\\) into sequences of 2D patches represented as \\(\\mathbf{F}\\mathbf{p}_{C_{bev}}^{T}\\in\\mathbb{R}^{N\\times(P^{2}C)}\\) and \\(\\mathbf{F}\\mathbf{p}_{L_{bev}}^{T}\\in\\mathbb{R}^{N\\times(P^{2}C)}\\), respectively. Here, the patch size is denoted as \\(P\\times P\\), and the number of patches is given by \\(N=HW/P^{2}\\). Then, we calculate the cross-modal attention from the teacher, including camera-to-lidar attention \\(\\mathbf{A}_{c2l}^{T}\\in\\mathbb{R}^{N\\times N}\\) and lidar-to-camera attention \\(\\mathbf{A}_{l2c}^{T}\\in\\mathbb{R}^{N\\times N}\\) as follows: \\[A_{c2l}^{T}=\\text{softmax}\\left(\\frac{\\mathbf{F}\\mathbf{p}_{C_{bxv}}^{T} \\operatorname{Transpose}\\left(\\mathbf{F}\\mathbf{p}_{L_{bev}}^{T}\\right)}{ \\sqrt{D_{k}}}\\right), \\tag{3}\\] \\[A_{l2c}^{T}=\\text{softmax}\\left(\\frac{\\mathbf{F}\\mathbf{p}_{L_{bev}}^{T} \\operatorname{Transpose}\\left(\\mathbf{F}\\mathbf{p}_{C_{bev}}^{T}\\right)}{ \\sqrt{D_{k}}}\\right), \\tag{4}\\] where \\(\\frac{1}{\\sqrt{D_{k}}}\\) is a scaling factor for preventing the softmax function from falling into a region with extremely small gradients when the magnitude of dot products grow large. 
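For illustration, the following sketch shows how the teacher's BEV maps could be split into patches and how the two attention maps in Eqs. (3)-(4) could be computed; the `unfold`-based patch extraction, the tensor shapes, and the use of the patch feature dimension as \(D_{k}\) are assumptions for the sketch rather than the exact implementation.

```python
import torch
import torch.nn.functional as F

def bev_to_patches(bev: torch.Tensor, patch: int) -> torch.Tensor:
    """Reshape a BEV map of shape (H, W, C) into N = HW / patch^2 flattened patches."""
    H, W, C = bev.shape
    x = bev.permute(2, 0, 1).unsqueeze(0)              # (1, C, H, W)
    x = F.unfold(x, kernel_size=patch, stride=patch)   # (1, C * patch^2, N)
    return x.squeeze(0).transpose(0, 1)                # (N, patch^2 * C)

def cross_modal_attention(q_patches: torch.Tensor, k_patches: torch.Tensor) -> torch.Tensor:
    """Scaled dot-product attention map between two sets of BEV patches (cf. Eqs. 3-6)."""
    d_k = q_patches.shape[-1]
    return torch.softmax(q_patches @ k_patches.transpose(0, 1) / d_k ** 0.5, dim=-1)

# Teacher-side attention maps from camera and LiDAR BEV features (shapes are assumptions).
H, W, C, patch = 200, 100, 256, 10
cam_bev_t, lidar_bev_t = torch.randn(H, W, C), torch.randn(H, W, C)
A_c2l_teacher = cross_modal_attention(bev_to_patches(cam_bev_t, patch),
                                      bev_to_patches(lidar_bev_t, patch))
A_l2c_teacher = cross_modal_attention(bev_to_patches(lidar_bev_t, patch),
                                      bev_to_patches(cam_bev_t, patch))
# The student applies the same helpers to its two BEV subspaces (Eqs. 5-6), and the
# resulting maps are aligned with the teacher's via KL divergence, e.g.
# F.kl_div(A_c2l_student.log(), A_c2l_teacher, reduction="batchmean")   (cf. Eq. 7)
```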
For the student model, we adopt the same operation as the teacher model to generate \\(\\mathbf{F}\\mathbf{p}_{C_{xub1}}^{S}\\in\\mathbb{R}^{N\\times(P^{2}C)}\\) and \\(\\mathbf{F}\\mathbf{p}_{C_{xub2}}^{S}\\in\\mathbb{R}^{N\\times(P^{2}C)}\\) from \\(\\mathbf{F}_{C_{xub1}}^{S}\\) and \\(\\mathbf{F}_{C_{xub2}}^{S}\\), respectively, and then compute the cross-modal attention of the student \\(\\mathbf{A}_{c2l}^{S}\\), \\(\\mathbf{A}_{l2c}^{S}\\) as follows: \\[A_{c2l}^{S}=\\text{softmax}\\left(\\frac{\\mathbf{F}\\mathbf{p}_{C_{xub1}}^{S} \\operatorname{Transpose}\\left(\\mathbf{F}\\mathbf{p}_{C_{xub2}}^{S}\\right)}{ \\sqrt{D_{k}}}\\right), \\tag{5}\\] \\[A_{l2c}^{S}=\\text{softmax}\\left(\\frac{\\mathbf{F}\\mathbf{p}_{C_{xub2}}^{S} \\operatorname{Transpose}\\left(\\mathbf{F}\\mathbf{p}_{C_{xub1}}^{S}\\right)}{ \\sqrt{D_{k}}}\\right). \\tag{6}\\] To this end, we propose the cross-modal relation distillation and employ a KL-divergence loss to align the cross-modal attention \\(\\mathbf{A}_{c2l}^{S}\\) and \\(\\mathbf{A}_{l2c}^{S}\\) of the student with \\(\\mathbf{A}_{c2l}^{T}\\) and \\(\\mathbf{A}_{l2c}^{T}\\) of the teacher model: \\[\\mathcal{L}_{relation}=D_{KL}(A_{c2l}^{T}||A_{c2l}^{S})+D_{KL}(A_{l2c}^{T}||A _{l2c}^{S}). \\tag{7}\\] #### 3.2.2 Dual-level Feature Distillation. To facilitate the student model to absorb the rich semantic/geometric knowledge from the teacher model, we take advantage of the fused BEV features for the feature-level distillation. Specifically, we leverage the low-level fused BEV feature of the teacher \\(\\mathbf{F}_{fused}^{T}\\) as the supervisory signal for learning the counterpart of the student \\(\\mathbf{F}_{fused}^{S}\\) via an MSE loss, _i.e._, \\[\\mathcal{L}_{low}=\\text{MSE}(\\mathbf{F}_{fused}^{T},\\mathbf{F}_{fused}^{S}). \\tag{8}\\] In addition, we further propose the high-level feature distillation \\(\\mathcal{L}_{high}\\) to align \\(\\mathbf{F}_{high}^{T}\\) and \\(\\mathbf{F}_{high}^{S}\\), which are generated by the Map Encoder. \\(\\mathcal{L}_{high}\\) is defined as: \\[\\mathcal{L}_{high}=\\text{MSE}(\\mathbf{F}_{high}^{T},\\mathbf{F}_{high}^{S}). \\tag{9}\\] Formally, the dual-level feature distillation loss \\(\\mathcal{L}_{features}\\) is the sum of low-level distillation loss \\(\\mathcal{L}_{low}\\) and high-level distillation loss \\(\\mathcal{L}_{high}\\), _i.e._, \\[\\mathcal{L}_{feature}=\\mathcal{L}_{low}+\\mathcal{L}_{high}. \\tag{10}\\] We use \\(\\mathcal{L}_{feature}\\) as one of the distillation objectives to enable the student model to benefit from the teacher model implicitly during training. #### 3.2.2 Map Head Distillation. After the Map Encoder, the high-level BEV feature in the student model is fed into the HD Map Head to produce the prediction in the same way as the teacher model. To make the final prediction of the student close to that of the teacher, we further propose the map head distillation. Specifically, we use the predictions generated by the teacher model as pseudo labels to supervise the student model via the \\(\\mathcal{L}_{head}\\) loss. To achieve the goal, we need to construct the correspondence between the predictions of the student and the teacher. Suppose the classification and point predictions from the teacher model are \\(\\mathbf{F}_{cls}^{T}\\) and \\(\\mathbf{F}_{point}^{T}\\) respectively, and those from the student can be represented as \\(\\mathbf{F}_{cls}^{S}\\) and \\(\\mathbf{F}_{point}^{S}\\) respectively. 
The \\(\\mathcal{L}_{head}\\) loss consists of two parts, _i.e._, the classification loss \\(\\mathcal{L}_{cls}\\) for map elements classification and the point2point loss \\(\\mathcal{L}_{point}\\) for point position regression: \\[\\begin{split}\\mathcal{L}_{head}&=\\mathcal{L}_{cls }+\\mathcal{L}_{point}\\\\ &=\\mathcal{L}_{Focal}(\\mathbf{F}_{cls}^{T},\\mathbf{F}_{cls}^{S} )+\\mathcal{L}_{p2p}(\\mathbf{F}_{point}^{T},\\mathbf{F}_{point}^{S}),\\end{split} \\tag{11}\\] where \\(\\mathcal{L}_{Focal}\\) denotes the Focal loss [32] and \\(\\mathcal{L}_{p2p}\\) denotes the Manhattan distance [30] between \\(\\mathbf{F}_{point}^{T}\\) and \\(\\mathbf{F}_{point}^{S}\\). ### Overall Training To facilitate knowledge transfer from the multi-modal fusion-based teacher model to the camera-based student model, we integrate the map loss \\(\\mathcal{L}_{map}\\) with the above distillation losses, including the cross-modal relation distillation loss (\\(\\mathcal{L}_{relation}\\)), the dual-level feature distillation loss (\\(\\mathcal{L}_{feature}\\)), and the map head distillation loss (\\(\\mathcal{L}_{head}\\)). The overall training objective can be formulated as: \\[\\mathcal{L}=\\mathcal{L}_{map}+\\lambda_{1}\\mathcal{L}_{relation}+\\lambda_{2} \\mathcal{L}_{feature}+\\lambda_{3}\\mathcal{L}_{head}, \\tag{12}\\] where \\(\\lambda_{1}\\), \\(\\lambda_{2}\\) and \\(\\lambda_{3}\\) are hyper-parameters for balancing these terms. The map loss \\(\\mathcal{L}_{map}\\) is calculated following [23], which is composed of three parts, _i.e._, classification loss, point2point loss, and edge direction loss. ## 4 Experiments ### Experimental Settings **Datasets.** We evaluate our method on the widely-used challenging nuScenes [2] dataset following the standard setting of previous methods [19, 23, 25]. The nuScenes dataset contains 1,000 sequences of recordings collected by autonomous driving cars. Each sample is annotated at 2Hz and contains 6 camera images covering 360\\({}^{\\circ}\\) horizontal FOV of the ego-vehicle. Following [19, 23, 25], three kinds of map elements are chosen for fair evaluation - pedestrian crossing, lane divider, and road boundary. **Evaluation Metrics.** We adopt the evaluation metrics used in previous works [19, 23, 25]. Specifically, average precision (AP) is used to evaluate the map construction quality. Chamfer distance \\(D_{Chamfer}\\) is used to determine whether the prediction and GT are matched or not. We calculate the \\(AP_{\\tau}\\) under several \\(D_{Chamfer}\\) thresholds (\\(\\tau\\in T=\\{0.5,1.0,1.5\\}\\)), and then average across all thresholds as the final mean AP (_mAP_) metric: \\[mAP\\ =\\ \\frac{1}{|T|}\\sum\\limits_{\\tau\\in T}AP_{\\tau}. \\tag{13}\\] The perception ranges are [-15.0m, 15.0m]/[-30.0m, 30.0m] for X/Y-axes. **Model and Training Details.** MapDistill is trained with 8 NVIDIA RTX A6000 GPUs. For the teacher model, we first establish a baseline method of fusion-based HD map construction based on MapTR [23]. The fused MapTR model uses ResNet50 [12] and SECOND [44] as the backbone and employ GKT [4] as the default 2D-to-BEV module. For the student model, we adopt MapTR's camera branch as the base, and introduce the dual BEV transform module to facilitate cross-modal knowledge distillation. Note that, the student model adopts ResNet18 [12] as the backbone. Moreover, we adopt the AdamW optimizer [29] for all our experiments. 
The setting of hyper-parameters \\(\\lambda_{1}\\), \\(\\lambda_{2}\\), and \\(\\lambda_{3}\\) is discussed extensively in the ablation studies. We set the mini-batch size to 64, and use a step-decayed learning rate with an initial value of \\(4e^{-3}\\). ### Comparison with the State-of-the-Arts We compare our method with several state-of-the-art baselines across two categories, _i.e._, camera-based HD map construction methods, and customized KD methods which were originally designed for BEV-based 3D object detection. For camera-based HD map construction methods, we directly report the results from the corresponding papers. For KD-based methods, we implement three methods for BEV-based 3D object detection and modify them for the HD map construction task, including BEV-LGKD [18], BEVDistill [5], and UnDistill [53]. For fairness, we use the same teacher and student models as our method. Tab. 1 shows that: (1) KD methods originally designed for BEV-based 3D object detection fail to achieve satisfying results due to task discrepancies between 3D object detection and HD map construction. (2) Intra-modal distillation between camera-only teacher and student models cannot learn accurate3D information due to the limited capacity of the teacher model for inferring 3D geometry, and the gain is only 0.6 mAP by BEV-LGKD and 2.1 mAP by our MapDistill. (3) Cross-modal distillation between the LiDAR teacher and the camera student enables learning useful 3D information from the teacher but suffers from the large cross-modal gap, achieving the improved gain of 1.2 mAP by BEVDistill and 4.2 mAP by our MapDistill. (4) Our MapDistill with the fusion-based teacher enables effective knowledge distillation within/between modalities while enjoying cost-effective camera-only deployment, achieving the most significant gain of 7.7 mAP and surpassing UniDistill by 5.4 mAP. ### Ablation Study **Effect of \\(\\mathcal{L}_{relation}\\), \\(\\mathcal{L}_{feature}\\), and \\(\\mathcal{L}_{head}\\).** We conduct an ablation study on the components in MapDistill and summarize our results in Tab. 2. We evaluate model variants using different combinations of the proposed distillation losses, including \\(\\mathcal{L}_{relation}\\), \\(\\mathcal{L}_{feature}\\), and \\(\\mathcal{L}_{head}\\). We first investigate the effect of each distillation loss function. In model variants (a), (b), and (c), we use different distillation losses to distill the student model separately. The experimental results show that all model variants get improved performance compared to the baseline model, verifying the effectiveness of the proposed distillation losses. Moreover, the results of model variants (d), (e), and (f) prove that different distillation losses are complementary to each other. 
Finally, using all the proposed distillation losses together, we arrive at the full MapDistill method, which achieves the overall best performance of 53.6 \\begin{table} \\begin{tabular}{l|c c c|c c|c c c} \\hline \\multirow{2}{*}{Method} & Student & Teacher & \\multirow{2}{*}{Backbone} & Epochs & \\(\\text{AP}_{ped.}\\) & \\(\\text{AP}_{div.}\\) & \\(\\text{AP}_{bus.}\\) & mAP \\\\ & Modality & Modality & & & & & & \\\\ \\hline HDMapNet [19] & C & \\(-\\) & Efl\\(-\\)B0 & 30 & 14.4 & 21.7 & 33.0 & 23.0 \\\\ VectorMapNet [25] & C & \\(-\\) & R50 & 110 & 36.1 & 47.3 & 39.3 & 40.9 \\\\ MapVR [47] & C & \\(-\\) & R50 & 24 & 47.7 & 54.4 & 51.4 & 51.2 \\\\ PivotNet [7] & C & \\(-\\) & R50 & 30 & 58.5 & 53.8 & 59.6 & 57.4 \\\\ BeMapNet [34] & C & \\(-\\) & R50 & 30 & 62.3 & 57.7 & 59.4 & 59.8 \\\\ MapTR [23] & C & \\(-\\) & R50 & 24 & 45.3 & 51.5 & 53.1 & 50.3 \\\\ MapTR [23] & L & \\(-\\) & Sec & 24 & 48.5 & 53.7 & 64.7 & 55.6 \\\\ MapTR [23] & C \\& L & \\(-\\) & R50 \\& Sec & 24 & 55.9 & 62.3 & 69.3 & 62.5 \\\\ MapTR [23] & C & \\(-\\) & R18 & 110 & 39.6 & 49.9 & 48.2 & 45.9 \\\\ \\hline BEV-LGKD\\(\\dagger\\)[18] & C & C & R18 & 110 & 42.2 & 47.6 & 49.7 & 46.5\\({}_{+0.6}\\) \\\\ BEVDistill\\(\\dagger\\)[5] & C & L & R18 & 110 & 42.4 & 48.5 & 50.2 & 47.1\\({}_{+1.2}\\) \\\\ UniDistill\\(\\dagger\\)[53] & C & C\\&L & R18 & 110 & 43.9 & 48.6 & 52.1 & 48.2\\({}_{+2.3}\\) \\\\ MapDistill & C & C & R18 & 110 & 43.3 & 48.8 & 51.9 & 48.0\\({}_{+2.1}\\) \\\\ MapDistill & C & L & R18 & 110 & 45.9 & 50.7 & 53.6 & 50.1\\({}_{+4.2}\\) \\\\ **MapDistill** & **C** & **C** \\& **L** & **R18** & **110** & **49.2** & **54.5** & **57.1** & **53.6\\({}_{+7.7}\\)** \\\\ \\hline \\end{tabular} \\end{table} Table 1: Performance analysis of MapDistill on nuScenes val set. “L” and “C” represent the LiDAR and camera, respectively. “Effi-B0”, “R18”, “R50”, and “Sec” are short for EfficientNet-B0 [37], ResNet18 [12], ResNet50 [12], and SECOND [44], respectively. We adopt the MapTR method to build the teacher model and the student model. Note that the directly-trained MapTR models in the red region are selected as teachers. Our proposed MapDistill outperforms all existing approaches in both single-class APs and the overall mAP by a significant margin. \\(\\dagger\\) denotes our re-implementation following the setting in the paper. Best viewed in color. mAP, significantly surpassing the baseline's performance of 45.9 mAP. The ablation study results show that each of the distillation losses in MapDistll provides a meaningful contribution to improving the student model performance. Notably, these losses are only calculated during training, which brings no computational overhead during inference. **Ablations on the cross-modal relation distillation.** We investigate the choice of relation distillation loss in our method. The ablation variants include training without relation distillation loss (MapDistill (w/o \\(\\mathcal{L}_{relation}\\))), uni-modal relation distillation (Uni-modal Rel.), cross-modal relation distillation (Cross-modal Rel.), and the hybrid relation distillation (hybrid Cross-modal and Uni-modal). Note that uni-modal relation distillation means replacing the cross-modal attention matrices \\(\\mathbf{A}_{c2l}^{S/T}\\) and \\(\\mathbf{A}_{l2c}^{S/T}\\) in Eq. 7 with the uni-modal ones \\(\\mathbf{A}_{c2c}^{S/T}\\) and \\(\\mathbf{A}_{l2l}^{S/T}\\). We explore which relation (cross-modal or uni-modal) is more critical. As shown in Tab. 
(a)a, employing cross-modal relation distillation achieves more improvements. Furthermore, we find that using only cross-modal relation for distillation performs better than using both cross-modal and uni-modal relations. These observations validate that cross-modal interactions encode useful knowledge and can be transferred to the student model for improving HD Map construction. **Ablations on the dual-level feature distillation.** To explore the impact of BEV feature distillation at different levels, we train the model by using low-level or high-level feature distillation solely and present the results in Tab. (b)b. We design the following model variants: (1) MapDistill (w/o \\(\\mathcal{L}_{feature}\\)): we remove the feature distillation loss from MapDistill. (2) Low-level (only) and High-level (only) mean that the MapDistill model is trained only using low-level BEV feature distillation or high-level BEV feature distillation, respectively. (3) Dual-level (ours): we use dual-level feature distillation (the default setting in our MapDistill) to train the model. The results of Low-level (only) and High-level (only) are inferior to the Dual-level (ours), verifying the effectiveness of distilling both low-level and high-level BEV features simultaneously. **Ablations on the map head distillation.** In this ablation, we conduct detailed experiments on the loss selection for both map elements classification and point position regression. We design the following model variants: (1) MapDistill (w/o \\(\\mathcal{L}_{head}\\)): we train the model without the map head distillation loss; (2) \\begin{table} \\begin{tabular}{c c c c|c c c c} \\hline Setting & \\(\\mathcal{L}_{relation}\\) & \\(\\mathcal{L}_{feature}\\) & \\(\\mathcal{L}_{head}\\) & AP\\({}_{ped.}\\) & AP\\({}_{div.}\\) & AP\\({}_{box.}\\) & mAP \\\\ \\hline Baseline & ✗ & ✗ & ✗ & 39.6 & 49.9 & 48.2 & 45.9 \\\\ \\hline a & ✓ & ✗ & ✗ & 44.1 & 49.7 & 52.4 & 48.8 \\\\ b & ✗ & ✓ & ✗ & 44.3 & 49.4 & 51.5 & 48.4 \\\\ c & ✗ & ✗ & ✓ & 44.2 & 50.1 & 52.7 & 49.0 \\\\ \\hline d & ✓ & ✓ & ✗ & 45.4 & 51.4 & 54.1 & 50.3 \\\\ e & ✗ & ✓ & ✓ & 46.3 & 51.8 & 54.3 & 50.8 \\\\ f & ✓ & ✗ & ✓ & 46.5 & 52.3 & 54.5 & 51.1 \\\\ \\hline g & ✓ & ✓ & ✓ & **49.2** & **54.5** & **57.1** & **53.6** \\\\ \\hline \\end{tabular} \\end{table} Table 2: Ablation study on the components in MapDistill. \\(\\mathcal{L}_{head}\\) (w/o \\(\\mathcal{L}_{point}\\)): we remove the point2point loss from the map head distillation loss; (3) \\(\\mathcal{L}_{head}\\) (w/o \\(\\mathcal{L}_{cls}\\)): we remove the classification loss from the map head distillation loss; (4) Using both \\(\\mathcal{L}_{cls}\\) and \\(\\mathcal{L}_{point}\\) (the default setting in our MapDistill). As shown in Tab. 2(c), the results of \\(\\mathcal{L}_{head}\\) (w/o \\(\\mathcal{L}_{cls}\\)) and \\(\\mathcal{L}_{head}\\) (w/o \\(\\mathcal{L}_{point}\\)) are inferior to the default setting, verifying the effectiveness of transferring knowledge of both map elements categories and point positions from the teacher to the student. **Ablation study of the Dual BEV Transform Module.** We further conduct ablation studies on the design choice of the Dual BEV Transform Module, _i.e._, using different 2D-to-BEV methods to obtain subspace BEV features, to verify which combination performs most effectively. We choose three state-of-the-art 2D-to-BEV methods in this study, including LSS [33], Deformable Attention [21] and GKT [4]. 
The experiment consists of two groups, (1) both subspaces use the same 2D-to-BEV method, and (2) the two subspaces use different 2D-to-BEV methods. As shown in Tab. 4, the experimental results reveal some interesting findings: (1) Using the same 2D-to-BEV method only slightly outperforms the single-branch baseline in (a), implying that using the homogeneous BEV feature space makes it difficult to imitate the cross-modal interactions in the teacher model. (2) Using different 2D-to-BEV methods consistently outperforms the baseline and all model variants in (b). It is reasonable since the cross-modal relation of the teacher is calculated based on BEV features from different modal \\begin{table} \\begin{tabular}{c c c|c c c c} \\hline \\multicolumn{3}{c|}{subspace1 subspace2} & \\multicolumn{1}{c}{AP\\({}_{ped.}\\)} & \\multicolumn{1}{c}{AP\\({}_{div.}\\)} & \\multicolumn{1}{c}{AP\\({}_{bou.}\\)} & \\multicolumn{1}{c}{mAP} \\\\ \\hline (a) & GKT & \\(\\mathbf{\\mathcal{X}}\\) & 44.9 & 49.6 & 52.8 & 49.1 \\\\ \\hline \\multirow{3}{*}{(b)} & LSS & LSS & 45.9 & 51.2 & 54.4 & 50.5 \\\\ & GKT & GKT & 46.7 & 51.6 & 54.5 & 50.9 \\\\ & Deform. & Deform. & 46.8 & 51.6 & 54.6 & 51.0 \\\\ \\hline \\multirow{3}{*}{(c)} & GKT & Deform. & 47.1 & 53.2 & 56.2 & 52.1 \\\\ & Deform. & GKT & 47.3 & 53.4 & 56.1 & 52.3 \\\\ \\cline{1-1} & LSS & Deform. & 48.9 & 53.9 & 56.2 & 53.0 \\\\ \\cline{1-1} & Deform. & LSS & 48.7 & 53.8 & 55.9 & 52.8 \\\\ \\cline{1-1} & LSS & GKT & 49.1 & 54.2 & 56.7 & 53.3 \\\\ \\cline{1-1} & **GKT** & **LSS** & **49.2** & **54.5** & **57.1** & **53.6** \\\\ \\hline \\end{tabular} \\end{table} Table 4: Ablation study of Dual BEV Transform Module. \\begin{table} \\begin{tabular}{c c c c c} \\hline Method & AP\\({}_{ped.}\\) & AP\\({}_{div.}\\) & AP\\({}_{bou.}\\) & mAP \\\\ \\hline MapDistill (w/o \\(\\mathcal{L}_{relation}\\)) & 46.3 & 51.8 & 54.3 & 50.8 \\\\ +Uni-modal Relation & 48.0 & 52.9 & 55.1 & 52.0 \\\\ +Hybrid Relation & 48.3 & 53.4 & 55.5 & 52.4 \\\\ \\hline \\(\\sharp\\)**Cross-modal Relation** & **49.2** & **54.5** & **57.1** & **53.6** \\\\ \\hline \\end{tabular} \\end{table} Table 3: Ablation experiments to validate our distillation losses. ities, using heterogeneous BEV feature spaces makes it possible to learn distinct BEV features and thus could imitate the cross-modal interactions. Specifically, the combination of LSS and GKT achieves the best results. These observations validate the motivation for devising dual BEV spaces using different 2D-to-BEV methods. **Ablation study of various HD map construction methods.** To explore the compatibility of MapDistill with different HD map construction methods, we comprehensively investigate two popular methods and show the results in Tab. 4(a). Specifically, Teacher model-1 and Teacher model-2 mean the MapTR variant model whose camera branch uses Swin-Tiny backbone to extract image features and the most advanced MapTRv2 (improving MapTR with both network design and training strategy techniques), respectively. Note that, both student models employ Resnet18 as the backbone to extract the multi-view features. The experimental results demonstrate that \"Great teachers spawn exceptional students\". As the proficient teacher model has acquired valuable knowledge for HD map construction, the student model can effectively leverage this knowledge through KD techniques (_e.g._, the proposed MapDistill), enhancing its ability to perform the same task. 
Moreover, the results of consistent performance improvements show that our method is effective with different teacher models. **Ablation study of various student models.** To explore the generalization capability of MapDistill with different student models, we comprehensively investigate two popular backbone networks as the backbone of the student model and show the results in Tab. 4(b). Specifically, Student model-I and Student model-II mean that the student model employs Resnet50 and Swin-Tiny as the backbone to extract the multi-view features, respectively. And here we use MapTR-Teacher, which is the R50&Sec fusion model in Tab. 1, as the teacher model. Experimental results show that our method consistently achieves excellent results, proving the effectiveness and generalization ability of our method. **Sensitivity of hyper-parameters.** We conduct experiments to investigate the impact of different hyper-parameter settings and report results on nuScenes val set, as shown in Fig. 3. When one hyper-parameter is varied within a feasible range, the remaining hyper-parameters retain their default values: \\(\\lambda_{1}=0.3\\), \\(\\lambda_{2}=0.6\\), and \\(\\lambda_{3}=0.9\\). The results indicate that the performance remains relatively stable across a wide range (0.1 to 0.9) of values for \\(\\lambda_{1}\\), \\(\\lambda_{2}\\), and \\(\\lambda_{3}\\), suggesting its robustness to different hyper-parameters. Note that, \"Baseline\" indicates the directly trained camera-based student model. \\begin{table} \\end{table} Table 5: Ablation experiments to verify the generalization ability of MapDistill. ### Qualitative Results We present visualizations of vectorized HD map predictions to demonstrate the efficacy of MapDistill. As depicted in Fig. 4, we compare predictions from various models, namely, the camera-LiDAR-based teacher model, the camera-based student model without MapDistill (referred to as \"Baseline\"), and the camera-based student model with MapDistill. The mAP values of these models are 62.5, 45.9, and 53.6 respectively, as shown in Tab. 1. Note that a common threshold, which is set to 0.4, is employed to filter low-confidence map elements for visualizing the prediction results of all models. We observe significant inaccuracies in the predictions made by the Baseline model. However, employing the MapDistill method substantially corrects these errors and enhances prediction accuracy. ## 5 Conclusion In this paper, we present a novel method called MapDistill for boosting efficient camera-based HD map construction via camera-LiDAR fusion model distillation, yielding a cost-effective yet accurate solution. MapDistill is built upon a camera-LiDAR fusion teacher model, a lightweight camera-only student model, and a specifically designed Dual BEV Transform module. In addition, we present a comprehensive distillation scheme encompassing cross-modal relation distillation, dual-level feature distillation, and map head distillation, which facilitates knowledge transfer within and between different modalities and helps the student model achieve better performance. Extensive experiments and analysis validate the design choice and the effectiveness of our MapDistill. **Limitations and Societal Impact.** With the KD methodology, the student model may inherit the weakness of the teacher model. More specifically, if the teacher model is biased, or not robust to adverse weather conditions and/or long-tail scenarios, the student model is likely to behave similarly. 
MapDistill enjoys cost-effective camera-only deployment, showing great potential in practical applications, such as autonomous driving. **Acknowledgement.** This work was supported by the National Natural Science Foundation of China No.62106259 and the Beijing Natural Science Foundation under Grant L243008. Figure 4: Qualitative results on nuScenes val set. (a) Six camera inputs. (b) Ground-truth vectorized HD map. (c) Result of the camera-LiDAR-based teacher model. (d) Result of the camera-based student model without MapDistill (Baseline). (e) Result of the camera-based student model with MapDistill. MapDistill helps correct substantial errors in the Baseline’s predictions and improves its accuracy. ## References * [1] Borse, S., Klingner, M., Kumar, V.R., Cai, H., Almuzairee, A., Yogamani, S., Porikli, F.: X-align: Cross-modal cross-view alignment for bird's-eye-view segmentation. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. pp. 3287-3297 (2023) * [2] Caesar, H., Bankiti, V., Lang, A.H., Vora, S., Liong, V.E., Xu, Q., Krishnan, A., Pan, Y., Baldan, G., Beijbom, O.: nuscenes: A multimodal dataset for autonomous driving. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 11618-11628 (2020) * [3] Chen, G., Choi, W., Yu, X., Han, T., Chandraker, M.: Learning efficient object detection models with knowledge distillation. Advances in neural information processing systems (2017) * [4] Chen, S., Cheng, T., Wang, X., Meng, W., Zhang, Q., Liu, W.: Efficient and robust 2d-to-bev representation learning via geometry-guided kernel transformer. arXiv preprint arXiv:2206.04584 (2022) * [5] Chen, Z., Li, Z., Zhang, S., Fang, L., Jiang, Q., Zhao, F.: Bevdistill: Cross-modal bev distillation for multi-view 3d object detection. arXiv preprint arXiv:2211.09386 (2022) * [6] Cho, H., Choi, J., Baek, G., Hwang, W.: itkd: Interchange transfer-based knowledge distillation for 3d object detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 13540-13549 (2023) * [7] Ding, W., Qiao, L., Qiu, X., Zhang, C.: Pivotnet: Vectorized pivot learning for end-to-end hd map construction. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 3672-3682 (2023) * [8] Gou, J., Yu, B., Maybank, S.J., Tao, D.: Knowledge distillation: A survey. International Journal of Computer Vision pp. 1789-1819 (2021) * [9] Hao, X., Wei, M., Yang, Y., Zhao, H., Zhang, H., Zhou, Y., Wang, Q., Li, W., Kong, L., Zhang, J.: Is your hd map constructor reliable under sensor corruptions? arXiv preprint arXiv:2406.12214 (2024) * [10] Hao, X., Yang, Y., Zhang, H., Wei, M., Zhou, Y., Zhao, H., Zhang, J.: Team samsung-ral: Technical report for 2024 robodrive challenge-robust map segmentation track. arXiv preprint arXiv:2405.10567 (2024) * [11] Hao, X., Zhang, H., Yang, Y., Zhou, Y., Jung, S., Park, S.I., Yoo, B.: Mbfusion: A new multi-modal bev feature fusion method for hd map construction. In: IEEE International Conference on Robotics and Automation (2024) * [12] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: IEEE Conference on Computer Vision and Pattern Recognition. pp. 770-778 (2016) * [13] Hendy, N., Sloan, C., Tian, F., Duan, P., Charchut, N., Xie, Y., Wang, C., Philbin, J.: FISHING net: Future inference of semantic heatmaps in grids. arXiv preprint arXiv:2006.09917 (2020) * [14] Hinton, G., Vinyals, O., Dean, J.: Distilling the knowledge in a neural network. 
arXiv preprint arXiv:1503.02531 (2015) * [15] Huang, P., Liu, L., Zhang, R., Zhang, S., Xu, X., Wang, B., Liu, G.: Tig-bev: Multi-view bev 3d object detection via target inner-geometry learning. arXiv preprint arXiv:2212.13979 (2022) * [16] Kong, L., Xie, S., Hu, H., Niu, Y., Ooi, W.T., Cottereau, B.R., Ng, L.X., Ma, Y., Zhang, W., Pan, L., et al.: The robodrive challenge: Drive anytime anywhere in any condition. arXiv preprint arXiv:2405.08816 (2024)* [17] Li, D., Jin, Y., Yu, H., Shi, J., Hao, X., Hao, P., Liu, H., Sun, F., Fang, B., et al.: What foundation models can bring for robot learning in manipulation: A survey. arXiv preprint arXiv:2404.18201 (2024) * [18] Li, J., Lu, M., Liu, J., Guo, Y., Du, L., Zhang, S.: Bev-lgkd: A unified lidar-guided knowledge distillation framework for bev 3d object detection. arXiv preprint arXiv:2212.00623 (2022) * [19] Li, Q., Wang, Y., Wang, Y., Zhao, H.: Hdmapnet: An online hd map construction and evaluation framework. In: IEEE International Conference on Robotics and Automation. pp. 4628-4634 (2022) * [20] Li, Y., Chen, Y., Qi, X., Li, Z., Sun, J., Jia, J.: Unifying voxel-based representation with transformer for 3d object detection. Advances in Neural Information Processing Systems pp. 18442-18455 (2022) * [21] Li, Z., Wang, W., Li, H., Xie, E., Sima, C., Lu, T., Qiao, Y., Dai, J.: Bevformer: Learning bird's-eye-view representation from multi-camera images via spatiotemporal transformers. In: European Conference on Computer Vision. pp. 1-18 (2022) * [22] Liang, T., Xie, H., Yu, K., Xia, Z., Lin, Z., Wang, Y., Tang, T., Wang, B., Tang, Z.: Bevfusion: A simple and robust lidar-camera fusion framework. Advances in Neural Information Processing Systems pp. 10421-10434 (2022) * [23] Liao, B., Chen, S., Wang, X., Cheng, T., Zhang, Q., Liu, W., Huang, C.: Maptr: Structured modeling and learning for online vectorized hd map construction. In: International Conference on Learning Representations (2023) * [24] Liao, B., Chen, S., Zhang, Y., Jiang, B., Zhang, Q., Liu, W., Huang, C., Wang, X.: Maptrv2: An end-to-end framework for online vectorized HD map construction. arXiv preprint arXiv:2308.05736 (2023) * [25] Liu, Y., Yuan, T., Wang, Y., Wang, Y., Zhao, H.: Vectormapnet: End-to-end vectorized hd map learning. In: International Conference on Machine Learning. pp. 22352-22369 (2023) * [26] Liu, Z., Hu, H., Lin, Y., Yao, Z., Xie, Z., Wei, Y., Ning, J., Cao, Y., Zhang, Z., Dong, L., Wei, F., Guo, B.: Swin transformer v2: Scaling up capacity and resolution. In: International Conference on Computer Vision and Pattern Recognition. pp. 11999-12009 (2022) * [27] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 9992-10002 (2021) * [28] Liu, Z., Tang, H., Amini, A., Yang, X., Mao, H., Rus, D.L., Han, S.: Bevfusion: Multi-task multi-sensor fusion with unified bird's-eye view representation. In: IEEE International Conference on Robotics and Automation. pp. 2774-2781 (2023) * [29] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: International Conference on Learning Representations (2019) * [30] Malkauthekar, M.: Analysis of euclidean distance and manhattan distance measure in face recognition. In: Third International Conference on Computational Intelligence and Information Technology (CIIT 2013). pp. 503-507. 
IET (2013) * [31] Mirzadeh, S.I., Farajtabar, M., Li, A., Levine, N., Matsukawa, A., Ghasemzadeh, H.: Improved knowledge distillation via teacher assistant. In: Proceedings of the AAAI conference on artificial intelligence. pp. 5191-5198 (2020) * [32] Mukhoti, J., Kulharia, V., Sanyal, A., Golodetz, S., Torr, P., Dokania, P.: Calibrating deep neural networks using focal loss. Advances in Neural Information Processing Systems pp. 15288-15299 (2020)* [33] Philion, J., Fidler, S.: Lift, splat, shoot: Encoding images from arbitrary camera rigs by implicitly unprojecting to 3d. In: European Conference on Computer Vision. pp. 194-210 (2020) * [34] Qiao, L., Ding, W., Qiu, X., Zhang, C.: End-to-end vectorized hd-map construction with piecewise bezier curve. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 13218-13228 (2023) * [35] Salazar-Gomez, G., Gonzalez, D.S., Diaz-Zapata, M., Paigwar, A., Liu, W., Erkent, O., Laugier, C.: Transfusegrid: Transformer-based lidar-rgb fusion for semantic grid prediction. In: International Conference on Control, Automation, Robotics and Vision. pp. 268-273 (2022) * [36] Shang, C., Li, H., Meng, F., Wu, Q., Qiu, H., Wang, L.: Incrementer: Transformer for class-incremental semantic segmentation with knowledge distillation focusing on old class. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 7214-7224 (2023) * [37] Tan, M., Le, Q.: Efficientnet: Rethinking model scaling for convolutional neural networks. In: International conference on machine learning. pp. 6105-6114 (2019) * [38] Tang, K., Cao, X., Cao, Z., Zhou, T., Li, E., Liu, A., Zou, S., Liu, C., Mei, S., Sizikova, E., et al.: Thma: Tencent hd map ai system for creating hd map annotations. In: Proceedings of the AAAI Conference on Artificial Intelligence. pp. 15585-15593 (2023) * [39] Wang, S., Li, W., Liu, W., Liu, X., Zhu, J.: Lidar2map: In defense of lidar-based semantic map construction using online camera distillation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 5186-5195 (2023) * [40] Wang, W., Dai, J., Chen, Z., Huang, Z., Li, Z., Zhu, X., Hu, X., Lu, T., Lu, L., Li, H., et al.: Internimage: Exploring large-scale vision foundation models with deformable convolutions. arXiv preprint arXiv:2211.05778 (2022) * [41] Wang, Z., Li, D., Luo, C., Xie, C., Yang, X.: Distillbev: Boosting multi-camera 3d object detection with cross-modal knowledge distillation. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 8637-8646 (2023) * [42] Xiong, X., Liu, Y., Yuan, T., Wang, Y., Wang, Y., Zhao, H.: Neural map prior for autonomous driving. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 17535-17544 (2023) * [43] Yan, X., Gao, J., Zheng, C., Zheng, C., Zhang, R., Cui, S., Li, Z.: 2dpass: 2d priors assisted semantic segmentation on lidar point clouds. In: European Conference on Computer Vision. pp. 677-695. Springer (2022) * [44] Yan, Y., Mao, Y., Li, B.: SECOND: sparsely embedded convolutional detection. Sensors **18**(10), 3337 (2018) * [45] Yang, C., Chen, Y., Tian, H., Tao, C., Zhu, X., Zhang, Z., Huang, G., Li, H., Qiao, Y., Lu, L., et al.: Bevformer v2: Adapting modern image backbones to bird's-eye-view recognition via perspective supervision. 
arXiv preprint arXiv:2211.10439 (2022) * [46] Yang, Z., Li, R., Ling, E., Zhang, C., Wang, Y., Huang, D., Ma, K.T., Hur, M., Lin, G.: Label-guided knowledge distillation for continual semantic segmentation on 2d images and 3d point clouds. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 18601-18612 (2023) * [47] Zhang, G., Lin, J., Wu, S., Song, Y., Luo, Z., Xue, Y., Lu, S., Wang, Z.: Online map vectorization for autonomous driving: A rasterization perspective. arXiv preprint arXiv:2306.10502 (2023)* [48] Zhang, H., Meng, Y., Zhao, Y., Qiao, Y., Yang, X., Coupland, S.E., Zheng, Y.: Dtfd-mil: Double-tier feature distillation multiple instance learning for histopathology whole slide image classification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 18802-18812 (2022) * [49] Zhang, Q., Cheng, X., Chen, Y., Rao, Z.: Quantifying the knowledge in a dnn to explain knowledge distillation for classification. IEEE Transactions on Pattern Analysis and Machine Intelligence **45**(4), 5099-5113 (2022) * [50] Zhang, Y., Zhu, Z., Zheng, W., Huang, J., Huang, G., Zhou, J., Lu, J.: Beverse: Unified perception and prediction in birds-eye-view for vision-centric autonomous driving. arXiv preprint arXiv:2205.09743 (2022) * [51] Zhao, H., Zhang, Q., Zhao, S., Chen, Z., Zhang, J., Tao, D.: Simdistill: Simulated multi-modal distillation for bev 3d object detection. In: Proceedings of the AAAI Conference on Artificial Intelligence. pp. 7460-7468 (2024) * [52] Zheng, Z., Ye, R., Hou, Q., Ren, D., Wang, P., Zuo, W., Cheng, M.M.: Localization distillation for object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence (2023) * [53] Zhou, S., Liu, W., Hu, C., Zhou, S., Ma, C.: Unidistill: A universal cross-modality knowledge distillation framework for 3d object detection in bird's-eye view. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 5116-5125 (2023)
Online high-definition (HD) map construction is an important and challenging task in autonomous driving. Recently, there has been a growing interest in cost-effective multi-view camera-based methods without relying on other sensors like LiDAR. However, these methods suffer from a lack of explicit depth information, necessitating the use of large models to achieve satisfactory performance. To address this, we employ the Knowledge Distillation (KD) idea for efficient HD map construction for the first time and introduce a novel KD-based approach called MapDistill to transfer knowledge from a high-performance camera-LiDAR fusion model to a lightweight camera-only model. Specifically, we adopt the teacher-student architecture, _i.e._, a camera-LiDAR fusion model as the teacher and a lightweight camera model as the student, and devise a dual BEV transform module to facilitate cross-modal knowledge distillation while maintaining cost-effective camera-only deployment. Additionally, we present a comprehensive distillation scheme encompassing cross-modal relation distillation, dual-level feature distillation, and map head distillation. This approach alleviates knowledge transfer challenges between modalities, enabling the student model to learn improved feature representations for HD map construction. Experimental results on the challenging nuScenes dataset demonstrate the effectiveness of MapDistill, surpassing existing competitors by over 7.7 mAP or 4.5\\(\\times\\) speedup. Keywords:HD Map Construction Knowledge Distillation Lightweight
Provide a brief summary of the text.
289
arxiv-format/2310_08705v1.md
# A Benchmarking Protocol for SAR Colorization: From Regression to Deep Learning Approaches Kangqing Shen Gemine Vivone Xiaoyuan Yang Simone Lolli Michael Schmitt School of Mathematical Sciences, Beihang University, Beijing, 102206, China Institute of Methodologies for Environmental Analysis, CNR-IMAA, Tito Scalo, 85050, Italy National Biodiversity Future Center, NBFC, Palermo, 90133, Italy Key Laboratory of Mathematics, Information and Behavior, Ministry of Education, Beihang University, Beijing, 102206, China CommSensLab, Department of Signal Theory and Communications, Polytechnic University of Catalonia, Barcelona, 08034, Spain University of the Bundesweht, Neubiberg, 85577, Germany ## 1 Introduction Thanks to the rapid advancements in remote sensing imaging methods, such as panchromatic, multispectral, hyperspectral, infrared, synthetic aperture radar (SAR), and night light imaging, researchers now have more rapid access to various sources of remote sensing data. However, the single different kinds of data only describe the observed scenes from a specific point of view, which may limit the applications; on the other hand, the multiple information sources provided by these images are redundant and complementary. Consequently, by integrating image information obtained from different sensors, fusion techniques can achieve more accurate and comprehensive remote sensing observations (Liu et al., 2022; Zhang et al., 2021; Li et al., 2022; Zhu et al., 2017; Vivone, 2023). Multi-source image fusion can be categorized into four types: _i_) homogeneous remote sensing data fusion; _ii_) heterogeneous remote sensing data fusion; _iii_) remote sensing site data fusion; and _iv_) remote sensing-non-observed data fusion. The goal of remote sensing research includes the fusion of homogeneous and heterogeneous data. Pansharpening is an example of homogeneous data fusion and has been studied extensively, being one of the first areas of research (Vivone et al., 2014, 2021b; Dadrass Javan et al., 2021; Vivone et al., 2021a). On the contrary, SAR and optical image fusion is the fusion of heterogeneous remote sensing data, which is a challenging task due to the unique characteristics of SAR imagery. Recent studies focused on this research topic (Kulkarni and Rege, 2020; Schmitt et al., 2017; Zhu et al.). SAR image is an active microwave-based imaging. Being the wavelength longer, images are not affected by clouds, haze, and other meteorological conditions that otherwise affect visible images (Lolli et al., 2017). The atmosphere is then transparent on SAR images except for heavy rain. Thanks to this feature, SAR images can be independently acquired day and night under almost all environmental conditions. SAR images mainly reveal the structural properties of the target scene, such as dielectric properties, surface roughness, and moisture content (Macelloni et al., 2000). Therefore, from SAR images it is possible to detect various objects based on their surface features containing rich spatial information. However, disadvantages of SAR images are not irrelevant; in fact, the lack of color information and severe speckle noise make image interpretation a challenging task even for well-trained remote sensing experts (Schmitt et al., 2018b). Unlike SAR images, passive optical satellite sensors record the electromagnetic spectrum reflected by the observed scene. Multispectral (MS) images are among the most important optical images due to their richness in spectral information. 
Therefore, by using the complementary characteristics of these two types of images, understanding and interpretation of SAR images would be much easier. SAR-MS image fusion can be a solution (Kong et al., 2021; Ye et al., 2022; Chibani, 2006; Zhang et al., 2022). However, it requires that the two source images be matched accurately and simultaneously acquired, i.e. general fusion methods cannot particularly help the interpretation of SAR images seen as an independent data source. To overcome the dependency on paired SAR-MS images, SAR colorization is a promising technique that can be used to learn practical colorization for SAR images. As shown in (Lee et al., 2021), the colorization of the SAR can be divided into two categories. The first category is based on radar polarimetry theory. In (Deng et al., 2008), SAR images are colorized according to the fundamental principle that pixels that exhibit similar scattering properties are assigned to a similar hue. This way enables the reconstruction of complete polarimetric information from non-full polarimetric SAR images, but its applicability is strictly constrained to some experts in the field due to its technical complexity. The second category considers the task of SAR colorization as an image-to-image problem and employs neural networks to generate colorized SAR images. To our knowledge, the literature on SAR colorization is limited. Inspired by (Deshpande et al., 2017), (Schmitt et al., 2018a) introduces the first deep learning-based SAR colorization methodology (DivColSAR), which utilizes a variational autoencoder (VAE) and a mixed density network (MDN) to generate multimodal colorization hypotheses. Since there are no ideal color SAR images available for ground truth during supervised training, they create a pseudo-SAR image using a Lab transform-based fusion algorithm. However, the proposed solution has several limitations. First, their article lacks quantitative evaluation. Furthermore, the visual results of the proposed method are not compared against those of any other technique. In the current literature, there is often confusion about SAR colorization and SAR-to-optical image translation. Thus, we would like to first highlight the differences between the two terms, then reviewing instances into the two above-mentioned classes. Indeed, SAR-to-optical image translation main goal is to transform the SAR image into the correspondent optical image (i.e., even losing some spatial features (e.g., speckling) of a classical SAR image). This implies that the objective of the translation task is to generate an output image as similar as possible to the (reference) optical image. Instead, the aim of SAR colorization is to obtain a colorized SAR image that contains both SAR image information (including effects as shadowing, layover, foreshortening, and SAR noise as speckling) and color information coming from the related optical image (but not reproducing the optical image itself). Hence, we have that there are not available labels for SAR colorization training. Instead, labels are accessible for SAR-to-optical image translation. Based on the analysis above, we can say that although (Ji et al., 2021; Lee et al., 2021, 2022; Ku and Chun, 2018) state that their own approaches are for SAR colorization, they are instead related to the SAR-to-optical image translation task. Their training procedure is devoted to the translation from SAR to optical images. 
Besides, the assessment process relies upon the comparison between the output image and the ground truth (optical) data. Finally, upon examining the outcomes, it becomes clear that all radar effects and speckle are wiped out during the translation between the SAR and optical domains. Thus, these approaches are improperly assigned to the SAR colorization class but, instead, they belong more to the SAR-to-optical image translation family. Thus, except for this work and the pioneering paper in (Schmitt et al., 2018a), no other paper can be found in literature that deeply investigates the SAR colorization problem in the framework of the second category. Indeed, all the other machine learning-based works, claiming to perform SAR image colorization, just convert the SAR image into a pseudo-optical color image losing all SAR image special features. Nevertheless, these misclassified SAR-to-optical image translation works are still valuable, which could be used to borrow some techniques for the SAR colorization task, such as CycleGAN architecture. The main contributions of this work can be described as follows: 1. We propose a full framework for SAR colorization supervised learning-based approaches including: * A protocol to generate synthetic colorized SAR images. * Several baselines from the simple linear regression to recent convolutional neural network architectures. * Multidimensional similarity quality metrics to numerically assess (with reference) the quality of colorized SAR images. 2. An effective cGAN-based method (adopted to address the specific SAR colorization problem) is also proposed, strongly outperforming the current state-of-the-art for SAR colorization using neural networks (i.e., DivColSAR (Schmitt et al., 2018a)). To the best of our knowledge, this represents the first endeavor to introduce a framework for SAR colorization that encompasses a protocol, a comprehensive benchmark, and a thorough performance assessment. This, in turn, lays the foundation for further research in this field. The paper is organized as follows. In Section 2, we introduce some related works. Section 3 introduces the protocol for colorizing SAR images and the related performance metrics. Section 4 describes the proposed solutions. Section 5 presents the experimental results. Section 6 is related to some discussions, while Section 7 draws the conclusions and future developments of this work. ## 2 Related works Generative adversarial network (GAN) has emerged as a formidable class of machine learning models used for the synthesis of artificial data. The pioneering work of Goodfellow et al., 2020) introduces the GAN framework, which has found extensive applications in various image processing domains, including image super resolution (Wang et al.,2018, 2021), pansharpening (Liu et al., 2021; Zhou et al., 2021, 2022; Ozcelik et al., 2021), and image-to-image translation (Chen et al., 2020; Liu et al., 2017; Wang et al., 2022; Yang et al., 2022). GAN consists of two primary components, namely a generator that produces synthetic images that are perceptually indistinguishable from authentic ones and a discriminator that discriminates between real and synthetic data. However, the original GAN is plagued by unstable training issues and suboptimal control of the generated output. Conditional generative adversarial network (cGAN) (Mirza and Osindero, 2014) represents an extension of the GAN model that allows learning of data generation conditioned on specific input variables. 
This means that the generated samples can be tailored to a particular context. For example, in image generation, a label or a specific characteristic can be provided and the generator will produce an image that matches the condition. In particular, cGAN has an advantage over GAN because they can produce high-quality images with fewer iterations. This happens because the conditional input narrows the search space of the generator network, allowing a high-quality output to be achieved. For this reason, the conditional input facilitates the generation of high-quality outputs. This characteristic also contributes to the attainment of superior training stability. Assume that \\(G\\) and \\(D\\) represent the generator and discriminator, respectively. The loss function of cGAN can be formulated as \\[\\begin{split}\\min_{G}\\max_{D}\\mathcal{L}_{cGAN}(G,D)=& E[\\log(D(\\mathbf{X},\\mathbf{Y}))]\\\\ &+E[\\log(1-D(\\mathbf{X},G(\\mathbf{X})))],\\end{split} \\tag{1}\\] where \\(\\mathbf{Y}\\) is the ground truth, \\(\\mathbf{X}\\) is the conditional input, \\(G(\\mathbf{X})\\) is the generated image of the generator, while \\(E\\) and log represent the expectation symbol and the natural logarithm, respectively. cGAN has become a popular choice for various image generation applications because of its exceptional performance. Pix2pix (Isola et al., 2017) is a supervised cGAN model that is specifically designed to condition an input image and generate a corresponding output image. In contrast to previous studies that have only worked for a specific application, pix2pix is a general-purpose image-to-image translation solution that consistently shows desirable performance in different tasks. For example, it can convert a black-and-white image into a color image or convert a satellite image into a map image. Pix2pix employs a \"U-Net\" shaped architecture as generator and employs a convolutional \"PatchGAN\" classifier as discriminator, which solely penalizes the structure at the scale of image patches. The combination of an adversarial loss and a loss \\(\\ell_{1}\\) during training allows pix2pix to generate high-quality output images that resemble their corresponding input images. Stimulated by its promising performance on diverse multimodal images, we adapt the generator and discriminator architectures from those of pix2pix. The solely deep learning-based SAR colorization work is proposed in (Schmitt et al., 2018). The network consists of two parts: a variational autoencoder (VAE) and a mixed density network (MDN), which aims to train a conditional color distribution from which different colorization hypotheses can be drawn. For the MDN training, they feed the features extracted from the _conv7_ layer of the pre-trained colorization network proposed by Zhang (Zhang et al., 2016), but not directly feeding the SAR image into the MDN. On one hand, the observed performance of this method is not satisfying; on other hand, the procedure and implementation are very complicated compared with the general end-to-end CNN or GAN models. Hence, we decided to investigate an effective but simple end-to-end cGAN-based model inspired from pix2pix. ## 3 Protocol In this section, we propose a new protocol for the SAR colorization task. First, the protocol for colorizing SAR images is proposed allowing a supervised learning scheme and a reference-based performance assessment. Furthermore, three performance metrics are introduced to calculate the similarity between the output colorized images and the reference images. 
### A protocol for colorizing SAR images Contrary to the image colorization task commonly found in computer vision community, SAR colorization lacks of reference colorized SAR images to be used as ground-truth during training. To address this issue, we propose a new protocol for creating artificial color SAR images using the SAR-MS image fusion technique. Component substitution is a widely used class of fusion approaches (Vivone et al., 2014). One of the most representative methods in this family is intensity-hue-saturation (IHS), which relies upon the transformation of a red-green-blue (RGB) image into the IHS color space, where the spatial structure information is mainly contained in the intensity (I) component, while the spectral information is contained in the hue (H) and saturation (S) components. The intensity component is then replaced by the SAR image. Obviously, the higher the correlation between the SAR image and the intensity component, the lower the spectral distortion produced by the IHS method. Thus, before substitution, the SAR image should be histogram-matched to ensure comparable distributions between the latter and the intensity component. A powerful and lightweight approach, widely explored in the literature (Vivone et al., 2021), is based on linear matching by adjusting the mean and standard deviations of the distribution of the data to be substituted, i.e., the equalization of the first two moments of the distribution of the SAR image. The fusion process is then completed by applying an inverse transformation to bring the data back into the RGB color space. However, the IHS method is only suitable for processing RGB images, limiting its applicability for processing MS images. Tu _et al._ (Tu et al., 2004) propose a generalization of the IHS methodology, called GIHS, which can be applied to images with more than three bands. GIHS is also referred to as fast IHS due to its computational efficiency, since it avoids sequential transformations, substitutions, and final backward step operations. Assuming that the \\(\\mathbf{MS}\\) image is represented by three RGB spectral bands, we denote the intensity component of the IHS representation of \\(\\mathbf{MS}\\) as \\(\\mathbf{I}\\), and the average operation along the spectral dimension as \\(T\\). The fast IHS-based SAR-MS fusion algorithm can be described as follows: \\[\\mathbf{I}=T(\\mathbf{MS}), \\tag{2}\\] \\[\\mathbf{SAR^{\\prime}}=(\\mathbf{SAR}-\\mu_{\\mathbf{SAR}})\\cdot \\sigma_{\\mathbf{I}}/\\sigma_{\\mathbf{SAR}}+\\mu_{\\mathbf{I}},\\] (3) \\[\\mathbf{D}=\\mathbf{SAR^{\\prime}}-\\mathbf{I},\\] (4) \\[\\mathbf{GT}=\\mathbf{MS}+\\mathbf{D}, \\tag{5}\\] where \\(\\mathbf{SAR}\\) is the SAR image, \\(\\mathbf{SAR^{\\prime}}\\) is its histogram-matched version, \\(\\mathbf{D}\\) is the difference matrix between \\(\\mathbf{SAR^{\\prime}}\\) and the intensity component \\(\\mathbf{I}\\), \\(\\mathbf{GT}\\) is the result of image fusion (ground truth for our objective), \\(\\mu\\) and \\(\\sigma\\). are the average and standard deviation operators, respectively. Supervised training for SAR colorization can be easily implemented, and performance assessment can be carried out when ground truth is available. A graphical representation of the entire process can be found in Fig. 1. Several fusion results are illustrated in Fig. 2. ### Performance metrics The lack of a standardized quantitative evaluation scheme for SAR colorization is a well-known gap within the research community. 
In fact, the unique article based on supervised learning in the literature, (Schmitt et al., 2018), does not perform a numerical assessment. Instead, methods in similar tasks (e.g., SAR-to-optical image translation) borrow some common metrics used in the natural image processing community, such as structural similarity index (SSIM) (Wang et al., 2004), peak signal-to-noise ratio (PSNR), and mean squared error (MSE). In this work, we use of a set of remote sensing-based metrics to assess the performance of the problem at hand. Specifically, we propose the use of three metrics commonly employed in image fusion for remote sensing (Vivone et al., 2021): the normalized root mean square error (NRMSE) (that is the well-known mervatr relative globale adimensionnelle de synthese (ERGAS) index (Vivone et al., 2021) but neglecting the resolution ratio between the images), the spectral angle mapper (SAM) (Yuhas et al., 1992), and the multidimensional (extended) version of the universal image quality index (Q4) (Wang and Bovik, 2002; Alparone et al., 2004). All these indexes take into account the spectral dimension of the colorized images, which is one of the most important aspects for colorization. Instead, metrics such as SSIM, PSNR, and MSE are indexes that work on single-band images. Even if they can be calculated for all the spectral bands of the colorized product and then averaged along the spectral dimension, spectral distortion (relevant for colorization) is not taken into consideration by these indexes with respect to the other ones. Thus, the suitability of our proposal is clear with respect to the previously adopted indexes and the problem at hand. More specifically, NRMSE is given by: \\[NRMSE=\\frac{1}{N}\\sum_{n=1}^{N}\\left(\\frac{\\text{RMSE}(n)}{\\mu(n)}\\right), \\tag{6}\\] where RMSE represents the root mean square error between the ground-truth and the colorized image coming from a colorization approach, \\(\\mu_{n}\\) is the mean (average) of the \\(n\\)-th band of the ground-truth, and \\(N\\) is the number of spectral bands (i.e. 3 for the problem at hand). Lower values of NRMSE indicate more similarity between the colorized SAR image and ground-truth data. The ideal value is 0. SAM measures the spectral dissimilarity (spectral distortion) of the colorized image compared with the ground-truth. Assuming \\(\\mathbf{x}\\) and \\(\\mathbf{y}\\) are two spectral vectors, both having \\(C\\) components, in which \\(\\mathbf{x}=[x_{1},x_{2},\\ldots,x_{C}]\\) is the ground-truth spectral pixel vector and \\(\\mathbf{y}=[y_{1},y_{2},\\ldots,y_{C}]\\) is the colorized SAR image spectral pixel vector, SAM is defined as the absolute value of the spectral angle between the two vectors in the same pixel, which can be given by: \\[\\text{SAM}(\\mathbf{x},\\mathbf{y})=\\arccos\\left(\\frac{\\langle\\mathbf{x}, \\mathbf{y}\\rangle}{\\|\\mathbf{x}\\|_{2}\\cdot\\|\\mathbf{y}\\|_{2}}\\right), \\tag{7}\\] where \\(\\|\\cdot\\|_{2}\\) denotes the norm \\(\\ell_{2}\\), arccos is the arccosine function, and \\(\\langle\\cdot,\\cdot\\rangle\\) indicates the dot product. A global measurement of spectral dissimilarity is obtained by averaging the values in the image. The lower the value of the SAM index, the better the performance. The SAM value is usually measured in degrees and the ideal value is 0. Q4 is the four-band extension of the universal image quality index (UIQI), which has been introduced for the quality assessment of pansharpening (Alparone et al., 2004). 
It can be calculated by: \\[\\text{Q4}=\\frac{4\\left|\\sigma_{x_{1}x_{2}}\\right|\\cdot\\left|\\mu_{x_{1}} \\right|\\cdot\\left|\\mu_{x_{2}}\\right|}{\\left(\\sigma_{x_{1}}^{2}+\\sigma_{x_{2}}^ {2}\\right)\\left(\\mu_{x_{1}}^{2}+\\mu_{x_{2}}^{2}\\right)}, \\tag{8}\\] where z\\({}_{1}\\) and z\\({}_{2}\\) are two quaternions, formed with spectral vectors of multi-band images, that is z = \\(a+ib+jc+kd\\), \\(\\mu_{x_{1}}\\) and \\(\\mu_{x_{2}}\\) are the means of z\\({}_{1}\\) and z\\({}_{2}\\), \\(\\sigma_{x_{1}x_{2}}\\) denotes the covariance between z\\({}_{1}\\) and z\\({}_{2}\\), and \\(\\sigma_{x_{1}}^{2}\\) and \\(\\sigma_{x_{2}}^{2}\\) are the variances of z\\({}_{1}\\) and z\\({}_{2}\\). Because we only chose three bands from the Sentinel-2 multispectral images, we added an all-zero band (as the fourth band) to the ground truth and the colorized SAR image to fit the requirement of the Q4 index. The higher the value of the Q4 index, the better the quality. The optimal value is 1. ### Overview of colorization process Based on our protocol, colorization task training and testing can be easily performed. An overview of this process is given in Algorithm 1. ## 4 Baseline and methodology Considering the lack of established baseline methods, we have developed several baseline approaches for comparative analysis. These include spectral-based and spatial-spectral-based methods. Additionally, we have introduced a supervised conditional adversarial network method, which has demonstrated significant performance improvements. Moreover, the current state-of-the-art for SAR colorization using neural networks, i.e., DivColSAR (Schmitt et al., 2018), is introduced as baseline. ### Spectral-based baselines for SAR colorization The objective of the SAR colorization task is to extract color information from paired MS images while retaining the spatial and radiometric information of SAR images. When a single-band image is considered as a unique entity, the task can be outlined as learning a linear or nonlinear mapping from the MS band image to the three-band MS image. The underlying assumption is that there exists a correlation/relationship between the SAR band and the MS bands. This kind of approaches represents the simplest solution for this problem. However, methods belonging to this family have several limitations when the relationship between the compared data is not clearly explainable through linear or non-linear models. Therefore, to validate this assumption, 2400 paired samples are selected for data analysis using scatter plots. Several examples of such analysis are presented in Fig. 3, where the horizontal axis represents the values of the SAR band, and the vertical axis represents the values of the MS band. The regression line is in red. \\(R^{2}\\) is a statistical measure that represents the variance proportion for a dependent variable that is explained by an independent variable in a regression model (in this case a linear regression model). Therefore, to be more practical, in this case, \\(R^{2}\\) indicates how much the scatter plot follows a linear distribution (that is, given by linearly correlated data). Obviously, because optimal values are distributed along a line, the higher the \\(R^{2}\\) value, the better the results. The optimal value is 1 obtained when the scatter plot is depicted as a line. The \\(R^{2}\\) value of the corresponding example is marked in the upper left corner of each subfigure in Fig. 
3, and the mean and standard deviation of the \\(R^{2}\\) values calculated on 2400 samples are 0.9193 and 0.0650, respectively. These results show that SAR and MS bands are approximately linearly correlated. Several (spectral-based) solutions mapping this correlation are proposed as follows. #### 4.1.1 NoColSAR The simplest spectral-based solution relies on the replication of a single SAR band along the three channels. Although this approach does not capture any color, it does allow for preservation of spatial structure information. The NoColSAR method plays an important role as a baseline. By comparing this with Figure 1: Outline of the protocol, the colorization and the assessment. Figure 2: IHS fusion examples. the NoColSAR method, we are able to clearly observe how much the coloring effect of a colorization method has been improved (i.e., if the colorization can introduce proper colors). #### 4.1.2 LR4ColSAR The second spectral-based solution is a linear regression method (Chatterjee and Hadi, 1986; Draper and Smith, 1998), named LR4ColSAR, and depicted in Fig. 4. In this solution, the SAR band vector is represented by \\(\\mathbf{x}\\), while the predicted and target vectors are indicated as \\(\\mathbf{\\hat{y}}\\) and \\(\\mathbf{y}\\), respectively. The adopted regression model is linear, and the coefficients are obtained through the minimum mean square error (MMSE) estimator. \\[w_{i}^{*},b_{i}^{*}=\\arg\\min_{w_{i},b_{i}}\\frac{1}{n}\\sum_{j=1}^{n }[(w_{i}*x_{j}+b_{i})-\\mathbf{y}_{j}]^{2}, \\tag{9}\\] \\[\\mathbf{\\hat{y}}_{i}=w_{i}^{*}*\\mathbf{x}+b_{i}^{*}, \\tag{10}\\] where \\(i\\) is the index of the multispectral band to be reconstructed starting from the SAR image, while \\(n\\) corresponds to the length of the vectorized version of the SAR image and \\(j\\) represents the \\(j\\)-th pixel. The weight and bias of the linear model are represented by \\(w_{i}\\) and \\(b_{i}\\), respectively. The result of this procedure is the set of biases and weights for each band, i.e. \\(w_{i}^{*}\\) and \\(b_{i}^{*}\\) for all \\(i\\in[1,\\dots,3]\\), respectively. To give a more clear presentation for the operation of this approach, we describe the process in Algorithm 2. ``` INPUT \\(\\mathbf{X}:\\) SAR image; \\(\\mathbf{Y}:\\) ground truth. OUTPUT \\(\\mathbf{\\hat{Y}}:\\) Colorized SAR image. TRAINING Transform the SAR image and ground truth to vectors, which are denoted by \\(\\mathbf{x}\\) and \\(\\mathbf{y}\\), respectively. Then, training process is done with Equation (9). TESTING Testing process is done with Equation (10) to get the colorized SAR image vector and then transform the vector to image \\(\\mathbf{\\hat{Y}}\\). return\\(\\mathbf{\\hat{Y}}\\) ``` **Algorithm 2** Colorization process of LR4ColSAR. #### 4.1.3 NL4ColSAR In this section, we present a nonlinear regression technique as a further spectral-based solution, called NL4ColSAR, to capture the nonlinearity that the previous approach cannot map. This method is implemented using a simple neural network model. The neural network has one hidden layer and an output layer, with the number of neurons in each layer being 2 and 3, respectively. Additionally, the hidden layer is followed by a Tan-sigmoid type transfer function. The Levenberg-Marquardt algorithm is selected as a training algorithm and the mean square error (\\(\\ell_{2}\\) norm) is used as a loss function. The Tan-sigmoid transfer function can be formulated as follows. 
\\[f(x)=\\frac{2}{1+e^{-2x}}-1, \\tag{11}\\] Figure 4: The overall architecture of LR4ColSAR. Figure 5: The overall architecture of CNN4ColSAR. Figure 3: Data analysis with scatter plots. For each subfigure, \\(x\\)-axis and \\(y\\)-axis represent the value of the SAR band and one band of the corresponding ground-truth, respectively. \\(R^{2}\\), ranging from 0 to 1, indicates the degree of goodness of fit between the variables, with higher values indicating stronger linear correlations. Figure 6: The overall architecture of cGAN4ColSAR. where \\(e\\) is the exponential function. The complete processing flow can be described as Algorithm 3. ``` INPUT \\(\\mathbf{X}:\\) SAR image; \\(\\mathbf{Y}:\\) ground truth. OUTPUT \\(\\hat{\\mathbf{Y}}:\\) Colorized SAR image. TRAINING Transform the SAR image and ground truth to vectors, which is denoted by \\(\\mathbf{x}\\) and \\(\\mathbf{y}\\), respectively. Then, train the neural network model by Levenberg-Marquardt algorithm and \\(\\ell_{2}\\) loss function. TESTING Obtain the colorized SAR image vector and then transform the vector to image \\(\\hat{\\mathbf{Y}}\\). return\\(\\hat{\\mathbf{Y}}\\) ``` **Algorithm 3** Colorization process of NL4ColSAR. ### Spatial-spectral based baseline The three spectral methods described above solve the colorization issue only from a spectral perspective, neglecting the spatial correlation between the SAR imagery and the ground-truth, thus often getting not satisfying results. To overcome this limitation, we introduce a convolutional neural network, which is well known for its powerful ability to capture local spatial features. Our proposed approach, named CNN4ColSAR, can be considered as a spatial-spectral solution. It consists of three convolutional layers, each of which is equipped with a ReLU activation function, with the exception of the final layer. CNN4ColSAR adopts a simple \\(\\ell_{1}\\) loss function. The CNN4ColSAR architecture is illustrated in Fig. 5 and the algorithm process can be expressed as Algorithm 4. ``` INPUT \\(\\mathbf{X}:\\) SAR image; \\(\\mathbf{Y}:\\) ground truth. OUTPUT \\(\\hat{\\mathbf{Y}}:\\) Colorized SAR image. TRAINING Update \\(\\Phi\\) with the Adam optimizer to minimize the \\(\\ell_{2}\\) loss function to learn the optimal solution. TESTING Input the SAR image into the converged model to obtain the colorized SAR image \\(\\hat{\\mathbf{Y}}\\). return\\(\\hat{\\mathbf{Y}}\\) ``` **Algorithm 4** Colorization process of CNN4ColSAR. ### DivColSAR We exploit the pioneering work, DivColSAR (Schmitt et al., 2018), as baseline. DivColSAR consists of a variational autoencoder (VAE) and a mixed density network (MDN). The VAE is trained first to generate a low level embedding of the ground truth. Based on the low-dimensional latent variable embedding, the MDN is then trained to generate a multimodal conditional distribution that models the relationship between the input SAR image and the embedding. By sampling from the generated distribution, the decoder of the VAE can generate a variety of colorized SAR images of the original gray-level SAR image. We retained the original structure and parameter settings of this method, only making some minor adjustments where necessary. First, the ground truth is generated (coherently with our protocol) by the IHS fusion algorithm instead Figure 7: Details for cGAN4ColSAR. of the Lab transformation. 
Second, to compare this result with the other approaches in the benchmark, we applied the average of the top eight instances of the MDN-predicted Gaussian mixture distribution to have a unique colorized SAR image for the given SAR image in input. For fairness, the network has been retrained from scratch using the same training set as for the other approaches and the hyperparameters have been tuned, confirming the optimal configuration proposed in the original paper. Methodology: Image-to-image translation with conditional generative adversarial network for SAR colorization (cGAN4ColSAR) #### 4.4.1 Motivations of network design The image colorization task can be viewed as a scene recognition task followed by the assignment of specific colors to objects or regions, somewhat similar to image segmentation. However, preserving spatial and radiometric information while colorizing makes the colorization task more complex. Therefore, the network needs to focus on both global and local information to achieve satisfactory colorization results. U-Net (Ronneberger et al., 2015) is a well-known and widely used network architecture in the field of image segmentation due to its exceptional performance. Its distinctive \"U\" shape design is composed of a contracting path and a symmetric expanding path that enables precise localization and context capture, making it well-suited for the image colorization task. Specifically, in the SAR colorization task, the input SAR image and the output colorized SAR image have different surface appearances but share the same underlying structure. In particular, the location of prominent edges in the SAR image is roughly aligned with that in the colorized image. Therefore, it would be advantageous to incorporate this information using skip connections directly across the network. #### 4.4.2 Architecture details of cGAN4ColSAR In this study, we present a new conditional generative adversarial network, named cGAN4ColSAR, adapting the pix2pix generator and discriminator architectures. The architecture of the proposed model is illustrated in Figs. 6 and 7. The cGAN4ColSAR generator adopts a \"U\"-shaped structure, which includes a contracting path and a symmetric expanding path. A skip connection is used between two symmetric layers to facilitate the transmission of low-level details. The input of the generator is the SAR image, and the output is the corresponding colorized SAR image. However, the discriminator is a full convolutional network composed of five layers. As a conditional GAN framework, it takes as input either the SAR image with the ground-truth or the output of the generator and is designed to distinguish between the true colorized SAR image and the fake one. **Network architecture of generator.** The generator in our proposed cGAN4ColSAR is made up of a contracting path and a symmetric expanding path, where each path consists of seven layers. The contracting path applies seven \\(4\\times 4\\) filter size convolutions with stride 2 for downsampling. The number of convolution kernels is doubled at each downsampling step until it reaches a maximum of 512, while the first convolutional layer's feature channel number is 64. Each convolutional layer is followed by a leaky ReLU activation function and a batch normalization layer, except for the first and last layers, where the first layer is a single convolutional layer and the last layer lacks a batch normalization layer. 
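To make the layer pattern just described concrete, the following sketch shows one possible PyTorch rendering of a single contracting-path step (4\\(\\times\\)4 convolution with stride 2, optional batch normalization and leaky ReLU) and the resulting channel progression. It is a minimal illustration under our own naming and padding assumptions, not the released implementation.

```
import torch
import torch.nn as nn

class DownBlock(nn.Module):
    """One contracting-path step: 4x4 convolution with stride 2, optionally
    followed by batch normalization and a leaky ReLU, as described above."""
    def __init__(self, in_ch, out_ch, norm=True, act=True):
        super().__init__()
        layers = [nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1)]
        if norm:
            layers.append(nn.BatchNorm2d(out_ch))
        if act:
            layers.append(nn.LeakyReLU(0.2))
        self.block = nn.Sequential(*layers)

    def forward(self, x):
        return self.block(x)

# Seven downsampling steps with channels doubling up to 512; the first step is
# a bare convolution and the last omits batch normalization, as in the text.
encoder = nn.ModuleList([
    DownBlock(1, 64, norm=False, act=False),
    DownBlock(64, 128), DownBlock(128, 256), DownBlock(256, 512),
    DownBlock(512, 512), DownBlock(512, 512),
    DownBlock(512, 512, norm=False),
])

# Example forward pass that keeps the intermediate activations the expanding
# path would later reuse through skip connections.
x = torch.randn(1, 1, 256, 256)
skips = []
for block in encoder:
    x = block(x)
    skips.append(x)
```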
For the expansive path, the size of the feature map is doubled layer by layer by repeated use of \\(4\\times 4\\) deconvolutions with stride 2. The number of feature channels is halved layer by layer, starting from the fifth layer, while the number of feature channels of the last convolutional layer is the same as the bands of the colorized SAR image. Each layer is followed by a ReLU activation function, a deconvolutional layer, and a batch normalization layer, except for the last layer, where the batch normalization layer is replaced by a Tanh activation function. To preserve low-level image information, we add skip connections between the \\(i\\)-th layer of the contracting path and the \\((9-i)\\)-th layer of the expansive path. Each skip connection concatenates all channels in the \\(i\\) -th layer with those in the \\((9-i)\\) -th layer. Additionally, we do not include a dropout layer in our architecture, as its effect is limited according to the findings of pix2pix. **Network architecture of discriminator.** The discriminator proposed in our approach, based on the PatchGAN concept introduced in the pix2pix model, focuses on the local structure of the image patches to promote high-frequency detail capture in our approach. Our discriminator consists of five sequential convolution layers, each equipped with kernels of \\(4\\times 4\\). The first three layers use a stride of 2 for downsampling, while the remaining layers use a stride of 1. The first convolution layer has 64 feature channels and is followed by a leaky ReLU activation function. The subsequent three layers have output feature maps of 128, 256, and 512, respectively, and each is followed by a batch normalization layer and a leaky ReLU activation function. The final layer is a single convolutional layer that produces a single feature map output. #### 4.4.3 Loss function The purpose of this study is to solve the SAR colorization problem by presenting it as a conditional image generation task that can be effectively solved using conditional GAN. Specifically, we propose a generative network, denoted \\(G\\) (using a set of parameters \\(\\Theta_{G}\\) to be trained), which is designed to map the distribution \\(p_{data}(\\mathbf{X})\\) to the target distribution \\(p_{r}(\\mathbf{Y})\\). By generating a colorized image \\(\\mathbf{\\hat{Y}}\\) that is indistinguishable from the reference image \\(\\mathbf{Y}\\), evaluated by a discriminative network \\(D\\) (using a set of parameters \\(\\Theta_{D}\\) to be trained) trained adversarially, our aim is to optimize a Min-Max objective function: \\[\\begin{split}&\\min_{\\Theta_{G}}\\max_{\\Theta_{D}}\\mathbb{E}_{ \\mathbf{X}\\sim p_{data}(\\mathbf{X}),\\mathbf{Y}\\sim p_{r}(\\mathbf{Y})}\\left[ \\log D_{\\Theta_{D}}(\\mathbf{X},\\mathbf{Y})\\right]\\\\ &+\\mathbb{E}_{\\mathbf{X}\\sim p_{data}(\\mathbf{X})}\\left[\\log\\left( 1-D_{\\Theta_{D}}\\left(\\mathbf{X},G_{\\Theta_{G}}(\\mathbf{X})\\right)\\right], \\right.\\end{split} \\tag{12}\\] where \\(\\log\\) is the natural logarithm function. Using adversarial learning, a conditional GAN architecture can be used to produce accurate and realistic colorized SAR images. The generator network \\(G\\) and the discriminator network \\(D\\) are trained in an alternating manner. To optimize \\(G\\), a combination of pixelloss and adversarial loss is used. To mitigate image blurring issues, the loss \\(\\ell_{1}\\) is utilized to calculate the absolute difference between the colorized SAR image and the ground truth as a pixel loss. 
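The five-layer discriminator described above can likewise be sketched in a few lines of PyTorch. The input channel count (a single SAR band concatenated with a 3-band colorized or reference image) and the padding are our assumptions; this is an illustration rather than the released code.

```
import torch
import torch.nn as nn

def make_patch_discriminator(in_ch=4):
    """PatchGAN-style discriminator with 4x4 kernels, strides 2-2-2-1-1 and
    channel widths 64-128-256-512-1, following the description above."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
        nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2),
        nn.Conv2d(256, 512, 4, stride=1, padding=1), nn.BatchNorm2d(512), nn.LeakyReLU(0.2),
        nn.Conv2d(512, 1, 4, stride=1, padding=1),  # one-channel map of patch scores
    )

# Example: score a SAR / colorized pair; each output element rates one local patch.
D = make_patch_discriminator()
sar = torch.randn(2, 1, 256, 256)
rgb = torch.randn(2, 3, 256, 256)
patch_scores = D(torch.cat([sar, rgb], dim=1))  # shape (2, 1, 30, 30)
```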
The generator loss function, \\(\\mathcal{L}(G)\\), is made up of two components, the \\(\\ell_{1}\\) loss term and the adversarial loss term, which can be expressed as follows: \\[\\begin{split}&\\mathcal{L}(G)=\\sum_{i=1}^{B}\\left[-\\log D_{\\Theta_{D}} \\left(\\mathbf{X},G_{\\Theta_{G}}(\\mathbf{X})\\right)\\right.\\\\ &\\left.+\\alpha\\left\\|\\mathbf{Y}-G_{\\Theta_{G}}(\\mathbf{X})\\right\\| _{1}\\right],\\end{split} \\tag{13}\\] where \\(B\\) represents the number of samples in a minibatch and \\(\\alpha\\) is a hyperparameter to weigh the contribution of the \\(\\ell_{1}\\) loss. In our experiments, the value of \\(\\alpha\\) is empirically set at 210. The loss function, \\(\\mathcal{L}(D)\\), for the discriminator can be expressed as follows: \\[\\mathcal{L}(D)=\\beta\\sum_{i=1}^{B}\\left[\\log(1-D_{\\Theta_{D}}\\left(\\mathbf{X},G_{\\Theta_{G}}(\\mathbf{X})\\right))+\\log D_{\\Theta_{D}}(\\mathbf{X},\\mathbf{Y} )\\right], \\tag{14}\\] where the hyperparameter \\(\\beta\\) is used to weigh the contribution of the different loss terms. In our experiments, the value of \\(\\beta\\) is empirically set at 0.5. The algorithm process is shown in Algorithm 5. ``` \\(\\mathbf{INPUT}\\) \\(\\mathbf{X}:\\) SAR image; \\(\\mathbf{Y}:\\) ground truth. OUTPUT \\(\\mathbf{\\hat{Y}}:\\) Colorized SAR image. TRAINING Update \\(\\Phi\\) with the Adam optimizer to minimize loss function defined in Equation (13) and (14) to learn the optimal solution. TESTING Input the SAR image into the converged model to obtain the colorized SAR image \\(\\mathbf{\\hat{Y}}\\). return\\(\\mathbf{\\hat{Y}}\\) ``` **Algorithm 5** Colorization process of cGAN4ColSAR. ## 5 Results This section will be dedicated to presenting the experimental analysis. The dataset will be presented first. Subsequently, the training details will be introduced. Finally, quantitative and qualitative results among the proposed methods and the pioneering method (Schmitt et al., 2018) will be shown. ### Dataset The dataset used in our study comes from SEN12MS-CR, a large-scale dataset for cloud removal provided by Ebel _et al._(Ebel et al., 2021). Each patch of SEN12MS-CR contains a triplet of orthorectified georeferenced Sentinel-2 images, including a cloudy and a cloud-free multispectral image, as well as the corresponding Sentinel-1 image. The multispectral Sentinel-2 image consists of thirteen bands, while the Sentinel-1 image comprises two polarimetric channels. The diversity of SEN12MS-CR is ensured, as it covers different topographies of the world over four seasons. For RGB optical images, we follow the standard practice of selecting the related bands from Sentinel-2 (i.e., 4, 3, and 2). Similarly, we use channel 1 (polarimetric VV) from the corresponding Sentinel-1 images to generate the gray-scale SAR image. Due to the large size of the SEN12MS-CR dataset, we randomly select a subset of 9663 SAR-MS image pairs for training and 800 pairs for testing. Basic information about this dataset is reported in Tab. 1. It should be noted that the images in SEN12MS-CR remain in their raw bit depths, with 12 bit MS images and floating point SAR data. Before feeding the SAR images into the network, we perform a pre-processing step to adjust the SAR images in the range \\([0,\\dots,2^{p}]\\). 
Thus, we have the following:

\\[\\mathbf{SAR}_{adj}=\\frac{\\mathbf{SAR}-min(\\mathbf{SAR})}{max\\left(\\mathbf{SAR }-min(\\mathbf{SAR})\\right)}\\cdot 2^{p}, \\tag{15}\\]

where \\(\\mathbf{SAR}\\) and \\(\\mathbf{SAR}_{adj}\\) represent the SAR image and its adjusted version, respectively, \\(min\\) and \\(max\\) are the minimum and maximum operators, respectively, and \\(p\\) is the bit depth of the Sentinel-2 image (i.e. 12).

### Training details

The three spectral-based methods, namely NoColSAR, LR4ColSAR, and NL4ColSAR, are implemented in Matlab R2015b. NoColSAR is a non-parametric method that requires no training, thus the colorization is directly conducted on the test set. LR4ColSAR and NL4ColSAR are based on regression models to retrieve the relationship between each colorized band and the SAR image. Specifically, LR4ColSAR adopts the least squares algorithm to estimate the parameters of the underlying linear regression model, while NL4ColSAR uses one hidden layer followed by a nonlinear transfer function (that is, the Tan-sigmoid) to map the nonlinear regression relationship. To train these regression models, 20 patches of size \\(256\\times 256\\) are flattened to a vector, resulting in a sufficient amount of training data. The two deep learning-based methods, CNN4ColSAR and cGAN4ColSAR, are implemented in Pytorch and are supported by a single NVIDIA Tesla V100 GPU. The batch size and learning rate are fixed at 8 and 1e-4, respectively, and the Adam optimizer is used to train the networks from scratch. Training epochs of CNN4ColSAR and cGAN4ColSAR are both set to 300. CNN4ColSAR adopts a simple \\(\\ell_{1}\\) loss function. cGAN4ColSAR uses a loss function consisting of different loss terms, with the weight of the discriminator loss set at 0.5 and the weight of the \\(\\ell_{1}\\) loss set at 210. The compared method (Schmitt et al., 2018) has been re-implemented and adapted for our data following the training details and setting the hyperparameters in agreement with the original paper.

\\begin{table} \\begin{tabular}{c c c} \\hline \\hline Dataset & Training & Testing \\\\ \\hline \\multirow{3}{*}{SEN12MS-CR} & Patches: 9663 & Patches: 800 \\\\ & SAR size: 256\\(\\times\\)256 & SAR size: 256\\(\\times\\)256 \\\\ & GT size: 256\\(\\times\\)256\\(\\times\\)3 & GT size: 256\\(\\times\\)256\\(\\times\\)3 \\\\ \\hline \\end{tabular} \\end{table} Table 1: Dataset information.

### Quantitative and qualitative results comparison

In the previous sections, we introduced three spectral-based methods and two deep-learning-based methods. The quantitative and qualitative results of our proposed methods and the compared method DivColSAR (Schmitt et al., 2018) are reported in Tab. 2 and depicted in Fig. 15. Although the NoColSAR method does not capture color information, it is used as a baseline to assess the improvements in colorizing provided by the other methods. Deep learning-based methods generally outperform spectral-based methods. cGAN4ColSAR shows the best performance, and it is much better than CNN4ColSAR and DivColSAR. This behavior can be attributed to the "U" shape architecture, which captures both local and global information. CNN4ColSAR achieves the second best performance on the NRMSE and SAM indexes, while DivColSAR obtains the second best Q4 value. The performance of LR4ColSAR and NL4ColSAR is comparable, and the NoColSAR baseline is the worst approach. For visual inspection, four test cases have been considered in Fig. 15.
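For reference, the per-image scores reported in Tab. 2 can be computed along the following lines. This sketch uses the usual definition of SAM and a mean-normalized RMSE; the exact normalization adopted in the paper is not restated here, so these are assumptions, and Q4 is omitted because its quaternion-based formulation (Alparone et al., 2004) is more involved.

```
import numpy as np

def sam_degrees(ref, est, eps=1e-12):
    """Mean spectral angle (degrees) between reference and estimated images.
    Both arrays have shape (H, W, B); the angle is computed per pixel."""
    num = np.sum(ref * est, axis=-1)
    den = np.linalg.norm(ref, axis=-1) * np.linalg.norm(est, axis=-1) + eps
    ang = np.arccos(np.clip(num / den, -1.0, 1.0))
    return np.degrees(ang).mean()

def nrmse(ref, est):
    """Root-mean-square error normalized by the mean of the reference
    (one common convention; other normalizations are possible)."""
    rmse = np.sqrt(np.mean((ref.astype(float) - est.astype(float)) ** 2))
    return rmse / (np.mean(ref) + 1e-12)

# Example on random 3-band patches (placeholders for ground truth and prediction).
gt = np.random.rand(256, 256, 3)
pred = np.random.rand(256, 256, 3)
print(sam_degrees(gt, pred), nrmse(gt, pred))
```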
To point out the improvements in the colorization task, we added the residual images obtained by subtracting the NoColSAR image from the colorized SAR image. From Fig. 15, the performance gap between the proposed cGAN4ColSAR and the rest of the benchmark is clear. DivColSAR can be considered the second best in terms of visual performance. Finally, we analyze the results using scatter plots, as shown in Fig. 16. More specifically, we flatten the colorized image and the corresponding ground-truth, and then report the scatter plots to check their similarity. To aid visual inspection, the regression line is drawn in red, and the optimal line (i.e., the quadrant bisector) is drawn as a blue dotted line. Optimal results lie on the optimal line; thus, the best results have a scatter plot that is closer to the optimal line (and the related regression line should also lie on the optimal line). Moreover, in Tab. 3, we show the mean and standard deviation of \\(R^{2}\\) values for 2400 test cases. Analyzing both Fig. 16 and Tab. 3, it is easy to see that cGAN4ColSAR clearly outperforms all other approaches for all three test cases, while DivColSAR achieves the second-best performance.

\\begin{table} \\begin{tabular}{c c c c} \\hline Method & Q4 & NRMSE & SAM \\\\ \\hline NoColSAR & 0.3678\\(\\pm\\)0.1563 & 1.0515\\(\\pm\\)0.3905 & 6.4530\\(\\pm\\)2.4532 \\\\ LR4ColSAR & 0.7400\\(\\pm\\)0.1120 & 0.2294\\(\\pm\\)0.0842 & 5.8586\\(\\pm\\)1.8270 \\\\ NL4ColSAR & 0.7551\\(\\pm\\)0.1000 & 0.2332\\(\\pm\\)0.0874 & 5.8125\\(\\pm\\)1.8048 \\\\ CNN4ColSAR & 0.7929\\(\\pm\\)0.0802 & 0.1994\\(\\pm\\)0.0766 & 5.4443\\(\\pm\\)1.8325 \\\\ DivColSAR & 0.839\\(\\pm\\)0.091 & 0.2066\\(\\pm\\)0.1274 & 6.0847\\(\\pm\\)3.3086 \\\\ cGAN4ColSAR & **0.9324\\(\\pm\\)0.0655** & **0.0955\\(\\pm\\)0.0643** & **3.2592\\(\\pm\\)1.8803** \\\\ \\hline ideal value & 1 & 0 & 0 \\\\ \\hline \\end{tabular} \\end{table} Table 2: Numerical assessment for the proposed SAR colorization approaches. Best results are in boldface.

Figure 8: Analysis by varying the parameter \\(\\alpha\\). The \\(x\\)-axis is represented by the values of \\(\\alpha\\). Figures from (a) to (c) show the results of Q4, NRMSE and SAM, respectively.

Figure 9: Visual results varying the parameter \\(\\alpha\\).

Figure 10: Model identification for LR4ColSAR. Visual results with and without the bias term.

## 6 Discussion

In this section, we will carry out a series of experiments to analyze the impact of the hyperparameter settings and the different structures. More specifically, for LR4ColSAR, we will explore the influence of the bias coefficient; for NL4ColSAR, the network layer depth and the number of neurons per layer will be investigated; for CNN4ColSAR, the layer depth, the kernel size, and the number of filters will be analyzed; for cGAN4ColSAR, the influence of the loss function and the network depth will be discussed.

### LR4ColSAR

The LR4ColSAR model is a linear regression model that features three weighting and three bias coefficients as learnable parameters. We exploit the model with bias coefficients because it exhibited superior performance in terms of all the metrics compared to the model without bias, as shown in Tab. 4. Fig. 10 illustrates the visual results of the LR4ColSAR model with and without bias coefficients. It can be seen that the visual results of both variants are comparable, with no significant differences. Moreover, it seems that neither variant retrieves useful color information.
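To make the with/without-bias comparison concrete, the per-band fit of Eqs. (9)-(10) can be posed as a closed-form least-squares problem. The snippet below is our own schematic illustration on synthetic data, not the Matlab implementation used in the paper.

```
import numpy as np

def fit_band(x, y, with_bias=True):
    """Least-squares fit of one colorized band from the vectorized SAR band,
    as in Eq. (9); setting with_bias=False drops the intercept term."""
    x = x.ravel().astype(float)
    y = y.ravel().astype(float)
    A = np.stack([x, np.ones_like(x)], axis=1) if with_bias else x[:, None]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    w = coef[0]
    b = coef[1] if with_bias else 0.0
    return w, b

def predict_band(x, w, b):
    """Eq. (10): apply the fitted weight and bias to the SAR band."""
    return w * x + b

# Illustrative usage on synthetic data (real inputs would be the flattened
# SAR / ground-truth bands extracted from the training patches).
sar = np.random.rand(256, 256)
gt_band = 0.8 * sar + 0.1 + 0.01 * np.random.randn(256, 256)
w, b = fit_band(sar, gt_band, with_bias=True)
colorized_band = predict_band(sar, w, b)
```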
### CNN4ColSAR Parameters about kernel size, kernel numbers, and number of layers are investigated to find the best settings. Tab. 6 reports the performance achieved as a function of the different combinations of parameters. Let us assume that \\(K_{i}\\) represents the kernel size of the \\(i\\)-th convolutional layer and \\(n_{j}\\) represents the kernel numbers of the \\(j\\)-th convolutional layer. In the kernel size field on Tab. 6, for example, the value 9-1-5 means that the kernel size of the convolutional layer is 9, 1, 5 in a row, that is, \\(K_{1}=9\\), \\(K_{2}=1\\), and \\(K_{3}=5\\). In the field filters in Tab. 6, for example, the 64-32-3 value represents the kernel numbers of each layer, that is, \\(n_{1}=64\\), \\(n_{2}=32\\), and \\(n_{3}=3\\). To analyze the sensitivity of the network to different kernel sizes, we set the kernel size of the second layer at (i) \\(K_{2}=1\\), (ii) \\(K_{2}=3\\), and (iii) \\(K_{2}=5\\), while \\(K_{1}\\) and \\(K_{3}\\) are fixed to 9 and 5, respectively. The results in Tab. 6 show that the use of a larger kernel size can improve the colorization performance. Specifically, the best Q4, NRMSE and SAM values are achieved by setting 9-5-5. The results suggest that expanding the receptive field is helpful, and thus the neighborhood information is used to improve the colorization performance. Generally, a wider network will improve performance due to the increase of the representation capability. Thus, we also studied the impact of kernel numbers. We set three width levels, namely (i) \\(n_{1}=32\\) and \\(n_{2}=16\\); (ii) \\(n_{1}=64\\) and \\(n_{2}=32\\); (iii) \\(n_{1}=128\\) and \\(n_{2}=64\\). \\(c_{3}\\) is fixed to 3 depending on the output image bands. The numerical results show that the a wider network can improve the colorization performance. Specifically, the widest level architecture obtains the best values for the NRMSE and SAM indexes, and the equally best value for the Q4 index. In (He and Sun, 2015), It is suggested that \"the deeper the better,\" meaning that the network can benefit from moderately deepening the network. Based on the 9-1-5, 9-3-5, and 9-5-5 Figure 14: Visual results varying the number of layers of cGAN4ColSAR. Figure 12: Visual results for different sets of parameters of CNN4ColSAR. k935n64 represents a convolutional neural network with three convolutional layers, where the kernel sizes of the convolutional layers are 9, 3, and 5, respectively, and the number of kernels in the first convolutional layer is 64. The same notation is used for the other names in the figure. Figure 13: Visual results varying the composition of the loss function of cGAN4ColSAR. Figure 15: Figures from (a) to (d) show some examples of products obtained from the compared SAR colorization methods. Images in the second row in each subfigure depict the residuals between the colorized data (outcome of the colorization approach) and the corresponding NoCoISAR image. architectures with a middle-level kernel number, we added another convolutional layer with a kernel size of 1 and a kernel number of 32 before the last layer to deepen the structures. The deepened structures are 9-1-1-5, 9-3-1-5, and 9-5-1-5, as shown in Tab. 6. It is observed that deeper structures improve the performance, albeit slightly. As shown in Fig. 12, despite the different architectures leading to distinct visual results, discerning the dissimilarity remains challenging. Specifically, for the colorization task, incorporating contextual information is crucial. 
However, in instances where the layers are shallow and the receptive field is limited, the network encounters difficulties in capturing global information. This observation may elucidate the reason for the relatively modest performance observed in the various CNN4ColSAR variants. In order to strike a balance between performance and efficiency, we ultimately selected the k9515n64 structure as our optimal model. This structure comprises four layers with kernel sizes of 9-5-1-5 and the number of kernels for the first three layers set at 64, 32, and 32, respectively.

### cGAN4ColSAR

There are two terms in the loss function of cGAN4ColSAR, and we conduct experiments to analyze the performance sensitivity to the different combinations of the loss terms. The quantitative results are listed in Tab. 7. It is clear that the use of the sole adversarial loss term is associated with a significant reduction of the performance. The \\(\\ell_{1}\\) loss term produces significantly better results than only the adversarial loss, indicating that the \\(\\ell_{1}\\) loss is a stronger constraint than the adversarial loss. Taking into account the comparable performance between the sole \\(\\ell_{1}\\) loss term and the two loss terms combined, the need for the GAN loss may be doubted. However, it is also observed that combining the two loss terms further improves the performance of all the indicators. Therefore, with the combination of the \\(\\ell_{1}\\) and adversarial losses, cGAN4ColSAR can achieve competitive results. Hence, the GAN loss is indeed necessary. Qualitative results also verify this conclusion, as shown in Fig. 13. Considering the balance between effectiveness and efficiency, we explore architectures with different levels of network depth. Specifically, the shallow, middle, and deep levels are set to 6, 7 and 8 layers, respectively. The adjustable layers are the innermost ones, namely the C512k4s2 DBlock and the C512k4s2 UBlock. According to the numerical results shown in Tab. 8, the performance becomes better as the number of layers increases, and the 8-layer configuration achieves the best performance. Visual results are shown in Fig. 14. Thus, we selected the 8-layer configuration as the optimal setting. Finally, we discuss the influence of the weight of the \\(\\ell_{1}\\) loss term of cGAN4ColSAR. Fig. 8 shows the behaviour of each quality metric as \\(\\alpha\\) varies. It can be seen that the performance first increases and then slowly decreases as the value of the parameter \\(\\alpha\\) increases. These results are confirmed by the visual inspection in Fig. 9.
Thus, the best performance is obtained with \\(\\alpha=210\\), which is the value used in our loss function.

\\begin{table} \\begin{tabular}{c c c c c} \\hline Kernel size & Filters & Q4 & NRMSE & SAM \\\\ \\hline 9-1-5 & & 0.7849\\(\\pm\\)0.0822 & 0.2150\\(\\pm\\)0.0753 & 5.656\\(\\pm\\)1.9902 \\\\ 9-3-5 & 64-32-3 & 0.7936\\(\\pm\\)0.0800 & 0.2162\\(\\pm\\)0.0743 & 5.6212\\(\\pm\\)1.9263 \\\\ 9-5-5 & & **0.7942\\(\\pm\\)0.0807** & 0.2096\\(\\pm\\)0.0762 & 5.6017\\(\\pm\\)1.9424 \\\\ \\hline & 32-16-3 & 0.7839\\(\\pm\\)0.0788 & 0.2193\\(\\pm\\)0.0750 & 5.6524\\(\\pm\\)1.9704 \\\\ 9-3-5 & 64-32-3 & 0.7936\\(\\pm\\)0.0800 & 0.2162\\(\\pm\\)0.0743 & 5.6212\\(\\pm\\)1.9263 \\\\ & 128-64-3 & 0.7839\\(\\pm\\)0.0788 & 0.2097\\(\\pm\\)0.0730 & 5.5545\\(\\pm\\)1.8864 \\\\ \\hline 9-1-1-5 & & 0.7867\\(\\pm\\)0.0835 & 0.2135\\(\\pm\\)0.0782 & 5.6557\\(\\pm\\)1.9806 \\\\ 9-3-1-5 & 64-32-32-3 & 0.7854\\(\\pm\\)0.0809 & 0.2100\\(\\pm\\)0.0745 & 5.5980\\(\\pm\\)1.8802 \\\\ 9-5-1-5 & & 0.7929\\(\\pm\\)0.0802 & **0.1994\\(\\pm\\)0.0766** & **5.4443\\(\\pm\\)1.8325** \\\\ \\hline & ideal value & 1 & 0 & 0 \\\\ \\hline \\end{tabular} \\end{table} Table 6: Performance for different architectures of CNN4ColSAR. Best results are in boldface.

Figure 16: Scatter plots for the different SAR colorization methods on three test cases. \\(x\\)-axis and \\(y\\)-axis represent the values of ground-truth and the corresponding colorized SAR image, respectively. The red line and the blue dotted line represent the regression line and the optimal line (i.e., the quadrant bisector), respectively.

## 7 Conclusions and future developments

In this work, we present a comprehensive research framework for SAR colorization based on supervised learning. This approach simplifies the utilization of supervised learning techniques and the evaluation of performance for the problem at hand. We have introduced three spectral-based methods, accompanied by three deep learning-based techniques. The latter category includes a simple three-layer convolutional neural network, a model based on a variational autoencoder and a mixture density network, and an image-to-image model based on a conditional generative adversarial network. Furthermore, we propose a novel protocol for generating ground-truth samples for both training and testing deep learning-based approaches. As for quality metrics, we consider a set of well-known and widely used indices within the remote sensing image fusion community, namely Q4, SAM, and NRMSE. We conducted extensive experiments to explore the optimal hyperparameters for all the proposed methods. Subsequently, we conducted a thorough comparison among the proposed and optimized techniques from the benchmark to advocate for the cGAN-based solution for SAR colorization. Future developments go towards the design of novel architectures devoted to the SAR colorization task based on generative models such as the cGAN framework.

## Acknowledgments

This research was supported by the high performance computing (HPC) resources at Beihang University and the Supercomputing Platform of the School of Mathematical Sciences at Beihang University. This work was supported in part by the China Scholarship Council (CSC) under Grant 202206020138; in part by the National Natural Science Foundation of China under Grant 62371017; and in part by the Academic Excellence Foundation of BUAA for PhD Students.

## References

* Alparone et al. (2004) Alparone, L., Baronti, S., Garzelli, A., Nencini, F., 2004.
A global quality measurement of pan-sharpened multispectral imagery. IEEE Geoscience and Remote Sensing Letters 1, 313-317. * Chatterjee and Hadi (1986) Chatterjee, S., Hadi, A.S., 1986. Influential observations, high leverage points, and outliers in linear regression. Statistical science, 379-393. * Chen et al. (2020) Chen, R., Huang, W., Huang, B., Sun, F., Fang, B., 2020. Reusing discriminators for encoding: Towards unsupervised image-to-image translation, in: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE. pp. 8165-8174. * Chibani (2006) Chibani, Y., 2006. Additive integration of sar features into multispectral spot images by means of the a trous wavelet decomposition. ISPRS journal of photogrammetry and remote sensing 60, 306-314. * Dadrassa et al. (2021) Dadrassa Javan, F., Samadzadegan, F., Mehravar, S., Toosi, A., Khatami, R., Stein, A., 2021. A review of image fusion techniques for pan-sharpening of high-resolution satellite imagery. ISPRS Journal of Photogrammetry and Remote Sensing 171, 101-117. * Deng et al. (2008) Deng, Q., Chen, Y., Zhang, W., Yang, J., 2008. Colorization for polarimetric sar image based on scattering mechanisms, in: 2008 Congress on Image and Signal Processing, IEEE. pp. 697-701. * Deshpande et al. (2017) Deshpande, A., Lu, J., Yeh, M.C., Chong, M.J., Forsyth, D., 2017. Learning diverse image colorization, in: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2877-2885. * Draper and Smith (1998) Draper, N.R., Smith, H., 1998. Applied regression analysis. volume 326. John Wiley & Sons. * Ebel et al. (2021) Ebel, P., Meraner, A., Schmitt, M., Zhu, X., 2021. Sen12ms-cr: Multi-sensor data fusion for cloud removal in global and all-season sentinel-2 imagery. IEEE Transactions on Geoscience and Remote Sensing 59, 5866-5878. * Goodfellow et al. (2020) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y., 2020. Generative adversarial networks. Communications of the ACM 63, 139-144. * He and Sun (2015) He, K., Sun, J., 2015. Convolutional neural networks at constrained time cost, in: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5353-5360. * Isola et al. (2017) Isola, P., Zhu, J.Y., Zhou, T., Efros, A.A., 2017. Image-to-image translation with conditional adversarial networks, in: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1125-1134. * Ji et al. (2021) Ji, G., Wang, Z., Zhou, L., Xia, Y., Zhong, S., Gong, S., 2021. Sar image colorization using multidomain cycle-consistency generative adversarial network. IEEE Geoscience and Remote Sensing Letters 18, 296-300. * Kong et al. (2021) Kong, Y., Hong, F., Leung, H., Peng, X., 2021. A fusion method of optical image and sar image based on dense-ugan and gram-schmidt transformation. Remote Sensing 13, 4274. * Ku and Chun (2018) Ku, W., Chun, D., 2018. The method for colorizing sar images of kompsat-5 using cyclegan with multi-scale discriminators. Korean Journal of Remote Sensing 34, 1415-1425. * Kulkarni and Rege (2020) Kulkarni, S.C., Rege, P.P., 2020. Pixel level fusion techniques for sar and optical images: A review. Information Fusion 59, 13-29. * Lee et al. (2021) Lee, J.H., Kim, K., Kim, J.H., 2021. Design of cyclegan model for sar image colorization, in: 2021 IEEE VTS 17th Asia Pacific Wireless Communications Symposium (APVCS), pp. 1-5. * Lee et al. (2022) Lee, S.Y., Chung, D.W., et al., 2022. 
Labeling dataset based colorization of sar images using cycle gan. The Journal of Korean Institute of Electromagnetic Engineering and Science 33, 776-783. * Li et al. (2022) Li, J., Hong, D., Gao, L., Yao, J., Zheng, K., Zhang, B., Chanussot, J., 2022. Deep learning in multimodal remote sensing data fusion: A comprehensive review. International Journal of Applied Earth Observation and Geoinformation 112, 102926. * Liu et al. (2017) Liu, M.Y., Breuel, T., Kautz, J., 2017. Unsupervised image-to-image translation networks. Advances in neural information processing systems 30. * Liu et al. (2022) Liu, P., Li, J., Wang, L., He, G., 2022. Remote sensing data fusion with generative adversarial networks: State-of-the-art methods and future research directions. IEEE Geoscience and Remote Sensing Magazine 10, 295-328. * Liu et al. (2021) Liu, Q., Zhou, H., Xu, Q., Liu, X., Wang, Y., 2021. P8gan: A generative adversarial network for remote sensing image pan-sharpening. IEEE Transactions on Geoscience and Remote Sensing 59, 10227-10242. * Lolli et al. (2017) Lolli, S., Alparone, L., Garzelli, A., Vivone, G., 2017. Haze correction for contrast-based multispectral pansharpening. IEEE Geoscience and Remote Sensing Letters 14, 2255-2259. * Macelloni et al. (2017) Macelloni, G., Nesti, G., Pampaloni, P., Sigismoli, S., Tarchi, D., Lolli, S., \\begin{table} \\begin{tabular}{c c c c c} \\(\\ell_{1}\\) loss & GAN loss & Q4 & NRMSE & SAM \\\\ \\hline \\(\\checkmark\\) & \\(\\times\\) & 0.9217\\(\\pm\\)0.0669 & 0.1028\\(\\pm\\)0.0634 & 3.4151\\(\\pm\\)1.8242 \\\\ \\(\\times\\) & \\(\\checkmark\\) & 0.5300\\(\\pm\\)0.2034 & 0.4942\\(\\pm\\)0.2954 & 10.8928\\(\\pm\\)5.1114 \\\\ \\(\\checkmark\\) & \\(\\checkmark\\) & **0.9324\\(\\pm\\)0.0655** & **0.0955\\(\\pm\\)0.0643** & **3.2592\\(\\pm\\)1.8803** \\\\ \\hline \\multicolumn{4}{c}{ideal value} & 1 & 0 & 0 \\\\ \\hline \\end{tabular} \\end{table} Table 7: Variation of the loss function of cGAN4ColSAR. Best results are in boldface. \\begin{table} \\begin{tabular}{c c c c} Layers & Q4 & NRMSE & SAM \\\\ \\hline 6 & 0.9192\\(\\pm\\)0.0661 & 0.1062\\(\\pm\\)0.0648 & 3.3382\\(\\pm\\)1.8163 \\\\ 7 & 0.9418\\(\\pm\\)0.0624 & 0.0873\\(\\pm\\)0.0618 & 3.2829\\(\\pm\\)1.8528 \\\\ 8 & **0.9324\\(\\pm\\)0.0655** & **0.0955\\(\\pm\\)0.0643** & **3.2592\\(\\pm\\)1.8803** \\\\ \\hline ideal value & 1 & 0 & 0 \\\\ \\hline \\end{tabular} \\end{table} Table 8: Variation of the number of layers of cGAN4ColSAR. Best results are in boldface. 2000. Experimental validation of surface scattering and emission models. IEEE Transactions on Geoscience and Remote Sensing 38, 459-469. * Mirza and Osindero (2014) Mirza, M., Osindero, S., 2014. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784 * Ozcelik et al. (2021) Ozcelik, F., Alganci, U., Sertel, E., Unal, G., 2021. Pancolorgan:rethinking cnn-based pansharpening: Guided colorization of panchromatic images via gans. IEEE Transactions on Geoscience and Remote Sensing 59, 3486-3501. * Ronneberger and Fischer (2015) Ronneberger, O., Fischer, P., Brox, T., 2015. U-net: Convolutional networks for biomedical image segmentation, in: Medical Image Computing and Computer-Assisted Intervention-MICCAI 2015. 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, Springer. pp. 234-241. * Schmitt et al. (2018a) Schmitt, M., Hughes, L., Korner, M., Zhu, X.X., 2018a. Coloring sentinel-1 sar images using a variational autoencoder conditioned on sentinel-2 imagery. 
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences 42, 1045-1051. * Schmitt et al. (2018b) Schmitt, M., Hughes, L.H., Zhu, X.X., 2018b. Sen12: The sen1-2 dataset for deep learning in sar-optical data fusion. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences I-V, 141-146. * Schmitt et al. (2017) Schmitt, M., Tupin, F., Zhu, X.X., 2017. Fusion of sar and optical remote sensing data: Challenges and recent trends, in: 2017 IEEE International Geoscience and Remote Sensing Symposium, pp. 5458-5461. * Tu et al. (2004) Tu, T.M., Huang, P.S., Hung, C.L., Chang, C.P., 2004. A fast intensity-hue-saturation fusion technique with spectral adjustment for kionos imagery. IEEE Geoscience and Remote sensing letters 1, 309-312. * Vivone (2023) Vivone, G., 2023. Multispectral and hyperspectral image fusion in remote sensing: A survey. Information Fusion 89, 405-417. * Vivone et al. (2014) Vivone, G., Alparone, L., Chanussot, J., Dalla Mura, M., Garzelli, A., Licciardi, G.A., Restaino, R., Wald, L., 2014. A critical comparison among pansharpening algorithms. IEEE Transactions on Geoscience and Remote Sensing 53, 2565-2586. * Vivone et al. (2021a) Vivone, G., Dalla Mura, M., Garzelli, A., Pacifici, F., 2021a. A benchmarking protocol for pansharpening: Dataset, preprocessing, and quality assessment. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 14, 6102-6118. * Vivone et al. (2021b) Vivone, G., Dalla Mura, M., Garzelli, A., Restaino, R., Scarpa, G., Ulfarsson, M.O., Alparone, L., Chanussot, J., 2021b. A new benchmark based on recent advances in multispectral pansharpening: Revisiting pansharpening with classical and emerging pansharpening methods. IEEE Geoscience and Remote Sensing Magazine 9, 53-81. * Wang et al. (2021) Wang, X., Xie, L., Dong, C., Shan, Y., 2021. Real-esrgan: Training real-world blind super-resolution with pure synthetic data, in: International Conference on Computer Vision Workshops (ICCVW). * Wang et al. (2018) Wang, X., Yu, K., Wu, S., Gu, J., Liu, Y., Dong, C., Qiao, Y., Loy, C.C., 2018. Esrgan: Enhanced super-resolution generative adversarial networks, in: The European Conference on Computer Vision Workshops (ECCVW). * Wang and Bovik (2002) Wang, Z., Bovik, A.C., 2002. A universal image quality index. IEEE signal processing letters 9, 81-84. * Wang et al. (2004) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P., 2004. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13, 600-612. * Wang et al. (2022) Wang, Z., Ma, Y., Zhang, Y., 2022. Hybrid cgan: Coupling global and local features for sar-to-optical image translation. IEEE Transactions on Geoscience and Remote Sensing 60, 1-16. * Yang et al. (2022) Yang, X., Wang, Z., Zhao, J., Yang, D., 2022. Fg-gan: A fine-grained generative adversarial network for unsupervised sar-to-optical image translation. IEEE Transactions on Geoscience and Remote Sensing 60, 1-11. * Ye et al. (2022) Ye, Y., Liu, W., Zhou, L., Peng, T., Xu, Q., 2022. An unsupervised sar and optical image fusion network based on structure-texture decomposition. IEEE Geoscience and Remote Sensing Letters 19, 1-5. * Yuhas et al. (1992) Yuhas, R.H., Goetz, A.F., Boardman, J.W., 1992. Discrimination among semi-arid landscape endmembers using the spectral angle mapper (sam) algorithm, in: JPL, Summaries of the Third Annual JPL Airborne Geoscience Workshop. Volume 1: AVIRIS Workshop. * Zhang et al. 
(2022) Zhang, H., Shen, H., Yuan, Q., Guan, X., 2022. Multispectral and sar image fusion based on laplacian pyramid and sparse representation. Remote Sensing 14, 870. * Zhang et al. (2021) Zhang, H., Xu, H., Tian, X., Jiang, J., Ma, J., 2021. Image fusion meets deep learning: A survey and perspective. Information Fusion 76, 323-336. * Zhang et al. (2016) Zhang, R., Isola, P., Efros, A.A., 2016. Colorful image colorization, in: Computer Vision-ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part III 14, Springer. pp. 649-666. * Zhou et al. (2021) Zhou, H., Liu, Q., Wang, Y., 2021. Pgman: An unsupervised generative multi-adversarial network for pansharpening. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 14, 6316-6327. * Zhou et al. (2022) Zhou, H., Liu, Q., Weng, D., Wang, Y., 2022. Ucgan: Unsupervised cycle-consistent generative adversarial networks for pan shapening. IEEE Transactions on Geoscience and Remote Sensing 60, 1-14. * Zhu et al. (2021) Zhu, X.X., Montazeri, S., Ali, M., Hua, Y., Wang, Y., Mou, L., Shi, Y., Xu, F., Bamler, R.,. Deep learning meets sar: Concepts, models, pitfalls, and perspectives. IEEE Geoscience and Remote Sensing Magazine 9, 143-172. * Zhu et al. (2017) Zhu, X.X., Tuia, D., Mou, L., Xia, G.S., Zhang, L., Xu, F., Fraundorfer, F., 2017. Deep learning in remote sensing: A comprehensive review and list of resources. IEEE Geoscience and Remote Sensing Magazine 5, 8-36.
Synthetic aperture radar (SAR) images are widely used in remote sensing. Interpreting SAR images can be challenging due to their intrinsic speckle noise and grayscale nature. To address this issue, SAR colorization has emerged as a research direction to colorize gray scale SAR images while preserving the original spatial information and radiometric information. However, this research field is still in its early stages, and many limitations can be highlighted. In this paper, we propose a full research line for supervised learning-based approaches to SAR colorization. Our approach includes a protocol for generating synthetic color SAR images, several baselines, and an effective method based on the conditional generative adversarial network (cGAN) for SAR colorization. We also propose numerical assessment metrics for the problem at hand. To our knowledge, this is the first attempt to propose a research line for SAR colorization that includes a protocol, a benchmark, and a complete performance evaluation. Our extensive tests demonstrate the effectiveness of our proposed cGAN-based network for SAR colorization. The code will be made publicly available. keywords: Synthetic aperture radar images, Sentinel images, regression models, conditional generative adversarial network, image-to-image translation, colorization, image fusion, remote sensing. + Footnote †: journal: Journal
Glacier subsurface heat-flux characterizations for energy-balance modelling in the Donjek Range, southwest Yukon, Canada Brett A. Wheler Present address: Wek-Vezhil Land and Water Board, 4910 50th Avenue, Yellowknife, Nunavt X1A 355, Canada. Gwenn E. FLOWERS ## 1 Introduction Snow- and ice-melt modelling in alpine watersheds has a variety of important applications, from predictions of seasonal runoff, streamflow and freshwater-resource availability (e.g. Moore and Demuth, 2001; Lemke and others, 2007) to estimations of sea level associated with climate change (e.g. Raper and Braithwaite, 2006). Since 1961, glaciers in southeast Alaska and the Coast Mountains are thought to have contributed more to sea-level rise than any other glaciological source (Kaser and others, 2006), though volume estimates of this contribution have recently been revised (Berthier and others, 2010). Glaciers in the St Elias Mountains are among the most significant contributors in this region (Arendt and others, 2002). Despite these high rates of mass loss, few detailed studies of glacier energy and mass balance have been undertaken in the St Elias Mountains. Accurate models of glacier energy balance are essential for understanding the detailed physical mechanisms linking climate and glaciers, and for predicting future climate-driven glacier change. Energy- and mass-balance modelling on cold (polar) glaciers requires consideration of sub-freezing surface temperatures and the heat flux into the glacier (e.g. Greuell and Oerlemans, 1986; Konzelmann and Braithwaite, 1995; Klok and others, 2005). Even on temperate glaciers, however, changes in the temperature of the glacier surface layer can be associated with the storage and release of substantial amounts of energy, which can reduce total ablation, and delay the seasonal and daily onset of both ablation and streamflow (e.g. Katelmann and Yang, 1992; Hock, 2005; Pelliccotti and others, 2008). Serious questions remain regarding the necessity of taking sub-freezing surface and englacial temperatures into account for glacier melt modelling in regions such as the St Elias Mountains, where both temperate and polythermal glaciers exist (Clarke and Holdsworth, 2002). The main objective of this work is to evaluate the applicability of glacier surface temperature and subsurface heat-flux characterizations for point-scale energy-balance modelling on glaciers in the Donjek Range, St Elias Mountains. We test four different treatments of the glacier surface/subsurface within an energy-balance model using meteorological and melt data collected in 2008 on an unnamed valley glacier. The treatments range in complexity from one that employs a multilayer subsurface model to one that assumes a constant glacier surface temperature of 0\\({}^{\\circ}\\)C. We use cumulative ablation and measured ice-temperature profiles to evaluate model performance. We then compare surface temperatures, hourly ablation and daily and total energy fluxes simulated with the various models. While previous studies have compared the results of energy-balance models that include different treatments of surface temperature and the glacier heat flux (e.g. Reijmer and Hock, 2008; Pellicciootti and others, 2009), the combination of models considered here is unique. ## 2 Background Numerous treatments of glacier surface and subsurface temperatures have been incorporated in energy-balance models, but the applicability of a given method for a wide range of situations is not well established. 
A constant, 0\\({}^{\\circ}\\)C glacier surface temperature is often assumed in energy- and mass-balance studies, and seems to provide satisfactory results for many glaciers (e.g. Hock and Nocelf, 1997; Brock and others, 2000; Oerlemans, 2000; Willis and others, 2002). In other studies, conceptual approaches that compensate for energy deficits before allowing melt (e.g. Braun and Aellen, 1990; Van de Wal and Russell, 1994) or iteratively adjust the glacier surface temperature in order to balance energy deficits (Escher-Vetter, 1985; Braithwaite and others, 1998; Braun and Hock, 2004; Hock and Holmgren, 2005), have been assumed to be sufficient to account for changes in surface temperature, ablation onset time and total ablation on temperate or near-temperate glaciers. In order to simulate englacial temperatures and the conductive heat flux into a glacier, subsurface models have been applied (e.g. Klok and Oerlemans, 2002; Coripio, 2003). An additional level of complexity can account for the transfer of heat related to percolation and refreezing of surface meltware (e.g. Brun and others, 1989; Jordan, 1991; Greuel and Konzelmann, 1994; Bartelt and Lehning, 2002; Andreas and others, 2004). Huang (1990) and Braithwaite and others (2002) suggest that where mean annual air temperatures are below \\(-6\\) to \\(-8^{\\circ}\\)C, significant refreezing and percolation of meltwater can occur. When energy-balance models are extended from the point to the glacier scale (e.g. Arnold and others, 1996; Hock and Noetzli, 1997; Klok and Oerlemans, 2002), many variables cannot be measured and must be parameterized in each model gridcell. Some of these variables, namely the turbulent heat fluxes and the outgoing longwave radiation, are highly sensitive to assumptions made about the glacier surface and subsurface temperature conditions. It is therefore advisable to evaluate the validity of any such assumptions at the point scale before attempting to extend models in space. ## 11 Study area and field measurements The study site is located at 2280 m a.s.l. on a small valley glacier in the Donjek Range (\\(60^{\\circ}50^{\\prime}\\)N, \\(139^{\\circ}10^{\\prime}\\)W) of the St Elias Mountains (Fig. 1a). Separated from the Gulf of Alaska by the highest peaks in the St Elias Mountains, the Donjek Range has a notably continental climate despite being \\(<\\)100 km from the coast. The study glacier is \\(\\sim\\)5.3 km\\({}^{2}\\) in area and ranges in altitude from 1970 to 2960 m a.s.l. (Fig. 1b). Its present equilibrium-line altitude is \\(\\sim\\)2550 m a.s.l. (Wheler, 2009). It has a soundly aspect over most of its length and occupies a valley on the southeast side of the range crest. It has a history of surging and is currently thought to be undergoing a slow surge (De Paoli and Flowers, 2009). The mean annual air temperature at the study site (\\(\\sim\\)2300 m a.s.l.), in the mid-ablation area of the glacier, is about \\(-8^{\\circ}\\)C. The glacier internal temperature structure appears to be weakly polythermal (De Paoli and Flowers, 2009). It is thus unclear to what extent the glacier heat flux and the variability in glacier surface temperature will impact the surface energy balance and melt regime. An automatic weather station (AWS) was installed in the ablation area as part of a field program initiated in summer 2006 at this site. In 2008 an allodometer was added to the AWS. 
Meteorological instruments, installed at a nominal height of 2 m above the surface, measure variables at 30 s intervals, and a Campbell CR1000 data logger records 5 min averages of these quantities (Table 1). An ultrasonic depth gauge (USDG) is installed on a structure drilled into the ice several metres away from the AWS and provides half-hourly surface-lowering data used for model validation. Englacial temperatures were measured daily at 1 m intervals down to \\(\\sim\\)12 m depth at the AWS site in July-October 2008. These data were collected using a custom cable embedded with digital temperature sensors and comprise an additional source of model validation. The cable failed in October 2008 for reasons not yet understood.

\\begin{table} \\begin{tabular}{l l l} \\hline \\hline Variable & Instrument & Precision \\\\ \\hline Air temperature & HMP45C212 TRH Probe & \\(\\pm\\)0.28\\({}^{\\circ}\\)C \\\\ Relative humidity & HMP45C212 TRH Probe & \\(\\pm\\)4\\% \\\\ Wind speed & RM Young 05103-10 & \\(\\pm\\)0.3 m s\\({}^{-1}\\) \\\\ Wind direction & RM Young 05103-10 & 3\\({}^{\\circ}\\) \\\\ Net radiation & Kipp \\& Zonen NR-LITE & \\(\\pm\\)5\\% \\\\ SW radiation in/out & Kipp \\& Zonen CMA6 & \\(\\pm\\)3\\% \\\\ Barometric pressure & RM Young 61205V & \\(\\pm\\)0.5 hPa \\\\ Distance to surface & SR50 Sonic Ranger & \\(\\pm\\)0.4\\% \\\\ Englacial temperature & Beaded stream digital sensors & Not given \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 1: Instrumentation used in this study. Precision taken from manufacturer documentation. SW: shortwave

Figure 1: Study site. (a) Donjek Range study area within the St Elias Mountains of southwest Yukon with study glacier outlined. (b) Contoured surface elevation of study glacier (20 m contour interval) with locations of the ablation stakes (dots) and the AWS (\\(\\times\\)). Image in (a) provided through NASA’s Scientific Data Purchase Project and produced under NASA contract by Earth Satellite Corporation.

We have constructed a map of ice-surface elevation from kinematic GPS surveys conducted in 2006-07, and we maintain a network of 18 ablation stakes on the study glacier to measure mass balance (accumulation and ablation) (Fig. 1b). Profiles of snowpack density and temperature at the AWS location were determined from snow pit measurements made in May 2008. Densities and temperatures were measured at 10 cm depth intervals in the snow pit using, respectively, a 250 cm\\({}^{3}\\) stainless-steel wedge and spring scale, and a 5 in dial-stem snow thermometer. The necessary data for energy-balance modelling are recorded at the AWS and comprise a dataset covering a complete ablation season in 2008 (Fig. 2).

## Energy-balance model formulation

Conservation of energy requires that incoming energy be balanced by the amount outgoing and the amount consumed. The energy balance over a snow or ice surface can be written as (e.g. Hock, 2005): \\[Q_{\\mathrm{N}}+Q_{\\mathrm{R}}+Q_{\\mathrm{H}}+Q_{\\mathrm{L}}+Q_{\\mathrm{G}}=Q_{\\mathrm{M}}, \\tag{1}\\] where \\(Q_{\\mathrm{N}}\\) is net radiation, \\(Q_{\\mathrm{R}}\\) is the sensible heat supplied by rain, \\(Q_{\\mathrm{H}}\\) and \\(Q_{\\mathrm{L}}\\) are the turbulent fluxes of sensible and latent heat, respectively, \\(Q_{\\mathrm{G}}\\) is the glacier or subsurface heat flux and \\(Q_{\\mathrm{M}}\\) is the energy available for melt. Heat fluxes toward the surface are considered positive (downward for \\(Q_{\\mathrm{N}}\\), \\(Q_{\\mathrm{H}}\\) and \\(Q_{\\mathrm{L}}\\), and upward for \\(Q_{\\mathrm{G}}\\)).
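For point-scale calculations, Equation (1) is typically rearranged so that the melt energy is obtained once the individual flux terms are known. The short sketch below is our own illustration in Python, not code from this study; the flux values are invented, and the rain term defaults to zero.

```
def melt_energy(q_n, q_h, q_l, q_g, q_r=0.0):
    """Melt energy Q_M (W m^-2) as the residual of Equation (1).

    All fluxes follow the sign convention above: energy directed toward
    the surface is positive. The rain heat flux defaults to zero.
    """
    return q_n + q_r + q_h + q_l + q_g

# Illustrative hourly values (W m^-2); these numbers are made up.
q_m = melt_energy(q_n=180.0, q_h=15.0, q_l=-5.0, q_g=-2.0)  # 188 W m^-2
```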
The small quantity of rain observed at the study sites, combined with the low air temperatures, allows us to neglect the heat flux from rain, \\(Q_{\\mathrm{R}}\\). We evaluate four model formulations, M1-M4, in which the glacier surface temperature, \\(T_{\\mathrm{s}}\\), and glacier heat flux, \\(Q_{\\mathrm{G}}\\), are treated in an increasingly simple manner (Table 2). M1 includes a multilayer subsurface model (MSM) in order to compute the glacier heat flux, \\(Q_{\\mathrm{G}}\\), and the surface temperature, \\(T_{\\mathrm{s}}\\) (following Greuell and Konzelmann, 1994). The MSM presented by Greuell and Konzelmann (1994) has been successfully applied to studies of energy and mass balance at point locations, at 30 m spatial resolution across the small valley glacier Storglicaer, Sweden (Reijmer and Hock, 2008), and at 20 km resolution across the Greenland ice sheet (Bougamont and others, 2005). Of the models used in this study, the MSM (included in M1) has the strongest physical basis and is thus used as a reference against which to compare the results obtained with other models (M2, M3, M4). Below we describe our treatment of each component of the energy balance in the various models. Unless otherwise noted, the treatment applies to all models. Shortwave, longwave and net radiation (\\(Q_{\\mathrm{SW}}\\), \\(Q_{\\mathrm{M}}\\) and \\(Q_{\\mathrm{N}}\\)) Net radiation is directly measured in this study by a Kipp & Zonen NR-LIIF net radiometer. The spectral response of this instrument has two peaks (short- and longwave) between 0.3 and 30 m. Incoming and outgoing shortwave radiation are also measured independently by a Kipp & Zonen CMA6 allodometer (spectral range 310-2800 nm). Both radiation sensors are installed to be horizontal and are relveelled periodically; the allodometer was installed in May 2008, and both sensors were relveelled on 14 July 2008, at which time the albedoeter was found to be \\(<\\)1\\({}^{\\circ}\\) from horizontal. Measurements of shortwave radiation have been adjusted for the glacier surface slope (southerly aspect, 6\\({}^{\\circ}\\) slope). To do this, the potential direct solar radiation is first computed separately for a flat surface and for a surface of 6\\({}^{\\circ}\\) slope, determined from the glacier surface digital elevation model. This value of the slope is within the 5-7\\({}^{\\circ}\\) range measured with a compass inclinometer at the AWS location. The difference between potential direct solar radiation computed for the AWS location for the horizontal surface and a 6\\({}^{\\circ}\\) surface is added to the measured value of incoming shortwave radiation at 5 min intervals. Details of this correction, including the method used to compute potential direct solar radiation, are given by Wheeler (2009). This difference averages to 7.8 W m\\({}^{-2}\\) over the melt season. Note that the incoming shortwave radiation in early morning and late evening for a 6\\({}^{\\circ}\\) surface is less than that for a flat surface, due to the northerly component of incoming radiation. We do not measure longwave radiation directly. For M1, the net longwave radiation, \\(Q_{\\mathrm{M}}\\), is determined by subtracting the measured net shortwave radiation, \\(Q_{\\mathrm{SW}}\\), from the measured net all-wavelength radiation, \\(Q_{\\mathrm{N}}\\). 
Incoming longwave radiation, for all models, M1-M4, is \\begin{table} \\begin{tabular}{c c c c c} \\hline M1 & M2 & M3 & M4 \\\\ \\hline \\(T_{\\mathrm{s}}\\) & MSM & Residual & ITS & 0\\({}^{\\circ}\\)C \\\\ \\(Q_{\\mathrm{G}}\\) & MSM & Residual & — & — \\\\ \\hline \\end{tabular} \\end{table} Table 2: Model formulations M1–M4, distinguished by their treatments of glacier surface temperature, \\(T_{\\mathrm{s}}\\), and glacier heat flux, \\(Q_{\\mathrm{G}}\\): multilayer subsurface model (MSM), residual \\(Q_{\\mathrm{G}}\\) method, iterative temperature scheme (ITS) or \\(T_{\\mathrm{s}}=0^{\\circ}\\)C assumption Figure 2: Hourly meteorological variables at a nominal height of 2 m above the glacier surface at the AWS in 2008. (a) Net shortwave radiation, SWnet. (b) Incoming longwave radiation, LWin. (c) Air temperature, \\(T_{\\mathrm{s}}\\). (d) Wind speed, \\(u_{\\mathrm{Z}}\\). (e) Vapour pressure, \\(e_{\\mathrm{z}}\\). \\(T_{\\mathrm{Z}}\\) and \\(u_{\\mathrm{Z}}\\) are measured directly, \\(e_{\\mathrm{z}}\\) is calculated from the measured barometric pressure, SWnet is the sum of measured SWin and SWout, and LWin is calculated from \\(Q_{\\mathrm{N}}=\\)SWnet+LWin–LWout, where \\(Q_{\\mathrm{N}}\\) is measured directly and LWout is calculated by model M1. then computed as the difference between \\(Q_{\\rm{MW}}\\) and the outgoing longwave radiation simulated by M1. Outgoing longwave radiation is calculated individually for each model as a function of the modelled glacier surface temperature, according to the Stefan-Boltzmann law: \\(L_{\\rm{out}}=\\epsilon\\,\\sigma\\,T_{\\rm{s}}^{4}\\), where \\(\\epsilon\\) is the emissivity of the surface which we take as 1.0 (Arnold and others, 1996; Hock and Holmgren, 2005), \\(\\sigma=5.6704\\times 10^{-8}\\,{\\rm J}\\,{\\rm s}^{-1}\\,{\\rm m}^{-2}\\,{\\rm K}^{-4}\\) is the Stefan-Boltzmann constant and \\(T_{\\rm{s}}\\) is the surface temperature. ### Turbulent heat fluxes, \\(Q_{\\rm{H}}\\) and \\(Q_{\\rm{L}}\\) We compute the turbulent fluxes of sensible (\\(Q_{\\rm{H}}\\)) and latent (\\(Q_{\\rm{L}}\\)) heat using the bulk aerodynamic method (Murno, 1989). The glacier surface temperature and vapour pressure are either computed or assumed constant (\\(0^{\\circ}\\)C and 611 Pa), depending on the surface/subsurface model applied. The turbulent fluxes are thus calculated based on Monin-Obukhov similarity theory using a single level of wind speed, temperature and relative humidity measurements by the equations: \\[Q_{\\rm{H}}=\\rho_{\\rm{a}}\\,c_{\\rm{P}}\\,C_{\\rm{H}}\\,u_{\\rm{Z}}\\,(T_{\\rm{z}}-T_{ \\rm{s}}) \\tag{2}\\] \\[Q_{\\rm{L}}=\\rho_{\\rm{a}}\\,L_{\\rm{V}}\\,C_{\\rm{E}}\\,u_{\\rm{z}}\\,\\left(\\frac{0.6 22}{P}\\right)\\left({\\rm{e}}_{\\rm{z}}-{\\rm{e}}_{\\rm{s}}\\right), \\tag{3}\\] where \\(\\rho_{\\rm{a}}\\) is the density of air computed from the measured air pressure, \\(P\\), \\(c_{\\rm{P}}=1010\\,{\\rm l}\\,{\\rm kg}^{-1}\\,{\\rm K}^{-1}\\) is the specific heat capacity of air, \\(L_{\\rm{v}}=2.514\\times 10^{6}\\,{\\rm l}\\,{\\rm kg}^{-1}\\) is the latent heat of vaporization, and \\(C_{\\rm{H}}\\) and \\(C_{\\rm{E}}\\) are the turbulent exchange coefficients for heat and vapour pressure, respectively. The wind speed, \\(u_{\\rm{z}}\\), temperature, \\(T_{\\rm{z}}\\), and air pressure, \\(P\\), are measured at height \\(\\chi\\) (nominally 2 m). 
\\(T_{\\rm{s}}\\) and \\({\\rm{e}}_{\\rm{s}}\\) are the temperature and vapour pressure at the surface and \\({\\rm{e}}_{\\rm{z}}\\) is the 2 m vapour pressure computed from measured relative humidity. The turbulent exchange coefficients \\(C_{\\rm{H,E}}\\) are computed as (e.g. Hock, 2005): \\[C_{\\rm{H,E}}=\\frac{k^{2}}{\\left[\\ln(z/z_{\\rm{M}})-\\Psi_{\\rm{M}}(z/L)\\right] \\left[\\ln(z/z_{\\rm{H,E}})-\\Psi_{\\rm{H,E}}(z/L)\\right]}, \\tag{4}\\] where \\(k=0.4\\) is the von Karman constant, \\(L\\) is the Monin-Obukhov length, \\(\\Psi_{\\rm{M,H,E}}\\) are the integrated stability constants and \\(z_{\\rm{M,H,E}}\\) are the roughness lengths for momentum, heat and water vapour, respectively. We adopt a value \\(z_{\\rm{M}}=1\\,\\rm{mm}\\) for both snow and ice, and set \\(z_{\\rm{H}}=z_{\\rm{E}}=z_{\\rm{M}}/100\\) (e.g. Hock and Holmgren, 2005). Values of \\(z_{\\rm{M}}\\) in the range 0.01-1.1 mm for ice (mean of 0.65 mm) and 0.015-1.8 mm for snow (mean of 0.38 mm) have been measured at the study site (MacDougall and Flowers, in press). Order-of-magnitude changes in the value of \\(z_{\\rm{M}}\\) have little effect on our results. For example, varying the value of \\(z_{\\rm{M}}\\) for ice from 0.5 mm to 10 mm yields only a 0.06 m (or 3%) difference in calculated total ablation over the melt season. This suggests a low model sensitivity to \\(z_{\\rm{M}}\\); however, a more robust sensitivity analysis is required to confirm this. The turbulent heat fluxes are small contributors to the energy balance in our study area, but their importance in the modelled energy balance could increase, for example, if stability corrections were not applied. \\(C_{\\rm{H,E}}\\) are computed for stable atmospheric conditions using the functions presented by Beljaars and Holtslag (1991) and for unstable conditions using the nonlinear functions of Paulson (1970) and Dyer (1974). In order to compute the stability corrections, an iterative procedure is employed following Munro (1990). To initiate the iteration, neutral atmospheric conditions are assumed and preliminary values of the turbulent exchange coefficients, \\(C_{\\rm{H,E}}\\), and the sensible and latent heat fluxes, \\(Q_{\\rm{H}}\\) and \\(Q_{\\rm{L}}\\), are computed. Nonzero values of the stability correction functions are then estimated and used to correct \\(C_{\\rm{H,E}}\\) for atmospheric stability and compute corrected values of \\(Q_{\\rm{H}}\\) and \\(Q_{\\rm{L}}\\). Full correction is achieved when the value of \\(Q_{\\rm{H}}\\) stabilizes, which we define as a mean change from the previous step of \\(<\\)0.1 W m\\({}^{-2}\\) and a maximum change of \\(<\\)0.5 W m\\({}^{-2}\\). ### Melt energy, \\(Q_{\\rm{M}}\\) The melt energy, \\(Q_{\\rm{M}}\\), is computed as the residual of the energy balance in Equation (1). The simulated water-equivalent (w.e.) ablation rate, \\(M_{\\rm{s}}\\) (m w.e. s\\({}^{-1}\\)), is determined by: \\[M_{\\rm{s}}=\\frac{Q_{\\rm{M}}}{\\rho_{\\rm{w}}\\,L_{\\rm{f}}}-\\frac{Q_{\\rm{L}}}{ \\rho_{\\rm{w}}\\,L_{\\rm{v}}}, \\tag{5}\\] where \\(\\rho_{\\rm{w}}=1000\\,{\\rm kg}\\,{\\rm m}^{-3}\\) is the density of water and \\(L_{\\rm{f}}=3.34\\times 10^{5}\\,{\\rm l}\\,{\\rm kg}^{-1}\\) is the latent heat of fusion. Only positive values of \\(Q_{\\rm{M}}\\) are considered in the ablation-rate calculation, but both negative and positive values of \\(Q_{\\rm{L}}\\) are considered. 
Both sublimation/deposition and evaporation/condensation are expected to occur, so the application of the latent heat of vaporization is an arbitrary choice (following e.g. Oerlemans, 2000). Depending on the partitioning of the negative latent heat flux between evaporation and sublimation, this assumption may result in ablation differences of a few millimetres to centimetres water equivalent for the dataset under consideration.

### Glacier surface temperature, \(T_{\rm s}\), and heat flux, \(Q_{\rm G}\)

#### M1: multilayer subsurface model

There are a variety of ways that heat is transferred from the surface into the glacier. In addition to refreezing meltwater, shortwave radiation penetrates below the surface (Paterson, 1994) and conduction through the snow/ice occurs wherever a temperature gradient exists. We follow the approach used in the SOMARS (Simulation Of glacier surface Mass balance And Related Sub-surface processes) subsurface model of Greuell and Konzelmann (1994), which takes all of these heat transfer mechanisms into account. For the one-dimensional MSM, the following thermodynamic equation is solved in each layer of the subsurface grid (Greuell and Konzelmann, 1994): \[\rho c\,\frac{\partial T}{\partial t}=\frac{\partial}{\partial z}\left(K_{\rm e}\frac{\partial T}{\partial z}\right)+\frac{\partial Q_{\rm s}}{\partial z}-\frac{\partial}{\partial z}(ML_{\rm f})+\frac{\partial}{\partial z}(FL_{\rm f}), \tag{6}\] with temperature \(T\), material density \(\rho\) and specific heat capacity \(c=152.5+7.122(T+273.15)\) (Paterson, 1994). The term \(\frac{\partial}{\partial z}\left(K_{\rm e}\frac{\partial T}{\partial z}\right)\) on the right-hand side of the equation is the conductive heat flux, \(Q_{\rm C}\), where \(K_{\rm e}\) is the effective conductivity and \(\partial T/\partial z\) is the subsurface temperature gradient. Quantity \(\partial Q_{\rm s}/\partial z\) is the absorption of energy from the atmosphere, where \(Q_{\rm s}=Q_{\rm N}+Q_{\rm H}+Q_{\rm L}\) is the sum of the radiative and turbulent fluxes. \(M\) and \(F\) are the melt and refreezing rates, respectively, and \(L_{\rm f}\) is the latent heat of fusion. To implement the MSM we establish a grid comprising 25 subsurface layers, with the upper boundary defined as the snow or ice surface and the lower boundary set to a depth of \(\sim\)40 m (i.e. well below the depth of ice affected by seasonal temperature fluctuations). The thickness of the subsurface layers varies from 0.04 m near the surface to \(\sim\)5 m for the deepest layers. Precise layer thicknesses are adjusted so the depth of the snow-ice interface at the beginning of the melt season (i.e. the previous year's summer surface) coincides with a boundary between two subsurface layers (following Greuell and Konzelmann, 1994). Glacier surface conditions at the beginning of the melt season must be prescribed to initialize the model. The simplified MSM applied here requires only the initial snow depth and profiles of density and temperature, which we obtain from snow pit measurements made adjacent to the AWS in May 2008. Following Greuell and Konzelmann (1994), a constant initial density equal to the integrated snow pit density is assigned to all snow layers, and a constant ice density, \(\rho_{\rm i}\), of \(900\,\rm kg\,m^{-3}\) is assigned to the ice. Measured snow pit temperatures are linearly interpolated onto the subsurface model grid. 
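As one possible rendering of the grid and initialization just described, the sketch below builds an illustrative 25-layer column and sets the initial density, temperature and heat-capacity profiles; the geometric layer-thickness progression and all names are our own assumptions, not the authors' implementation.

```python
import numpy as np

def build_subsurface_grid(n_layers=25, top_dz=0.04, bottom_dz=5.0):
    """Illustrative grid: 25 layers whose thickness grows geometrically from
    ~0.04 m at the surface to ~5 m at depth.  The paper additionally adjusts
    the thicknesses so the column reaches ~40 m and the previous summer
    surface coincides with a layer boundary; that tuning is omitted here."""
    ratio = (bottom_dz / top_dz) ** (1.0 / (n_layers - 1))
    dz = top_dz * ratio ** np.arange(n_layers)     # layer thicknesses (m)
    z = np.cumsum(dz) - 0.5 * dz                   # layer mid-point depths (m)
    return z, dz

def initial_profiles(z, snow_depth, rho_snow, pit_depths, pit_temps,
                     t_ice, rho_ice=900.0):
    """Constant snow density from the integrated snow pit value, ice density
    900 kg m^-3, and snow-pit temperatures interpolated onto the grid.  The ice
    portion is set to a single value here as a placeholder; the text describes
    interpolating between the snow-ice interface and the 12 m ice temperature."""
    rho = np.where(z <= snow_depth, rho_snow, rho_ice)
    temp = np.interp(z, pit_depths, pit_temps)     # pit_depths must be increasing
    temp = np.where(z > snow_depth, t_ice, temp)
    return rho, temp

def heat_capacity(t_celsius):
    """Specific heat capacity of snow/ice, c = 152.5 + 7.122 (T + 273.15)
    in J kg^-1 K^-1 (Paterson, 1994), as used in Equation (6)."""
    return 152.5 + 7.122 * (t_celsius + 273.15)
```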
Initial ice temperatures are linearly interpolated onto the grid between the measured \(12\,\rm m\) ice temperature (recorded in September 2008) and the measured temperature at the snow-ice interface at the beginning of the ablation season (\(-10^{\circ}\)C in early May 2008). With the initial subsurface temperature and density profiles defined, the surface temperature, \(T_{\rm s}\), is calculated by linear extrapolation of the temperature of the two uppermost model layers. This extrapolation is preferable to setting \(T_{\rm s}\) equal to the temperature of the uppermost layer, because of the large temperature gradients just below the surface (Greuell and Konzelmann, 1994). According to Reijmer and Hock (2008), in order to obtain the most accurate surface temperature, the two uppermost layers must be of the order of a few centimetres thick; we use thicknesses of \(4\) and \(5\,\rm cm\). The surface temperature obtained from the energy balance at a given time-step is then used to calculate the outgoing longwave radiation, the turbulent heat fluxes (Equations (2) and (3)) and the surface energy balance (Equation (1)) for the following time-step. We found a negligible difference in results between using a linear versus shape-preserving spline function to compute the surface temperature. The penetration of shortwave radiation beneath the surface is taken into account following Greuell and Oerlemans (1986) by assuming 36% is absorbed entirely at the surface (represented by the uppermost model layer). The radiation absorbed at the surface corresponds approximately to wavelengths \(>\)0.8 \(\mu\)m. The remaining 64% is assumed to penetrate below the surface and is absorbed by all layers (including the uppermost model layer) according to the Beer-Lambert law: \[S(z)=0.64\,Q_{\rm SW}e^{-\kappa z}, \tag{7}\] where \(S(z)\) is the shortwave radiation flux at depth \(z\), \(Q_{\rm SW}\) is the net shortwave radiation incident at the surface and \(\kappa\) is the extinction coefficient. Snow/ice absorption of shortwave radiation is predicted to increase with material density up to \(\sim\)450 \(\rm kg\,m^{-3}\) and then decrease with density from \(\sim\)450 to 900 \(\rm kg\,m^{-3}\) (Bohren and Barkstrom, 1974). In the datasets presented here, the minimum snow density is \(\sim\)250 \(\rm kg\,m^{-3}\) and we apply a constant extinction coefficient of \(\kappa=20\,\rm m^{-1}\) for all snow densities up to 450 \(\rm kg\,m^{-3}\) (for comparison, the maximum value of \(\kappa\) given by Greuell and Konzelmann (1994) was applied to the summer snow density of 300 \(\rm kg\,m^{-3}\)). Following Greuell and Konzelmann (1994), \(\kappa\) is assumed to be a linear function of density from \(\sim\)450 to 900 \(\rm kg\,m^{-3}\) such that \(\kappa=2.5\,\rm m^{-1}\) at 900 \(\rm kg\,m^{-3}\). Because the extinction coefficient is taken from the literature, we have not explicitly accounted for any site-specific effects related to contaminants or dust in the study glacier snowpack. The major steps taken to implement the MSM, after the grid is established, are described below. The conductive heat flux depends on the effective conductivity, \(K_{\rm e}\), which governs conduction, convection, radiation and vapour diffusion processes. 
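Before turning to the conductive flux, the shortwave-penetration scheme above can be sketched as follows (a minimal illustration, not the authors' code). Treating the density-dependent \(\kappa\) through a layer-by-layer optical depth is an interpretation on our part of applying Equation (7) on a layered grid.

```python
import numpy as np

def extinction_coefficient(rho):
    """kappa(rho): 20 m^-1 for densities up to 450 kg m^-3, decreasing linearly
    to 2.5 m^-1 at 900 kg m^-3, as described in the text."""
    rho = np.asarray(rho, dtype=float)
    return np.where(rho <= 450.0, 20.0,
                    20.0 + (rho - 450.0) * (2.5 - 20.0) / (900.0 - 450.0))

def absorbed_shortwave(q_sw, dz, rho):
    """Equation (7): 36% of net shortwave is absorbed in the uppermost layer,
    the remaining 64% decays with depth as S(z) = 0.64 Q_SW exp(-kappa z).
    Returns the energy deposited in each layer (W m^-2)."""
    kappa = extinction_coefficient(rho)
    tau = np.concatenate(([0.0], np.cumsum(kappa * dz)))  # optical depth at interfaces
    s = 0.64 * q_sw * np.exp(-tau)                         # penetrating flux at interfaces
    absorbed = s[:-1] - s[1:]                              # per-layer absorption
    absorbed[0] += 0.36 * q_sw                             # surface absorption (> ~0.8 um)
    return absorbed
```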
We compute \\(K_{\\rm e}\\) as a function of snow or ice density, \\(\\rho\\) (\\(\\rm kg\\,m^{-3}\\)), following Sturm and others (1997): \\[K_{\\rm e}=0.138-1.01\\times 10^{-3}\\rho+3.233\\times 10^{-6}\\rho^{2}. \\tag{8}\\] Conduction between adjacent subsurface layers is then determined from \\(K_{\\rm e}\\) and the temperature gradient, \\(\\partial T/\\partial z\\). If the computed temperature of any layer exceeds \\(0^{\\circ}\\)C, the temperature is set to \\(0^{\\circ}\\)C and the excess energy is applied to melt. Melting can occur in subsurface layers to the penetration of shortwave radiation. When snow is present at the glacier surface, meltwater percolates until it reaches a layer with a subzero temperature and rerefeezes. The amount of rerefeering in any given model layer is limited by the temperature, which cannot exceed \\(0^{\\circ}\\)C, and the available pore space in the snow. An amount of meltwater equal to the irreducible water content is retained even if the refreezing limit has been reached, and the remaining liquid meltwater percolates down to the underlying layer. Following Reijmer and Hock (2008), we describe \\(\\theta_{\\rm mix}\\), the ratio of the mass of irreducible liquid water to the total mass of the layer, based on an empirical relationship developed by Schneider and Jansson (2004) using data from Storglaciaren and laboratory experiments (Coleou and Lesaffre, 1998): \\[\\theta_{\\rm mi}=0.0143\\rm e^{3.3n} \\tag{9}\\] \\[n=1-\\frac{\\rho_{\\rm rot}(1-\\theta_{\\rm m})}{\\rho_{\\rm t}} \\tag{10}\\] \\[\\theta_{\\rm m}=\\frac{m_{\\rm liq}}{m_{\\rm liq}+m_{\\rm sn}}, \\tag{11}\\] where \\(n\\) is the porosity of the snow layer, \\(\\rho_{\\rm rot}\\) is the total density including both solid and liquid, \\(m_{\\rm liq}\\) is the mass of liquid water, \\(m_{\\rm niq}\\) is the mass of snow and \\(\\theta_{\\rm m}\\) is referred to as the gravimetric liquid water content. Increases in the total density correspond to increases in porosity and to increases in the irreducible water content according to Equations (9) and (10). If the sum of the available meltwater (produced within a given layer or percolating from above) and the liquid water content present in the layer exceeds the irreducible water content, the excess percolates to the underlying layer. Any meltwater that does not refreeze and is not retained as irreducible water content percolates to the base of the snowpack, where it runs off along the impermeable ice surface. Runoff is computed using a runoff coefficient for snow, \\(\\kappa_{\\rm s}=0.8\\), which is the ratio of runoff to available water (e.g. Seidel and Martinec, 2004). This means that 80% of the water runs off immediately, while 20% is retained in the snowpack at each time-step. This runoff is equivalent to the model ablation. meltwater that does not immediately run off is used to build a saturated slush layer that extends upward from the base of the snowpack. In the dataset considered, the results are not particularly sensitive to the value of the runoff coefficient (values from 0.5 to 1.0 were examined), in part because snow makes a much smaller contribution than ice to the total ablation. Furthermore, the glacier surface slopes in the study area are \\(>\\)\\(5^{\\circ}\\), producing relatively rapid runoff and little opportunity for the formation of slush layers. For situations in which the contribution of snow is more significant, or slopes more conducive to retaining water, the runoff coefficient may warrant more careful investigation. 
In addition to changes in the density of snow layers due to melting and refreezing, which will dominate in a melting snowpack, small changes in layer densities can also occur due to dry densification. The dry densification rate, \(\mathrm{d}\rho/\mathrm{d}t\), is computed following the approach of Li and Zwally (2004), which is a modification of the empirical relationship developed by Herron and Langway (1980): \[\frac{\mathrm{d}\rho}{\mathrm{d}t}=K(T)\,A^{\alpha}\,\frac{\rho_{\mathrm{i}}-\rho}{\rho_{\mathrm{i}}}-\frac{\partial I}{\partial z}, \tag{12}\] where \(K(T)=K_{0}(T)\exp(-E(T)/RT)\) is the temperature-dependent rate factor with universal gas constant \(R=8.3144\,{\rm J}\,{\rm K}^{-1}\,{\rm mol}^{-1}\), rate constant \(K_{0}\) and activation energy \(E\). \(E\) and \(K_{0}\) are functions of temperature, as described by Zwally and Li (2002). \(A\) is the mean accumulation rate representing the change in overburden pressure and is set to the initial snow layer thickness (Reijmer and Hock, 2008), \(\alpha\) is \(\sim\)1 and \(I\) is the vapour flux. Changes in density due to the vapour flux are computed following Sturm and Benson (1997). Calculated surface lowering in the model thus includes contributions from dry densification. In the final step, the subsurface grid is redefined so it is effectively shifted down to compensate for the change in thickness of the uppermost layer due to ablation. Summer snow accumulation is not accounted for in this version of the MSM. Mass (solid and liquid) and energy (i.e. cold content) are conserved when layers are shifted. The thickness of the bottom snow layer is incrementally reduced as snowmelt proceeds, until it is \(<\)1 cm, at which time the layer is merged with the overlying snow layer. In summary, the MSM computes the glacier heat flux, \(Q_{\mathrm{G}}\), as the integrated change in cold content in all model layers at each time-step and facilitates calculation of outgoing longwave radiation, \(L_{\rm out}\), and the sensible (\(Q_{\mathrm{H}}\)) and latent (\(Q_{\mathrm{L}}\)) heat fluxes by estimating the surface temperature. Melting is computed directly from the energy balance in each model layer.

#### M2: solving \(Q_{\mathrm{G}}\) as the residual energy flux

In this formulation, the glacier heat flux, \(Q_{\mathrm{G}}\), is solved as the residual of the energy-balance equation when the surface energy flux is negative (e.g. Klok and Oerlemans, 2002). The resultant change in surface temperature, and the related changes in the turbulent fluxes and the surface energy balance, are then recomputed. For each 5 min time-step, the surface energy balance is initially calculated assuming \(T_{\mathrm{s}}=0^{\circ}\mathrm{C}\). If the surface energy flux is negative, the glacier heat flux, \(Q_{\mathrm{G}}\), is computed as the residual required to achieve energy balance (Equation (1)) with \(Q_{\mathrm{M}}=0\). The temperature change in the 5 cm thick surface layer is then computed from \(Q_{\mathrm{G}}\) as \[\Delta T_{\mathrm{s}}=\frac{Q_{\mathrm{G}}}{\rho_{\mathrm{s}}\,c_{\mathrm{s}}\,d_{\mathrm{s}}}\Delta t, \tag{13}\] where \(\Delta t\) is the time-step in seconds, and \(\rho_{\mathrm{s}}\), \(c_{\mathrm{s}}\) and \(d_{\mathrm{s}}\) are the density, specific heat capacity and thickness of the surface layer, respectively. 
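A minimal sketch of the M2 scheme just described is given below (not the authors' code). `q_net_surface` is an assumed helper returning the sum of radiative and turbulent fluxes at a given surface temperature; the sign convention (a negative residual cools the surface layer) and the capping of the layer temperature at 0°C are our own simplifications.

```python
def m2_surface_step(q_net_surface, rho_s, c_s, d_s, dt=300.0, max_iter=10, tol=0.1):
    """One 5 min time-step of M2.  Returns (Q_G, Q_M, T_s) in W m^-2, W m^-2
    and deg C.  The balance is first evaluated at T_s = 0 deg C; if it is
    negative, Q_G is taken as the residual and the 5 cm surface layer is
    cooled according to Eq. (13), iterating until Q_G stabilizes."""
    t_s, q_g = 0.0, 0.0
    for _ in range(max_iter):
        balance = q_net_surface(t_s)
        if balance >= 0.0 and t_s == 0.0:
            return 0.0, balance, 0.0                      # melting: all energy to Q_M
        q_g_new = balance                                 # residual glacier heat flux
        # Eq. (13): temperature change of the surface layer, capped at 0 deg C
        t_s = min(0.0, q_g_new * dt / (rho_s * c_s * d_s))
        if abs(q_g_new - q_g) < tol:                      # Q_G has stabilized
            return q_g_new, 0.0, t_s
        q_g = q_g_new
    return q_g, 0.0, t_s
```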
Varying the layer thickness, \(d_{\mathrm{s}}\), affects the temperature, \(T_{\mathrm{s}}\), but does not affect the amount of energy associated with warming/cooling the surface layer, since the amount of energy is determined by \(Q_{\mathrm{G}}\). The surface energy balance is then recalculated with the surface temperature, \(T_{\mathrm{s}}\), assumed equal to the new temperature of the 5 cm surface layer. The iteration continues until the value of \(Q_{\mathrm{G}}\) stabilizes, which usually occurs after two or three iterations. This simple approach to estimating the glacier heat flux and the surface temperature is a compromise between the MSM (M1) and the iterative temperature scheme (M3). M2 extends the iterative temperature scheme (Braun and Hock, 2004) by taking into account the energy associated with temperature changes in a 5 cm thick surface layer and has some similarities to the two-layer subsurface model of Klok and Oerlemans (2002). M2 is expected to be most applicable on temperate or near-temperate glaciers where the conductive component of the glacier heat flux is small.

#### M3: iterative adjustment of surface temperature; M4

The third method we use to determine the glacier surface temperature, \(T_{\mathrm{s}}\), referred to as the iterative temperature scheme (ITS), is applied following Braun and Hock (2004). In this method we assume that \(T_{\mathrm{s}}=0^{\circ}\mathrm{C}\) when the surface energy flux (Equation (1)) is positive. If the surface energy flux is negative, \(Q_{\mathrm{M}}\) is set to zero and the surface is assumed to cool (Escher-Vetter, 1985; Braithwaite and others, 1998). The surface temperature is then lowered iteratively in 0.25\({}^{\circ}\mathrm{C}\) steps and the turbulent heat fluxes recomputed at each step. A lowered surface temperature increases the turbulent heat fluxes and decreases the outgoing longwave radiation in order to compensate for the negative surface energy flux, and the iteration continues until balance is achieved (Braun and Hock, 2004; Hock and Holmgren, 2005). The glacier heat flux, \(Q_{\mathrm{G}}\), is neglected in this method. Reijmer and Hock (2008) compare this method to the MSM of Greuell and Konzelmann (1994) and find that the latter reproduces observed surface temperatures with significantly greater skill. This is expected, since the MSM accounts for the relevant surface and subsurface processes in a physical manner. However, the potential advantage of the iterative scheme is that it does not require knowledge of the temperature or density structure of the ice or snowpack. M3 is differentiated from M2 in that it assumes an instantaneous adjustment of surface temperature, whereas M2 allows the temporary storage of heat in a shallow subsurface layer. In the extreme case where wind speed \(u_{\mathrm{z}}=0\), the turbulent heat fluxes will be zero and the surface temperature iteration will never converge. Surface temperatures are therefore capped at a minimum of \(-30^{\circ}\mathrm{C}\) for M3. We assume the surface energy is balanced when temperatures reach this threshold. When the melt energy is positive, M3 (ITS) is identical to M4, in which the surface temperature is assumed constant at \(0^{\circ}\mathrm{C}\). Melt rates simulated with these two methods are thus identical.
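The iterative temperature scheme (M3) can be sketched as follows (a minimal illustration, not the authors' code); as above, `q_net_surface` is an assumed helper returning the sum of radiative and turbulent fluxes for a given surface temperature.

```python
def m3_surface_temperature(q_net_surface, step=0.25, t_min=-30.0):
    """Iterative temperature scheme (M3).  If the surface energy flux at
    0 deg C is positive it all goes to melt; otherwise Q_M = 0 and T_s is
    lowered in 0.25 deg C steps, recomputing the fluxes each step, until the
    balance closes or the -30 deg C cap is reached.  Returns (T_s, Q_M)."""
    t_s = 0.0
    balance = q_net_surface(t_s)
    if balance >= 0.0:
        return t_s, balance                # melting surface
    while balance < 0.0 and t_s > t_min:
        t_s -= step
        balance = q_net_surface(t_s)
    return t_s, 0.0                        # sub-freezing surface, no melt
```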
## Results and discussion

We present results for the 2008 melt season, beginning with a comparison of simulated and measured ice temperatures and cumulative ablation. This is followed by an intercomparison of model results, including modelled surface temperatures, hourly ablation and daily and total energy fluxes.

### Subsurface temperature structure

The MSM applied in M1 computes temperature and density changes down to \(\sim\)40 m below the glacier surface. In this study, the primary purpose of the MSM is to calculate the glacier heat flux, as outlined above. The MSM provides a sound physical basis for modelling subsurface temperatures and densities, and yields better agreement with the observed englacial temperature profiles when shortwave penetration into the subsurface is included (Fig. 3). The temperature profile is captured within \(1^{\circ}\)C for the model initialized on 5 May with a simple temperature profile linearly interpolated between the measured temperature at the base of the snowpack and at \(12\,\mathrm{m}\) ice depth (left column, Fig. 3). If the initial ice-temperature profile is instead assumed constant and equal to the measured temperature at the ice-snow interface at the beginning of the melt season, the modelled temperature profile is substantially colder. For this dataset, where the initial snowpack was very thin, the difference was up to \(\sim\)6\({}^{\circ}\)C from July to September; this difference is expected to decrease with an increasing thickness of the initial snowpack. Based on visual inspection, good agreement with the observations is achieved when the measured temperature profile is used as an initial condition for a simulation beginning on 18 July (middle column, Fig. 3). It could be argued that the measured near-surface temperatures are better reproduced by the simulation shown in the left column of Figure 3. Neglecting shortwave penetration into the subsurface leads to modelled near-surface temperatures that depart substantially from the measured values late in the season (right column, Fig. 3). In this simulation, these departures extend to \(5\,\mathrm{m}\) depth, with modelled temperatures being up to \(\sim\)2.5\({}^{\circ}\)C colder near the surface by 11 September 2008. The model neglecting shortwave penetration would presumably also under-predict near-surface temperatures in the early melt season.

### Cumulative ablation

Figure 4 shows observed and simulated cumulative ablation from 5 May to 11 September 2008. The USDG ablation record is constructed from differences in daily average surface lowering using the integrated snow pit density for the period of snowmelt, and a density of \(900\,\mathrm{kg}\,\mathrm{m}^{-3}\) for the period of ice melt. Note that ablation of summer snowfall is included in the USDG ablation record, while snowfall is not explicitly included in the model. However, because the model uses measured albedo, the influence of summer snowfall on surface albedo is implicitly included. Figure 4 also shows several ablation measurements at the stake location (a few metres from the AWS) and their attendant uncertainties. These uncertainties reflect those estimated for individual stake measurements based on the complexity of the glacier surface in the immediate vicinity of the stake; they do not reflect the standard deviation between measurements or include uncertainty in the representativeness of the stake for a larger region. 
Significant differences in cumulative ablation as simulated by the models arise during the period of snowmelt prior to 10 June, with M1 underestimating ablation near the beginning of this period. Models M2-M4 perform better than M1 from roughly 15 to 25 May; M1 performs significantly better than the other models immediately after this, from 25 May to 10 June. All models overestimate the USDG measured ablation rate similarly and significantly during the period 10-25 June. We interpret this mismatch as being due to the evolution of the glacier surface during this phase of the melt season: a heavily weathered and uneven cryoconite surface emerges from beneath the snow cover and later collapses. This results in depressed rates of surface lowering (and hence ablation) as recorded by the USDG prior to the cryoconite collapse. From about 25 May onward, M2-M4 generally over-predict ablation, resulting in modelled ablation that exceeds the measured value by 18%, 27% and 29% for M2, M3 and M4, respectively, by the end of the season (Table 3). M1 underestimates cumulative ablation by 6%, increasing to 10% if a uniform initial ice-temperature profile is assumed.

Figure 3: Subsurface temperature profiles measured (diamonds) and modelled with M1 (crosses) for 18 July (top row), 15 August (middle row) and 11 September (bottom row). Left column: M1 initialized on 5 May with an initial temperature profile linearly interpolated between the temperature measured at the snow-ice interface on 5 May and at 12 m ice depth. Centre column: M1 initialized on 18 July with the measured ice-temperature profile. Right column: as in centre column, but with shortwave penetration neglected (i.e. all radiative energy absorbed entirely by the uppermost model layer).

Figure 4: Cumulative ablation from 5 May to 11 September 2008 simulated by M1–M4, compared to ablation-stake measurements and daily average values of ultrasonic depth gauge (USDG) surface lowering. Initial snow depth is 0.41 m (0.103 m w.e.) and total ablation measured at the stake is 1.80 m w.e., while that measured by the USDG is 1.78 m w.e. M1 underestimates ablation by 6%, while M2, M3 and M4 overestimate ablation by 18%, 27% and 29%, respectively.

The subsurface model in M1 produces a reasonably accurate simulation of the timing of the snow-to-ice surface transition. According to the USDG record, the ice surface became temporarily snow-free on 7 June for a few hours, was covered by snow later that night, and again became snow-free on 10 June, after which ice remained exposed at the surface, aside from a few small snow events later in the melt season. The M1-simulated surface transition occurs on 11 June. The actual surface transition manifests itself in all the modelled ablation records as an inflection point, due to the increase in net radiation as the surface albedo drops substantially in the transition from snow to ice. The surface transition date is important for simulating changes in surface albedo, subsurface heat-transfer processes, supraglacial hydrology and associated impacts on the melt and runoff. These changes have a particularly large impact on spatially distributed melt modelling, where albedo, outgoing longwave radiation and glacier surface conditions are simulated rather than measured or observed (e.g. Arnold and others, 1996; Klok and Oerlemans, 2002; Hock and Holmgren, 2005). 
It should be noted that the snowpack was relatively thin (0.41 m) in 2008 at the USDG location, compared with some other recent years where we have measured \(>\)1.5 m snow depth at the same site. For the thin snowpack, including dry densification in the model did not result in significant surface lowering. However, percolation and refreezing were significant and resulted in 0.1 m w.e. of internal accumulation; this is comparable to the winter accumulation for 2008. A deeper snowpack will provide a more informative test of the model in some respects; however, we have chosen to focus on the year for which we have subsurface temperature measurements. A future study will examine the performance of the MSM over several years, encompassing a range of initial snowpack conditions.

### Surface temperature, \(T_{\rm s}\)

Glacier surface temperatures simulated with M1-M3 are shown in Figure 5; M4 assumes \(T_{\rm s}=0^{\circ}\)C. The mean difference between surface temperatures simulated by M1 initialized with a constant as opposed to a linearly varying initial temperature profile was 0.04\({}^{\circ}\)C; the maximum difference was 2.32\({}^{\circ}\)C. The modelled temperature records are distinct for M1-M3, with M1 consistently producing the highest daily temperature minima, often greater than \(-7^{\circ}\)C during the mid-ablation season. M1 also produces multi-day periods of subfreezing surface temperatures, particularly late in the melt season. Sub-freezing temperatures rarely persist for more than a day in M2 or M3 after 15 May. Surface temperatures simulated with M3 are highly variable and exhibit the lowest daily minima. The arbitrary threshold of \(-30^{\circ}\)C is rarely reached in M3 after 15 May, though the temperature frequently dips below \(-20^{\circ}\)C. While M1 and M2 both account for the thermal inertia of the snow or ice layers, hence limiting the magnitude of surface temperature variability, there is still a significant difference between including a full subsurface model (M1) and only a thin subsurface layer (M2). The penetration of shortwave radiation below the surface in M1 contributes to the higher modelled surface temperature minima: warmer subsurface layers are able to transfer heat toward the surface as it cools, thus limiting the surface temperature drop. The glacier heat flux, \(Q_{\rm G}\), is generally positive (upward), transferring heat from depth toward the surface, during sub-freezing conditions at the surface. Without accounting for thermal inertia and the energy released due to cooling of the surface/subsurface at night, M3 requires that the nightly surface temperature be adjusted to a relatively low value in order to compensate for a negative energy flux at the surface. The negative energy flux is the result of the negative net radiative flux, which occurs on most nights in this dataset. The smaller the energy deficit, the smaller the surface temperature adjustments required to achieve energy balance in M2 and M3, and the smaller the differences in simulated surface temperature minima between models. 
On a few nights, when the net longwave energy loss is small and winds are relatively high, surface temperature minima simulated with M1, M2 and M3 are similar (e.g. on 30 August). When the surface energy balance remains positive at night, the simulated surface temperature remains at \(0^{\circ}\)C with all models (e.g. on 5 July). The assumption of a constant surface temperature of \(0^{\circ}\)C does not necessarily have a large impact on simulated melt rates, since melting does not occur when \(T_{\rm s}<0^{\circ}\)C. However, simulation of the energy fluxes during times when \(T_{\rm s}<0^{\circ}\)C does have an impact on the timing of melt onset the following day.

\begin{table}
\begin{tabular}{l c c c c}
\hline
 & M1 & M2 & M3 & M4 \\
\hline
_Full period_ & & & & \\
\(L_{\rm out}\) & \(-299\) & \(-294\) & \(-288\) & \(-316\) \\
\(Q_{\rm H}\) & \(6\) & \(6\) & \(7\) & \(-3\) \\
\(Q_{\rm L}\) & \(-3\) & \(-3\) & \(-3\) & \(-12\) \\
\(Q_{\rm G}\) & \(-6\) & \(0\) & — & — \\
_Melting_ & & & & \\
\(L_{\rm out}\) & \(-316\) & \(-316\) & \(-316\) & \(-316\) \\
\(Q_{\rm H}\) & \(6\) & \(6\) & \(5\) & \(5\) \\
\(Q_{\rm L}\) & \(-5\) & \(-8\) & \(-8\) & \(-8\) \\
\(Q_{\rm G}\) & \(-30\) & \(0\) & — & — \\
\(t_{\rm M}\) & \(26.4\) & \(40.3\) & \(50.9\) & \(50.9\) \\
\(M_{Q_{\rm L}}\) & \(0.01\) & \(0.01\) & \(0.01\) & \(0.05\) \\
\(M_{\rm TOT}\) & \(1.68\) & \(2.10\) & \(2.26\) & \(2.30\) \\
\hline
\end{tabular}
\end{table}
Table 3: Averages of simulated hourly energy fluxes (W m\({}^{-2}\)) for the entire 2008 ablation season, and for periods of melting only, with M1-M4: outgoing longwave radiation, \(L_{\rm out}\), sensible heat flux, \(Q_{\rm H}\), latent heat flux, \(Q_{\rm L}\), and glacier subsurface heat flux, \(Q_{\rm G}\). Also reported is the duration of surface melting in days, \(t_{\rm M}\); the ablation due to sublimation, \(M_{Q_{\rm L}}\) (m w.e.); and the total ablation, \(M_{\rm TOT}\) (m w.e.), which includes mass loss due both to melt and sublimation. If the melting period for M1 is calculated based on simulation of internal (subsurface) melting, rather than surface melting, \(L_{\rm out}=-310\), \(Q_{\rm H}=6\), \(Q_{\rm L}=-5\), \(Q_{\rm G}=-26\) and \(t_{\rm M}=60.8\).

Figure 5: Modelled surface temperature, \(T_{\rm s}\), at the AWS in 2008. (a) M1. (b) M2. (c) M3.

### Hourly ablation

To understand the detailed differences in simulated ablation between models, we examine hourly ablation rates and surface temperatures over 5 days in August (Fig. 6). The highest hourly mean nightly surface temperatures are obtained with M1 and the lowest temperatures with M3 (Figs 5 and 6b). The instantaneous response of M3 to changes in the surface energy balance results in large and rapid surface temperature fluctuations. Following cold night-time temperatures, melting temperatures at the glacier surface are often achieved earlier in the morning with M3, compared with M1 and M2 (Fig. 6a); the thermal inertia of the snow/ice layers modelled with M1 and M2 generally reduces the rate and magnitude of surface temperature variability. The conductive heat flux simulated with M1 is positive at night, which causes warmer surface temperatures and allows ablation to begin earlier on some days compared with M2 (e.g. 10-11 August). 
Although the timing of daily ablation onset can differ by a few hours between models, there is little difference in simulated peak daily ablation rates. When the surface is melting, simulated ablation rates are equal with all models except M1. Ablation rates are frequently less with M1 than the other models due to the negative glacier heat flux, and due to the refreezing and retention of meltwater during periods of snowmelt. This difference is larger during ice melt than snowmelt, but the difference between peak rates of ice melt simulated by M1 and the other models is generally small (Fig. 6a). However, these differences can accumulate to substantial amounts over time (Fig. 4).

### Daily energy fluxes

Differences in modelled surface temperature are important, because they affect the calculation of the outgoing longwave radiation and the turbulent heat fluxes. In Figure 7 we compare the daily energy fluxes simulated with M1-M4 to examine the implications of varying treatments of \(T_{\rm s}\) and \(Q_{\rm G}\) on the energy-balance components. To facilitate comparison between models, in Figure 7b-d we plot the components as differences from those simulated by M1. The most notable features of Figure 7 are: (1) the large difference between M4 and the other models; (2) the similarity of most energy fluxes simulated by M2 and M3; and (3) the minimal discrepancy between M1 and M2 in the second half of the melt season. The large difference in daily energy fluxes between M4 and the other models, especially at the beginning and end of the melt season, is due entirely to the assumption of a \(0^{\circ}\)C glacier surface temperature. Prior to 16 May the simulated surface temperature (Fig. 5a) rarely reaches \(0^{\circ}\)C, so M4 simulates much more outgoing longwave radiation than the other models. In contrast to M2 and M3, M4 generally underestimates net longwave radiation relative to M1 throughout the melt season. Differences in the simulated values of net longwave radiation are entirely a product of differences in calculated outgoing longwave, as incoming longwave has been prescribed identically for all models. \(Q_{\rm LW}\) is generally higher for M2 and M3 relative to M1, as M2 and M3 predict lower surface temperatures (Fig. 5) and hence less outgoing longwave. Aside from during the early melt season, when temperatures are notably colder for M2 compared with M1, \(Q_{\rm LW}\) is fairly similar between M1 and M2. Although minimum modelled surface temperatures are significantly different between M1 and M2, the daily mean modelled temperatures are often similar and thus produce similar daily mean values of \(Q_{\rm LW}\). The turbulent heat fluxes, \(Q_{\rm H}\) and \(Q_{\rm L}\), also distinguish M4 from the other models. The sensible heat flux, \(Q_{\rm H}\), is second in importance to the net radiative flux as an energy source, providing 15% of the daily energy on average. This percentage was calculated by separating the positive and negative contributions of each daily flux and dividing their magnitudes by the respective total positive and negative daily fluxes. The sensible heat flux provides most or all of the daily energy on several days early and late in the melt season. After mid-May, there is little difference in the sensible heat flux (which generally acts as a small source of energy) as simulated by M1-M3. 
In contrast, Q\\({}_{\\rm M}\\) simulated by M4 is a significant energy sink (large and negative), both at the beginning and end of the melt season. Simulations of the latent heat flux with M1, M2 and M3 have very similar patterns of overall variability. All models yield negative fluxes of latent heat, but M4 frequently yields values 5-25 W m\\({}^{-2}\\) lower than the other models. The daily mean glacier heat flux, Q\\({}_{\\rm G}\\) (only computed in M1 and M2), is the most substantial energy sink (aside from melting, which is not shown in Fig. 7). On average, Q\\({}_{\\rm G}\\) is twice the magnitude of the other consistent energy sink, Q\\({}_{\\rm G}\\). With M1, the glacier heat flux is a significant source of energy on only nine days in the melt season. Early in the melt season, the snowpack is rapidly warmed due to percolation and refreezing of meltwater. This warming can result in a positive glacier heat flux when the surface cools at night, Figure 6: Hourly variables simulated for 5 days in August by M1, M2 and M3. (a) Ablation rates. Values are identical for M3 and M4. (b) Surface temperatures, \\(T_{\\rm s}\\). but the net glacier heat flux is predominantly negative when daily averages are computed. Other than M1, the models presented here do not consider heat exchange between the surface and subsurface, aside from the thin, 5 cm surface layer considered in M2. Similar discrepancies in \\(Q_{\\rm G}\\) with respect to M1 between M2 and M3 (Fig. 7) suggest the treatment of the glacier heat flux in M2 is inadequate for the near-surface temperature conditions at the study site. Although \\(Q_{\\rm G}\\) varies dimethyl throughout the melt season in M1, there is a shift from values near zero (during snowmelt) to negative values (during ice melt) at the simulated snow-to-ice surface transition (not shown). Two factors contribute to this shift. The most important is the effect of meltwater percolation and refreezing during snowmelt. Refreezing meltwater rapidly warms the upper snowpack, resulting in an isothermal near-surface temperature profile. In the absence of a near-surface temperature gradient, the conductive heat flux is zero. Once the surface is snow-free (i.e. after the transition date), percolation no longer occurs in the model, and without the energy released by refreezing meltwater, subsurface temperatures remain below zero when the surface is melting. This establishes a near-surface temperature gradient in which heat is conducted away from the surface, into the glacier, and \\(Q_{\\rm G}\\) is thus negative. The second factor contributing to the shift in \\(Q_{\\rm G}\\) from the period of snow- to ice melt is the conductivity itself. As defined in Equation (8), effective conductivity depends on density and is thus significantly greater in ice than in snow. For example, for typical snow and ice densities of 500 and \\(900\\,{\\rm kg}\\,{\\rm m}^{-3}\\), respectively, the conductivity of ice is more than four times that of snow. The increased conductivity facilitates more efficient energy exchange between the surface and subsurface layers. The combined results of changes in refreezing and conductivity described above are: relatively cold temperatures in the upper subsurface layers during ice melt (due to the lack of percolation and refreezing) and enhanced energy exchange between the cold subsurface layers and the melting surface (due to the increased conductivity). 
The major difference between the glacier heat flux in M1 versus M2 is that M2 does not include conduction in the subsurface, but simply calculates changes in the temperature of the 5 cm surface layer. The glacier heat flux, \(Q_{\rm G}\), is solved as the residual in the surface-energy-balance equation in M2 when the sum of the radiative and turbulent fluxes is negative. This effectively lumps the energy that is physically associated with temperature changes in both the surface and subsurface layers into a single surface layer. While M2 captures the diurnal reversal of \(Q_{\rm G}\) reasonably well (not shown), contributions of opposite sign tend to cancel one another over the course of a day in this model. This explains the significant discrepancy in daily values of \(Q_{\rm G}\) between M1 and M2 in Figure 7.

Figure 7: Daily mean energy fluxes: full components for M1 (a), as well as components expressed as differences from those calculated by M1 (b–d) (net longwave radiation, \(Q_{\rm LW}\), glacier heat flux, \(Q_{\rm G}\), sensible heat flux, \(Q_{\rm H}\), and latent heat flux, \(Q_{\rm L}\)). Values are stacked rather than superimposed (i.e. the total length of each bar is the sum of the components). Only values \(>\)5 W m\({}^{-2}\) are shown, as smaller values cannot be resolved in this figure. (a) M1. (b) M2 – M1. (c) M3 – M1. (d) M4 – M1.

### Total energy fluxes

The mean contributions of each energy flux to the surface energy balance are given in Table 3. Considering the full period, the sensible (\(Q_{\rm H}\)) and latent (\(Q_{\rm L}\)) heat fluxes are similar for M1-M3, with values of 6-7 and \(-3\,\text{W}\,\text{m}^{-2}\), respectively. The glacier heat flux, \(Q_{\rm G}\), simulated with M1 is \(-6\,\text{W}\,\text{m}^{-2}\) when averaged over the full period of record, indicating a net flux of heat from the surface to depth. During the period of melting, \(Q_{\rm G}\) averaged \(-30\,\text{W}\,\text{m}^{-2}\). Energy fluxes computed with M3 and M4 are identical during melting conditions, as the two models are identical when the surface energy balance is positive. The total ablation simulated by M3 and M4 (2.26 and 2.30 m w.e., respectively) differs only by the contribution of sublimation. The negative latent heat fluxes simulated with M4 result in the largest contribution of all the models to ablation by sublimation. The \(T_{\rm s}=0^{\circ}\)C assumption results in an assumed 611 Pa surface vapour pressure, which is typically larger than the vapour pressure in the atmosphere (see Fig. 2). This increases simulated sublimation (Equation (3)), especially when air temperatures are low (e.g. at night when the surface is not melting). However, even the relatively large 0.05 m w.e. of sublimation simulated with M4 is only a few per cent of the total ablation. Models that incorporate the 0\({}^{\circ}\)C assumption, such as M4, generally only compute ablation when the surface energy balance is positive (e.g. Arnold and others, 1996), and often ignore sublimation because of its generally small contribution to the total ablation (e.g. Willis and others, 2002). Values of \(L_{\rm out}\), \(Q_{\rm H}\) and \(Q_{\rm L}\) are similar between M2 and M3 for the full period and for the period of melting. The difference in total ablation predicted by these two models is therefore primarily a function of the modelled duration of melting, 40.3 and 50.9 days for M2 and M3, respectively. 
Values of \\(L_{\\text{out}}\\) and \\(Q_{\\text{M}}\\) are also similar between M1 and M2, but the subsurface heat flux, \\(Q_{\\text{C}}\\), comprises a significant heat sink for M1. In M2, \\(Q_{\\text{C}}\\) acts as a short-term (i.e. daily to a few days) energy storage term and thus has a value of zero when averaged over longer timescales. In contrast, the glacier heat flux simulated with M1 is an estimate of the true heat exchange between the glacier surface and subsurface. The result is a total ablation of 1.68 m w.e. simulated by M1 versus 2.10 m w.e. simulated by M2. While the duration of surface melting predicted by M1 is only 26.4 days, the duration of surface and subsurface melting combined is 60.8 days. ## 10 Summary and conclusions We quantify the differences between various treatments of the glacier surface temperature and subsurface heat flux in a point-scale energy-balance model, aiming to identify an optimal balance of model simplicity and accuracy. The most physically defensible model, M1, is a MSM that includes dry densification of snow, penetration of shortwave radiation and subsurface melting, percolation and refereeing of meltwater in the snowpack and generation of slush layers. This model simulates the measured ice-temperature profile well, and provides a reasonably robust simulation of total ablation with different initial temperature profiles. The subsurface heat flux, \\(Q_{\\text{C}}\\), plays a non-negligible role in reducing the ablation simulated by M1 compared with the other models. While the other models significantly overestimate the total ablation measured at the AWS site in 2008, the ablation modelled with M1 was within 6% of the measured value. Given the advantages of this model, an effort should be made to collect at least the snow temperature and density data (i.e. vertical profiles) required to initialize the model. Such data can be readily collected with minimal additional effort as part of most traditional mass-balance studies. Model M2 solves for the glacier heat flux, \\(Q_{\\text{C}}\\), as the residual in the energy-balance equation when the surface energy flux is negative, and computes the temperature of a 5 cm surface layer based on the value of the glacier heat flux. Because of the significance of \\(Q_{\\text{C}}\\) as a heat sink in this study, and the limitation of M2 furnishing only short-term heat storage, cumulative total melt was overestimated by 18% with this model. Surface temperatures simulated by M2 were systematically lower than those simulated by M1. M3 employs an iterative temperature scheme whereby the surface temperature is adjusted until the energy fluxes are balanced. This model produces highly variable and significantly lower surface temperatures than M2 or M1, and ignores any delay in the onset of melting after a period of sub-freezing temperatures. This results in a 27% overestimation of total ablation by the end of the melt season, primarily as a result of overestimating the duration of melt. M3 performs only marginally better than the simplest model, M4, which assumes a constant surface temperature of 0\\({}^{\\circ}\\)C. While M1 is the only model that arguably performs well in this study, M2 has an appeal in that it does not require any knowledge of the subsurface snow or ice properties. M2 produces results intermediate between the physically based subsurface model in M1 and the commonly made 0\\({}^{\\circ}\\)C assumption in M4. 
In particular, ablation and surface temperature are simulated with greater accuracy by applying M2 than by applying M3 or M4. For studies that require treatment of sub-freezing surface temperatures and the glacier heat flux, but that aim to avoid the complexity of a full subsurface model, a shallow multilayer hybrid model may be a productive compromise (e.g. MacDougall and Flowers, in press).

## Acknowledgements

We are grateful to the Natural Sciences and Engineering Research Council of Canada (NSERC), the Canada Foundation for Innovation (CFI), the Canada Research Chairs (CRC) Program, the Northern Scientific Training Program (NSTP) and Simon Fraser University for funding. Permission to conduct this research was granted by the Kluane First Nation, Parks Canada and the Yukon Territorial Government. Support from the Kluane Lake Research Station (KLRS) and Kluane National Park and Reserve is greatly appreciated. We are indebted to A. Williams, S. Williams, L. Goodwin (KLRS) and Trans North Helicopters for logistical support, and to A. MacDougall, L. Mingo, A. Jarosch, A. Rushmere, P. Belliveau, J. Logher, C. Schoof and F. Anslow for field assistance. A. MacDougall provided the processed USDG record. We thank F. Pellicciotti, D. van As and three anonymous reviewers for their contributions to improving various incarnations of the manuscript.

## References

* Andreas, E.L., R.E. Jordan and A.P. Makshtas. 2004. Simulations of snow, ice, and near-surface atmospheric processes on Ice Station Weddell. _J. Hydromet._, **5**(4), 611-624.
* Arendt, A.A., K.A. Echelmeyer, W.D. Harrison, C.S. Lingle and V.B. Valentine. 2002. Rapid wastage of Alaska glaciers and their contribution to rising sea level. _Science_, **297**(5580), 382-386.
* Arnold, N.S., I.C. Willis, M.J. Sharp, K.S. Richards and W.J. Lawson. 1996. A distributed surface energy-balance model for a small valley glacier. I. Development and testing for Haut Glacier d'Arolla, Valais, Switzerland. _J. Glaciol._, **42**(140), 77-89.
* Bartelt, P. and M. Lehning. 2002. A physical SNOWPACK model for the Swiss avalanche warning. Part I: numerical model. _Cold Reg. Sci. Technol._, **35**(3), 123-145.
* Beljaars, A. and A. Holtslag. 1991. Flux parameterization over land surfaces for atmospheric models. _J. Appl. Meteorol._, **30**(3), 327-341.
* Berthier, E., E. Schiefer, G.K.C. Clarke, B. Menounos and F. Remy. 2010. Contribution of Alaska glaciers to sea-level rise derived from satellite imagery. _Nature Geosci._, **3**(2), 92-95.
* Bohren, C.F. and B.R. Barkstrom. 1974. Theory of the optical properties of snow. _J. Geophys. Res._, **79**(30), 4527-4535.
* Bougamont, M., J.L. Bamber and W. Greuell. 2005. A surface mass balance model for the Greenland ice sheet. _J. Geophys. Res._, **110**(F4), F04018. (10.1029/2005JF000348.)
* Braithwaite, R.J., T. Konzelmann, C. Marty and O.B. Olesen. 1998. Reconnaissance study of glacier energy balance in North Greenland, 1993-94. _J. Glaciol._, **44**(147), 239-247.
* Braithwaite, R.J., Y. Zhang and S.C.B. Raper. 2002. Temperature sensitivity of the mass balance of mountain glaciers and ice caps as a climatological characteristic. _Z. Gletscherkd. Glazialgeol._, **38**(1), 35-61.
* _Hydrology in Mountainous Regions I: Hydrological Measurements; the Water Cycle_, 99-106.
* Braun, M. and R. Hock. 2004. Spatially distributed surface energy balance and ablation modelling on the ice cap of King George Island (Antarctica). _Global Planet. Change_, **42**(1-4), 45-58.
* Brock, B.W., I.C. Willis, M.J. Sharp and N.S. Arnold. 2000. Modelling seasonal and spatial variations in the surface energy balance of Haut Glacier d'Arolla, Switzerland. _Ann. Glaciol._, **31**, 53-62.
* Brun, E., E. Martin, V. Simon, C. Gendre and C. Coleou. 1989. An energy and mass model of snow cover suitable for operational avalanche forecasting. _J. Glaciol._, **35**(121), 333-342.
* Clarke, G.K.C. and G. Holdsworth. 2002. Glaciers of the St Elias Mountains. _In_ Williams, R.S., Jr and J.G. Ferrigno, eds. _Satellite image atlas of glaciers of the world_. Denver, CO, US Geological Survey, J301-J328. (USGS Professional Paper 1386-J.)
* Coleou, C. and B. Lesaffre. 1998. Irreducible water saturation in snow: experimental results in a cold laboratory. _Ann. Glaciol._, **26**, 64-68.
* Corripio, J. 2003. Modeling the energy balance of high altitude glacierised basins in the Central Andes. (PhD thesis, University of Edinburgh.)
* De Paoli, L. and G.E. Flowers. 2009. Dynamics of a small surge-type glacier investigated using one-dimensional geophysical inversion. _J. Glaciol._, **55**(194), 1101-1112.
* Dyer, A.J. 1974. A review of flux-profile relationships. _Bound.-Layer Meteorol._, **7**(3), 363-372.
* Escher-Vetter, H. 1985. Energy balance calculations for the ablation period 1982 at Vernagtferner, Oetztal Alps. _Ann. Glaciol._, **6**, 158-160.
* Greuell, J.W. and T. Konzelmann. 1994. Numerical modeling of the energy balance and the englacial temperature of the Greenland ice sheet: calculations for the ETH-Camp location (West Greenland, 1155 m a.s.l.). _Global Planet. Change_, **9**(1-2), 91-114.
* Greuell, W. and J. Oerlemans. 1986. Sensitivity studies with a mass balance model including temperature profile calculations inside the glacier. _Z. Gletscherkd. Glazialgeol._, **22**(10), 101-124.
* Herron, M.M. and C.C. Langway, Jr. 1980. Firn densification: an empirical model. _J. Glaciol._, **25**(93), 373-385.
* Hock, R. 2005. Glacier melt: a review on processes and their modelling. _Prog. Phys. Geogr._, **29**(3), 362-391.
* Hock, R. and B. Holmgren. 2005. A distributed surface energy-balance model for complex topography and its application to Storglaciaren, Sweden. _J. Glaciol._, **51**(172), 25-36.
* Hock, R. and C. Noetzli. 1997. Areal melt and discharge modelling of Storglaciaren, Sweden. _Ann. Glaciol._, **24**, 211-216.
* Huang, M. 1990. On the temperature distribution of glaciers in China. _J. Glaciol._, **36**(123), 210-216.
* Jordan, R. 1991. A one-dimensional temperature model for a snow cover: technical documentation for SNTHERM.89. _CRREL Spec. Rep._ 91-16.
* Kaser, G., J.G. Cogley, M.B. Dyurgerov, M.F. Meier and A. Ohmura. 2006. Mass balance of glaciers and ice caps: consensus estimates for 1961-2004. _Geophys. Res. Lett._, **33**(19), L19501. (10.1029/2006GL027511.)
* Kattelmann, R. and D. Yang. 1992. Factors delaying spring runoff in the upper Urumqi River basin, China. _Ann. Glaciol._, **16**, 225-230.
* Klok, E.J. and J. Oerlemans. 2002. Model study of the spatial distribution of the energy and mass balance of Morteratschgletscher, Switzerland. _J. Glaciol._, **48**(163), 505-518.
* Klok, E.J., M. Nolan and M.R. van den Broeke. 2005. Analysis of meteorological data and the surface energy balance of McCall Glacier, Alaska. _J. Glaciol._, **51**(174), 451-461.
* Konzelmann, T. and R.J. Braithwaite. 1995. 
Variations of ablation, albedo and energy balance at the margin of the Greenland ice sheet, Kronprins Christian Land, eastern north Greenland. _J. Glaciol._, **41**(137), 174-182.
* Lemke, P. and 10 others. 2007. Observations: changes in snow, ice and frozen ground. _In_ Solomon, S. and others, eds. _Climate change 2007: the physical science basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change._ Cambridge, etc., Cambridge University Press, 339-383.
* Li, J. and H.J. Zwally. 2004. Modeling the density variation in the shallow firn layer. _Ann. Glaciol._, **38**, 309-313.
* MacDougall, A.H. and G.E. Flowers. In press. Spatial and temporal transferability of a distributed energy-balance glacier melt model. _J. Climate._
* Moore, R.D. and M.N. Demuth. 2001. Mass balance and streamflow variability at Place Glacier, Canada in relation to recent climate fluctuations. _Hydrol. Process._, **15**(18), 3472-3486.
* Munro, D.S. 1989. Surface roughness and bulk heat transfer on a glacier: comparison with eddy correlation. _J. Glaciol._, **35**(121), 343-348.
* Munro, D.S. 1990. Comparison of melt energy computations and ablatometer measurements on melting ice and snow. _Arct. Alp. Res._, **22**(2), 153-162.
* Oerlemans, J. 2000. Analysis of a 3 year meteorological record from the ablation zone of Morteratschgletscher, Switzerland: energy and mass balance. _J. Glaciol._, **46**(155), 571-579.
* Paterson, W.S.B. 1994. _The physics of glaciers. Third edition._ Oxford, etc., Elsevier.
* Paulson, C.A. 1970. The mathematical representation of wind speed and temperature profiles in the unstable atmospheric surface layer. _J. Appl. Meteorol._, **9**(6), 857-861.
* Pellicciotti, F. and 7 others. 2008. A study of the energy balance and melt regime on Juncal Norte Glacier, semi-arid Andes of central Chile, using melt models of different complexity. _Hydrol. Process._, **22**(19), 3980-3997.
* Pellicciotti, F., M. Carenzo, J. Helbing, S. Rimkus and P. Burlando. 2009. On the role of the subsurface heat conduction in glacier energy-balance modelling. _Ann. Glaciol._, **50**(50), 16-24.
* Raper, S.C.B. and R.J. Braithwaite. 2006. Low sea level rise projections from mountain glaciers and icecaps under global warming. _Nature_, **439**(7074), 311-313.
* Reijmer, C.H. and R. Hock. 2008. Internal accumulation on Storglaciaren, Sweden, in a multi-layer snow model coupled to a distributed energy- and mass-balance model. _J. Glaciol._, **54**(184), 61-72.
* Schneider, T. and P. Jansson. 2004. Internal accumulation in firn and its significance in the mass balance of Storglaciaren, Sweden. _J. Geophys. Res._, **109**(F2), F02009. (10.1029/2003JF00110.)
* Seidel, K. and J. Martinec. 2004. _Remote sensing in snow hydrology: runoff modelling, effect of climate change_. Chichester, Springer-Praxis.
* Sturm, M. and C.S. Benson. 1997. Vapor transport, grain growth and depth-hoar development in the subarctic snow. _J. Glaciol._, **43**(143), 42-59.
* Sturm, M., J. Holmgren, M. Konig and K. Morris. 1997. The thermal conductivity of seasonal snow. _J. Glaciol._, **43**(143), 26-41.
* Van de Wal, R.S.W. and A.J. Russell. 1994. 
A comparison of energy balance calculations, measured ablation and meltwater runoff near Sondre Stromfjord, West Greenland, _Global Planet. Change_, **9**(1-2), 29-38. * [Wheeler2009] Wheeler, B.A. 2009. Glacier melt modelling in the Donjek Range, St Elias Mountains, Yukon Territory. (MSC thesis, University of Alberta.) * [Willis, Arnold, and Brock2002] Willis, I.C., N.S. Arnold and B.W. Brock. 2002. Effect of snowpack removal on energy balance, melt and runoff in a small supraglacial catchment. _Hydrol. Process._, **16**(14), 2721-2749. * [Zwally and Jl2002] Zwally, H.J. and J. Li. 2002. Seasonal and interannual variations of firm densification and ice-sheet surface elevation at Greenland summit. _J. Glaciol._, **48**(161), 199-207.
We apply a point-scale energy-balance model to a small polythermal glacier in the St Elias Mountains of Canada in order to investigate the applicability and limitations of different treatments of the glacier surface temperature and subsurface heat flux. These treatments range in complexity from a multilayer subsurface model that simulates snowpack evolution, to the assumption of a constant glacier surface temperature equal to 0\\({}^{\\circ}\\)C. The most sophisticated model includes dry densification of the snowpack, penetration of shortwave radiation into the subsurface, internal melting, refreezing of percolating meltwater and generation of slush layers. Measurements of subsurface temperature and surface lowering are used for model validation, and highlight the importance of including subsurface penetration of shortwave radiation in the model. Using an iterative scheme to solve for the subsurface heat flux as the residual of the energy-balance equation results in an overestimation of total ablation by 18%, while the multilayer subsurface model underestimates ablation by 6%. By comparison, the 0\\({}^{\\circ}\\)C surface assumption leads to an overestimation of ablation of 29% in this study where the mean annual air temperature is about -8\\({}^{\\circ}\\)C. + Footnote †: Present address: Wek-Vezhil Land and Water Board, 4910 50th Avenue, Yellowknife, Nunavt X1A 355, Canada.
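For readers who want to see the arithmetic behind the simplest treatment compared above, the following is a minimal sketch, not the study's model: a melting surface held at 0°C, with melt taken as the positive residual of the surface energy balance and converted to water equivalent. The function name, the constants and all flux values are illustrative assumptions on our part.

```python
import numpy as np

LATENT_HEAT_FUSION = 3.34e5  # J kg^-1
RHO_WATER = 1000.0           # kg m^-3

def melt_rate(net_sw, net_lw, q_sensible, q_latent, q_ground=0.0):
    """Melt energy as the residual of the surface energy balance (W m^-2),
    converted to water-equivalent ablation (m w.e. s^-1). Assumes a melting
    surface at 0 deg C, so any positive residual is spent on melt."""
    residual = net_sw + net_lw + q_sensible + q_latent + q_ground
    q_melt = np.maximum(residual, 0.0)   # no melt when the balance is negative
    return q_melt / (RHO_WATER * LATENT_HEAT_FUSION)

# one hour with made-up fluxes (W m^-2)
ablation = melt_rate(net_sw=350.0, net_lw=-60.0, q_sensible=40.0, q_latent=-10.0) * 3600
print(f"{ablation * 1000:.2f} mm w.e. per hour")   # about 3.4 mm w.e.
```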
# Edge-Preserving Denoising Based on Dynamic Programming on the Full Set of Adjacency Graphs Pham Cong Thang The University of Da Nang - University of Science and Technology, 54 Nguyen Luong Bang, Da Nang, Viet Nam [email protected] Andrei V. Kopylov Tula State University, 92 Lenin Ave., Tula, Russia Sergei D. Dvoenko Tula State University, 92 Lenin Ave., Tula, Russia [email protected], [email protected] ## 1 Introduction Medical images such as X-ray, computed tomography (CT), magnetic resonance imaging (MRI) or other tomographic modalities, like SPECT, PET, or ultrasound, is an essential source of non-invasive information both for diagnosis and treatment planning. However, the image acquisition and transmission process inevitably causes noise in the image due to Imaging plate nonuniformity, noise in the electronics chains, source power fluctuations, quantization noise in the analog-to-digital conversion process, and so on. But in the field of medical imaging, a precise representation of the fine local image structure is extremely important since the detection of small objects or a tissue type is often an objective. Thereby, the edge preserving properties of the denoising procedures become especially significant. Despite of many present powerful methods, described in the literature, like nonlinear total variation (Rudin et al., 1992; Wang Y., et al., 2011), fourth order PDEs (You Y., Kaveh M., 2000), and nonlinear anisotropic diffusion (Perona, P. and Malik, J., 1990; Gerig G., et al., 1992; Yuanquan Wang, et al., 2013), none of them can simultaneously achieves both sufficient accuracy to provide a highly reliable data, and computational speed to process super-resolution or dynamic images at practice-relevant time. In this paper, we proposed a parametric procedure for edge preserving image denoising using Bayesian framework as one of the most popular approaches to image processing. In this approach, the image analysis problem can be described as the problem of estimation of a hidden Markov component of a two-component random field, where the analyzed image plays the role of the observed component. An equivalent representation of Markov random fields in the form of Gibbs random fields, according to the Hammersley-Clifford theorem, can be used to define a priori probability properties of a hidden Markov field by means of so called Gibbs potentials for cliques. In the case of a so called singular loss function, the Bayesian estimation of the hidden component can be found as a maximum a posteriori probability (MAP) estimation, which leads us to the problem of minimization of the objective function, often called the Gibbs energy function (Besag J.E., 1974). A new non-convex type of pairwise Gibbs potentials was proposed in the papers (Pham C.T. and Kopylov A.V., 2015, 2016), with the ability to flexibly define a priori preferences, using separate penalties for various ranges of differences between values of an image adjacent elements. A special version of the parametric dynamic programming procedure was elaborated for optimization of the objective function, based on the tree-like approximation of the lattice neighborhood graph. Experiments show that the proposed procedure can effectively manage heterogeneities and discontinuities in the source data. The tree-like approximation method (Mottl V., et al., 1998) decomposes the original lattice-like adjacency graph into several tree-like ones. Each of these graphs covers all elements of the pixel grid. 
The final decision for each variable is based on the separate partial objective function with the tree-like adjacency of variables instead of the overall objective function. Thus, some relations between goal variables are eliminated and are not taken into account, which leads to decreasing in accuracy. In the paper (Dvoenko S.D., 2012) another way for the tree-like approximation of a lattice based on the full set of acyclic adjacency graphs was proposed. Let a hypothetical covering set of all spanning acyclic graphs (the full set) be given. For the finite set of image elements, the number of such graphs is also finite. Let usassume that all elements of the data array be roots for several unknown for us acyclic adjacency graphs from the full set. Expanding step-by-step vicinities of descendants for each element simultaneously, we can obtain its maximal vicinity including the element itself, and thus obtain the final decision for that element as a combination of decisions based on acyclic adjacency graphs from the full set. The paper describes the general probabilistic framework for dependent objects recognition. In this work, we propose a new edge-preserving procedure for medical images, that combines the flexibility in prior assumptions, and computational effectiveness of parametric dynamic programming shown in (Pham C.T. and Kopylov A.V., 2015, 2016) with the increased accuracy of the tree-like representation of a lattice on the basis of the full set of adjacency graphs, described in (Dvoenko S.D., 2012). It should be noticed, in this case the only forward move of the dynamic programming procedure in needed to find the final solution. According to it, we sufficiently simplify the recalculation of the intermediate Bellman functions. In experimental studies, we compare the performance of image denoising algorithms by using well-known criteria (Bovik A. C., Wang Z., 2006) like the Mean Structure SIMilarity Index (MSSIM) and the Peak to Signal Noise Ratio (PSNR) for Gaussian denoising. We provide experimental results in medical image denoising as well, as comparison with other related methods. ## 2 The Prior Model for Edge-Preserving Denoising Within the Bayesian framework to image processing, denoising problem can be formulated as the problem of estimation of a hidden Markov component \\(\\mathbf{X}=(x_{\\mathbf{t}},\\mathbf{t}\\in T)\\) on the basis of observation \\[\\mathbf{Y}=(y_{i},\\mathbf{t}\\in T)\\text{, where }T=\\{\\mathbf{t}=(t_{1},t_{2})\\} \\text{, }t_{1}=1 N_{1}\\text{, }t_{2}=1 N_{2}\\] is a discrete image lattice. Hidden component \\(\\mathbf{X}\\) represents the \"true\" image that we would like to recover, and \\(\\mathbf{Y}\\) is observed intensity function of a noisy image. An equivalent representation of Markov random fields in the form of Gibbs random fields, according to the Hammersley-Clifford theorem, can be used to define a priori probability properties of a hidden Markov field by means of so called Gibbs potentials for cliques of adjacency graph \\(G\\subset T\\times T\\) of image elements (Figure 3). The clique number of a lattice-like graph is equal to two. Therefore, the pairwise Gibbs potentials, or so-called edge functions are the primary means to specify the statistical relations between image elements and, in turn, to define an edge-preserving properties of estimation procedures. 
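To make this energy-based formulation concrete before the specific edge function is introduced below, the sketch evaluates a Gibbs energy of the kind described: quadratic unary potentials tying the hidden image to the observation, plus an edge-preserving pairwise potential summed over the 4-connected cliques of the lattice. The quadratic unary term, the min-of-quadratics stand-in for the edge function and all parameter values are our own illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gibbs_energy(x, y, pair_potential, beta=1.0):
    """J(X|Y) = sum_t psi_t(x_t|Y) + sum over neighbouring pairs gamma(x_t', x_t'')
    on a 4-connected lattice; the unary term psi_t is taken quadratic here."""
    unary = beta * np.sum((x - y) ** 2)
    pairwise = np.sum(pair_potential(x[:, 1:], x[:, :-1]))   # horizontal cliques
    pairwise += np.sum(pair_potential(x[1:, :], x[:-1, :]))  # vertical cliques
    return unary + pairwise

def min_of_quadratics(a, b, lambdas=(0.2, 0.0), offsets=(0.0, 0.125)):
    """Non-convex edge function: the minimum of several quadratic penalties,
    which caps the cost of large jumps and so preserves edges."""
    d = (a - b) ** 2
    return np.minimum.reduce([lam * d + off for lam, off in zip(lambdas, offsets)])

rng = np.random.default_rng(0)
y = np.clip(np.eye(8) * 0.8 + 0.05 * rng.standard_normal((8, 8)), 0, 1)  # noisy observation
print(gibbs_energy(y, y, min_of_quadratics))   # energy of simply taking X = Y
```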
Edge functions \\(\\gamma_{\\mathbf{t}^{\\prime},\\mathbf{t}^{\\prime\\prime}}(x_{\\mathbf{t}^{\\prime}},x_{\\mathbf{t}^{\\prime\\prime}})\\) are defined over \\(\\mathbf{X}\\times\\mathbf{X}\\) for each pair \\((\\mathbf{t}^{\\prime},\\mathbf{t}^{\\prime\\prime})\\in G\\) of neighboring pixels and take a greater value the greater the discrepancy between the respective hidden values. Different pairwise potentials, such as the Huber function (Stevenson R., Stevenson D.E., 1990), the semi-Huber function (Fleury G., De la Rosa J. I., 2004), the generalized Gaussian function (Bouman, C., Sauer K., 1993), the Besag function (Besag J., 1986), the Green function (Green P. J., 1990) and others, can be found in the literature (Nikolova M., et al., 2010). Non-convex functions offer the best possible quality of image reconstruction with neat and exact edges. One of the main problems of these approaches is the high computational complexity of the corresponding estimation procedures, which can hardly be applied to high-resolution images. A new non-convex type of pairwise Gibbs potentials (Figure 1) was proposed in the papers (Pham C.T. and Kopylov A.V., 2015, 2016):
\\[\\gamma_{\\mathbf{t}}(x_{\\mathbf{t}},x_{\\mathbf{t}-1})=\\min\\bigl[\\gamma_{\\mathbf{t}}^{(1)}(x_{\\mathbf{t}},x_{\\mathbf{t}-1}),\\ldots,\\gamma_{\\mathbf{t}}^{(L)}(x_{\\mathbf{t}},x_{\\mathbf{t}-1})\\bigr], \\tag{1}\\]
where \\(\\gamma_{\\mathbf{t}}^{(l)}(x_{\\mathbf{t}},x_{\\mathbf{t}-1})=\\lambda^{(l)}(x_{\\mathbf{t}}-x_{\\mathbf{t}-1})^{2}+d^{(l)}\\) are quadratic functions with parameters \\(\\lambda^{(l)}\\) and \\(d^{(l)}\\), \\(l=1,\\ldots,L\\). This type of edge function makes it possible to flexibly define a priori preferences, using separate penalties for various ranges of differences between the values of adjacent image elements, and leads to a computationally effective procedure of maximum a posteriori probability (MAP) estimation of the hidden component of the two-component random field. MAP estimation can be formulated as the minimization of an objective function of a special kind, called the Gibbs energy function (Besag J.E., 1974):
\\[J(\\mathbf{X}\\mid\\mathbf{Y})=\\sum_{\\mathbf{t}\\in T}\\psi_{\\mathbf{t}}(x_{\\mathbf{t}}\\mid\\mathbf{Y})+\\sum_{(\\mathbf{t}^{\\prime},\\mathbf{t}^{\\prime\\prime})\\in G}\\gamma_{\\mathbf{t}^{\\prime},\\mathbf{t}^{\\prime\\prime}}(x_{\\mathbf{t}^{\\prime}},x_{\\mathbf{t}^{\\prime\\prime}}), \\tag{2}\\]
where the node function \\(\\psi_{\\mathbf{t}}(x_{\\mathbf{t}}\\mid\\mathbf{Y})\\), \\(x_{\\mathbf{t}}\\in\\mathbf{X}\\), represents the unary Gibbs potentials and takes a greater value the more evident the contradiction between the hypothesis that \\(x_{\\mathbf{t}}\\) is the correct local value we are seeking and the current observed value \\(y_{\\mathbf{t}}\\in\\mathbf{Y}\\).
## 3 The Basic Optimization Procedure for a Separable Function Supported by a Tree
If \\(G\\) is a tree on the set of nodes \\(T\\) (Figure 2), there exists a highly effective global optimization procedure based on a recurrent decomposition of the initial problem of minimizing a multivariate function into a succession of partial problems, each of which consists in minimizing a function of only one variable. Such a procedure is a version of the well-known dynamic programming procedure. Let the tree \\(G_{\\mathbf{t}}\\in G\\) formed by a node \\(\\mathbf{t}\\) and its descendants be called the descendant tree of this node. The set of all nodes in the descendant tree of a node \\(\\mathbf{t}\\) will be denoted by \\(T_{\\mathbf{t}}\\); the symbol \\(T_{(\\mathbf{t})}\\) will mean the same set without the node itself.
Analogously, symbols \\(\\mathbf{X}_{\\mathbf{t}}=(x_{\\mathbf{s}},\\mathbf{s}\\in T_{\\mathbf{t}})\\) and \\(\\mathbf{X}_{(\\mathbf{t})}=(x_{\\mathbf{s}},\\mathbf{s}\\in T_{(\\mathbf{t})})\\) will mean partial vectors of variables at the respective sets of nodes. The principal idea of the optimization procedure for separable goal functions supported by trees is based, like the classical dynamic programming procedure for chains, on the notion of Bellman function. The fundamental property of the Bellman function (Mottl V., et al., 1998):\\[\\tilde{J}_{i}(x_{i})=\\psi_{\\mathbf{x}}(x_{\\mathbf{t}})+\\sum_{\\mathbf{s}\\in T_{(i)} ^{0}}\\min_{x_{\\mathbf{s}}\\in\\mathbf{X}}\\left\\{\\gamma_{\\mathbf{t},\\mathbf{s}}(x_{ \\mathbf{t}},x_{\\mathbf{s}})+\\tilde{J}_{\\mathbf{s}}(x_{\\mathbf{s}})\\right\\} \\tag{3}\\] is called the upward recurrent relation. The inverted form of this relation \\[\\tilde{x}_{\\mathbf{s}}(x_{\\mathbf{t}})=\\operatorname*{arg\\,min}_{x_{\\mathbf{s }}\\in\\mathbf{X}}\\left\\{\\gamma_{\\mathbf{t},\\mathbf{s}}(x_{\\mathbf{t}},x_{ \\mathbf{s}})+\\tilde{J}_{\\mathbf{s}}(x_{\\mathbf{s}})\\right\\},\\ \\ \\ \\mathbf{s}\\in T_{(\\mathbf{t})}^{0}, \\tag{4}\\] is referred to the backward recurrent relation. Let us call \\[\\tilde{F}_{i}(x_{\\mathbf{t}})=\\sum_{\\mathbf{s}\\in T_{(i)}^{0}}\\min_{x_{\\mathbf{ s}}\\in\\mathbf{X}}\\left\\{\\gamma_{\\mathbf{t},\\mathbf{s}}(x_{\\mathbf{t}},x_{ \\mathbf{s}})+\\tilde{J}_{\\mathbf{s}}(x_{\\mathbf{s}})\\right\\}, \\tag{5}\\] a partial Bellman function, then \\(\\tilde{J}_{i}(x_{\\mathbf{t}})=\\psi_{i}(x_{\\mathbf{t}})+\\tilde{F}_{i}(x_{ \\mathbf{t}})\\). The procedure of dynamic programming searches for the minimum of the objective function (2) in two passes according to forward (3) and backward (4) recurrent relations. Nevertheless, this procedure cannot be applied immediately to the image reconstruction tasks since discrete image lattice is not a tree. The tree-like approximation method (Mottl V., et al., 1998) decomposes the original lattice-like adjacency graph into several tree-like ones. Each of these graphs covers all elements of the pixel grid. The final decision for each variable is based on the separate partial objective function with the tree-like adjacency of variables instead of the overall objective function. Thus, some relations between goal variables are eliminated and are not taken into account, which leads to decreasing in accuracy. As it was shown in (Pham C.T. and Kopylov A.V., 2015, 2016), in the case of a minimum of a finite set of quadratic functions of pairwise Gibbs potentials (1), and node functions are in quadratic form, the Bellman functions at each step of the dynamic programming are a minimum of a finite set of quadratic functions. The procedure breaks down at each step into several parallel procedures, according to the number of quadratic functions forming the intermediate optimization problems of one variable. The corresponding procedure is called a multi quadratic dynamic programming procedure (MQDP). A special version of the multi quadratic dynamic programming procedure was elaborated for optimization of the objective function, based on the tree-like approximation of the lattice neighborhood graph. Experiments show that the proposed procedure can effectively manage heterogeneities and discontinuities in the source data. The number of quadratic functions in a representation of Bellman functions grows during the forward move and a special technique on the basis of k-means clustering (Dvoenko S. 
D., 2009) was proposed to reduce their number. We proposed here a new procedure on the basis of another type of tree-like representation of image lattice, which does not eliminate any pairwise connections between image elements and can sufficiently simplify the recalculation of the intermediate Bellman functions. ## 4 The full set of acyclic adjacency graphs In the paper (Dvoenko S.D., 2012) another way for the tree-like approximation of a lattice based on the full set of acyclic adjacency graphs was proposed. For an element \\(\\in\\mathbb{T}\\), we have its vicinity relative to a certain acyclic adjacency graph \\(G\\), which can be divided arbitrarily into two parts \\(T_{(\\mathbf{t})}^{0}=T_{(\\mathbf{t})}^{-0}\\cup T_{(\\mathbf{t})}^{+0}\\) (Fig. 2.b). Let us expand vicinity \\(T_{(\\mathbf{t})}^{1}=T_{(\\mathbf{t})}^{+0}\\) of descendants \\[T_{(\\mathbf{t})}^{0\\beta}=(T_{(\\mathbf{t})}^{+0})^{+0}=T_{( \\mathbf{t})}^{+0}\\cup(\\cup T_{(\\mathbf{t})}^{ Note that all the PSNR results (in dB) and MSSIM reported in Fig. 5 have been averaged over 10 noise realizations. The figure 4 shows PSNR Variations on Loops at level of additive white Gaussian noise with a standard deviation \\(\\sigma=15\\). Figure 5 shows result images of proposed and compared methods: Multi quadratic dynamic programming (MQDP), nonlinear anisotropic diffusion (AD P-M), modified Perona-Malik (MP-M), fourth order PDEs, nonlinear total variation (TV). For MQDP we use edge functions (5) with smoothing parameters of edge function with fixed values \\(\\lambda=0.2\\), \\(d=0.5\\Delta^{2}\\). The experimental results show that our image denoising method allows image denoising as well as other related method. ## 6 Conclusion Edge preserving image denoising has become an urgent step in imaging to remove noise and to preserve local image features for improving the quality of further analysis. In this paper, we proposed approaches to achieve these aims. Proposed edge-preserving procedure for medical images allows, sufficiently simplify the recalculation of the intermediate Bellman functions to find the final solution. The experimental results show that proposed algorithm allows get high results of dynamic programming for image processing. Numerical results show that our proposed algorithms are efficient and allows get result as well as existed method. ## Acknowledgements This work was supported by Russian Foundation for Basic Research (RFBR), projects No. 16-07-01039 and 16-57-52042. ## References * Besag (1974) Besag J., 1974. Spatial interaction and the statistical analysis of lattice systems. _Journal of the Royal Statistical Society, Series B_, Vol. 36, pp. 192-236. * Besag (1986) Besag J., 1986. On the Statistical Analysis of Dirty Pictures. _Journal of the Royal Statistical Society (Series B)_, Vol. 48, pp. 259-302. * Bovik and Wang (2006) Bovik A. C., Wang Z. (2006). Modern Image Quality Assessment, Synthesis Lectures on Image, Video, and Multimedia Processing. _Morgan & Claypool Publishers_, 156 pages. * Dvoenko (2012) Dvoenko S.D., 2012. Recognition of dependent objects based on acyclic Markov models. _Journal of Pattern Recognition and Image Analysis_, Vol 22 (1), pp. 28-38. * Fleury and De la Rosa (2004) Fleury G., De la Rosa J. I., 2004. Bootstrap Methods for a Measurement Estimation Problem. _IEEE Transactions on Instrumentation and Measurement_, Vol. 55(3). P. 820-827. * Gerig et al. (1992) Gerig G., et al., 1992. Nonlinear anisotropic filtering of MRI data. _IEEE Transaction on Medical Imaging_, Vol. 
11 (2), pp. 221-232. * Green (1990) Green P. J., 1990. Bayesian reconstruction from emission tomography data using a modified EM algorithm. _IEEE Trans. on Medical Imaging_, Vol. 9(1), pp. 84-93. * Mottl et al. (1998) Mottl V.V., et al., 1998. Optimization Techniques on Pixel Neighborhood Graphs for Image Processing. _In: Jolion J.-M., Kropatsch W.G. (eds) Graph Based Representations in Pattern Recognition. Computing Supplement 12_. Springer-Verlag/Wien, pp. 135-145. * Nikolova et al. (2010) Nikolova M., Ng M.K. and Tam C.P., 2010. Fast Nonconvex Nonsmooth Minimization Methods for Image Restoration and Reconstruction. _IEEE Transactions on Image Processing_, Vol. 19(12), pp. 3073-3088. * Perona and Malik (1990) Perona, P. and Malik, J., 1990. Scale space and edge detection using anisotropic diffusion. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, Vol. 12, pp. 629-639. * Pham Cong Thang and Kopylov (2015) Pham Cong Thang and Kopylov A.V., 2015. Multi-quadratic dynamic programming procedure of edge-preserving denoising for medical images. _Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci._, XL-5/W6, pp. 101-106. * Pham Cong Thang and Kopylov (2016) Pham Cong Thang and Kopylov A.V., 2016. Parametric Procedures For Image Denoising With Flexible Prior Model. _The Seventh International Symposium on Information and Communication Technology (SoICT 2016)_, pp. 294-301. * Rudin et al. (1992) Rudin, L.I., Osher, S., Fatemi, E., 1992. Nonlinear total variation based noise removal algorithms. _Physica D_, Vol. 60, pp. 259-268. * Stevenson and Stevenson (1990) Stevenson R., Stevenson D.E., 1990. Fitting curves with discontinuities. _Proc. of the first international workshop on robust computer vision_, pp. 127-136. * Wang et al. (2011) Wang Y., et al., 2011. MTV: modified total variation model for image noise removal. _IEEE Electronics Letters_, Vol. 47(10), pp. 592-594. * You and Kaveh (2000) You Y., Kaveh M., 2000. Fourth Order Partial Differential Equations for Noise Removal. _IEEE Trans. Image Processing_, Vol. 9(10), pp. 1723-1730. * Wang et al. (2013) Yuanquan Wang, J.C. Guo, W.F. Chen and Wenxue Zhang, 2013. Image denoising using modified Perona-Malik model based on directional Laplacian. _Signal Processing_, Vol. 93(9), pp. 2548-2558.
Figure 4: PSNR variations during the vicinity expansion steps
Figure 5: Results of processing for proposed and compared methods
The ability of a denoising procedure to preserve fine image structures while suppressing unwanted noise is of crucial importance for accurate and effective medical diagnosis. We introduce here a new edge-preserving denoising procedure for medical images that combines the flexibility in prior assumptions and the computational effectiveness of parametric multi-quadratic dynamic programming with the increased accuracy of a tree-like representation of a discrete lattice based on the full set of possible adjacency graphs of image elements. The proposed procedure can effectively remove additive white Gaussian noise with high quality. We provide experimental results in image denoising as well as a comparison with related methods.
Keywords: Full set of adjacency graphs, Dynamic programming, Bayesian framework, Edge-preserving smoothing
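The experimental comparison reported above is scored with PSNR and MSSIM; as a reference point, the following is a minimal sketch of the PSNR computation in the additive-white-Gaussian-noise setting, assuming intensities on a [0, 255] scale (the synthetic image and noise level are our own illustration).

```python
import numpy as np

def psnr(reference, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB between a clean reference image and a
    denoised result, both given as float arrays on the same intensity scale."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(restored, float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
clean = np.full((64, 64), 128.0)
noisy = clean + rng.normal(0.0, 15.0, clean.shape)       # sigma = 15, as in the experiments
print(f"noisy input PSNR: {psnr(clean, noisy):.1f} dB")  # roughly 24.6 dB before denoising
```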
Confronting new NICER mass-radius measurements with phase transition in dense matter and compact twin stars Jia Jie Li\\({}^{a}\\) Armen Sedrakian\\({}^{b,c}\\) Mark Alford\\({}^{d}\\) [email protected]; [email protected]; [email protected] ## 1 Introduction The NICER observations of nearby neutron stars allowed for accurate (up to 10%) inferences of neutron star radii in conjunction with the masses of nearby X-ray-emitting neutron stars. Recent (re)analysis of the data of three millisecond pulsars - the two-solar-mass pulsar PSR J0740+6620 (hereafter J0740) and two canonical mass 1.4 \\(M_{\\odot}\\) stars PSR J0437-4715 (hereafter J0437) and J0030+0451 (J0030) pose a challenge to the modern theories of dense matter to account for the features observed on the mass-radius (\\(M\\)-\\(R\\)) diagram of neutron stars. A sharp first-order transition between hadronic and quark matter can produce a disconnected branch of hybrid stars, opening up the possibility of _twin_ stars, where there are two different stable configurations, with different radii, but having the same mass. The larger star will be composed entirely of hadronic matter, while the more compact star will be a hybrid star with a quark core in the central region [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17], for recent reviews see Refs. [18, 19]. Between the hadronic branch and the hybrid branch there is a range of radii for which there are no stable configurations. This is true if the transition from hadronic to quark phase is rapid compared to other time scales in the problem, for example, the period of fundamental modes by which these stars become unstable. In the case of slow conversion, the stability is recovered [20, 21, 22, 23, 24]. Recently, new and updated NICER astrophysical constraints have been published for the three pulsars mentioned [25, 26, 27]. Notably, the analysis of PSR J0030 resulted in three different ellipses in the \\(M\\)-\\(R\\) plane, each corresponding to a different analysis method. The purpose of this paper is to assess the compatibility of these new analyses with the hybrid star models we recently developed in a series of papers [10, 11, 28]. Specifically, we will examine three scenarios, labeled A, B, and C, which share the same data for PSR J0437 and PSR J0740 but incorporate different analyses for PSR J0030 using different models of the surface temperature patterns. The scenarios are defined in Table 1. In model ST-U each of the two hot spots is described by a single spherical cap; in CST+PDT there is a single temperature spherical spot with two components, one emitting and one masking; in ST+PST the primary (ST) is described by a single spherical cap and the secondary (PST) by two components, one emitting and one masking; in ST+PDT the primary (ST) is described by a single spherical cap and the secondary (PDT) by two components, both emitting; in PDT-U each of the two hot spots is described by two emitting spherical caps. For details see Refs. [25, 27]. In confronting the models of hybrid stars we will pay special attention to the possibility of twin stars in Scenario C, as in this case the data for canonical mass pulsars J0439 and J0030 does not overlap at 2\\(\\sigma\\) (95% confidence) level, hinting towards the existence of twin configurations. For thisto occur, a strong first-order phase transition is needed, which will be parametrized in terms of the fractional energy density jump \\(\\Delta\\epsilon/\\epsilon_{\\rm tr}\\). The remainder of this work is organized as follows. 
We briefly describe the physical foundations of the equation of state (EoS) models used in this study in section 2. The theoretical stellar models are compared with astrophysical observations in section 3. Finally, our conclusions are presented in section 4. ## 2 Models of hybrid stars To ensure this presentation is self-contained, we briefly review the setup from Li et al. [28]. We use four representative nucleonic EoS models based on covariant density functional (CDF) theory, as discussed in Oertel et al. [29] and Sedrakian et al. [30]. Our models are part of the DDME2 family introduced by Lalazissis et al. [31], but feature varying slope \\(L_{\\rm sym}\\) of symmetry energy. Our models are labeled as DDLS-\\(L_{\\rm sym}\\), see Ref. [32]. We keep the skewness constant at the value implied by the parameterization of Ref. [31], \\(Q_{\\rm sat}=479.22\\,{\\rm MeV}\\). Our models share the following nuclear matter parameters: Saturation density \\(\\rho_{\\rm sat}=0.152\\,{\\rm fm}^{-3}\\); Binding energy per particle \\(E_{\\rm sat}=-16.14\\,{\\rm MeV}\\) at saturation density; Incompressibility \\(K_{\\rm sat}=251.15\\,{\\rm MeV}\\); Symmetry energy \\(E_{\\rm sym}=27.09\\,{\\rm MeV}\\) at the crossing density \\(\\rho_{\\rm c}=0.11\\,{\\rm fm}^{-3}\\). In our analysis below we select from the family of DDLS-\\(L_{\\rm sym}\\) two stiff EoS with values of \\(L_{\\rm sym}=80\\), \\(100\\,{\\rm MeV}\\) and two soft EoS with \\(L_{\\rm sym}=40\\), \\(60\\,{\\rm MeV}\\) as representative EoS on the basis of which we built our hybrid star models. Note that the EoS of neutron-rich matter below and around \\(2\\,\\rho_{\\rm sat}\\) is dominated by the isovector parameters of a CDF, see for instance Ref. [33], which in the present setup is encoded in \\(L_{\\rm sym}\\). The variations of isoscalar parameters, such as \\(K_{\\rm sat}\\) or \\(Q_{\\rm sat}\\), affect the high-density part of the EoS and are unimportant because in our models the hadron-quark phase transition takes place at lower densities, in the range of \\(2\\leqslant\\rho/\\rho_{\\rm sat}\\leqslant 2.5\\). The first-order transition to quark matter is parameterized by the baryon density and energy density (\\(\\rho_{\\rm tr}\\) and \\(\\epsilon_{\\rm tr}\\)) at which the transition occurs, the energy density jump \\(\\Delta\\epsilon\\), and the speed of sound \\(c_{\\rm s}\\) in the quark matter phase. Following Ref. [28], we first fix \\(\\rho_{\\rm tr}\\) and \\(\\epsilon_{\\rm tr}\\), and then vary \\(\\Delta\\epsilon\\) and \\(c_{\\rm s}\\). As described in Ref. [34], we can identify specific values of these parameters that yield a \\(M\\)-\\(R\\) relation with two disconnected branches. The stars with low central density are purely hadronic, while those with high central density contain a quark core (i.e., hybrid stars) and are separated by an unstable region. This topology could lead to twin configurations, where stars with identical masses differ in their geometric properties such as radii, moments of inertia, and tidal deformabilities. For convenience, we use astrophysical parameters instead of microscopic ones like \\(\\rho_{\\rm tr}\\) and \\(\\Delta\\epsilon\\) to characterize a hybrid EoS model, see for illustration panel (a) of Figure 1. These macroscopic parameters are: (a) \\(M_{\\rm max}^{\\rm H}\\): the maximum mass of the hadronic branch; (b) \\(M_{\\rm max}^{\\rm Q}:\\) the maximum mass of the hybrid branch; (c) \\(M_{\\rm min}^{\\rm Q}\\): the minimum mass of the hybrid branch. 
The last quantity allows us to determine the range of masses where twin configurations exist. \\begin{table} \\begin{tabular}{c c c c c c} \\hline \\hline \\multirow{2}{*}{Scenario} & J0740 & J0437 & \\multicolumn{3}{c}{J0030} \\\\ \\cline{2-5} & ST-U & CST+PDT & ST+PST & ST+PDT & PDT-U \\\\ \\hline A & \\(\\times\\) & \\(\\times\\) & \\(\\times\\) & & \\\\ B & \\(\\times\\) & \\(\\times\\) & & \\(\\times\\) & \\\\ C & \\(\\times\\) & \\(\\times\\) & & & \\(\\times\\) \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 1: New NICER astrophysical constraints [25, 26, 27] used for the three scenarios A, B, C in the present work. As in Ref. [28] we use the constant speed of sound (CSS) parameterization for the quark equation of state, which assumes that the speed of sound in quark matter remains constant over the relevant density range. In the extreme high-density limit we expect \\(c_{\\rm s}^{2}\\to 1/3\\) which corresponds to the conformal limit describing weakly interacting massless quarks. The maximum possible speed of sound is the causal limit \\(c_{\\rm s}^{2}=1\\). Intermediate values represent varying degrees of stiffness in the quark matter EoS. Below we will explore two possibilities, \\(c_{\\rm s}^{2}=1\\) and 2/3. The maximally stiff EoS yields the largest maximum masses for hybrid stars and the maximal difference in the radii of twin stars. The intermediate value is more realistic and mimics what one might expect for non-perturbatively interacting quark matter. Although we will restrict our analysis to the basic CSS model with a single speed of sound over the relevant density range, one could perform a more general analysis using different speeds of sound in different density segments; it is known that this can produce triplets of stars, i.e., three stars with equal masses but different radii [4, 8]. ## 3 Comparison with astrophysical observations We examine hybrid equation of state (EoS) models with transition masses (\\(M_{\\rm max}^{\\rm H}\\)) ranging from 1.2 to 1.5 solar masses. This range encompasses the mass estimation of PSR J0437 reported by Choudhury et al. [27]. Our analysis aims to evaluate the hypothesis that PSR J0437 and J0030 could be twin stars, the first being hadronic and the second hybrid. We use the following mass and radius measurements: * PSR J0437-4715: We use the first mass and radius estimates for this brightest pulsar by using the 2017-2021 NICER X-ray spectral-timing data from Choudhury et al. [27]. The preferred CST+PDT model used informative priors on mass, distance and inclination from PPTA radio pulsar timing data and took into account constraints on the non-source background and validated against XMM-Newton data [27]. * PSR J0030+0451: We use three alternative mass and radius estimates from the reanalysis of 2017-2018 data, as reported by Vinciguerra et al. [25]. One of the estimates is based on Figure 1: \\(M\\)-\\(R\\) relations for hybrid EoS models with transition mass \\(M_{\\rm max}^{\\rm H}\\) in the range 1.2-1.5 \\(M_{\\odot}\\) and maximum mass \\(M_{\\rm max}^{Q}\\) in the range 2.0-2.3 \\(M_{\\odot}\\) that were constructed from four representative nucleonic EoS models. Models with stiff nucleonic EoS (DDLS-80 and DDLS-100) are shown in panel (a), those with soft nucleonic EoS (DDLS-40 and DDLS-60) in panel (b). Ellipses show observation constraints at 68% and 95% credible levels from analysis of NICER observations according to Refs. [25, 26, 27]. The light blue ellipse corresponds to ST+PST analysis for PSR J0030 [25]. 
the ST+PST NICER-only analysis of the data reported in [35], but uses an improved analysis pipeline and settings. The two other estimates are based on the joint analysis of NICER and XMM-Newton data which are labelled as ST+PDT and PDT-U. The ST+PDT results are more consistent with the magnetic field geometry inferred for the gamma-ray emission for this source [25; 36]. The PDT-U is the most complex model tested in Ref. [25] and is preferred by the Bayesian analysis. * PSR J0740+6620: We incorporate estimates from Salmi et al. [26], Miller et al. [37], and Riley et al. [38]. The representative estimates used in this study were obtained from a joint NICER and XMM-Newton analysis of the 2018-2022 dataset, based on the preferred ST-U model, which provides a more comprehensive treatment of the background [26]. By combining these observations, we establish three astrophysical scenarios, as detailed in Table 1. This approach enables us to examine different possibilities within the framework of our hybrid EoS models and the mass-twin hypothesis. Notably, the ellipses derived for PSR J0437 significantly overlap with the inferences from GW170817 [39], further reinforcing the selection of EoS based solely on gravitational wave data. ### Scenario A Figure 1 shows the \\(M\\)-\\(R\\) diagrams for hybrid EoS models with four representative nucleonic EoS which are combined with the quark matter EOS specified by two values of the speed of sound \\(c_{\\rm s}^{2}=1\\) and \\(2/3\\). Panel (a) is for stiff nucleonic EoS which demands a strong first-order phase transition with \\(M_{\\rm max}^{\\rm H}\\lesssim 1.5\\,M_{\\odot}\\) in order to be consistent with the NICER inference for PSR J0437. Panel (b) uses soft nucleonic EoS which could match PSR J0437's inference without a phase transition to quark matter, but still, as an alternative, may allow for phase transitions featuring twin configurations. \\begin{table} \\begin{tabular}{c c c c c c c} \\hline \\hline EoS & \\(\\epsilon_{\\rm tr}\\) & \\(\\Delta\\epsilon/\\epsilon_{\\rm tr}\\) & \\(M_{\\rm max}^{\\rm H}\\) & \\(M_{\\rm max}^{\\rm Q}\\) & \\(\\Delta M_{\\rm twin}\\) & \\(\\Delta R_{\\rm twin}\\) \\\\ & [MeV/fm\\({}^{3}\\)] & & \\([M_{\\odot}]\\) & \\([M_{\\odot}]\\) & \\([M_{\\odot}]\\) & [km] \\\\ \\hline DDLS100 & 300.570 & 1.0615 & 1.30 & 2.10 & 0.0748 & 1.84 \\\\ & & 0.8881 & 1.30 & 2.20 & 0.0397 & 1.22 \\\\ & 334.452 & 1.0055 & 1.50 & 2.05 & 0.0796 & 1.94 \\\\ & & 0.8349 & 1.50 & 2.15 & 0.0389 & 1.23 \\\\ DDLS80 & 300.180 & 0.8764 & 1.20 & 2.20 & 0.0307 & 1.54 \\\\ & & 0.7249 & 1.20 & 2.30 & 0.0088 & 0.44 \\\\ & 332.241 & 0.9100 & 1.40 & 2.10 & 0.0467 & 1.31 \\\\ & & 0.7506 & 1.40 & 2.20 & 0.0162 & 0.67 \\\\ DDLS60 & 323.965 & 0.9362 & 1.30 & 2.10 & 0.0428 & 1.14 \\\\ & & 0.7739 & 1.30 & 2.20 & 0.0149 & 0.59 \\\\ & & 356.305 & 0.8252 & 1.50 & 2.10 & 0.0323 & 0.99 \\\\ & & 0.6735 & 1.50 & 2.20 & 0.0049 & 0.34 \\\\ DDLS40 & 310.074 & 0.8291 & 1.20 & 2.20 & 0.0187 & 0.61 \\\\ & & 0.6817 & 1.20 & 2.30 & 0.0023 & 0.15 \\\\ & & 0.8856 & 1.40 & 2.10 & 0.0358 & 0.98 \\\\ & & 0.7283 & 1.40 & 2.20 & 0.0089 & 0.41 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 2: Parameters of the hybrid EoS models used in this work, and the characteristics of their mass-radius curves. All have the maximum sound speed \\(c_{\\rm s}^{2}=1\\) in the quark phase. The last two columns show the ranges of mass and radius within which twin configurations exist. 
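The entries of Table 2 map directly onto the CSS construction of section 2. As a concrete illustration, the sketch below evaluates a hybrid pressure–energy-density relation of that type: a hadronic branch up to the transition energy density, a first-order jump at constant pressure, and a linear quark branch whose slope is the squared sound speed. The simple polytrope standing in for the nucleonic DDLS functional is an assumption on our part; the transition parameters loosely echo the first DDLS80 row of Table 2.

```python
import numpy as np

def hybrid_pressure(eps, eps_tr, delta_eps, cs2, hadronic_p):
    """CSS construction: p(eps) follows the hadronic EoS up to eps_tr, stays
    constant across the jump [eps_tr, eps_tr + delta_eps], then rises linearly
    with slope cs2 (the squared sound speed of the quark phase)."""
    eps = np.asarray(eps, dtype=float)
    p_tr = hadronic_p(eps_tr)
    p = np.where(eps <= eps_tr, hadronic_p(eps), p_tr)
    quark = eps > eps_tr + delta_eps
    return np.where(quark, p_tr + cs2 * (eps - eps_tr - delta_eps), p)

# toy hadronic branch (NOT the DDLS functional): p ~ K * eps^gamma, in MeV fm^-3
toy_hadronic = lambda e: 2.5e-4 * np.asarray(e, float) ** 2.2

eps_grid = np.linspace(100.0, 1200.0, 5)
p_grid = hybrid_pressure(eps_grid, eps_tr=300.18, delta_eps=0.8764 * 300.18,
                         cs2=1.0, hadronic_p=toy_hadronic)
print(np.round(p_grid, 1))
```

A table of (eps, p) pairs produced this way is what one would feed to a Tolman–Oppenheimer–Volkoff integrator to obtain the mass–radius curves shown in Figure 1.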
Figure 1 demonstrates that Scenario A contains models that are consistent with all three of the \\(M\\)-\\(R\\) measurements. Each of the curves has a hadronic branch that reaches within the 95% contours for PSR J0030, and a hybrid branch that passes through the 95% contours for J0437 and J0740. In several cases the \\(M\\)-\\(R\\) curves are even consistent with the 68% contours. For the models with higher transition density, corresponding to \\(M_{\\rm max}^{\\rm H}=1.5\\,M_{\\odot}\\) or \\(1.4\\,M_{\\odot}\\) it is possible that J0030 and J0437 could be hadronic-hybrid twins. For models with a lower transition density (\\(M_{\\rm max}^{\\rm H}\\lesssim 1.3\\,M_{\\odot}\\)) the mass range where twins exist is below the mass interval of PSR J0437, so that PSR J0030 could be a hadronic or hybrid star, while J0437 must be a hybrid star in this case. Tables 2 and 3 present the parameters of the used hybrid EoS models with \\(c_{\\rm s}^{2}=1\\) and \\(2/3\\), respectively, in quark phase and characteristics of the corresponding \\(M\\)-\\(R\\) diagrams, i.e., the values of \\(M_{\\rm max}^{\\rm H},M_{\\rm max}^{\\rm Q}\\), and ranges of mass and radius that characterize twin configurations, \\(\\Delta M_{\\rm twin}=M_{\\rm max}^{\\rm H}-M_{\\rm min}^{\\rm Q}\\), and \\(\\Delta R_{\\rm twin}\\) the radius difference between the \\(M_{\\rm max}^{\\rm H}\\) hadronic star and that of the hybrid counterpart with the same mass. Note that the case \\(c_{\\rm s}^{2}=1\\) allows us to establish the maximum range where twin configurations exist in our setup. ### Scenario B This is the scenario where the \\(M\\)-\\(R\\) ellipses for canonical mass stars PSR J0030 and J0437 are maximally overlapping; see Figure 2. In this case, hadronic stars are consistent with data only at \\(2\\sigma\\) (95% confidence) level, and only for soft hadronic matter (panel b). If the hadronic EoS is stiff then both stars must be hybrid which means they may have hadronic twins with significantly larger radii \\(R\\geq 13.7\\) km (the \\(2\\sigma\\) upper limit for J0437) in our example, see panel (a). ### Scenario C This is the scenario where the \\(M\\)-\\(R\\) ellipses for PSR J0030 and J0437 overlap only at \\(2\\sigma\\) (95% confidence), see Figure 3. 
This scenario can be viewed as a more extreme version of Scenario A, \\begin{table} \\begin{tabular}{l c c c c c c} \\hline \\hline \\multicolumn{1}{c}{ EoS} & \\(\\epsilon_{\\rm tr}\\) & \\(\\Delta\\epsilon/\\epsilon_{\\rm tr}\\) & \\(M_{\\rm max}^{\\rm H}\\) & \\(M_{\\rm max}^{\\rm Q}\\) & \\(\\Delta M_{\\rm twin}\\) & \\(\\Delta R_{\\rm twin}\\) \\\\ & [MeV/fm\\({}^{3}\\)] & & [\\(M_{\\odot}\\)] & [\\(M_{\\odot}\\)] & [\\(M_{\\odot}\\)] & [km] \\\\ \\hline DDLS100 & 300.570 & 0.7823 & 1.30 & 2.00 & 0.0225 & 0.92 \\\\ & & 0.6298 & 1.30 & 2.10 & 0.0015 & 0.21 \\\\ & 334.452 & 0.6898 & 1.50 & 2.00 & 0.0124 & 0.69 \\\\ & & 0.5467 & 1.50 & 2.10 & - & - \\\\ DDLS80 & 300.180 & 0.7641 & 1.20 & 2.00 & 0.0148 & 0.64 \\\\ & & 0.6125 & 1.20 & 2.10 & - & - \\\\ & 332.241 & 0.6735 & 1.40 & 2.00 & 0.0059 & 0.41 \\\\ & & 0.5313 & 1.40 & 2.10 & - & - \\\\ DDLS60 & 323.965 & 0.6851 & 1.30 & 2.00 & 0.0045 & 0.29 \\\\ & & 0.5414 & 1.30 & 2.10 & - & - \\\\ & 356.305 & 0.6198 & 1.50 & 2.00 & 0.0006 & 0.14 \\\\ & & 0.4826 & 1.50 & 2.10 & - & - \\\\ DDLS40 & 310.074 & 0.7256 & 1.20 & 2.00 & 0.0063 & 0.31 \\\\ & & 0.5777 & 1.20 & 2.10 & - & - \\\\ & 338.751 & 0.6573 & 1.40 & 2.00 & 0.0014 & 0.18 \\\\ & & 0.5166 & 1.40 & 2.10 & - & - \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 3: Same as Table 2 but for models have the intermediate sound speed \\(c_{\\rm s}^{2}=2/3\\) in the quark phase. herefore, the conclusions drawn above apply to this scenario too. Nevertheless, there are some quantitative differences. PSR J0030 has now a large radius \\(R>12\\) km, which excludes models of EoS with soft hadronic matter which produce low values of \\(M_{\\rm max}^{\\rm H}\\) or \\(M_{\\rm max}^{\\rm Q}\\) with \\(R_{1.4}<12\\) km (the \\(2\\sigma\\) lower limit for J0030), where \\(R_{1.4}\\) is the radius of \\(1.4\\,M_{\\odot}\\) star. This is the case, for example, for the EoS based on DDLS-60 (\\(M_{\\rm max}^{\\rm H}=1.3\\,M_{\\odot}\\), \\(M_{\\rm max}^{\\rm Q}=2.1\\,M_{\\odot}\\) and \\(c_{\\rm s}^{2}=1\\)) and DDLS-40 (\\(M_{\\rm max}^{\\rm H}=1.2\\,M_{\\odot}\\), \\(M_{\\rm max}^{\\rm Q}=2.0\\,M_{\\odot}\\) and \\(c_{\\rm s}^{2}=2/3\\)). ### Tidal deformability and moment of inertia In this section, we consider two additional integral parameters of hybrid stars which have observational significance - the tidal deformability and moment of inertia. Figure 4 (a) shows the dimensionless tidal deformability vs mass relation for our models. It is seen that the models discussed in this work satisfy the constraint placed for a \\(1.362\\,M_{\\odot}\\) star from the analysis of the GW170817 event [39]. It is also seen that the softer EoS safely passes through the required range for \\(\\Lambda_{1.362}\\). Furthermore, for hybrid models for which \\(M_{\\rm min}^{\\rm Q}\\leq 1.4\\,M_{\\odot}\\) the transition to quark matter improves the agreement with the data. Figure 3: Same as Figure 1, but for scenario C, where the light blue ellipse corresponds to PDT-U analysis for PSR J0030 [25]. Figure 2: Same as Figure 1, but for scenario B, where the light blue ellipse corresponds to ST+PDT analysis for PSR J0030 [25]. Figure 4 (b) shows the moment of inertia predicted by EoS models for 1.338 \\(M_{\\odot}\\) star. A motivation for considering the moment of inertia comes from the observation of binary PSR J0737-3039 A [40; 41], which shows changes in orbital parameters, such as the orbit inclination (the angle between the orbital plane and observer's line of sight) and the preriastron position [42]. 
It is evident from the figure that our models of hybrid stars are broadly consistent with the constraint \\(0.91\\leq I_{1.338}\\leq 2.16\\) inferred from 16-yr data span reported in Ref. [40], where \\(I_{1.338}\\) is the moment of inertia of a neutron star with 1.338 \\(M_{\\odot}\\) mass in units of \\(10^{45}\\)g cm\\({}^{2}\\). ## 4 Conclusions Motivated by the recent (re)analysis of the data on two X-ray emitting pulsars PSR J0030+0451 and J0470+6620 as well as new results on PSR J0437-4715 we compared the new ellipses in the _M-R_ diagram for these pulsars with our models of hybrid stars, which are based on CDF EoS for nucleonic matter at low densities and quark matter EoS, parametrized by speed of sound, at higher densities. These models are also validated by comparisons of their predicted tidal deformabilities with the observations of GW170817 and predicted moment of inertia with the constraints for PSR J0737-3039 A, see Figure 4. In more detail, we considered three possible scenarios A, B and C which correspond to the three mass and radius estimates taken from a reanalysis of 2017-2018 data [25] for PSR J0030+0451. These include an improved NICER-only ST+PST analysis and two joint NICER-XMM-Newton models (ST+PDT and PDT-U), with the Bayesian-preferred PDT-U being the most complex. We then examined the consistency of the three scenarios with our models with a special focus on the possibility of the formation of twin stars. We find that in two scenarios (A and C), where the ellipses for the canonical mass (\\(\\sim 1.4\\,M_{\\odot}\\)) stars J0030+0451 and J0437-4715 exhibit the maximal mismatch, the potential difference in the radii of these stars within these scenarios is naturally explained by the presence of twin stars. To conclude, the ability of the hybrid star models to explain observational data from multiple sources (X-ray pulsars and gravitational waves) encourages further refinement of theoretical models, potentially leading to more accurate descriptions of neutron star interiors. This can be only achieved Figure 4: Mass-tidal deformability (left panel) and mass-moment of inertia (right panel) relations for EoS models. The 90% confidence ranges on for a \\(1.362\\,M_{\\odot}\\) star deduced from the analysis of GW170817 [39] (left panel) and for \\(1.338\\,M_{\\odot}\\) PSR J0737-3039 A from radio observations [40] (right panel) are shown with the vertical error bars. through combining theoretical models (for example using the classes of models discussed here) with observational data, which is expected to improve over time. J. L. is supported by the National Natural Science Foundation of China under Grant No. 12105232 and the Fundamental Research Funds for the Central Universities under Grant No. SWU-020021. A. S. is funded by Deutsche Forschungsgemeinschaft Grant No. SE 1836/5-3 and the Polish NCN Grant No. 2023/51/B/ST9/02798. M. A. is partly supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics under Award No. DE-FG02-05ER41375. ## References * [1] N.K. Glendenning and C. Kettner, _Nonidentical neutron star twins_, _Astron. Astrophys._**353** (2000) L9 [astro-ph/9807155]. * [2] J.L. Zdunik and P. Haensel, _Maximum mass of neutron stars and strange neutron-star cores_, _Astron. Astrophys._**551** (2013) A61 [1211.1231]. * I. High-mass twin compact stars_, _Astron. Astrophys._**577** (2015) A40 [1411.2856]. * [4] M. Alford and A. Sedrakian, _Compact Stars with Sequential QCD Phase Transitions_, _Phys. Rev. 
Lett._**119** (2017) 161104 [1706.01592]. * [5] D.E. Alvarez-Castillo, D.B. Blaschke, A.G. Grunfeld and V.P. Pagura, _Third family of compact stars within a nonlocal chiral quark model equation of state_, _Phys. Rev. D_**99** (2019) 063010 [1805.04105]. * [6] D. Blaschke, A. Ayriyan, D.E. Alvarez-Castillo and H. Grigorian, _Was GW170817 a Canonical Neutron Star Merger? Bayesian Analysis with a Third Family of Compact Stars_, _Universe_**6** (2020) 81 [2005.02759]. * [7] J.-E. Christian and J. Schaffner-Bielich, _Confirming the Existence of Twin Stars in a NICER Way_, _Astrophys. J._**935** (2022) 122 [2109.04191]. * [8] J.J. Li, A. Sedrakian and M. Alford, _Relativistic hybrid stars with sequential first-order phase transitions and heavy-baryon envelopes_, _Phys. Rev. D_**101** (2020) 063022 [1911.00276]. * [9] J.J. Li, A. Sedrakian and M. Alford, _Relativistic hybrid stars in light of the NICER PSR J0740+6620 radius measurement_, _Phys. Rev. D_**104** (2021) L121302 [2108.13071]. * [10] J.J. Li, A. Sedrakian and M. Alford, _Relativistic Hybrid Stars with Sequential First-order Phase Transitions in Light of Multimessenger Constraints_, _Astrophys. J_**944** (2023) 206 [2301.10940]. * [11] J.J. Li, A. Sedrakian and M. Alford, _Ultracompact hybrid stars consistent with multimessenger astrophysics_, _Phys. Rev. D_**107** (2023) 023018 [2207.09798]. * [12] P. Laskos-Patkos, P.S. Koliogiannis and C.C. Moustakidis, _Hybrid stars in light of the HESS J1731-347 remnant and the PREX-II experiment_, _Phys. Rev. D_**109** (2024) 063017 [2312.07113]. * [13] M. Naseri, G. Bozzola and V. Paschalidis, _Exploring pathways to forming twin stars_, _Phys. Rev. D_**110** (2024) 044037 [2406.15544]. * [14] J.C. Jimenez, L. Lazzari and V.P. Goncalves, _How the QCD trace anomaly behaves at the core of twin stars?_, 2408.11614. * [15] N.-B. Zhang and B.-A. Li, _Impact of the nuclear equation of state on the formation of twin stars_, 2406.07396. ** [16] Y. Kini, T. Salmi, S. Vinciguerra et al., _Pulse Profile Modelling of Thermonuclear Burst Oscillations III : Constraining the properties of XTE J1814-338_, 2405.10717. * [17] P. Laskos-Patkos, G.A. Lalazissis, S. Wang, J. Meng, P. Ring and C.C. Moustakidis, _Speed of sound bounds and first-order phase transitions in compact stars_, 2408.15056. * [18] G. Baym, T. Hatsuda, T. Kojo, P.D. Powell, Y. Song and T. Takatsuka, _From hadrons to quarks in neutron stars: a review_, _Rep. Prog. Phys._**81** (2018) 056902 [1707.04966]. * [19] A. Sedrakian, _Impact of Multiple Phase Transitions in Dense QCD on Compact Stars_, _Particles_**6** (2023) 713 [2306.13884]. * [20] P.B. Rau and A. Sedrakian, _Two first-order phase transitions in hybrid compact stars: Higher-order multiplet stars, reaction modes, and intermediate conversion speeds_, _Phys. Rev. D_**107** (2023) 103042 [2212.09828]. * [21] P.B. Rau and G.G. Salaben, _Nonequilibrium effects on stability of hybrid stars with first-order phase transitions_, _Phys. Rev. D_**108** (2023) 103035 [2309.08540]. * [22] C.H. Lenzi, G. Lugones and C. Vasquez, _Hybrid stars with reactive interfaces: Analysis within the Nambu-Jona-Lasinio model_, _Phys. Rev. D_**107** (2023) 083025 [2304.01898]. * [23] I.F. Ranea-Sandoval, M. Mariani, M.O. Celi, M.C. Rodriguez and L. Tonetto, _Asteroseismology using quadrupolar \\(f\\)-modes revisited: Breaking of universal relationships in the slow hadron-quark conversion scenario_, _Phys. Rev. D_**107** (2023) 123028 [2306.02823]. * [24] I.A. Rather, K.D. Marquez, B.C. Backes, G. Panotopoulos and I. 
Lopes, _Radial oscillations of hybrid stars and neutron stars including delta baryons: the effect of a slow quark phase transition_, _Jour. Cosmo. Astropart. Phys._**2024** (2024) 130 [2401.07789]. * [25] S. Vinciguerra, T. Salmi, A.L. Watts et al., _An Updated Mass-Radius Analysis of the 2017-2018 NICER Data Set of PSR J0030+0451_, _Astrophys. J._**961** (2024) 62 [2308.09469]. * [26] T. Salmi, D. Choudhury, Y. Kini et al., _The Radius of the High Mass Pulsar PSR J0740+6620 With 3.6 Years of NICER Data_, 2406.14466. * [27] D. Choudhury, T. Salmi, S. Vinciguerra et al., _A NICER View of the Nearest and Brightest Millisecond Pulsar: PSR J0437-4715_, _Astrophys. J. Lett._**971** (2024) L20 [2407.06789]. * [28] J.J. Li, A. Sedrakian and M. Alford, _Hybrid Star Models in the Light of New Multimessenger Data_, _Astrophys. J._**967** (2024) 116 [2401.02198]. * [29] M. Oertel, M. Hempel, T. Klahn and S. Typel, _Equations of state for supernovae and compact stars_, _Rev. Mod. Phys._**89** (2017) 015007 [1610.03361]. * [30] A. Sedrakian, J.J. Li and F. Weber, _Heavy baryons in compact stars_, _Prog. Part. Nucl. Phys._**131** (2023) 104041 [2212.01086]. * [31] G.A. Lalazissis, T. Niksic, D. Vretenar and P. Ring, _New relativistic mean-field interaction with density-dependent meson-nucleon couplings_, _Phys. Rev. C_**71** (2005) 024312. * [32] J.J. Li and A. Sedrakian, _New Covariant Density Functionals of Nuclear Matter for Compact Star Simulations_, _Astrophys. J._**957** (2023) 41 [2308.14457]. * [33] J.J. Li and A. Sedrakian, _Baryonic models of ultra-low-mass compact stars for the central compact object in HESS J1731-347_, _Phys. Lett. B_**844** (2023) 138062 [2306.14185]. * [34] M.G. Alford, S. Han and M. Prakash, _Generic conditions for stable hybrid stars_, _Phys. Rev. D_**88** (2013) 083013 [1302.4732]. * [35] T.E. Riley, A.L. Watts, S. Bogdanov et al., _A NICER View of PSR J0030+0451: Millisecond Pulsar Parameter Estimation_, _Astrophys. J. Lett._**887** (2019) L21 [1912.05702]. * [36] C. Kalapotharakos, Z. Wadiasingh, A.K. Harding and D. Kazanas, _The Multipolar Magnetic Field of Millisecond Pulsar PSR J0030+0451_, _Astrophys. J._**907** (2021) 63 [2009.08567]. * [37] M.C. Miller, F. Lamb, A.J. Dittmann et al., _The Radius of PSR J0740+6620 from NICER and XMM-Newton Data_, _Astrophys. J. Lett._**918** (2021) L28 [2105.06979]. * [38] T.E. Riley, A.L. Watts, P.S. Ray et al., _A NICER View of the Massive Pulsar PSR J0740+6620 Informed by Radio Timing and XMM-Newton Spectroscopy_, _Astrophys. J. Lett._**918** (2021) L27 [2105.06980]. * [39]LIGO Scientific, Virgo collaboration, _GW170817: Measurements of neutron star radii and equation of state_, _Phys. Rev. Lett._**121** (2018) 161101 [1805.11581]. * [40] M. Kramer, I.H. Stairs, R.N. Manchester et al., _Strong-field gravity tests with the double pulsar_, _Phys. Rev. X_**11** (2021) 041050 [2112.06795]. * [41] M. Burgay, N. D'Amico, A. Possenti et al., _An increased estimate of the merger rate of double neutron stars from observations of a highly relativistic system_, _Nature_**426** (2003) 531 [astro-ph/0312071]. * [42] J.M. Lattimer and B.F. Schutz, _Constraining the Equation of State with Moment of Inertia Measurements_, _Astrophys. J._**629** (2005) 979 [astro-ph/0411470].
The (re)analysis of data on the X-ray emitting pulsars PSR J0030+0451 and PSR J0740+6620, as well as new results on PSR J0437-4715, are confronted with the predictions of the equation of state (EoS) models allowing for strong first-order phase transition for the mass-radius (\\(M\\)-\\(R\\)) diagram. We use models that are based on a covariant density functional (CDF) EoS for nucleonic matter at low densities and a quark matter EoS, parameterized by the speed of sound, at higher densities. To account for the variations in the ellipses for PSR J0030+0451 obtained from different analyses, we examined three scenarios to assess their consistency with our models, focusing particularly on the potential formation of twin stars. We found that in two scenarios, where the ellipses for PSR J0030+0451 and PSR J0437-4715 with masses close to the canonical mass \\(\\sim 1.4\\,M_{\\odot}\\) are significantly separated, our models allow for the presence of twin stars as a natural explanation for potential differences in the radii of these stars.
# Generation and Analysis of Digital Surface Models (DSMs) in Urban Areas Based on DMC-Images A. Alobeid K. Jacobsen Institute of Photogrammetry and GeoInformation, Leibniz University Hannover Nienburger Str. 1, D-30167 Hannover, Germany - (Alobeid, Jacobsen)@ ipi.uni-hannover.de ## 1 Introduction Automatic 3D city models and its update got an increasing interest in recent years, requiring the development of automatic methods for acquisition of DSMs. 3D information in urban and suburban areas are very useful for many applications, such as: - Change detection - GIS database updating - Urban planning - 3D feature extraction In principle stereo images are sufficient for 3D feature extraction for objects such as buildings. However due to the complexity of details, many problems were found. Grey values are influenced by the object geometry but also by many factors such as shadows, reflections, saturation, lack of texture, noise and others. It is very difficult to separate important information from irrelevant details in generating DSMs in build-up areas. In spite of high degree of automation in several commercial programs, used for DSM generation, a significant effort is still required for accuracy improvement, especially in the difficult urban and forest areas. The aim of our investigation was to investigate the influence of different parameters on generating DSMs based on Z/I Imaging DMC-images in urban areas. In addition the use of the selected parameters for other areas has been investigated. ## 2.Available Data and Study Area The investigations have been made in the area of Frederikstad, located close to Oslo, Norway. The Digital Metric Camera (DMC) images are composed of 4 slightly convergent high resolution panchromatic sub images, mosaicked together (see figure 1). The convergent camera arrangement guarantees a very uniform image quality up to the image corners, no lower resolution in the corners could be detected by edge analysis. This is different for digital cameras based on nadir view for all sub-cameras (Jacobsen 2008). The images are available with a ground sampling distance (GSD) of 9.4cm having 60% endlap and 75% side lap. The horizontal coordinate components of well defined object points have a standard deviation of approximately 2-3cm and the vertical component of approximately 7cm. Figure 1: Configuration of panchromatic DMC sub-cameras ## 3 Image matching and investigation For the automatic image matching of the DMC-images the Hannover program DPCOR has been used, which is based on least squares matching. The approximate corresponding image positions are determined by region growing, published by (Heipke 1996). Object coordinates are computed by intersection. Figure 2: Workflow of matching procedure Figure 2 shows the work flow of the data handling. It mainly has following steps: * Manual measurement of few seed points if not enough control points already available; the number of required seed points depends on image similarity and decrease with smaller time difference of taking the both images of a stereo model. * Automatic matching in image space by least squares method. * Transformation of pixel to photo coordinates * Computation of ground coordinates by intersection. * Generation of Digital Surface Models (DSMs), 3D view, contour lines, visual inspection. The point spacing of matching can be specified in the used program DPCOR. By default every third pixel in line and also column direction will be matched. 
A matching of every pixel leads to highly correlated neighbouring points. Figure 3 shows an example of matched points in an urban area, which leads to a very dense DSM. The small, separated building surrounded by some trees did not lead to a clear building shape. Dark parts, caused by shadows, may lead to gaps. The unrealistically high threshold of R = 0.9 for the correlation coefficient of the least squares matching leads to a considerable loss of accepted points. On the other hand, a small step width of just 1 pixel increases the percentage of accepted points only slightly, but requires considerably more processing time. The study gave optimal results for a matching sub-matrix size of 10 x 10 pixels.

**Area with man-made objects and trees:** The area includes complex objects such as small and large buildings and trees close to roofs. Problems are caused by shadows, repetitive objects, occlusions and poor texture. Few or no points were extracted in shadow areas close to the base of buildings. The matching in dense urban areas may not be sufficient to model discontinuities such as buildings. Buildings often occlude each other. Furthermore, man-made objects are usually made of homogeneous materials, causing large areas of poor texture. Figure 5 shows the frequency distribution of the correlation coefficient in a complex area.

\begin{table}
\begin{tabular}{|l|c|c|c|}
\hline
R (threshold for accepted correlation) & Step width & Sub-matrix & Matching success \\
\hline
R = 0.90 & 3 & 10 \(\times\) 10 pixels & 48\% \\
\hline
R = 0.80 & 3 & 10 \(\times\) 10 pixels & 61\% \\
\hline
R = 0.70 & 3 & 10 \(\times\) 10 pixels & 74\% \\
\hline
R = 0.70 & 1 & 10 \(\times\) 10 pixels & 76\% \\
\hline
R = 0.70 & 3 & 7 \(\times\) 7 pixels & 35\% \\
\hline
\end{tabular}
\end{table} Table 2: Matching completeness in the area with man-made objects depending upon the chosen parameters

Figure 5: Frequency distribution of correlation coefficients of least squares matching in complex areas with small and large buildings and trees, sub-matrix for least squares matching 10 x 10 pixels; step width = 3 pixels

## 4 Accuracy Analysis

The accuracy analysis shall lead to the optimal matching procedure. As reference for the investigation, manually measured, randomly spaced points in the stereo model have been used. The reference height model must be available in grid form. For this reason, the reference DSM has been interpolated by Delaunay triangulation to a point raster with 20cm spacing. The generated DSM has been checked against the manually measured reference height model separately for the open area and the area with man-made objects and trees. For this, the Hannover program for the analysis of height models can use a layer with different point classes (Jacobsen 2007). Table 3 shows the results of the comparison.
The root mean square errors (RMSE) of the differences have to be seen in relation to the 9.4cm GSD together with the height to base relation of the DMC, being 3.2 for the used 60% endlap. The standard deviation of the x-parallax corresponds to the vertical root mean square error divided by the height to base relation. So a RMSE of 13.4cm for the height differences corresponds to a standard deviation of the x-parallax of 0.44 GSD, and a RMSE of 10.6cm corresponds to 0.35 GSD. The standard deviation of the height (Sz) can be estimated with:

\[S_{z}=\frac{h}{b}\cdot spx\quad(1)\]

where h is the flying height above ground, b is the base length and spx the standard deviation of the x-parallax. As expected, the root mean square error in the open area is smaller than the root mean square error in the area with buildings and trees, which is influenced by discontinuities. The frequency distribution in the open area (figure 7) is close to a normal distribution, while the frequency distribution shown in figure 6 is slightly asymmetric, caused by vegetation. As tolerance limit for the analysis, a maximal absolute value of the height difference of 0.5m has been used; larger height differences have been handled as blunders. Only very few points exceeded this limit.

\begin{table}
\begin{tabular}{|c|c|c|c|}
\hline
Class & Number of measured points & RMSE [cm] & Bias \\
\hline
man-made objects and trees & 1839 & 13.4 & 0.0 \\
\hline
open area & 1222 & 10.6 & 0.4 \\
\hline
\end{tabular}
\end{table} Table 3: RMS discrepancies against the reference height model

Figure 6: Frequency distribution of the height differences [m] between the matched DSM and the reference DSM in the area with man-made objects and trees

Figure 7: Frequency distribution of the height differences [m] between the matched DSM and the reference DSM in open areas

## 5 Comparison between matched and reference DSM

The reference height model is not free of errors; this should be respected for the accuracy analysis. The accuracy of the manual pointing is estimated with a standard deviation of the x-parallax of 0.25 GSD; multiplied with the height to base relation of 3.2, it leads to a standard deviation of the height measurement of 0.8 GSD or 7.5cm. If this is respected for the values shown in table 3, the standard deviation of the DSM in the area with man-made objects and trees is 11.1cm, corresponding to an x-parallax accuracy of 0.37 GSD, and for the open area 7.5cm, corresponding to an x-parallax accuracy of 0.25 GSD. This means that for the open area the manual pointing and the automatic image matching are on the same accuracy level, while in the built-up area the manual pointing is more precise, mainly because of problems at discontinuities.

## 6 Visual inspection

The visual comparison of the DSM generated by manual measurement and the matched DSM shows the following for the matched DSM:

* The base of the buildings is wider than its original extent, mainly caused by occlusions, but in most cases the roof is shown very well.
* Sun shadows lead to matching failures.
* The building footprints can in most cases be extracted from the generated DSM, but not in every case with satisfying details.

Parts of buildings may be occluded by other buildings, influencing the achieved building shape or causing matching problems.
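To make the relation between x-parallax accuracy and height accuracy in Eq. (1) concrete, the short sketch below recomputes the numbers quoted above (9.4cm GSD, height-to-base relation 3.2). It is an illustrative calculation only; the function names are our own.

```python
def height_std_from_parallax(spx_gsd, height_to_base, gsd_cm):
    """Eq. (1): Sz = (h / b) * spx, with spx expressed in GSD units; result in cm."""
    return height_to_base * spx_gsd * gsd_cm

def parallax_std_from_height(sz_cm, height_to_base, gsd_cm):
    """Inverse relation: spx (in GSD units) from a vertical RMSE in cm."""
    return sz_cm / (height_to_base * gsd_cm)

GSD_CM = 9.4      # ground sampling distance of the DMC images
H_OVER_B = 3.2    # height-to-base relation for 60% endlap

# RMSE of 13.4cm (built-up area) and 10.6cm (open area) as x-parallax accuracy:
print(round(parallax_std_from_height(13.4, H_OVER_B, GSD_CM), 2))  # ~0.45 GSD (0.44 in the text, within rounding)
print(round(parallax_std_from_height(10.6, H_OVER_B, GSD_CM), 2))  # ~0.35 GSD

# Manual pointing with spx = 0.25 GSD corresponds to a height accuracy of:
print(round(height_std_from_parallax(0.25, H_OVER_B, GSD_CM), 1))  # ~7.5 cm
```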
## 7 Conclusion

Digital surface models in urban areas can be generated based on large scale DMC-images, but some difficulties have to be expected, such as those caused by radiometric problems, occlusions, shadows and vegetation. In the area including complex objects such as small and large buildings and trees close to roofs, the detection and definition of corner points of the buildings was poor, leading to the possibility of errors. But we can deduce the following results:

* High accuracy of the generated DSM based on DMC-images, with a standard deviation of the height between 0.8 and 1.2 GSD.
* The generation of a good DSM depends on the image quality and the object visibility.
* The matching parameters should be optimized according to the characteristics of each area.

Figure 7: Shaded visualization of the matched DSM with some details, upper right = DMC-image

Figure 8: 3D visualization of matched DSM sub-area - 15 cm point spacing

## References

Gruen, A.W., Baltsavias, E.P., 1987: High-precision image matching for digital terrain model generation, IAPRS, Vol. 25, No. 3/1: 284-296.

Heipke, C., 1996: Overview of Image Matching Techniques, http://phot.epfl.ch/workshop/wks96/art_3_1.html (January 2008).

Jacobsen, K., 2006: Digital Surface Models of City Areas by Very High Resolution Space Imagery, EARSeL Workshop on Urban Remote Sensing, Berlin, March 2006, on CD.

Jacobsen, K., 2007: Manual of program system BLUH, Institute of Photogrammetry and GeoInformation, Leibniz University Hannover, Germany.

Jacobsen, K., 2008: Tells the number of pixels the truth? - Effective Resolution of Large Size Digital Frame Cameras, ASPRS 2008 Annual Convention, Portland.

Passini, R., Betzner, D., Jacobsen, K., 2002: Filtering of Digital Elevation Models, ASPRS Annual Convention, Washington 2002.

Wiman, H., 1998: Automatic Generation of Digital Surface Models through Matching, The Photogrammetric Record 16 (91), 83-91, doi:10.1111/0031-868X.00115 (February 2008).
DSMs are a geometric description of the height of the visible terrain surface, including elements like buildings and trees. DSMs became an important source for scene analysis and for 3D feature detection and reconstruction. The digital metric camera (DMC) offers excellent image quality and can be used for automated DSM generation, with high accuracy. By using least squares matching as method to find corresponding points in two overlapping images for three dimensional reconstructions, the corresponding points potentially can be determined with sub-pixel accuracy. The investigation of the influence of different parameters on generating DSMs from DMC images is shown like the limitation of automatic DSM generation in urban areas. The aim of our investigation is, whether the selected parameters for image matching, optimized in few test areas, can be used for the whole study area. The generated DSM was checked against a reference DSM based on manual measurement. The comparison shows that the accuracy mainly depends on terrain slope and surface structure. Furthermore, satisfying DSMs strongly depend upon the quality of the DMC-images.A standard deviation of the DSM based on DMC-images in whole urban study area of 1.0 up to 1.3 GSD is possible, corresponding to 0.3 up to 0.4 GSD for the x-parallax. Digital, Matching, DEM/DTM, Aerial, Accuracy, Quality
Stingray Detection of Aerial Images Using Augmented Training Images Generated by A Conditional Generative Model Yi-Min Chou\\({}^{1,2}\\) Chien-Hung Chen\\({}^{1,2}\\) Keng-Hao Liu\\({}^{3}\\) Chu-Song Chen\\({}^{1,2}\\) \\({}^{1}\\) Institute of Information Science, Academia Sinica, Taipei, Taiwan \\({}^{2}\\) MOST Joint Research Center for AI Technology and All Vista Healthcare \\({}^{3}\\) Department of Mechanical Electro-mechanical Engineering, National Sun Yat-sen University, Kaohsiung, Taiwan {chou, redsword26, song}@iis.sinica.edu.tw, [email protected], ## 1 Introduction Detecting specific animals in aerial images captured by an UAV is a crucial research topic. In this research direction, computer vision techniques are beneficial to the development of popular tools for biological researches. In this paper, a stingray detection approach is introduced. Stingrays are common in coastal tropical and subtropical marine waters. They usually appear in surface water so that a common UAV can capture them. In this work, the scenario we focus on is the automatic detection of stingray from the aerial images recorded on a sea-surface area. To monitor the behaviors and understand the distribution of a certain animal, biologists collect aerial photos or videos by an UAV. After obtaining the materials, they have to manually annotate the position, number, and size of the target animal from the image scene. This step is extremely tedious and time-consuming. Besides, the collected photos could be partially useless because the target animal may be missing in the scene. Therefore, using an automatic, computer-based method to recognize the target animal is necessary for the kind of research. However, automaticly recognizing the stingray is demanding due to the following issues. First, the color of stingrays is similar to that of the rocks/reefs under the water, and the stingrays could be occluded by the dust when swimming. Second, the aerial images are usually filled with light reflection of water ripples; Third, the shape of stingray is not always consistent, and is hard to define. Under such circumstances, traditional machine learning methods accompanied with hand-craft features often fail for the detection task on the sea-surface images aerially taken. Figure 1 demonstrates the difficulty of this problem. As the rapid progress of deep learning (DL), it has been a popular approach to many image classification and object detection tasks. In recent years, a breakthrough of image recognition has been made via deep convolution neural networks (CNN) [12]. Deep CNN enforces end-to-end training, so that feature extraction and classification are integrated in a single framework. Besides handing the case where only one concept is contained in an image [12, 19, 10], deep CNN has been extended for object detection [17, 13, 16], where not only the objects contained in an image are recognized but their sites are marked by tight bounding boxes. In our case, the problem to be tackled belongs to 2-class object detection, where the foreground (or positive) class consists of stingrays and the background (or negative) class consists of sea-surface patches. We employ deep CNN detectors to fulfill our goal, where faster RCNN [17] is used in our work. The performance obtained is far more satisfied than that of using hand-craft features in our experience. 
Nevertheless, DL usually requires a large set of training samples to learn the network weights, while the biological image materials are sometimes insufficient to fulfill the demand. There are two main difficulties encountered when using deep-learning object detector in our work. * Insufficient training data: The amount of training data is limited by the few number of UAV flights, and the image quality is inconsistent by weather condition, environmental change, capturing location and latitude. It results in the lack of effective training images and data diversity. * Background transparency: Because the sea surface is translucent, the stingray image is actually embedded into water but not explanted on the water. Thus, the color of stingray is blended that of water. The conventional data augmentation approaches could not generate this type of images. Our stingray detection problem has 2 classes (foreground and background). To tackle this problem, we introduce a mixed background and foreground (bg-fg) data augmentation approach to handle the problem. Our approach, namely, conditional GLO (C-GLO), can learn a generator network that produces a foreground object given a specified background patch. C-GLO can learn the distribution of foreground (w/ stingray) and background (w/o stingray) images simultaneously in the latent space with a single network. Once the generator is learned, we can freely generate the synthetic stingray images respect to any sea-surface background to enrich the amount and the diversity of the training dataset. For the detection, we used Faster R-CNN as the CNN-detector for the evaluation. The experimental results show that using the C-GLO augmented samples for training can satisfiedly improve the detection performance. Such an augmentation approach could be potentially applied to other analogous applications. ## 2 Related Work In this section, we briefly review works related to our study on two folds: deep-CNN object detection and generative networks. ### Deep CNNs for Object Detection Object detection methods have been made great progress recently with the resurgence of CNNs. In the past, researches focus on the design of useful hand-crafted features, such as HOG and DPM. Currently, it shifts to the design of a good CNN architecture that can automatically capture high-level features for detection. DL-based detection approaches started with R-CNN [7] that adopts an additional selective search procedure. Later, this kind of method evolved to an approximate end-to-end model with using reginal proposal network (RPN) in Figure 1: A typical aerial sea-surface image, which contains four stingrays in the scene. The stingray detection problem is demanding because of the similar rocks/reefs under the water and the aerial images are filled with light reflection of water ripples. Faster R-CNN [17]. Many follow-up studies successively improve the performance such as R-FCN[5] and Mask R-CNN [9], or accelerate the computation such as SSD [13] and YOLO2 [16]. ### Generative Models Nature images generation has been investigated by the work of Variational Autoencoders (VAE) [11]. Later, Goodfellow et al. proposed Generative Adversarial Networks (GAN) [8] that trains a generator and a discriminator simultaneously via an iteratively adversarial process. GAN has demonstrated the capability of generating more convincing images than VAE. Although GANs provide sharper images, a main drawback lies in the difficulty of converging to an equilibrium state during training. 
Recently, numerous GAN-related studies have been proposed [15, 2, 3, 18], and most of them focus on resolving the problems of model instability and mode collapse [6, 14, 1]. Nevertheless, training of GAN is still more demanding and relatively unstable compared to pure supervised training. To avoid challenging adversarial training protocol in GAN, Bojanowski et al. proposed Generative Latent Optimization (GLO) [4]. GLO removes the discriminator in GAN and learns the mapping from images to noise vectors by minimizing the reconstruction loss. It provides a stable training process while enjoys many of the desirable properties of GAN, such as synthesizing appealing images and interpolating meaningfully between samples. ## 3 Our Method Data augmentation (such as cropping and flipping the images) has been widely used for the training of image classifiers, where the labels are provided for the entire image. However, the task of object detection requires bounding-box outputs, while augmenting the training images with bounding-box samples of the objects is more difficult. In object detection, the positive patches are often far fewer than the negative ones. For example, in our data, sometimes only one stingray is contained in a training image, which makes a CNN detector demanding to train. We introduce a method that performs data augmentation in the learning phase for object detection. Considering that the sea surface is translucent, we propose to use a generator that produces foreground objects mixed with the background patches selected from the image. Given some background (i.e., sea-surface) patches randomly cropped from the original image (as shown in the upper half of Figure 2), we use the C-GLO approach to synthesize a foreground object (i.e., stingray) per each background patch, and put them back to Figure 2: Mixed Bg-Fg Syntheses. Given sea-surface patches cropped in the original training images, we generate a stingray inside each patch and put the patch back to the original image. The augmented images obtained therefore contain more stingrays. In this way, the training set of images is re-generated such that each image has sufficient many stingrays and the number of stingrays per image is approximately the same. the original sites in the image (as shown in the lower half of Figure 2). In the following, we introduce C-GLO at first in Section 3.1, and then the mixed bg-fg synthesis and the CNN detector in Section 3.2. ### Conditional GLO and Architecture Adopted GLO [4] is a generative method introduced by Bojanowski et al. Given unsupervised training images \\(\\mathbf{I}=\\{I_{1},\\cdots,I_{N}\\}\\), GLO trains a generator \\(\\mathbf{\\Phi}\\) (with the input \\(z\\) and network weights \\(W\\)), such that the following objective is minimized: \\[e(W,\\mathbf{z})=\\sum_{i=1}^{N}loss(\\mathbf{\\Phi}(W;z_{i})-I_{i}), \\tag{1}\\] where \\(\\mathbf{z}=\\{z_{1},z_{2},\\cdots,z_{N}\\}\\). A two-stage iterative method is introduced for the minimization: 1. Fixing \\(\\mathbf{z}\\), find \\(W\\) to reduce \\(e(W,\\mathbf{z})\\) via back-propagation; 2. Fixing \\(W\\), find \\(z_{i}\\) to reduce \\(loss(\\mathbf{\\Phi}(W;z_{i})-I_{i})\\) via back-propagation, \\(\\forall i\\), with an uni-model normalization to \\(\\mathbf{z}\\). The above two steps are iterated to refine \\(W\\) and \\(\\mathbf{z}\\) alternatively. GLO holds the following advantages. * Direct training: First, GLO learns a generative network directly with no needs of other complemented networks. 
In GAN, a discriminator network is further used to form a two-player game for the generator learning. However, GANs easily suffer from the problem of unstable training. Though many modifications of GANs [2, 3, 18] have been proposed to address this issue, the training process of GANs is still relatively unstable compared to supervised training. On the contrary, GLO's training process is more akin to supervised training and thus it is easier to get stable results in our experience. Besides GAN, VAE also requires an additional encoder for the generator training. GLO can train the generator directly and thus consumes fewer training resources.

* Inverse mapping: A second advantage of GLO is its reconstruction capability. Assume that the generator has been trained, and thus \(W\) is known. Given an image \(I_{i}\), the latent code \(z_{i}\) that exactly generates \(I_{i}\) can be found via iterating step 2 of the above training process (with \(W\) fixed). Hence, the inverse mapping of the image \(I_{i}\) is available, unlike GAN, which can generate novel images but does not provide the codes that recover the original images. Although some approaches combining GAN and autoencoder (such as [3]) can find the latent codes via the encoder subnetwork of the autoencoder, the codes are obtained indirectly via a forward mapping and thus the recovery performance is not guaranteed. With the reconstruction capability, given an image patch cropped from the sea surface, GLO can thus find the latent code \(z\) that produces the same patch, which can be seamlessly put back to the sea surface; this suits our data-augmentation approach introduced later.

We extend GLO to C-GLO as follows. Unlike GLO, the latent space input is generalized to \((z,c)\) in C-GLO, where \(z\in R^{d}\) is the latent code and \(c\in R^{m}\) is a set of "on-off" labels. In this study, \(m=1\) since only a single condition (Fg or Bg) is required. The training images thus become \(\{(I_{1},c_{1}),\cdots,(I_{N},c_{N})\}\), where \(c_{i}\in\{0,1\}\) represents background and foreground, respectively. The training process of C-GLO is similar to that of GLO as follows: 1. Given \(\mathbf{z},\mathbf{c}\), find \(W\) to reduce the total reconstruction loss of \(\mathbf{I}\). 2. Given \(W,c_{i}\), find \(z_{i}\) to reduce the reconstruction loss of \(I_{i},\forall i\). The above two steps are executed iteratively. Figure 3 shows the architecture of the C-GLO adopted. Without loss of generality, we use the same de-convolution network as in DCGAN [15] as the architecture of our C-GLO in this work. C-GLO inherits the characteristics of GLO: it is easy to train and provides explicit latent codes for image reconstruction. The learned C-GLO can then be used to generate novel images of stingray (or sea-surface) via the condition \(c=1\) (or \(c=0\)) and the respective codes of \(z\).

Figure 3: Conditional GLO (C-GLO) introduced in this work.

### Mixed Bg-Fg Syntheses and object detector

To convert a given background patch to a mixed bg-fg one, we disentangle the condition label of the latent representation. Let \(\mathbf{\Phi}\) be a trained generator (with the weights \(W\)). Consider a background (\(c=0\)) image patch, say \(I_{i_{b}}\); let \(z_{i_{b}}\) be its inverse mapping (i.e., \(\mathbf{\Phi}(W;z_{i_{b}};c)=I_{i_{b}}\)). We then switch the condition label from \(c=0\) to \(c=1\) and keep the other parameters \(W,z_{i_{b}}\) unchanged. A minimal sketch of this alternating training and condition switch is given below.
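The following PyTorch-style sketch illustrates the alternating optimization of Section 3.1 and the condition switch described above. It is a simplified illustration under our own naming, optimizer and hyperparameter assumptions (the generator interface, loop structure and code-normalization choice are not taken from the authors' implementation); only the use of the \(L_{1}\) reconstruction loss follows the paper.

```python
import torch
import torch.nn.functional as F

# Assumed interface: generator(z, c) -> image patch, a DCGAN-style deconvolution network.
def train_cglo(generator, images, conditions, d=256, epochs=50, lr=1e-3):
    n = images.shape[0]
    z = torch.randn(n, d, requires_grad=True)          # one latent code per training patch
    opt_w = torch.optim.Adam(generator.parameters(), lr=lr)
    opt_z = torch.optim.Adam([z], lr=lr)
    for _ in range(epochs):
        # Step 1: fix z, update the generator weights W.
        opt_w.zero_grad()
        loss = F.l1_loss(generator(z.detach(), conditions), images)
        loss.backward()
        opt_w.step()
        # Step 2: fix W, update the latent codes z, then project them (one common GLO choice).
        opt_z.zero_grad()
        loss = F.l1_loss(generator(z, conditions), images)
        loss.backward()
        opt_z.step()
        with torch.no_grad():
            z.data = z.data / z.data.norm(dim=1, keepdim=True).clamp(min=1.0)
    return z

def synthesize_fg_from_bg(generator, z_bg):
    """Mixed bg-fg synthesis: keep W and z fixed, flip the condition from 0 to 1."""
    with torch.no_grad():
        return generator(z_bg, torch.ones(z_bg.shape[0], 1))
```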
By doing so, the sea surface patch specified by the latent code \\(z_{i_{b}}\\) is provided with a positive condition \\(c=1\\). It results in the effect that the sea surface patch \\(I_{i_{b}}\\) contains a stingray image inside it. The disentangled patch (with a synthesized stingray in it) can thus be put back to the entire sea-surface scene without noticeable artifact. The sea surface image are then augmented with more stingrays for training. Figure 2 gives some examples of the augmented samples. More examples can be found in Figure 4. We apply the augmented data to train an existing CNN detector, Faster R-CNN. Faster R-CNN contains three parts of networks, the feature-extraction network, region proposal network, and classification network. The architecture of the feature-extraction network can be flexibly chosen. In this work, we use two network models, ZF model [20] and VGG-16 model [19], as the architecture of the feature-extraction network. The two Faster R-CNNs are evaluated on our dataset with or without our data augmentation method for comparison. An overview of our approach is given in Figure 5. ## 4 Experiments In this section, we apply our mixed bg-fg synthesis approach to stingray detection and present the results. ### Dataset and Experimental Settings We have gotten a total of 36 labeled videos taken in the day time, recorded at 4k (3840\\(\\times\\)2160) resolution. The stingray images are sampled from the videos at 1fps or 4fps. Those images are composed of various components such as rocks, ripples, dust, and light reflections, and thus the stingrays are difficult to be detected even by human. We select 3245 images (from 16 videos) for training and 3147 images (from the rest 20 videos) for testing. All the images are re-scaled to 1920\\(\\times\\)1080 for learning because of the limited GPU memory (a single Nvidia Titan-X GPU is used in our experiment). In those images, there is only one object class (stingray) with the size within 30 to 350 pixels. Hence, the anchor-box parameters in Faster R-CNN are set to reflect the scales accordingly, while the other settings follow the default of Faster R-CNN. To train the C-GLO model, \\(L_{1}\\) loss is used and the output size is 64\\(\\times\\)64 pixels. For each training image, we crop the stingray patches as the positive samples and randomly crop sea-surface patches as the negative ones. After further augmentation via rotation and flipping of the stingrays, we finally use 30496 stingray patches and 7664 sea-surface patches for training Figure 4: Switch the condition in the latent space to convert a background patch to a mixed bg-fg patch via C-GLO with the dimension of latent code \\(d\\) = 256; upper: the original background patch; bottom: the synthetic patch. the C-GLO model. ### Data Augmentation Results We switch the condition of the trained GLO to generated the mixed bg-fg patches for data augmentation, as described in Section 3.2. Figure 4 shows several of our mixed bg-fg synthetic patches. It can be seen that our method can generate new stingray of various colors and shapes while keeping the same surroundings of the original patches. ### Detection Results We expect the detection capability of Faster R-CNN can be benefited from the data augmented by C-GLO. The detection results are reported in Table 1. It can be seen that the Average Precision (AP) can be improved by 4.15 and 2.02 percents when using ZF and VGG-16 as the base models for feature extraction in Faster R-CNN, respectively. 
\begin{table}
\begin{tabular}{l c c c c}
\hline
Network & Baseline & Ours-128 & Ours-256 & Ours-512 \\
\hline
ZF & 78.89 & 82.75 & 82.42 & **83.04** \\
\hline
VGG-16 & 84.59 & 86.14 & **86.61** & 86.43 \\
\hline
\end{tabular}
\end{table} Table 1: Average precision (AP) obtained via our augmentation method (\(d\) = 128, 256, 512) compared to the Faster R-CNN without augmentation

Figure 5: An overview of our approach.

In addition, there is only a slight difference in performance when changing the dimension of \(z\). This reveals that the detection capability of our approach is insensitive to the size of the latent space. Also, our approach is capable of generating diverse patches to augment the training dataset, which enables a more effective training of object detectors and improves the performance.

## 5 Conclusions

In this paper, we present a method to detect stingrays in aerial images. We introduce a data augmentation method called _mixed bg-fg synthesis_ to fuse background patches and foreground objects without apparent artifacts, which is achieved by a new generative network, C-GLO. The experimental results reveal that the object detection performance can be improved via our data augmentation method. The system developed in this work can help biologists to track and annotate stingrays automatically. Currently, our approach is based on images. In the future, we plan to extend our approach to video-based data augmentation and object detection.

## Acknowledgements

This work is supported in part by the projects MOST 107-2634-F-001-004 and MOST 106-2221-E-110-074.

## References

* [1] M. Arjovsky and L. Bottou. Towards principled methods for training generative adversarial networks. In _ICLR_, 2017.
* [2] M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein GAN. _arXiv preprint arXiv:1701.07875_, 2017.
* [3] D. Berthelot, T. Schumm, and L. Metz. BEGAN: Boundary equilibrium generative adversarial networks. _arXiv preprint arXiv:1703.10717_, 2017.
* [4] P. Bojanowski, A. Joulin, D. Lopez-Paz, and A. Szlam. Optimizing the latent space of generative networks. _arXiv preprint arXiv:1707.05776_, 2017.
* [5] J. Dai, Y. Li, K. He, and J. Sun. R-FCN: Object detection via region-based fully convolutional networks. In _Advances in neural information processing systems_, pages 379-387, 2016.
* [6] V. Dumoulin, I. Belghazi, B. Poole, A. Lamb, M. Arjovsky, O. Mastropietro, and A. Courville. Adversarially learned inference. _arXiv preprint arXiv:1606.00704_, 2016.
* [7] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Region-based convolutional networks for accurate object detection and segmentation. _IEEE transactions on pattern analysis and machine intelligence_, 38(1):142-158, 2016.
* [8] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In _NIPS_, 2014.
* [9] K. He, G. Gkioxari, P. Dollar, and R. Girshick. Mask R-CNN. In _Computer Vision (ICCV), 2017 IEEE International Conference on_, pages 2980-2988. IEEE, 2017.
* [10] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 770-778, 2016.
* [11] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. In _ICLR_, 2014.
* [12] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In _NIPS_, 2012.
* [13] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A.
C. Berg. Ssd: Single shot multibox detector. In _European conference on computer vision_, pages 21-37. Springer, 2016. * [14] L. Metz, B. Poole, D. Pfau, and J. Sohl-Dickstein. Unrolled generative adversarial networks. In _ICLR_, 2017. * [15] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In _ICLR_, 2016. * [16] J. Redmon and A. Farhadi. Yolo9000: Better, faster, stronger. In _Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on_, pages 6517-6525. IEEE, 2017. * [17] S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: towards real-time object detection with region proposal networks. _IEEE transactions on pattern analysis and machine intelligence_, 39(6):1137-1149, 2017. * [18] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training gans. In _NIPS_, 2017. * [19] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In _ICLR_, 2015. * [20] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In _Proceedings of the European Conference on Computer Vision_, pages 818-833, 2014.
In this paper, we present an object detection method that tackles the stingray detection problem based on aerial images. In this problem, the images are aerially captured on a sea-surface area by using an Unmanned Aerial Vehicle (UAV), and the stingrays swimming under (but close to) the sea surface are the target we want to detect and locate. To this end, we use a deep object detection method, faster RCNN, to train a stingray detector based on a limited training set of images. To boost the performance, we develop a new generative approach, conditional GLO, to increase the training samples of stingray, which is an extension of the Generative Latent Optimization (GLO) approach. Unlike traditional data augmentation methods that generate new data only for image classification, our proposed method that mixes foreground and background together can generate new data for an object detection task, and thus improve the training efficacy of a CNN detector. Experimental results show that satisfiable performance can be obtained by using our approach on stingray detection in aerial images.
# Adaptive Path Planning for UAV-based Multi-Resolution Semantic Segmentation Felix Stache\\({}^{*}\\) Jonas Westheider\\({}^{*}\\) Federico Magistri Marija Popovic Cyrill Stachniss \\({}^{*}\\): authors with equal contribution. All authors are with the University of Bonn, Germany. This work has been funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy, EXC-2070 - 390732324 (PhenoRob). ## I Introduction Unmanned aerial vehicles (UAVs) are experiencing a rapid uptake in a variety of aerial monitoring applications, including search and rescue [10], wildlife conservation [9], and precision agriculture [14, 15, 25]. They offer a flexible and easy to execute a way to monitor areas from a top-down perspective. Recently, the advent of deep learning has unlocked their potential for image-based remote sensing, enabling flexible, low-cost data collection and processing [3]. However, a key challenge is planning paths to efficiently gather the most useful data in large environments, while accounting for the constraints of physical platforms, e.g. on fuel/energy, as well as the on-board sensor properties. This paper examines the problem of deep learning-based semantic segmentation using UAVs and the exploitation of this information in path planning. Our goal is to adaptively select the next sensing locations above a 2D terrain to maximize the classification accuracy of objects or areas of interest seen in images, e.g. animals on grassland or crops on a field. This enables us to perform targeted high-resolution classification only where necessary and thus maximize the value of data gathered during a mission. Most data acquisition campaigns rely on coverage-based planning to generate UAV paths at a fixed flight altitude [2]. Although easily implemented, the main drawback of such methods is that they assume an even distribution of features in the target environment; mapping the entire area at a constant image spatial resolution governed by the altitude. Recent work has explored _informative planning_ for terrain mapping, whereby the aim is to maximize an information-theoretic mapping objective subject to platform constraints. However, these studies either consider 2D planning at a fixed altitude or apply simple heuristic predictive sensor models [14, 15, 10], which limits the applicability of future plans. A key challenge is reliably characterizing how the accuracy of segmented images varies with the altitude and relative scales of the objects in registered images. To address this, we propose a new adaptive planning algorithm that directly tackles the altitude dependency of the deep learning semantic segmentation model using UAV-based imagery. First, our approach leverages prior labeled terrain data to empirically determine how classification accuracy varies with altitude; we train a deep neural network with images obtained at different altitudes that we use to initialize our planning strategy. Based on this analysis, we develop a _decision function_ using Gaussian Process (GP) regression that is first initialized on a training field and then updated Fig. 1: A comparison of our proposed adaptive path planning strategy (top-left) against lawn-mower coverage planning (top-right) for UAV-based field segmentation, evaluated on the application of precision agriculture using real field data (bottom). 
By allowing the paths to change online, our approach enables selecting high-resolution (low-altitude) imagery in areas with more semantic detail, enabling higher-accuracy, fine-grained segmentation in these regions. online on a separate testing field during a mission as new images are received. For replanning, the UAV path is chosen according to the decision function and segmented images to obtain higher classification accuracy in more semantically detailed or interesting areas. This allows us to gather more accurate data in targeted areas without relying on a heuristic sensor model for informative planning. The contributions of this work are: (i) an online planning algorithm for UAVs that uses the semantic content of new images to adaptively map areas of finer detail with higher accuracy. (ii) A variable-altitude accuracy model for deep learning-based semantic segmentation architectures and its integration in our planning algorithm. (iii) The evaluation of our approach against state-of-the-art methods using real-world data from an agricultural field to demonstrate its performance. We note that, while this work targets the application of precision agriculture, our algorithm can be used in any other UAV-based semantic segmentation scenario, e.g., search and rescue [10], urban scene analysis, wetland assessment, etc. ## II Related Work There is an emerging body of literature addressing mission planning for UAV-based remote sensing. This section briefly reviews the sub-topics most related to our work. **UAV-based Semantic Segmentation:** The goal of semantic segmentation is to assign a predetermined class label to each pixel of an image. State-of-the-art approaches are predominantly based on convolutional neural networks (CNNs) and have been successfully applied to aerial datasets in various scenarios [20, 3, 12, 8, 21]. In the past few years, technological advancements have enabled efficient segmentation on board small UAVs with limited computing power. Nguyen et al. [12] introduced MAVNet, a light-weight network designed for real-time aerial surveillance and inspection. Sa et al. [20] and Deng et al. [5] proposed CNN methods to segment vegetation for smart farming using similar platforms. Our work shares the motivation of these studies; we adopt ERFNet [19] to perform efficient aerial crop/weed classification in agricultural fields. However, rather than flying predetermined paths for monitoring, as in previous studies, we focus on planning: we aim to exploit modern data processing capabilities to localize areas of interest and finer detail (e.g. high vegetation cover) online and steer the robot for adaptive, high-accuracy mapping in these regions. **Adaptive Path Planning:** Adaptive algorithms for active sensing allow an agent to replan online as measurements are collected during a mission to focus on application-specific interests. Several works have successfully incorporated adaptivity requirements within informative path planning problems. Here, the objective is to minimize uncertainty in target areas as quickly as possible, e.g. for exploration [24], underwater surface inspection [7], target search [10, 22, 23], and environmental sensing [23]. These problem setups differ from ours in several ways. First, they consider a probabilistic map to represent the entire environment, using a sensor model to update the map with new uncertain measurements. In contrast, our approach directly exploits the accuracy in semantic segmentation to drive adaptive planning. 
Second, they consider a predefined, i.e. non-adaptive, sensor model, whereas ours is adapted online according to the behavior of the semantic segmentation model. Very few works have considered planning based on semantic information. Bartolomei et al. [1] introduced a perception-aware planner for UAV pose tracking. Although, like us, they exploit semantics to guide the next UAV actions, their goal is to triangulate high-quality landmarks, whereas we aim to obtain accurate semantic segmentation in dense images. Dang et al. [4] and Meera et al. [10] study informative planning for target search using object detection networks. Most similar to our approach is that of Popovic et al. [16], which adaptively plans the 3D path of a UAV for terrain monitoring based on an empirical performance analysis of a SegNet-based architecture at different altitudes [20]. A key difference is that our decision function, representing the network accuracy, is not static. Instead, we allow it to change online and thus adapt to new unseen environments.

**Multi-Resolution:** An important trade-off in aerial imaging arises from the fact that spatial resolution degrades with increasing coverage, e.g. Pena et al. [13] show that there are optimal altitudes for monitoring plants based on their size. Relatively limited research has tackled this challenge in the contexts of semantic segmentation and planning. For mapping, Dang et al. [4] employ an interesting method for weighting distance measurements according to their resolution. Sadat et al. [22] propose an adaptive coverage-based strategy that assumes sensor accuracy increases with altitude. Other studies [25, 21] only consider fixed-altitude mission planning. We follow previous approaches that empirically assess the effects of multi-resolution observations for trained models [10, 16, 17]. Specifically, our contribution is a new decision function that supports online updates for more reliable predictive planning.

Fig. 2: Each time the UAV segments one region of the field, we decide if the UAV should follow its predefined path (right side) or if it should scout the same region at a lower altitude, i.e. obtaining images with high resolution (left side). In the second case, we update the decision strategy by comparing the segmentation results of the same regions at different altitudes.

## III Our Approach

The goal of this work is to maximize the accuracy of the semantic segmentation of RGB images taken by a camera on-board a UAV with a limited flight time. We propose a data-driven approach that uses information from incoming images to adapt an initial predefined UAV flight path online. The main idea behind our approach is to guide the UAV to take high-resolution images for fine-grained segmentation at lower altitudes (higher resolutions) only where necessary. As a motivating application, our problem setup considers a UAV monitoring an agricultural field to identify crops and weeds for precision treatment. We first divide the target field into non-overlapping regions and, for each, associate a waypoint in the 3D space above the field from which the camera footprint of the UAV covers the entire region. From these waypoints, we then define a lawn-mower coverage path that we use to bootstrap the adaptive strategy. Our strategy consists of two steps. First, at each waypoint along the lawn-mower path, we use a deep neural network to assign a semantic label to each pixel in the observed region (soil, crop, and weed in our selected use-case).
Second, based on the segmented output, we decide whether the current region requires more detailed re-observation at a higher image resolution, i.e., lower UAV altitude; otherwise, the UAV continues its pre-determined coverage path. A key aspect of our approach is a new data-driven decision function that enables the UAV to select a new altitude for higher-resolution images if they are needed. This decision function is updated adaptively during the mission by comparing the segmentation results of the current region at the different altitudes. This enables us to precisely capture the relationship between image resolution (altitude) and segmentation accuracy when planning new paths. Fig. 2 shows an overview of our planning strategy. In the following subsections, we describe the CNN for semantic segmentation and the path planning strategy, which consists of offline planning and online path adaptation. ### _Semantic Segmentation_ In this work, we consider the semantic segmentation of RGB images not only as of the final goal but also as a tool to define adaptive paths for re-observing given regions of the field. Each time the UAV reaches a waypoint, we perform a pixel-wise semantic segmentation to assign a label (crop, weed or soil in our case) to each pixel in the current view. We use the ERFNet [19] architecture provided by the Bonnetal framework [11] that allows for real-time inference. We train this neural network on RGB images collected at different altitudes to allow it to generalize across possible altitudes without the need for retraining. If the same region is observed by the camera from different altitudes, we preserve the results obtained with the highest resolution, assuming that higher-resolution images yield greater segmentation accuracy. ### _Path Planning: Basic Strategy_ The initial flight path is calculated based on the standard lawn-mower strategy [6]. Such a path enables covering the region of interest efficiently without any prior knowledge. We aim to adapt this path according to the non-uniform distribution of features in the field to improve semantic segmentation performance. For a desired region of interest, we define a lawn-mower path based on a series of waypoints. A waypoint is defined as a position \\(\\mathbf{w}_{i}\\) in the 3D UAV workspace where: (i) the UAV camera footprint does not overlap the footprints of any other waypoint; (ii) the UAV performs the semantic segmentation of its current field of view; (iii) the UAV decides to revise its path or to execute the path as previously determined; and (iv) we impose zero velocity and zero acceleration. The initial flight path is calculated in form of fixed waypoints at the highest altitude \\(\\mathbf{W}^{h\\text{\\tiny{max}}}=\\{\\mathbf{w}_{0},\\mathbf{w}_{1},\\dots,\\mathbf{w}_{n}\\}\\). If necessary, we modify this coarse plan by inserting further waypoints based on the new imagery as it arrives. At each waypoint \\(\\mathbf{w}_{i}\\), UAV decides either to follow the pre-determined path, i.e. moving to \\(\\mathbf{w}_{i+1}\\), or to inspect the current region more closely at a lower altitude. 
In the second case, we define a second series of waypoints, \(\mathbf{W}^{h^{\prime}}=\{\mathbf{w}_{0},\mathbf{w}_{1},\dots,\mathbf{w}_{n}\}\), at the desired altitude \(h^{\prime}\), which will be inserted before \(\mathbf{w}_{i+1}\in\mathbf{W}^{h_{\text{max}}}\) so that the resulting path, at the desired altitude, is a lawn-mower strategy covering the camera footprint from \(\mathbf{w}_{i}\in\mathbf{W}^{h_{\text{max}}}\).

### _Planning Strategy: Offline Initialization_

We develop a decision function that takes a given waypoint as input and outputs the next waypoint, either \(\mathbf{w}_{i+1}\in\mathbf{W}^{h_{\text{max}}}\) or \(\mathbf{w}_{0}\in\mathbf{W}^{h^{\prime}}\), given the segmentation result. In the case of an altitude change, our decision function also outputs the value of the desired altitude \(h^{\prime}\). To do this, we start by defining a vegetation ratio giving the number of pixels classified as vegetation (crop and weed) as a fraction of the total number of pixels in the image:

\[v=\frac{\sum_{c\in\{\text{crop, weed}\}}p_{c}}{p_{\text{tot}}}, \tag{1}\]

where \(p_{\text{tot}}\) is the total number of pixels and \(p_{c}\) is the number of pixels classified as \(c\). This vegetation ratio gives us a way to infer how valuable it is to spend time on the current region of the field. It captures the intuition that higher values of this ratio indicate more possible misclassifications between the crop and weed classes. To quantify such a relationship, we let the UAV run on a separate field, where we have access to ground truth data, segmenting regions of the field at different altitudes. Segmenting the same region of the field at different altitudes provides two pieces of information that we use to shape the decision function. On one hand, we have the difference between the altitudes from which we segment the field, \(\Delta h=h_{\text{max}}-h^{\prime}\). On the other hand, we have the difference between the vegetation ratios in the predicted segmentations, \(\Delta v=v_{h_{\text{max}}}-v_{h^{\prime}}\). At the same time, we can compare the vegetation ratio to the accuracy of the predicted segmentation by computing the mean intersection over union (mIoU), defined as the average over the classes \(C=\{\text{crop, weed, soil}\}\) of the ratio between the intersection of ground truth and predicted segmentation and the union of the same quantities:

\[\text{mIoU}=\frac{1}{|C|}\sum_{c\in C}\frac{\text{gt}_{c}\cap\text{prediction}_{c}}{\text{gt}_{c}\cup\text{prediction}_{c}}. \tag{2}\]

Again, we define the difference between the mIoUs at different altitudes as \(\Delta\text{mIoU}=\text{mIoU}_{h_{\text{max}}}-\text{mIoU}_{h^{\prime}}\). A minimal sketch of how \(v\) and the mIoU can be computed from a label map is given below.
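The sketch below illustrates Eq. (1) and Eq. (2) on integer label maps with NumPy; the class indices and function names are our own illustrative choices, not those of the authors' implementation.

```python
import numpy as np

SOIL, CROP, WEED = 0, 1, 2   # assumed class indices

def vegetation_ratio(labels):
    """Eq. (1): fraction of pixels predicted as crop or weed."""
    return np.isin(labels, [CROP, WEED]).mean()

def mean_iou(gt, pred, classes=(SOIL, CROP, WEED)):
    """Eq. (2): average of per-class intersection-over-union."""
    ious = []
    for c in classes:
        inter = np.logical_and(gt == c, pred == c).sum()
        union = np.logical_or(gt == c, pred == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious)) if ious else 0.0

# Example on a random 64 x 64 label map with 10% corrupted pixels:
rng = np.random.default_rng(0)
gt = rng.integers(0, 3, size=(64, 64))
pred = gt.copy()
pred[rng.random(gt.shape) < 0.1] = WEED
print(vegetation_ratio(pred), mean_iou(gt, pred))
```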
Our method thus considers two sets of observations, representing the relationships between the vegetation ratio and UAV altitude (\\(\\mathcal{O}\\)) and between vegetation ratio and mIoU (\\(\\mathcal{I}\\)) as follows: \\[\\mathcal{O}=\\begin{bmatrix}\\Delta v_{0}&\\Delta h_{0}\\\\ \\Delta v_{1}&\\Delta h_{1}\\\\ \\vdots\\\\ \\Delta v_{n}&\\Delta h_{n}\\end{bmatrix},\\qquad\\qquad\\mathcal{I}=\\begin{bmatrix} \\Delta v_{0}&\\Delta\\text{mIoU}_{0}\\\\ \\Delta v_{1}&\\Delta\\text{mIoU}_{1}\\\\ \\vdots\\\\ \\Delta v_{n}&\\Delta\\text{mIoU}_{n}\\end{bmatrix}.\\] While both sets are initialized offline, we only update \\(\\mathcal{O}\\) online given that \\(\\mathcal{I}\\) requires access to ground truth that is clearly not available on testing fields. We fit both sets of observations using GP regression [18]. A GP for a function \\(f(x)\\) is defined by a mean function \\(m(x)\\) and a covariance function \\(k(x_{i},\\,x_{j})\\): \\[f(x)\\thicksim GP(m(x),k(x_{i},x_{j})). \\tag{3}\\] A common choice is to set the mean function \\(m(x)=0\\) and to use the squared exponential covariance function: \\[k(x_{i},x_{j}) = \\zeta_{f}^{2}\\text{exp}\\left(-\\frac{1}{2}\\frac{|x_{i}-x_{j}|^{2}} {\\ell^{2}}\\right)+\\zeta_{n}^{2}, \\tag{4}\\] where \\(\\theta=\\{\\ell,\\,\\zeta_{f}^{2},\\,\\zeta_{n}^{2}\\}\\) are the model hyperparameters and represent respectively the length scale \\(\\ell\\), the variance of the output \\(\\zeta_{f}^{2}\\) and of the noise \\(\\zeta_{n}^{2}\\). Typically, the hyperparameters are learned from the training data by maximizing the log marginal likelihood. Given a set of observations \\(y\\) of \\(f\\) for the inputs \\(\\mathbf{X}\\) (i.e. our sets \\(\\mathcal{O}\\), \\(\\mathcal{I}\\)), GP regression allows for learning a predictive model of \\(f\\) at the query inputs \\(\\mathbf{X}_{*}\\) by assuming a joint Gaussian distribution over the samples. The predictions at \\(\\mathbf{X}_{*}\\) are represented by the predictive mean \\(\\mu_{*}\\) and variance \\(\\sigma_{*}^{2}\\) defined as: \\[\\begin{split}\\mu_{*}&=\\mathbf{K}(\\mathbf{X}_{*},\\mathbf{X})\\, \\mathbf{K}_{\\mathrm{XX}}^{-1}\\,y,\\\\ \\sigma_{*}^{2}&=\\mathbf{K}(\\mathbf{X}_{*},\\mathbf{X}_{*})\\,-\\mathbf{K}( \\mathbf{X}_{*},\\mathbf{X})\\,\\mathbf{K}_{\\mathrm{XX}}^{-1}\\,\\mathbf{K}(\\mathbf{X},\\mathbf{X}_{*}),\\end{split} \\tag{5}\\] where \\(\\mathbf{K}_{\\mathrm{XX}}=\\mathbf{K}(\\mathbf{X},\\mathbf{X})+\\varsigma_{n}^{2}\\,\\mathbf{I}\\), and \\(\\mathbf{K}(\\cdot,\\,\\cdot)\\) are matrices constructed using the covariance function \\(k(\\cdot,\\cdot)\\) evaluated at the training and test inputs, \\(\\mathbf{X}\\) and \\(\\mathbf{X}_{*}\\). In the following, we will use the ground sampling distance (GSD) to identify the image resolution (thus the UAV altitude) from which the UAV performs the semantic segmentation. The GSD is defined as: \\(\\text{GSD}=\\frac{hS_{w}}{fI_{w}}\\), where \\(h\\) is the UAV altitude, \\(S_{w}\\) the sensor width of the camera in mm, \\(f\\) the focal length of the camera in mm and \\(I_{w}\\) the image width in pixels. ### _Planning Strategy: Online Adaptation_ To adapt the UAV behavior online to fit the differences between the testing and training fields, we update the GP defined by the set \\(\\mathcal{O}\\) in the following way. In the testing field, each time the UAV decides to change altitude to a lower one, we compute a new pair \\(\\Delta v^{\\prime},\\Delta h^{\\prime}\\) and re-compute the GP output as defined in Eq. (3). 
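As an illustration of how the decision function can be fit and queried with Eqs. (3)-(5), the following sketch implements GP regression with the squared exponential kernel in NumPy. The hyperparameter values and observation numbers are placeholders (in practice the hyperparameters are learned by maximizing the log marginal likelihood), and regressing the altitude difference on the vegetation-ratio difference is one plausible reading of the observation set \(\mathcal{O}\), not a statement of the authors' exact setup.

```python
import numpy as np

def sq_exp_kernel(xa, xb, length_scale=0.1, sigma_f=1.0):
    """Squared exponential covariance, Eq. (4), without the noise term."""
    d2 = (xa[:, None] - xb[None, :]) ** 2
    return sigma_f ** 2 * np.exp(-0.5 * d2 / length_scale ** 2)

def gp_predict(x_train, y_train, x_query, length_scale=0.1,
               sigma_f=1.0, sigma_n=0.05):
    """Predictive mean and variance, Eq. (5)."""
    k_xx = sq_exp_kernel(x_train, x_train, length_scale, sigma_f) \
           + sigma_n ** 2 * np.eye(len(x_train))
    k_sx = sq_exp_kernel(x_query, x_train, length_scale, sigma_f)
    k_ss = sq_exp_kernel(x_query, x_query, length_scale, sigma_f)
    k_inv = np.linalg.inv(k_xx)
    mean = k_sx @ k_inv @ y_train
    cov = k_ss - k_sx @ k_inv @ k_sx.T
    return mean, np.diag(cov)

# Decision-function example: observations (delta_v, delta_h) gathered so far (made-up values).
delta_v = np.array([0.02, 0.05, 0.12, 0.20])   # vegetation-ratio differences
delta_h = np.array([2.0, 5.0, 9.0, 12.0])      # altitude differences in metres
mu, var = gp_predict(delta_v, delta_h, np.array([0.08]))
print(mu, var)   # predicted altitude change for a new vegetation-ratio difference
```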
## IV Experimental Results

We validate our proposed algorithm for online adaptive path planning on the application of UAV-based crop/weed semantic segmentation. The goal of our experiments is to demonstrate the benefits of using our adaptive strategy to maximize segmentation accuracy in missions while keeping a low execution time. Specifically, we show results to support two key claims: our online adaptive algorithm can (i) map high-interest regions with higher accuracy and (ii) improve segmentation accuracy while keeping a low execution time with respect to the baselines described in Sec. IV-B.

Fig. 3: We use three different fields from the WeedMap dataset [21] to train a CNN for semantic segmentation and one field to initialize the planning strategy. The remaining field is used for evaluation. For an extensive evaluation of our approach, we swap the roles of the fields so that we test our algorithm on each field once.

Fig. 4: The averaged results for the testing fields from the WeedMap dataset. The blue square lies to the left of all performances with a linear decision function, indicating some performance improvement.

### _Dataset_

To evaluate our approach, we use the WeedMap dataset [21]. It consists of 8 different fields collected with two different sensors having different channels; it also provides pixel-wise semantic segmentation labels for each of the 8 fields. In this study, we focus only on the 5 fields having RGB information. We split the 5 fields into training and testing sets (Fig. 3). One of the training fields is used to initialize the decision function that shapes altitude selection in the adaptive strategy, as described in Sec. III-D. For each experiment in the following sub-sections, we test our approach and the baselines, see Sec. IV-B, on each field once, and then report the average over all runs.

### _Baselines_

To evaluate our proposed approach, we compare it against two main baselines. The first one is the standard lawn-mower strategy, where a UAV covers the entire field at the same altitude; for this strategy, we consider five different altitudes resulting in \(\text{GSD}\in\{1.0,1.5,2.0,2.5,3.0\}\frac{\text{cm}}{\text{px}}\). The lawn-mower strategy with a fixed GSD of \(3.0\frac{\text{cm}}{\text{px}}\) corresponds to the initial plan for our strategy described in Sec. III-B. The second baseline is defined by _only_ initializing the UAV behavior as described in Sec. III-C and without adapting the strategy online using the decision function as new segmentations arrive. We refer to this strategy as "Non Adaptive". This benchmark allows us to study the benefit of adaptivity obtained by using our proposed approach ("Adaptive").

### _Metrics_

Our evaluation consists of two main criteria: segmentation accuracy and mission execution time. For execution time, we compute the total time taken by the UAV to survey the whole field, including the time needed to move between waypoints, segment a new image, and plan the next path. To assess the quality of the semantic segmentation, we use the mIoU metric defined in Eq. (2).

### _Field Segmentation Accuracy vs Execution Time_

The first experiment is designed to show that our proposed strategy obtains higher accuracy while keeping a low execution time. We show such results in Fig. 4. For each strategy, we compute the mIoU (over the entire field) and the execution time needed by the UAV to complete its path.
The adaptive strategy crosses the line defined by the lawn-mower strategies at different altitudes, meaning that it can achieve better segmentation accuracy while keeping a lower execution time. The non-adaptive strategy instead lies under the curve, failing to overtake the lawn-mower strategy. We plot exemplary paths resulting from the different strategies in Fig. 5; on the left, we show the lawn-mower strategy with altitudes corresponding to GSDs of \\(1.0\\frac{\\text{cm}}{\\text{px}}\\) and \\(3.0\\frac{\\text{cm}}{\\text{px}}\\), while the middle and right plots show the paths resulting from the non-adaptive and adaptive strategies, respectively. ### _Per-Image Segmentation Accuracy vs Altitude_ The second experiment shows the ability of our approach to achieve targeted semantic segmentation when compared to the non-adaptive strategy. At this stage, we compute the mIoU for each image that contributes to the final segmentation of the whole field. This gives us a way to evaluate the efficiency of our adaptation strategy. We then visualize the mean and standard deviation. As can be seen in Fig. 6, our adaptive strategy provides higher per-image accuracies when the UAV is scouting the field at low altitudes. This entails that, with our strategy, the UAV invests time resources in a more productive manner. We show a qualitative comparison of the per-image semantic masks in Fig. 7. ## V Conclusion In this paper, we presented a new approach for efficient multi-resolution mapping using UAVs for semantic segmentation. We exploit prior knowledge and newly incoming segmentations to obtain a decision function whose shape produces a flight path leading to a performance gain in terms of segmentation accuracy while keeping a comparatively short execution time. The resulting map has different resolutions, depending on the information content of the corresponding area. We believe that our approach opens a direction for efficient UAV mapping purposes, especially in precision agriculture. Further investigations on less homogeneous field structures are to be carried out to refine the approach. Fig. 5: Visual comparison of trajectories traveled by the UAV over a field using different planning strategies. The coverage paths (left) are restricted to fixed heights and cannot map targeted areas of interest. The linear decision function (middle) enables adaptive planning, but it is continuous with respect to altitude and leads to sudden jumps. Our adaptive approach overcomes this issue, leaving the path less often and more purposefully at selected heights for more efficient mapping. The black spheres indicate measurement points. ## References * [1] L. Bartolomei, L. Teixeira, and M. Chli. Perception-aware path planning for UAVs using semantic segmentation. In _Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS)_. IEEE, 2020. * [2] T. Cabreira, L. Brisolara, and P.R. Ferreira Jr. Survey on coverage path planning with unmanned aerial vehicles. _Drones_, 3(1), 2019. * [3] A. Carrio, C. Sampedro Perez, A. Rodriguez Ramos, and P. Campoy. A review of deep learning methods and applications for unmanned aerial vehicles. _Journal of Sensors_, 2017:1-13, 2017. * [4] T. Dang, C. Papachristos, and K. Alexis. Autonomous exploration and simultaneous object search using aerial robots. In _IEEE Aerospace Conference_, 2018. * [5] J. Deng, Z. Zhong, H. Huang, Y. Lan, Y. Han, and Y. Zhang. Lightweight Semantic Segmentation Network for Real-Time Weed Mapping Using Unmanned Aerial Vehicles.
_Applied Sciences_, 10(20), 2020. * [6] E. Galceran and M. Carreras. A survey on coverage path planning for robotics. _Robotics and Autonomous Systems_, 61(12):1258-1276, 2013. * [7] G.A. Hollinger, B. Englot, F.S. Hover, U. Mitra, and G.S. Sukhatme. Active planning for underwater inspection and the benefit of adaptivity. _Intl. Journal of Robotics Research (IJRR)_, 32(1):3-18, 2013. * 119, 2020. * [9] S. Manfreda, M.F. McCabe, P.E. Miller, R. Lucas, V. Pajuelo Madrigal, G. Mallinis, E. Ben Dor, D. Helman, L. Estes, G. Ciraolo, J. Mullerova, F. Tauro, M.I. De Lima, J.L.M.P. De Lima, A. Maltese, F. Frances, K. Caylor, M. Kohv, M. Perks, G. Ruiz-Perez, Z. Su, G. Vico, and B. Toth. On the Use of Unmanned Aerial Systems for Environmental Monitoring. _Remote Sensing_, 10(4), 2018. * [10] A.A. Meera, M. Popovic, A. Millane, and R. Siegwart. Obstacle-aware Adaptive Informative Path Planning for UAV-based Target Search. In _Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA)_, pages 718-724, 2019. * [11] A. Milioto, L. Mandtler, and C. Stachniss. Fast Instance and Semantic Segmentation Exploiting Local Connectivity, Metric Learning, and One-Shot Detection for Robotics. In _Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA)_, Montreal, QC, Canada, 2019. * [12] T. Nguyen, S.S. Shivakumar, I.D. Miller, J. Keller, E.S. Lee, A. Zhou, T. Ozaslan, G. Loianno, J.H. Harwood, J. Wozencraft, C.J. Taylor, and V. Kumar. MAVNet: An Effective Semantic Segmentation Network for MAV-Based Tasks. _IEEE Robotics and Automation Letters (RA-L)_, 4(9):398-3915, 2019. * [13] J.M. Pena, J. Torres-Sanchez, A. Serrano-Perez, A.I. De Castro, and F. Lopez-Granados. Quantifying Efficacy and Limits of Unmanned Aerial Vehicle (UAV) Technology for Weed Seedling Detection as Affected by Sensor Resolution. _Sensors_, 15(3), 2015. * [14] M. Popovic, G. Hitz, J. Nieto, I. Sa, R. Siegwart, and E. Galceran. Online Informative Path Planning for Active Classification Using UAVs. In _Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA)_, Singapore, 2017. * [15] M. Popovic, T. Vidal-Calleja, G. Hitz, I. Sa, R. Siegwart, and J. Nieto. Multiresolution Mapping and Informative Path Planning for UAV-based Terrain Monitoring. In _Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS)_, Vancouver, BC, Canada, 2017. * [16] M. Popovic, T. Vidal-Calleja, G. Hitz, J.J. Chung, I. Sa, R. Siegwart, and J. Nieto. An informative path planning framework for UAV-based terrain monitoring. _Autonomous Robots_, 44(6):889-911, 2020. * [17] L. Qingqing, J. Taipalma, J.P. Queralta, T.N. Gia, M. Gabbouj, H. Tenhunen, J. Raitoharju, and T. Westerlund. Towards Active Vision with UAVs in Marine Search and Rescue: Analyzing Human Detection at Variable Altitudes. In _IEEE International Symposium on Safety, Security, and Rescue Robotics_, pages 65-70, 2020. * [18] C.E. Rasmussen and C.K.I. Williams. _Gaussian Processes for Machine Learning_. MIT Press, Cambridge, MA, 2006. * [19] E. Romera, J.M. Alvarez, L.M. Bergasa, and R. Arroyo. ERFNet: Efficient residual factorized convnet for real-time semantic segmentation. _IEEE Transactions on Intelligent Transportation Systems_, 19(1):263-272, 2017. * [20] I. Sa, Z. Chen, M. Popovic, R. Khanna, F. Liebisch, J. Nieto, and R. Siegwart. weedNet: Dense semantic weed classification using multispectral images and MAV for smart farming. _IEEE Robotics and Automation Letters (RA-L)_, 3(1):588-595, 2018. * [21] I. Sa, M. Popovic, R. Khanna, Z. Chen, P. Lottes, F. Liebisch, J. Nieto, C.
Stachniss, A. Walter, and R. Siegwart. WeedMap: A large-scale semantic weed mapping framework using aerial multispectral imaging and deep neural network for precision farming. _Remote Sensing_, 10, 2018. * [22] S.A. Sadat, J. Wawerla, and R. Vaughan. Fractal trajectories for online non-uniform aerial coverage. In _Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA)_, pages 2971-2976, 2015. * [23] A. Singh, A. Krause, and W.J. Kaiser. Nonmyopic Adaptive Informative Path Planning for Multiple Robots. In _International Joint Conference on Artificial Intelligence_, pages 1843-1850, 2009. * [24] C. Stachniss, G. Grisetti, and W. Burgard. Information Gain-based Exploration Using Rao-Blackwellized Particle Filters. In _Proc. of Robotics: Science and Systems (RSS)_, pages 65-72, Cambridge, MA, USA, 2005. * [25] K.C. Vivaldini, T.H. Martinelli, V.C. Guizilini, J.R. Souza, M.D. Oliveira, F.T. Ramos, and D.F. Wolf. UAV route planning for active disease classification. _Autonomous Robots_, 43(5):1137-1153, 2019. Fig. 6: Mean and standard deviation of the per-image statistics for semantic segmentation. Our adaptive strategy leads to better performance when scouting the field at low altitudes. Fig. 7: Qualitative field segmentation results using the non-adaptive strategy (left) and the proposed adaptive strategy using our decision function (right) for path planning. The circled details demonstrate that our adaptive planning approach enables targeted high-resolution segmentation to better capture finer plant details.
In this paper, we address the problem of adaptive path planning for accurate semantic segmentation of terrain using unmanned aerial vehicles (UAVs). The use of UAVs for terrain monitoring and remote sensing is rapidly gaining momentum due to their high mobility, low cost, and flexible deployment. However, a key challenge is planning missions to maximize the value of acquired data in large environments given flight time limitations. To address this, we propose an online planning algorithm which adapts the UAV paths to obtain high-resolution semantic segmentation in terrain areas with fine details, as these are detected in incoming images. This enables us to perform close inspections at low altitudes only where required, without wasting energy on exhaustive mapping at maximum resolution. A key feature of our approach is a new accuracy model for deep learning-based architectures that captures the relationship between UAV altitude and semantic segmentation accuracy. We evaluate our approach on the application of crop/weed segmentation in precision agriculture using real-world field data.
Condense the content of the following passage.
193
arxiv-format/2005_01752v2.md
# Fitting Laplacian Regularized Stratified Gaussian Models Jonathan Tuck Stephen Boyd ## 1 Introduction We observe data records of the form \\((z,y)\\), where \\(y\\in\\mathbf{R}^{n}\\) and \\(z\\in\\{1,\\ldots,K\\}\\). We model \\(y\\) as samples from a zero-mean Gaussian distribution, conditioned on \\(z\\), _i.e._, \\[y\\mid z\\sim\\mathcal{N}(0,\\Sigma_{z}),\\] with \\(\\Sigma_{z}\\in\\mathbf{S}^{n}_{++}\\) (the set of symmetric positive definite \\(n\\times n\\) matrices), \\(z=1,\\ldots,K\\). Our goal is to estimate the model parameters \\(\\Sigma=(\\Sigma_{1},\\ldots,\\Sigma_{K})\\in(\\mathbf{S}^{n}_{++})^{K}\\) from the data. We refer to this as a stratified Gaussian model, since we have a different Gaussian model for \\(y\\) for each value of the stratification feature \\(z\\). Estimating a set of covariance matrices is referred to as joint covariance estimation. The negative log-likelihood of \\(\\Sigma\\), on an observed data set \\((z_{i},y_{i})\\), \\(i=1,\\ldots,m\\), is given by \\[\\sum_{i=1}^{m}\\left((1/2)y_{i}^{T}\\Sigma_{z_{i}}^{-1}y_{i}-(1/2)\\log\\det(\\Sigma_{z_{i}}^{-1})+(n/2)\\log(2\\pi)\\right)\\] \\[=\\sum_{k=1}^{K}\\left((n_{k}/2)\\operatorname{\\mathbf{Tr}}(S_{k}\\Sigma_{k}^{-1})-(n_{k}/2)\\log\\det(\\Sigma_{k}^{-1})+(n_{k}n/2)\\log(2\\pi)\\right),\\] where \\(n_{k}\\) is the number of data samples with \\(z=k\\) and \\(S_{k}=\\frac{1}{n_{k}}\\sum_{i:z_{i}=k}y_{i}y_{i}^{T}\\) is the empirical covariance matrix of \\(y\\) for which \\(z=k\\), with \\(S_{k}=0\\) when \\(n_{k}=0\\). This function is in general not convex in \\(\\Sigma\\), but it is convex in the _natural parameter_ \\[\\theta=(\\theta_{1},\\ldots,\\theta_{K})\\in(\\mathbf{S}_{++}^{n})^{K},\\] where \\(\\theta_{k}=\\Sigma_{k}^{-1}\\), \\(k=1,\\ldots,K\\). We will focus on estimating \\(\\theta\\) rather than \\(\\Sigma\\). In terms of \\(\\theta\\), and dropping a constant and a factor of two, the negative log-likelihood is \\[\\ell(\\theta)=\\sum_{k=1}^{K}\\ell_{k}(\\theta_{k}),\\] where \\[\\ell_{k}(\\theta_{k})=n_{k}\\left(\\mathbf{Tr}(S_{k}\\theta_{k})-\\log\\det(\\theta_{k})\\right).\\] We refer to \\(\\ell(\\theta)\\) as the loss, and \\(\\ell_{k}(\\theta_{k})\\) as the local loss, associated with \\(z=k\\). For the special case where \\(n_{k}=0\\), we define \\(\\ell_{k}(\\theta_{k})\\) to be zero if \\(\\theta_{k}\\succ 0\\), and \\(\\infty\\) otherwise. We refer to \\(\\ell(\\theta)/m\\) as the average loss. To estimate \\(\\theta\\), we add two types of regularization to the loss, and minimize the sum. We choose \\(\\theta\\) as a solution of \\[\\text{minimize}\\ \\ \\sum_{k=1}^{K}\\left(\\ell_{k}(\\theta_{k})+r(\\theta_{k})\\right)+\\mathcal{L}(\\theta), \\tag{1}\\] where \\(\\theta\\) is the optimization variable, \\(r:\\mathbf{S}^{n}\\rightarrow\\mathbf{R}\\) is a local regularization function, and \\(\\mathcal{L}:(\\mathbf{S}^{n})^{K}\\rightarrow\\mathbf{R}\\) is Laplacian regularization, defined below. We refer to our estimated \\(\\theta\\) as a Laplacian regularized stratified Gaussian model. Local regularization. Common types of local regularization include trace regularization, \\(r(\\theta_{k})=\\gamma\\operatorname{\\mathbf{Tr}}\\theta_{k}\\), and Frobenius regularization, \\(r(\\theta_{k})=\\gamma\\|\\theta_{k}\\|_{F}^{2}\\), where \\(\\gamma>0\\) is a hyper-parameter.
Two more recently introduced local regularization terms are \\(\\gamma\\|\\theta_{k}\\|_{1}\\) and \\(\\gamma\\|\\theta_{k}\\|_{\\mathrm{od},1}=\\gamma\\sum_{i\\neq j}|(\\theta_{k})_{ij}|\\), which encourage sparsity of \\(\\theta\\) and of the off-diagonal elements of \\(\\theta\\), respectively [10]. (A zero entry in \\(\\theta_{k}\\) means that the associated components of \\(y\\) are conditionally independent, given the others, when \\(z=k\\) [1].) Laplacian regularization. Let \\(W\\in\\mathbf{S}^{K}\\) be a symmetric matrix with zero diagonal entries and nonnegative off-diagonal entries. The associated Laplacian regularization is the function \\(\\mathcal{L}:(\\mathbf{S}^{n})^{K}\\rightarrow\\mathbf{R}\\) given by \\[\\mathcal{L}(\\theta)=\\frac{1}{2}\\sum_{i,j=1}^{K}W_{ij}\\|\\theta_{i}-\\theta_{j}\\|_{F}^{2}.\\] Evidently \\(\\mathcal{L}\\) is separable across the entries of its arguments; it can be expressed as \\[\\mathcal{L}(\\theta)=\\frac{1}{2}\\sum_{u,v=1}^{n}\\left(\\sum_{i,j=1}^{K}W_{ij}((\\theta_{i})_{uv}-(\\theta_{j})_{uv})^{2}\\right).\\] Laplacian regularization encourages the estimated values of \\(\\theta_{i}\\) and \\(\\theta_{j}\\) to be close when \\(W_{ij}>0\\). Roughly speaking, we can interpret \\(W_{ij}\\) as prior knowledge about how close the data generation processes for \\(y\\) are, for \\(z=i\\) and \\(z=j\\). We can associate the Laplacian regularization with a graph with \\(K\\) vertices, which has an edge \\((i,j)\\) for each positive \\(W_{ij}\\), with weight \\(W_{ij}\\). We refer to this graph as the regularization graph. We assume that the regularization graph is connected. We can express Laplacian regularization in terms of a (weighted) Laplacian matrix \\(L\\), given by \\[L_{ij}=\\left\\{\\begin{array}{ll}-W_{ij}&i\\neq j\\\\ \\sum_{k=1}^{K}W_{ik}&i=j\\end{array}\\right.\\] for \\(i,j=1,\\ldots,K\\). The Laplacian regularization can be expressed in terms of \\(L\\) as \\[\\mathcal{L}(\\theta)=(1/2)\\operatorname{\\mathbf{Tr}}(\\theta^{T}(I\\otimes L)\\theta),\\] where \\(\\otimes\\) denotes the Kronecker product. Assumptions. We note that (1) need not have a unique solution, in pathological cases. As a simple example, consider the case with \\(r=0\\) and \\(W=0\\), _i.e._, no local regularization and no Laplacian regularization, which corresponds to independently creating a model for each value of \\(z\\). If all \\(S_{k}\\) are positive definite, the solution is unique, with \\(\\theta_{k}=S_{k}^{-1}\\). If any \\(S_{k}\\) is not positive definite, the problem does not have a unique solution. The presence of either local or Laplacian regularization (with the associated graph being connected) can ensure that the problem has a unique solution. For example, with trace regularization (and \\(\\gamma>0\\)), it is readily shown that the problem (1) has a unique solution. Another elementary condition that guarantees a unique solution is that the associated graph is connected, and the \\(S_{k}\\) do not have a common nullspace. We will henceforth assume that the problem (1) has a unique solution. This implies that the objective in (1) is closed, proper, and convex. The problem (1) is a convex optimization problem which can be solved globally in an efficient manner [20, 21]. Contributions. Joint covariance estimation and Laplacian regularized stratified model fitting are not new ideas; in this paper we simply bring them together.
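Concretely, the objective in (1) combines the local losses, the local regularization, and the Laplacian term. A minimal sketch of its evaluation on hypothetical data (with trace regularization as the local term) is given below; all data and weights are illustrative placeholders.

```python
import numpy as np

def local_loss(theta_k, S_k, n_k):
    """n_k * (Tr(S_k theta_k) - log det theta_k); groups with no samples contribute no loss."""
    if n_k == 0:
        return 0.0
    sign, logdet = np.linalg.slogdet(theta_k)
    assert sign > 0, "theta_k must be positive definite"
    return n_k * (np.trace(S_k @ theta_k) - logdet)

def objective(thetas, S, counts, W, gamma):
    """Objective of problem (1) with trace regularization r(theta) = gamma * Tr(theta)."""
    K = len(thetas)
    loss = sum(local_loss(thetas[k], S[k], counts[k]) for k in range(K))
    local_reg = gamma * sum(np.trace(thetas[k]) for k in range(K))
    lap = 0.5 * sum(W[i, j] * np.linalg.norm(thetas[i] - thetas[j]) ** 2
                    for i in range(K) for j in range(K))
    return loss + local_reg + lap

def emp_second_moment(Y):
    """S_k = (1/n_k) * sum_i y_i y_i^T, with samples in the rows of Y."""
    return Y.T @ Y / len(Y)

# Hypothetical example with K = 3 groups on a path graph and n = 4.
rng = np.random.default_rng(0)
K, n = 3, 4
counts = [30, 0, 12]
S = [emp_second_moment(rng.standard_normal((max(c, 2), n))) for c in counts]
thetas = [np.eye(n) for _ in range(K)]
W = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
print(objective(thetas, S, counts, W, gamma=0.1))
```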
Laplacian regularization has been shown to work well in conjunction with stratified models, allowing one with very little data to create sensible models for each value of some stratification parameter [20, 21]. To our knowledge, this is the first paper that has explicitly framed joint covariance estimation as a stratified model fitting problem. We develop and implement a large-scale distributed method for Laplacian regularized joint covariance estimation via the alternating direction method of multipliers (ADMM), which scales to large-scale data sets [1, 20]. Outline. In §1, we introduce Laplacian regularized stratified Gaussian models and review work related to fitting Laplacian regularized stratified Gaussian models. In §2, we develop and analyze a distributed solution method to fit Laplacian regularized stratified Gaussian models, based on ADMM. Lastly, in §3, we illustrate the efficacy of this model fitting technique and of this method with three examples, in finance, radar signal processing, and weather forecasting. ### Related work Stratified model fitting. Stratified model fitting, _i.e._, separately fitting a different model for each value of some parameter, is an idea widely used across disciplines. For example, in medicine, patients are often divided into subgroups based on age and sex, and one fits a separate model for the data from each subgroup [12, 13]. Stratification can be useful for dealing with categorical feature values, interpreting the nature of the data, and can play a large role in experiment design. As mentioned previously, the joint covariance estimation problem can naturally be framed as a stratified model fitting problem. Covariance matrix estimation. Covariance estimation applications span disciplines such as radar signal processing [10], statistical learning [1], finance [1, 2], and medicine [14]. Many techniques exist for the estimation of a single covariance matrix when the covariance matrix's structure is known _a priori_ [11]. When the covariance matrix is sparse, thresholding the elements of the sample covariance matrix has been shown to be an effective method of covariance matrix estimation [1]. [2] propose a maximum likelihood solution for a covariance matrix that is the sum of a Hermitian positive semidefinite matrix and a multiple of the identity. Maximum likelihood-style approaches also exist for when the covariance matrix is assumed to be Hermitian, Toeplitz, or both [1, 13, 12]. [2] propose using various shrinkage estimators when the data is high dimensional. (Shrinkage parameters are typically chosen by an out-of-sample validation technique [14].) Joint covariance estimation. Jointly estimating statistical model parameters has been the subject of significant research spanning different disciplines. The joint graphical lasso [15] is a stratified model that encourages closeness of parameters by their difference as measured by fused lasso and group lasso penalties. (Laplacian regularization penalizes their difference by the \\(\\ell_{2}\\)-norm squared.) The joint graphical lasso penalties in effect result in groups of models with the same parameters, and those parameters being sparse. (In contrast, Laplacian regularization leads to parameter values that vary smoothly with nearby models. It has been observed that in most practical settings, Laplacian regularization is sufficient for accurate estimation [13].)
Similar to the graphical lasso, methods such as the time-varying graphical lasso [1] and the network lasso [14] have been recently developed to infer model parameters in graphical networks assuming some graphical relationship (in the former, the relationship is in time; in the latter, the relationship is arbitrary). Another work closely related to this paper is [14], which introduces the use of Laplacian regularization in joint estimation of covariance matrices in a zero-mean multivariate Gaussian model. In that paper, Laplacian regularization is used assuming a grid structure, and the problem is solved using the majorization-minimization algorithmic framework [1]. In contrast, this paper assumes a much more complex and sophisticated structure of the system, and uses ADMM to solve the problem much more efficiently. Connection to probabilistic graphical models. There is a significant connection of this work to probabilistic graphical models [13]. In this connection, a stratified model for joint model parameter estimation can be seen as an undirected graphical model, where the vertices follow different distributions, and the edges encourage corresponding vertices' distributions to be alike. In fact, very similar problems in atmospheric science, medicine, and statistics have been studied in this context [1, 1, 12, 13]. ## 2 Distributed solution method There are many methods that can be used to solve (1); for example, ADMM [1] has been successfully used in the past as a large-scale, distributed method for stratified model fitting with Laplacian regularization [14], which we will adapt for use in this paper. This method expresses minimizing (1) in the equivalent form \\[\\begin{array}{ll}\\text{minimize}&\\sum\\limits_{k=1}^{K}\\left(\\ell_{k}(\\theta_{k})+r(\\widetilde{\\theta}_{k})\\right)+\\mathcal{L}(\\widehat{\\theta})\\\\ \\text{subject to}&\\theta-\\widehat{\\theta}=0,\\quad\\widetilde{\\theta}-\\widehat{\\theta}=0,\\end{array} \\tag{2}\\] now with variables \\(\\theta\\in(\\mathbf{S}_{++}^{n})^{K}\\), \\(\\widetilde{\\theta}\\in(\\mathbf{S}_{++}^{n})^{K}\\), and \\(\\widehat{\\theta}\\in(\\mathbf{S}_{++}^{n})^{K}\\). Problem (2) is in ADMM standard form, splitting on \\((\\theta,\\widetilde{\\theta})\\) and \\(\\widehat{\\theta}\\). The ADMM algorithm for this problem, outlined in full in Algorithm 2.1, can be summarized by four steps: computing the (scaled) proximal operators of \\(\\ell_{1},\\ldots,\\ell_{K}\\), \\(r\\), and \\(\\mathcal{L}\\), followed by updates on dual variables associated with the two equality constraints, \\(U\\in(\\mathbf{R}^{n\\times n})^{K}\\) and \\(\\widetilde{U}\\in(\\mathbf{R}^{n\\times n})^{K}\\). Recall that the proximal operator of \\(f:\\mathbf{R}^{n\\times n}\\rightarrow\\mathbf{R}\\) with penalty parameter \\(\\omega\\) is \\[\\mathbf{prox}_{\\omega f}(V)=\\operatorname*{argmin}_{\\theta}\\left(\\omega f(\\theta)+(1/2)\\|\\theta-V\\|_{F}^{2}\\right).\\] **given** Loss functions \\(\\ell_{1},\\ldots,\\ell_{K}\\), local regularization function \\(r\\), graph Laplacian matrix \\(L\\), and penalty parameter \\(\\omega>0\\). _Initialize._ \\(\\theta^{0}=\\widetilde{\\theta}^{0}=\\widehat{\\theta}^{0}=U^{0}=\\widetilde{U}^{0}=0\\). **repeat** 1. _Evaluate the proximal operator of_ \\(\\ell_{k}\\)_._ \\(\\theta_{k}^{t+1}=\\mathbf{prox}_{\\omega\\ell_{k}}(\\widehat{\\theta}_{k}^{t}-U_{k}^{t}),\\quad k=1,\\ldots,K\\) 2. _Evaluate the proximal operator of_ \\(r\\)_._ \\(\\widetilde{\\theta}_{k}^{t+1}=\\mathbf{prox}_{\\omega r}(\\widehat{\\theta}_{k}^{t}-\\widetilde{U}_{k}^{t}),\\quad k=1,\\ldots,K\\) 3. _Evaluate the proximal operator of_ \\(\\mathcal{L}\\)_._ \\(\\widehat{\\theta}^{t+1}=\\mathbf{prox}_{\\omega\\mathcal{L}/2}((1/2)(\\theta^{t+1}+U^{t}+\\widetilde{\\theta}^{t+1}+\\widetilde{U}^{t}))\\) 4. _Update the dual variables._ \\(U^{t+1}=U^{t}+\\theta^{t+1}-\\widehat{\\theta}^{t+1};\\quad\\widetilde{U}^{t+1}=\\widetilde{U}^{t}+\\widetilde{\\theta}^{t+1}-\\widehat{\\theta}^{t+1}\\) **until convergence** To see how we could use this for fitting Laplacian regularized stratified models for the joint covariance estimation problem, we outline efficient methods for evaluating the proximal operators of \\(\\ell_{k}\\), of a variety of relevant local regularizers \\(r\\), and of the Laplacian regularization. ### Evaluating the proximal operator of \\(\\ell_{k}\\) Evaluating the proximal operator of \\(\\ell_{k}\\) (for \\(n_{k}>0\\)) can be done efficiently and in closed form [14, 15, 16, 17]. We have that the proximal operator is \\[\\mathbf{prox}_{\\omega\\ell_{k}}(V)=QXQ^{T},\\] where \\(X\\in\\mathbf{R}^{n\\times n}\\) is a diagonal matrix with entries \\[X_{ii}=\\frac{\\omega n_{k}d_{i}+\\sqrt{(\\omega n_{k}d_{i})^{2}+4\\omega n_{k}}}{2},\\quad i=1,\\ldots,n,\\] and \\(d\\) and \\(Q\\) are computed as the eigen-decomposition of \\((1/\\omega n_{k})V-S_{k}\\), _i.e._, \\[\\frac{1}{\\omega n_{k}}V-S_{k}=Q\\,\\mathbf{diag}(d)Q^{T}.\\] The dominant cost in computing the proximal operator of \\(\\ell_{k}\\) is in computing the eigen-decomposition, which can be computed with order \\(n^{3}\\) flops. ### Evaluating the proximal operator of \\(r\\) The proximal operator of \\(r\\) often has a closed-form expression that can be computed in parallel. For example, if \\(r=\\gamma\\,\\mathbf{Tr}(\\theta)\\), then \\(\\mathbf{prox}_{\\omega r}(V)=V-\\omega\\gamma I\\). If \\(r(\\theta)=(\\gamma/2)\\|\\theta\\|_{F}^{2}\\) then \\(\\mathbf{prox}_{\\omega r}(V)=(1/(1+\\omega\\gamma))V\\), and if \\(r=\\gamma\\|\\theta\\|_{1}\\), then \\(\\mathbf{prox}_{\\omega r}(V)=\\max(V-\\omega\\gamma,0)-\\max(-V-\\omega\\gamma,0)\\), where \\(\\max\\) is taken elementwise [15]. If \\(r(\\theta)=\\gamma_{1}\\,\\mathbf{Tr}(\\theta)+\\gamma_{2}\\|\\theta\\|_{\\mathrm{od},1}\\) where \\(\\|\\theta\\|_{\\mathrm{od},1}=\\sum_{i\\neq j}|\\theta_{ij}|\\) is the \\(\\ell_{1}\\)-norm of the off-diagonal elements of \\(\\theta\\), then \\[\\mathbf{prox}_{\\omega r}(V)_{ij}=\\begin{cases}V_{ij}-\\omega\\gamma_{1},&i=j\\\\ \\max(V_{ij}-\\omega\\gamma_{2},0)-\\max(-V_{ij}-\\omega\\gamma_{2},0)&i\\neq j\\end{cases}.\\] ### Evaluating the proximal operator of \\(\\mathcal{L}\\) Evaluating the proximal operator of \\(\\mathcal{L}\\) is equivalent to solving the \\(n(n+1)/2\\) regularized Laplacian systems \\[\\left(L+(2/\\omega)I\\right)\\begin{bmatrix}(\\widehat{\\theta}_{1}^{t+1})_{ij}\\\\ (\\widehat{\\theta}_{2}^{t+1})_{ij}\\\\ \\vdots\\\\ (\\widehat{\\theta}_{K}^{t+1})_{ij}\\end{bmatrix}=(1/\\omega)\\begin{bmatrix}(\\theta_{1}^{t+1}+U_{1}^{t}+\\widetilde{\\theta}_{1}^{t+1}+\\widetilde{U}_{1}^{t})_{ij}\\\\ (\\theta_{2}^{t+1}+U_{2}^{t}+\\widetilde{\\theta}_{2}^{t+1}+\\widetilde{U}_{2}^{t})_{ij}\\\\ \\vdots\\\\ (\\theta_{K}^{t+1}+U_{K}^{t}+\\widetilde{\\theta}_{K}^{t+1}+\\widetilde{U}_{K}^{t})_{ij}\\end{bmatrix} \\tag{3}\\] for \\(i=1,\\ldots,n\\) and \\(j=1,\\ldots,i\\), and setting \\((\\widehat{\\theta}_{k}^{t+1})_{ji}=(\\widehat{\\theta}_{k}^{t+1})_{ij}\\).
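The closed-form proximal operators above translate directly into code. The sketch below implements the prox of \\(\\ell_{k}\\) via an eigendecomposition and the prox of the combined trace plus off-diagonal \\(\\ell_{1}\\) regularizer; it is a schematic rendering of the formulas in this section with hypothetical inputs, not the authors' implementation. The Laplacian prox has no entrywise closed form and is handled through the regularized Laplacian systems (3), as discussed next.

```python
import numpy as np

def prox_loss(V, S_k, n_k, omega):
    """prox of omega * ell_k at V, assuming n_k > 0 (closed form via eigendecomposition)."""
    d, Q = np.linalg.eigh(V / (omega * n_k) - S_k)
    x = (omega * n_k * d + np.sqrt((omega * n_k * d) ** 2 + 4 * omega * n_k)) / 2
    return Q @ np.diag(x) @ Q.T

def prox_local_reg(V, omega, gam_tr, gam_od):
    """prox of omega * (gam_tr * Tr(theta) + gam_od * ||theta||_od,1) at V."""
    out = np.maximum(V - omega * gam_od, 0) - np.maximum(-V - omega * gam_od, 0)
    np.fill_diagonal(out, np.diag(V) - omega * gam_tr)
    return out

# Hypothetical check that prox_loss returns a positive definite matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
S = A @ A.T / 5
V = rng.standard_normal((5, 5)); V = (V + V.T) / 2
theta = prox_loss(V, S, n_k=20, omega=0.1)
assert np.all(np.linalg.eigvalsh(theta) > 0)
```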
Solving these systems is quite efficient; many methods for solving Laplacian systems (and more generally, symmetric diagonally-dominant systems) can solve these systems in nearly-linear time [23, 13]. We find that the conjugate gradient (CG) method with a diagonal pre-conditioner [11, 12] can efficiently and reliably solve these systems. (We can also warm-start CG with \\(\\widehat{\\theta}^{t}\\).) Stopping criterion. Under our assumptions on the objective, the iterates of ADMM converge to a global solution, and the primal and dual residuals \\[r^{t+1}=(\\theta^{t+1}-\\widehat{\\theta}^{t+1},\\widetilde{\\theta}^{t+1}-\\widehat{\\theta}^{t+1}),\\quad s^{t+1}=-(1/\\omega)(\\widehat{\\theta}^{t+1}-\\widehat{\\theta}^{t},\\widehat{\\theta}^{t+1}-\\widehat{\\theta}^{t}),\\] converge to zero [1]. This suggests the stopping criterion \\[\\|r^{t+1}\\|_{F}\\leq\\epsilon_{\\mathrm{pri}},\\quad\\|s^{t+1}\\|_{F}\\leq\\epsilon_{\\mathrm{dual}},\\] for some primal tolerance \\(\\epsilon_{\\mathrm{pri}}\\) and dual tolerance \\(\\epsilon_{\\mathrm{dual}}\\). Typically, these tolerances are selected as a combination of absolute and relative tolerances; we use \\[\\epsilon_{\\mathrm{pri}}=\\sqrt{2Kn^{2}}\\epsilon_{\\mathrm{abs}}+\\epsilon_{\\mathrm{rel}}\\max\\{\\|r^{t+1}\\|_{F},\\|s^{t+1}\\|_{F}\\},\\quad\\epsilon_{\\mathrm{dual}}=\\sqrt{2Kn^{2}}\\epsilon_{\\mathrm{abs}}+(\\epsilon_{\\mathrm{rel}}/\\omega)\\|(U^{t},\\widetilde{U}^{t})\\|_{F},\\] for some absolute tolerance \\(\\epsilon_{\\mathrm{abs}}>0\\) and relative tolerance \\(\\epsilon_{\\mathrm{rel}}>0\\). Penalty parameter selection. In practice (_i.e._, in §3), we find that the number of iterations to convergence does not change significantly with the choice of the penalty parameter \\(\\omega\\). We found that fixing \\(\\omega=0.1\\) worked well across all of our experiments. ## 3 Examples In this section we illustrate Laplacian regularized stratified model fitting for joint covariance estimation. In each of the examples, we fit two models: a common model (a Gaussian model without stratification), and a Laplacian regularized stratified Gaussian model. For each model, we selected hyper-parameters that performed best under a validation technique. We provide an open-source implementation of Algorithm 2.1, along with the code used to create the examples, at [https://github.com/cvxgrp/strat_models](https://github.com/cvxgrp/strat_models). We train all of our models with an absolute tolerance \\(\\epsilon_{\\mathrm{abs}}=10^{-3}\\) and a relative tolerance \\(\\epsilon_{\\mathrm{rel}}=10^{-3}\\). All computation was carried out on a 2014 MacBook Pro with four Intel Core i7 cores clocked at 3 GHz. ### Sector covariance estimation Estimating the covariance matrix of a portfolio of time series is a central task in quantitative finance, as it is a parameter to be estimated in the classical Markowitz portfolio optimization problem [11, 2, 12]. In addition, models for studying the dynamics of the variance of a time series (or of multiple time series) are common, such as with the GARCH family of models in statistics [10]. In this example, we consider the problem of modeling the covariance of daily sector returns, given market conditions observed the day prior. Data records and dataset. We use daily returns from \\(n=9\\) exchange-traded funds (ETFs) that cover the sectors of the stock market, measured daily, at close, from January 1, 2000 to January 1, 2018 (for a total of 4774 data points).
The ETFs used are XLB (materials), XLV (health care), XLP (consumer staples), XLY (consumer discretionary), XLE (energy), XLF (financials), XLI (industrials), XLK (technology), and XLU (utilities). Each data record includes \\(y\\in\\mathbf{R}^{9}\\), the daily return of the sector ETFs. The sector ETFs have individually been winsorized (clipped) at their 5th and 95th percentiles. Each data record also includes the market condition \\(z\\), which is derived from market indicators known on the day: the five-day trailing averages of the market volume (as measured by the ETF SPY) and volatility (as measured by the ticker VIX). Each of these market indicators is binned into 2% quantiles (_i.e._, \\(0\\%-2\\%,2\\%-4\\%,\\ldots,98\\%-100\\%\\)), making the number of stratification features \\(K=50\\cdot 50=2500\\). We refer to \\(z\\) as the market conditions. We randomly partition the dataset into a training set consisting of 60% of the data records, a validation set consisting of 20% of the data records, and a held-out test set consisting of the remaining 20% of the data records. In the training set, there is an average of 1.2 data points per market condition, and the number of data points per market condition varies significantly. The most populated market condition contains 38 data points, and there are 1395 market conditions (more than half of the 2500 total) for which there are zero data points. Model. The stratified model in this case includes \\(K=2500\\) different sector return (inverse) covariance matrices in \\(\\mathbf{S}^{9}_{++}\\), indexed by the market conditions. Our model has \\(Kn(n-1)/2=90000\\) parameters. Regularization. For local regularization, we use trace regularization with regularization weight \\(\\gamma_{\\mathrm{loc}}\\), _i.e._, \\(r=\\gamma_{\\mathrm{loc}}\\operatorname{\\mathbf{Tr}}(\\cdot)\\). The regularization graph for the stratified model is the Cartesian product of two regularization graphs: * _Quantile of five-day trailing average volatility._ The regularization graph is a path graph with 50 vertices, with edge weights \\(\\gamma_{\\mathrm{vix}}\\). * _Quantile of five-day trailing average market volume._ The regularization graph is a path graph with 50 vertices, with edge weights \\(\\gamma_{\\mathrm{vol}}\\). The corresponding Laplacian matrix has 12300 nonzero entries, with hyper-parameters \\(\\gamma_{\\rm{vix}}\\) and \\(\\gamma_{\\rm{vol}}\\). Altogether, our stratified Gaussian model has three hyper-parameters. Results. We compared a stratified model to a common model. The common model corresponds to solving one covariance estimation problem, ignoring the market regime. For the common model, we used \\(\\gamma_{\\rm{loc}}=5\\). For the stratified model, we used \\(\\gamma_{\\rm{loc}}=0.15\\), \\(\\gamma_{\\rm{vix}}=1500\\), and \\(\\gamma_{\\rm{vol}}=2500\\). These values were chosen based on a crude hyper-parameter search. We compare the models' average loss over the held-out test set in table 1. We can see that the stratified model substantially outperforms the common model. To visualize how the covariance varies with market conditions, we look at the risk of a portfolio (_i.e._, the standard deviation of the return) with uniform allocation across the sectors. The risk is given by \\(\\sqrt{w^{T}\\Sigma w}\\), where \\(\\Sigma\\) is the covariance matrix and \\(w=(1/9)\\mathbf{1}\\) is the weight vector corresponding to a uniform allocation. In figure 1, we plot the heatmap of the risk of this portfolio as a function of the market regime \\(z\\) for the stratified model. The risk heatmap makes sense and varies smoothly across market conditions. The estimate of the risk of the uniform portfolio for the common model covariance matrix is 0.859. The risk in our stratified model varies by about a factor of two from this common estimate of risk. Application. Here we demonstrate the use of our stratified risk model in a simple trading policy. For each of the \\(K=2500\\) market conditions, we compute the portfolio \\(w_{z}\\in\\mathbf{R}^{9}\\) which is Markowitz optimal, _i.e._, the solution of \\[\\begin{array}{ll}\\mbox{maximize}&\\mu^{T}w-\\gamma w^{T}\\Sigma_{z}w\\\\ \\mbox{subject to}&\\mathbf{1}^{T}w=1\\\\ &\\|w\\|_{1}\\leq 2,\\end{array} \\tag{4}\\] with optimization variable \\(w\\in\\mathbf{R}^{9}\\) (\\(w_{i}<0\\) denotes a short position in asset \\(i\\)). The objective is the risk-adjusted return, and \\(\\gamma>0\\) is the _risk-aversion parameter_, which we take as \\(\\gamma=0.15\\). We take \\(\\mu\\in\\mathbf{R}^{9}\\) to be the vector of median sector returns in the training set. The last constraint limits the portfolio leverage, measured by \\(\\|w\\|_{1}\\), to no more than 2. (This means that the total short positions in the portfolio cannot exceed 0.5 times the total portfolio value.) We plot the leverage of the stratified model portfolios \\(w_{z}\\), indexed by market conditions, in figure 2. At the beginning of each day \\(t\\), we use the previous day's market conditions \\(z_{t}\\) to allocate our current total portfolio value according to the weights \\(w_{z_{t}}\\). We run this policy using realized returns from January 1, 2018 to January 1, 2019 (which was held out from all other experiments). \\begin{table} \\begin{tabular}{l l} \\hline \\hline Model & Average test loss \\\\ \\hline Common & \\(6.42\\times 10^{-3}\\) \\\\ Stratified & \\(1.15\\times 10^{-3}\\) \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 1: Results for §3.1. **Figure 1:** Heatmap of \\(\\sqrt{w^{T}\\Sigma_{z}w}\\) with \\(w=(1/9)\\mathbf{1}\\) for the stratified model. Figure 2: Heatmap of \\(\\|w_{z}\\|_{1}\\), the stratified model portfolios, indexed by market conditions. Figure 4: Weights for the stratified model policy and the common policy over 2018. ### Space-time adaptive processing In radar space-time adaptive processing (STAP), a problem of widespread importance is the detection problem: detect a target over a terrain in the presence of interference. Interference typically comes in the form of clutter (unwanted terrain noise), jamming (noise emitted intentionally by an adversary), and white noise (typically caused by the circuitry/machinery of the radar receiver) [104, 151, 106]. (We refer to the sum of these three noises as interference.) In practice, the interference covariance matrices for a given radar orientation (_i.e._, for a given range, azimuth, Doppler frequency, _etc._) are unknown and must be estimated [106, 106]. Our goal is to estimate the covariance matrix of the interference, given the radar orientation. Data records. Our data records \\((z,y)\\) include ground interference measurements \\(y\\in\\mathbf{R}^{30}\\) (so \\(n=30\\)), which were synthetically generated (see below). In addition, the stratification features \\(z\\) describe the radar orientation. A radar orientation corresponds to a tuple of the range \\(r\\) (in km), azimuth angle \\(a\\) (in degrees), and Doppler frequency \\(d\\) (in Hz), which are binned.
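As a side note on implementation, the Cartesian-product regularization graph used in this example can be assembled with standard sparse-matrix tools. The following is a small sketch; the edge weights here are placeholders rather than the tuned hyper-parameter values.

```python
import numpy as np
import scipy.sparse as sp

def path_laplacian(n, weight):
    """Laplacian of a path graph with n vertices and uniform edge weight."""
    A = sp.diags([weight * np.ones(n - 1)] * 2, offsets=[-1, 1])
    return sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A

def cartesian_product_laplacian(L1, L2):
    """Laplacian of the Cartesian product graph (Kronecker sum of the two Laplacians)."""
    I1 = sp.identity(L1.shape[0])
    I2 = sp.identity(L2.shape[0])
    return sp.kron(L1, I2) + sp.kron(I1, L2)

# 50 volatility bins x 50 volume bins; placeholder edge weights.
L_vix = path_laplacian(50, weight=1.0)
L_vol = path_laplacian(50, weight=1.0)
L = cartesian_product_laplacian(L_vix, L_vol)
print(L.shape, L.nnz)  # expected (2500, 2500) with 12300 stored nonzeros
```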
For example, if \\(z=(r,a,d)=([35,37),[87,89),[976,980))\\), then the measurement was taken at a range between 35-37 km, an azimuth between 87-89 degrees, and a Doppler frequency between 976-980 Hz. There are 10 range bins, 10 azimuth bins, and 10 Doppler frequency bins, and we allow \\(r\\in[35,50]\\), \\(a\\in[87,267]\\), and \\(d\\in[-992,992]\\); these radar orientation values are realistic and were selected from the radar signal processing literature; see [102, Table 1] and [106, Table 3.1]. The number of stratification features is \\(K=10\\cdot 10\\cdot 10=1000\\). We generated the data records \\((z,y)\\) as follows. We generated three complex Hermitian matrices \\(\\widetilde{\\Sigma}_{\\text{range}}\\in\\mathbf{C}^{15\\times 15}\\), \\(\\widetilde{\\Sigma}_{\\text{azi}}\\in\\mathbf{C}^{15\\times 15}\\), and \\(\\widetilde{\\Sigma}_{\\text{dopp}}\\in\\mathbf{C}^{15\\times 15}\\) randomly, where \\(\\mathbf{C}\\) is the set of complex numbers. For each \\(z=(r,a,d)\\), we generate a covariance matrix according to \\[\\widetilde{\\Sigma}_{z}=\\widetilde{\\Sigma}_{(r,a,d)}=\\left(\\frac{4\\times 10^{4}}{r}\\right)^{2}\\widetilde{\\Sigma}_{\\text{range}}+\\left(\\cos\\left(\\frac{\\pi a}{180}\\right)+\\sin\\left(\\frac{\\pi a}{180}\\right)\\right)\\widetilde{\\Sigma}_{\\text{azi}}+\\left(1+\\frac{d}{1000}\\right)\\widetilde{\\Sigma}_{\\text{dopp}}.\\] For each \\(z\\), we then independently sample from a Gaussian distribution with zero mean and covariance matrix \\(\\widetilde{\\Sigma}_{z}\\) to generate the corresponding data samples \\(\\widetilde{y}\\in\\mathbf{C}^{15}\\). We then generate the real-valued data records \\((z,y)\\) from the complex-valued \\((z,\\widetilde{y})\\) via \\(y=(\\Re\\widetilde{y},\\Im\\widetilde{y})\\), where \\(\\Re\\) and \\(\\Im\\) denote the real and imaginary parts of \\(\\widetilde{y}\\), respectively, and equivalently estimate (the inverses of) \\[\\Sigma_{z}=\\begin{bmatrix}\\Re\\widetilde{\\Sigma}_{z}&-\\Im\\widetilde{\\Sigma}_{z}\\\\ \\Im\\widetilde{\\Sigma}_{z}&\\Re\\widetilde{\\Sigma}_{z}\\end{bmatrix},\\quad z=1,\\ldots,K,\\] the real-valued transformation of \\(\\widetilde{\\Sigma}_{z}\\) [104, Ch. 4]. (Our model estimates the collection of real-valued natural parameters \\(\\theta=(\\Sigma_{1}^{-1},\\ldots,\\Sigma_{K}^{-1})\\); it is trivial to obtain the equivalent collection of complex-valued natural parameters.) For the remainder of this section, we only consider the problem in its real-valued form. We generate approximately 2900 samples and randomly partition the data set into 80% training samples and 20% test samples. The number of training samples per vertex varies significantly; there is a mean of 1.74 samples per vertex, and the maximum number of samples on a vertex is 30. 625 of the \\(K=1000\\) vertices have no training samples associated with them. Model. The stratified model in this case is \\(K=1000\\) (inverse) covariance matrices in \\(\\mathbf{S}_{++}^{30}\\), indexed by the radar orientation. Our model has \\(Kn(n-1)/2=435000\\) parameters. Regularization. For local regularization, we utilize trace regularization with regularization weight \\(\\gamma_{\\mathrm{tr}}\\), and \\(\\ell_{1}\\)-regularization on the off-diagonal elements with regularization weight \\(\\gamma_{\\mathrm{od}}\\). That is, \\(r(\\theta)=\\gamma_{\\mathrm{tr}}\\operatorname{\\mathbf{Tr}}(\\theta)+\\gamma_{\\mathrm{od}}\\|\\theta\\|_{\\mathrm{od},1}\\). The regularization graph for the stratified model is taken as the Cartesian product of three regularization graphs: * _Range._ The regularization graph is a path graph with 10 vertices, with edge weight \\(\\gamma_{\\mathrm{range}}\\). * _Azimuth._ The regularization graph is a cycle graph with 10 vertices, with edge weight \\(\\gamma_{\\mathrm{azi}}\\). * _Doppler frequency._ The regularization graph is a path graph with 10 vertices, with edge weight \\(\\gamma_{\\mathrm{dopp}}\\). The corresponding Laplacian matrix has 6600 nonzero entries and the hyper-parameters are \\(\\gamma_{\\mathrm{range}}\\), \\(\\gamma_{\\mathrm{azi}}\\), and \\(\\gamma_{\\mathrm{dopp}}\\). The stratified model in this case has five hyper-parameters: two for the local regularization, and three for the Laplacian regularization graph edge weights. Results. We compared a stratified model to a common model. The common model corresponds to solving one individual covariance estimation problem, ignoring the radar orientations. For the common model, we let \\(\\gamma_{\\mathrm{tr}}=0.001\\) and \\(\\gamma_{\\mathrm{od}}=59.60\\). For the stratified model, we let \\(\\gamma_{\\mathrm{tr}}=2.68\\), \\(\\gamma_{\\mathrm{od}}=0.66\\), \\(\\gamma_{\\mathrm{range}}=10.52\\), \\(\\gamma_{\\mathrm{azi}}=34.30\\), and \\(\\gamma_{\\mathrm{dopp}}=86.97\\). These hyper-parameters were chosen by performing a crude hyper-parameter search and selecting the values that performed well on the validation set. We compare the models' average loss over the held-out test sets in table 3. In addition, we also compute the metric \\[D(\\theta)=\\frac{1}{Kn}\\sum_{k=1}^{K}\\left(\\operatorname{\\mathbf{Tr}}(\\Sigma_{k}^{\\star}\\theta_{k})-\\log\\det(\\theta_{k})\\right),\\] where \\(\\Sigma_{k}^{\\star}\\) is the true covariance matrix for the stratification feature value \\(z=k\\); this metric is used in the radar signal processing literature to determine how close \\(\\theta_{k}^{-1}\\) is to \\(\\Sigma_{k}^{\\star}\\). Application. As another experiment, we consider utilizing these models in a target detection problem: given a vector of data \\(y\\in\\mathbf{R}^{30}\\) and its radar orientation \\(z\\), determine if the vector is just interference, _i.e._, \\[y\\mid z=d,\\quad d\\sim\\mathcal{N}(0,\\Sigma_{z}^{\\star}),\\] or if the vector has some target associated with it, _i.e._, \\[y\\mid z=s_{z}+d,\\quad d\\sim\\mathcal{N}(0,\\Sigma_{z}^{\\star})\\] for some target vector \\(s_{z}\\in\\mathbf{R}^{30}\\), which is fixed for each \\(z\\). (Typically, this is cast as a hypothesis test where the former is the null hypothesis and the latter is the alternative hypothesis [11].) We generate \\(s_{z}\\) with \\(z=(r,a,d)\\) as \\[s_{z}=(\\Re\\widetilde{s}_{z},\\Im\\widetilde{s}_{z}),\\qquad\\widetilde{s}_{z}=(1,z_{d},z_{d}^{2})\\otimes(1,z_{a},z_{a}^{2},z_{a}^{3},z_{a}^{4})\\] with \\(z_{a}=e^{2\\pi i\\sin(a)}\\), \\(z_{d}=e^{2\\pi id/f_{R}}\\), and \\(f_{R}=1984\\) is the pulse repetition frequency (in Hz); these values are realistic and selected from the radar signal processing literature [14, Ch. 2]. For each \\(z\\), we generate \\(y\\) as follows: we sample a \\(d\\sim\\mathcal{N}(0,\\Sigma_{z}^{\\star})\\), and with probability \\(1/2\\) we set \\(y=s_{z}+d\\), and set \\(y=d\\) otherwise. (There are 1000 samples.)
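The synthetic interference covariances, target vectors, and detection samples described in this subsection can be sketched as follows. The random Hermitian factors and the resulting covariance are hypothetical stand-ins for the quantities in the text (including the true covariance used for the noise); only the structure of the computation mirrors the description above.

```python
import numpy as np

rng = np.random.default_rng(0)
f_R = 1984.0  # pulse repetition frequency (Hz)

def random_hermitian_psd(p):
    """A random complex Hermitian positive semidefinite p x p matrix."""
    A = rng.standard_normal((p, p)) + 1j * rng.standard_normal((p, p))
    return A @ A.conj().T / p

# Hypothetical stand-ins for the three Hermitian factors in the text.
S_range, S_azi, S_dopp = (random_hermitian_psd(15) for _ in range(3))

def interference_cov(r_km, a_deg, d_hz):
    """Real-valued 30 x 30 interference covariance for orientation (r, a, d)."""
    a = np.pi * a_deg / 180
    Sc = ((4e4 / r_km) ** 2 * S_range
          + (np.cos(a) + np.sin(a)) * S_azi
          + (1 + d_hz / 1000) * S_dopp)
    return np.block([[Sc.real, -Sc.imag], [Sc.imag, Sc.real]])

def steering_vector(a_deg, d_hz):
    """Target vector s_z = (Re s~_z, Im s~_z), with s~_z a Kronecker product."""
    z_a = np.exp(2j * np.pi * np.sin(np.deg2rad(a_deg)))
    z_d = np.exp(2j * np.pi * d_hz / f_R)
    s_c = np.kron(z_d ** np.arange(3), z_a ** np.arange(5))  # length 15, complex
    return np.concatenate([s_c.real, s_c.imag])              # length 30, real

def sample_record(s_z, Sigma_true):
    """Draw y = s_z + d with probability 1/2, else y = d, with d ~ N(0, Sigma_true)."""
    d = rng.multivariate_normal(np.zeros(len(s_z)), Sigma_true)
    has_target = rng.random() < 0.5
    return (s_z + d if has_target else d), has_target

Sigma_true = interference_cov(40.0, 120.0, 500.0)
s_z = steering_vector(a_deg=120.0, d_hz=500.0)
y, label = sample_record(s_z, Sigma_true)
```

Records generated this way are then fed to the matched-filter test described next.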
We then test if \\(y\\) contains the target vector via the selection criterion \\[\\frac{(s_{z}^{T}\\theta_{z}y)^{2}}{s_{z}^{T}\\theta_{z}s_{z}}>\\alpha,\\] for some threshold \\(\\alpha\\); this is well-known in the radar signal processing literature as the optimal method for detection in this setting [13, 14, 15]. If the selection criterion holds, then we classify \\(y\\) as containing a target; otherwise, we classify \\(y\\) as containing noise. We vary \\(\\alpha\\) and test the samples on the common and stratified models. We plot the receiver operator characteristic (ROC) curves for both models in figure 5. The area under the ROC curve is 0.84 for the common model and 0.95 for the stratified model; the stratified model is significantly more capable at classifying in this setting. \\begin{table} \\begin{tabular}{l l l} \\hline \\hline Model & Average test sample loss & \\(D(\\theta)\\) \\\\ \\hline Common & 0.153 & 2.02 \\\\ Stratified & 0.069 & 1.62 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 3: Results for §3.2. Figure 5: ROC curves for the common and stratified models as the threshold \\(\\alpha\\) varies. ### Temperature covariance estimation We consider the problem of modeling the covariance matrix of hourly temperatures of a region as a function of day of year. Data records and dataset.We use temperature measurements (in Fahrenheit) from Boston, MA, sampled once per hour from October 2012 to October 2017, for a total of 44424 hourly measurements. We winsorize the data at its 1st and 99th percentiles. We then remove a baseline temperature, which consists of a constant and a sinusoid with period one year. We refer to this time series as the baseline-adjusted temperature. From this data, we create data records \\((z_{i},y_{i}),i=1,\\ldots,1851\\) (so \\(m=1851\\)), where \\(y_{i}\\in\\mathbf{R}^{24}\\) is the baseline-adjusted temperature for day \\(i\\), and \\(z_{i}\\in\\{1,\\ldots,366\\}\\) is the day of the year. For example, \\((y_{i})_{3}\\) is the baseline-adjusted temperature at 3AM, and \\(z_{i}=72\\) means that the day was the 72nd day of the year. The number of stratification features is then \\(K=366\\), corresponding to the number of days in a year. We randomly partition the dataset into a training set consisting of 60% of the data records, a validation set consisting of 20% of the data records, and a held-out test set consisting of the remaining 20% of the data records. In the training set, there are a mean of approximately 3.03 data records per day of year, the most populated vertex is associated with six data records, and there are seven vertices associated with zero data records. Model.The stratified model in this case is \\(K=366\\) (inverse) covariance matrices in \\(\\mathbf{S}_{++}^{24}\\), indexed by the days of the year. Our model has \\(Kn(n-1)/2=101016\\) parameters. Regularization.For local regularization, we utilize trace regularization with regularization weight \\(\\gamma_{\\mathrm{tr}}\\), and \\(\\ell_{1}\\)-regularization on the off-diagonal elements with regularization weight \\(\\gamma_{\\mathrm{od}}\\). That is, \\(r(\\theta)=\\gamma_{\\mathrm{tr}}\\operatorname{\\mathbf{Tr}}(\\theta)+\\gamma_{ \\mathrm{od}}\\|\\theta\\|_{\\mathrm{od},1}\\). The stratification feature stratifies on day of year; our overall regularization graph, therefore, is a cycle graph with 366 vertices, one for each possible day of the year, with edge weights \\(\\gamma_{\\mathrm{day}}\\). The associated Laplacian matrix contains 1096 nonzeros. 
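Before turning to the results, here is a small sketch of the baseline-removal step used to construct the data records above: the baseline (a constant plus a sinusoid with a one-year period) is fit by least squares and subtracted. The synthetic hourly series is a hypothetical stand-in for the Boston temperature data.

```python
import numpy as np

HOURS_PER_YEAR = 365.25 * 24

def remove_baseline(temps):
    """Fit and subtract a constant plus a one-year-period sinusoid by least squares."""
    t = np.arange(len(temps))
    phase = 2 * np.pi * t / HOURS_PER_YEAR
    X = np.column_stack([np.ones(len(t)), np.sin(phase), np.cos(phase)])
    coef, *_ = np.linalg.lstsq(X, temps, rcond=None)
    return temps - X @ coef

# Hypothetical hourly temperatures over two years.
rng = np.random.default_rng(0)
t = np.arange(int(2 * HOURS_PER_YEAR))
temps = 50 + 25 * np.sin(2 * np.pi * t / HOURS_PER_YEAR) + 5 * rng.standard_normal(len(t))
adjusted = remove_baseline(temps)                              # baseline-adjusted temperature
days = adjusted[: len(adjusted) // 24 * 24].reshape(-1, 24)    # one record y_i per day
```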
Results.We compared a stratified model to a common model. The common model corresponds to solving one covariance estimation problem, ignoring the days of the year. For the common model, we used \\(\\gamma_{\\mathrm{tr}}=359\\) and \\(\\gamma_{\\mathrm{od}}=0.1\\). For the stratified model, we used \\(\\gamma_{\\mathrm{tr}}=6000\\), \\(\\gamma_{\\mathrm{od}}=0.1\\), and \\(\\gamma_{\\mathrm{day}}=0.14\\). These hyper-parameters were chosen by performing a crude hyper-parameter search and selecting hyper-parameters that performed well on the validation set. We compare the models' losses over the held-out test sets in table 4. To illustrate some of these model parameters, in figure 6 we plot the heatmaps of the correlation matrices for days that roughly correspond to each season. \\begin{table} \\begin{tabular}{l l} \\hline \\hline Model & Average test loss \\\\ \\hline Common & 0.132 \\\\ Stratified & 0.093 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 4: Average loss over the test set for §3.3. Figure 6: Heatmaps of the correlation matrices for days approximately corresponding to the start of winter (top left), spring (top right), summer (bottom left) and autumn (bottom right). Application.As another experiment, we consider the problem of forecasting the second half of a day's baseline-adjusted temperature given the first half of the day's baseline-adjusted temperature. We do this by modeling the baseline-adjusted temperature from the second half of the day as a Gaussian distribution conditioned on the observed baseline-adjusted temperatures [14, 15]. We run this experiment using the common and stratified models found in the previous experiment, using the data in the held-out test set. In table 5, we compare the root-mean-square error (RMSE) between the predicted temperatures and the true temperatures over the held-out test set for the two models, and in figure 7, we plot the temperature forecasts for two days in the held-out test set. ## Acknowledgments Jonathan Tuck is supported by the Stanford Graduate Fellowship in Science and Engineering. The authors thank Muralidhar Rangaswamy and Peter Stoica for helpful comments on an early draft of this paper. Figure 7: Baseline-adjusted temperature forecasts for two days in the held-out test set. \\begin{table} \\begin{tabular}{l l} \\hline \\hline Model & Average prediction RMSE \\\\ \\hline Common & 8.269 \\\\ Stratified & 6.091 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 5: Average prediction RMSE over the test set for §3.3. ## References * [AC00] R. Almgren and N. Chriss. Optimal execution of portfolio transactions. _Journal of Risk_, pages 5-39, 2000. * [BBD\\({}^{+}\\)17] S. Boyd, E. Busseti, S. Diamond, R. N. Kahn, K. Koh, P. Nystrup, and J. Speth. Multi-period trading via convex optimization. _Foundations and Trends in Optimization_, 3(1):1-76, 2017. * [BEGd08] O. Banerjee, L. El Ghaoui, and A. d'Aspremont. Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. _Journal of Machine Learning Research_, 9:485-516, 2008. * [Bis06] C. M. Bishop. _Pattern Recognition and Machine Learning_. Springer, 2006. * [BL08] P. J. Bickel and E. Levina. Covariance regularization by thresholding. _The Annals of Statistics_, 36(6):2577-2604, 12 2008. * [BLW82] J. P. Burg, D. G. Luenberger, and D. L. Wenger. Estimation of structured covariance matrices. _Proceedings of the IEEE_, 70(9):963-974, Sep. 1982. * [BPC\\({}^{+}\\)11] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. 
We consider the problem of jointly estimating multiple related zero-mean Gaussian distributions from data. We propose to jointly estimate these covariance matrices using Laplacian regularized stratified model fitting, which includes loss and regularization terms for each covariance matrix, and also a term that encourages the different covariance matrices to be close. This method 'borrows strength' from the neighboring covariances to improve its estimates. With well-chosen hyper-parameters, such models can perform very well, especially in the low-data regime. We propose a distributed method that scales to large problems, and we illustrate the efficacy of the method with examples in finance, radar signal processing, and weather forecasting.
# Probabilistic Voronoi Diagrams for Probabilistic Moving Nearest Neighbor Queries

Mohammed Eunus Ali, Egemen Tanin, Rui Zhang, Ramamohanarao Kotagiri
Department of Computer Science and Software Engineering, University of Melbourne, Victoria, 3010, Australia
Tel.: +61 3 8344 1350, Fax: +61 3 9348 1184, {eunus,egemen,rui,rao}@csse.unimelb.edu.au

## 1 Introduction

Uncertainty is an inherent property in many database applications, including location-based services [1], environmental monitoring [2], and feature extraction systems [3]. The inaccuracy or imprecision of data capturing devices, the privacy concerns of users, and the limitations on bandwidth and battery power introduce uncertainties in different attributes such as the location of an object or the measured value of a sensor. The values of these attributes are stored in a database, known as an uncertain database. In recent years, query processing on an uncertain database has received significant attention from the research community due to its wide range of applications.

Consider a location-based application where the location information of users may need to be pre-processed before publishing due to the privacy concerns of users. Alternatively, a user may want to provide her position as a larger region in order to prevent her location from being identified as a particular site. In such cases, locations of users are stored as uncertain attributes, i.e., regions instead of points, in the database. An application that deals with the locations of objects (e.g., post offices, hospitals) obtained from satellite images is another example of an uncertain database. Since the location information may not be identifiable accurately from satellite images due to noisy transmission, locations of objects need to be represented as regions denoting the probable locations of objects. Likewise, in a biological database, objects identified from microscopic images need to be represented as uncertain attributes due to inaccuracies of data capturing devices.

In this paper, we propose a novel concept called _Probabilistic Voronoi Diagram_ (PVD), which has the potential to efficiently process nearest neighbor (NN) queries on an uncertain database. The PVD for a given set of uncertain objects \(o_{1},o_{2},\ldots,o_{n}\) partitions the data space into a set of _Probabilistic Voronoi Cells_ (PVCs) based on the probability measure. Each cell \(PVC(o_{i})\) is a region in the data space such that, for every data point in this region, \(o_{i}\) has a higher probability of being the NN than any other object. A nearest neighbor (NN) query on an uncertain database, called a Probabilistic Nearest Neighbor (PNN) query, returns a set of objects, where each object has a non-zero probability of being the nearest to a query point. A common variant of the PNN query that finds the most probable NN to a given query point is also called a top-1-PNN query. Existing research focuses on efficient processing of PNN queries [4; 5; 6; 7] and their variants [8; 9; 10] for a _static query point_. In this paper, we are interested in answering Probabilistic Moving Nearest Neighbor (PMNN) queries on an uncertain database, where data objects are _static_, the query is _moving_, and the future path of the moving query is _unknown_. A PMNN query returns the most probable nearest object for a moving query point continuously.
A straightforward approach for evaluating a PMNN query is to use a sampling-based method, which processes the PMNN query as a sequence of PNN queries at sampled locations on the query path. However, to obtain up-to-date answers, a high sampling rate is required, which makes the sampling-based approach inefficient due to the frequent processing of PNN queries. To avoid high processing cost of the sampling based approach and to provide continuous results, recent approaches for continuous NN query processing on a _point data set_ rely on safe-region based techniques, e.g., Voronoi diagram [11]. In a Voronoi diagram based approach, the data space is partitioned into disjoint Voronoi cells where all points inside a cell have the same NN. Then, the NN of a query point is reduced to identifying the cell for the query point, and the result of a moving query point remains valid as long as it remains inside that cell. Motivated by the safe-region based paradigm, in this paper we propose a Voronoi diagram based approach for processing a PMNN query on a set of uncertain objects. Voronoi diagrams for uncertain objects [6; 12] based on a simple distance metric, such as the minimum and maximum distances to objects, result in a large neutral region that contains those points for which no specific NN object is defined. Thus, these are not suitable for processing a PMNN query. In this paper, we propose the PVD that divides the space based on a probability measure rather than using just a simple distance metric. A naive approach to compute the PVD is to find the top-1-PNN for every possible location in the data space using existing static PNN query processing techniques [4; 5; 8], which is an impractical solution due to high computational overhead. In this paper, we propose a practical solution to compute the PVD for a set of uncertain objects. The key idea of our approach is to efficiently compute the probabilistic bisectors between two neighboring objects that forms the basis of PVCs for the PVD. After computing the PVD, the most probable NN can be determined by simply identifying the PVC in which the query point is currently located. The result of the query does not change as long as the moving query point remains in the current PVC. A user sends its request as soon as it exits the PVC. Thus, in contrast to the sampling based approach, the PVD ensures the most probable NN for every point of a moving query path is available. Since this approach requires the pre-computation of the whole PVD, we name it the _pre-computation approach_ in this paper. The pre-computation approach needs to access all the objects from the database to compute the entire PVD. In addition, the PVD needs to be re-computed for any updates (insertion or deletion) to the database. Thus the pre-computation approach may not be suitable for the cases when the query is confined into a small region in the data space or when there are frequent updates in the database. For such cases, we propose an incremental algorithm based on the concept of local PVD. In this approach, a set of surrounding objects and an associated search space, called _known region_, with respect to the current query position are retrieved from the database. Objects are retrieved based on their probabilistic NN rankings from the current query location. Then, we compute the local PVD based only on the retrieved data set, and develop a _probabilistic safe region_ based PMNN query processing technique. 
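Both the pre-computation and the incremental approaches build on the safe-region idea sketched above: the query issuer re-contacts the server (or re-runs point location) only when the moving query leaves the cell returned with the last answer. The fragment below is a minimal, hedged sketch of that control flow; the `locate_cell` callback and the toy 1D cells are illustrative assumptions standing in for PVD point location, not the paper's algorithms.

```python
def answer_moving_query(path, locate_cell):
    """Report (query point, most probable NN) along a path, re-querying only on cell exits.

    locate_cell(q) is assumed to return (nn, contains), where nn is the most probable
    NN at q and contains(p) tests whether p still lies in nn's probabilistic Voronoi
    cell (the safe region); only then is the cached answer reused.
    """
    answers, nn, contains = [], None, None
    for q in path:
        if contains is None or not contains(q):
            nn, contains = locate_cell(q)      # left the safe region: issue a new request
        answers.append((q, nn))
    return answers

# Toy 1D illustration: cells are intervals [b_k, b_{k+1}) with a fixed most probable NN each.
def make_locate_cell(boundaries, labels):
    def locate_cell(q):
        k = sum(b <= q for b in boundaries)    # index of the cell containing q
        lo = boundaries[k - 1] if k > 0 else float("-inf")
        hi = boundaries[k] if k < len(boundaries) else float("inf")
        return labels[k], (lambda p, lo=lo, hi=hi: lo <= p < hi)
    return locate_cell

path = [0.5, 1.0, 2.4, 2.6, 4.9, 5.1]
print(answer_moving_query(path, make_locate_cell([2.5, 5.0], ["o1", "o2", "o3"])))
```

In this sketch only two of the six path points trigger a new request, which is exactly the saving over re-running a PNN query at every sampled location.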
The probabilistic safe region defines a region for an uncertain object where the object is guaranteed to be the most probable nearest neighbor. This probabilistic safe region enables a user to utilize the retrieved data more efficiently and reduces the communication overheads when a client is connected to the server through a wireless link. The process needs to be repeated as soon as the retrieved data set cannot provide the required answer for the moving query point. We name this PMNN query processing technique the _incremental approach_ in this paper.

In summary, we make the following contributions in this paper:

* We formulate the Probabilistic Voronoi Diagram (PVD) for uncertain objects and propose techniques to compute the PVD.
* We provide an algorithm for evaluating PMNN queries based on the pre-computed PVD.
* We propose an incremental algorithm for evaluating PMNN queries based on the concept of local PVD.
* We conduct an extensive experimental study which shows that our PVD based approaches outperform the sampling based approach significantly.

The rest of the paper is organized as follows. Section 2 discusses preliminaries and the problem setup. Section 3 reviews related work. In Section 4, we formulate the concept of PVD and present methods to compute it, focusing on one- and two-dimensional spaces. In Section 5, we present two techniques, the pre-computation approach and the incremental approach, for processing PMNN queries. Section 6 reports our experimental results and Section 7 concludes the paper.

## 2 Preliminaries and Problem Setup

Let \(O\) be a set of uncertain objects in a \(d\)-dimensional data space. An uncertain object \(o_{i}\in O\), \(1\leq i\leq|O|\), is represented by a \(d\)-dimensional uncertain range \(R_{i}\) and a probability density function (pdf) \(f_{i}(u)\) that satisfies \(\int_{R_{i}}f_{i}(u)du=1\) for \(u\in R_{i}\). If \(u\notin R_{i}\), then \(f_{i}(u)=0\). We assume that the pdfs of uncertain objects follow uniform distributions for the sake of easy explication. Our concept of PVD is applicable to other types of distributions; we briefly discuss PVDs for other distributions in Section 4.3. For a uniform distribution, the pdf of \(o_{i}\) can be expressed as \(f_{i}(u)=\frac{1}{Area(R_{i})}\) for \(u\in R_{i}\). For example, for a circular object \(o_{i}\), the uncertainty region and the pdf are represented as \(R_{i}=(c_{i},r_{i})\) and \(f_{i}(u)=\frac{1}{\pi r_{i}^{2}}\), respectively, where \(c_{i}\) is the center and \(r_{i}\) is the radius of the region. We also assume that the uncertainty of objects remains constant.

An NN query on a traditional database consisting of a set of data points (or objects) returns the nearest data point to the query point. An NN query on an uncertain database does not return a single object; instead, it returns a set of objects that have non-zero probabilities of being the NN to the query point. Suppose that the database maintains only point locations \(c_{1}\), \(c_{2}\), and \(c_{3}\) for objects \(o_{1}\), \(o_{2}\), and \(o_{3}\), respectively (see Figure 1). Then an NN query with respect to \(q\) returns \(o_{2}\) as the NN because the distance \(dist(c_{2},q)\) is the least among all objects. In this case, \(o_{1}\) and \(o_{3}\) are the second and third NNs, respectively, to the query point \(q\).
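The object representation used throughout this section is easy to fix in code before turning to the uncertain case below. The sketch models a circular uncertain object \(R_i=(c_i,r_i)\) with a uniform pdf and the \(mindist\)/\(maxdist\) helpers used later in the paper, and reproduces the crisp ranking of the example above; the coordinates and radii are illustrative assumptions (chosen only so that the ranking matches the text), not the values of Figure 1.

```python
import math
from dataclasses import dataclass

@dataclass
class UncertainDisk:
    """Circular uncertain object R_i = (c_i, r_i) with a uniform pdf over the disk."""
    center: tuple
    radius: float

    def pdf(self, u):
        dx, dy = u[0] - self.center[0], u[1] - self.center[1]
        inside = dx * dx + dy * dy <= self.radius ** 2
        return 1.0 / (math.pi * self.radius ** 2) if inside else 0.0

    def mindist(self, p):
        return max(math.dist(p, self.center) - self.radius, 0.0)

    def maxdist(self, p):
        return math.dist(p, self.center) + self.radius

# Hypothetical objects chosen so that o2 is the crisp NN of q, then o1, then o3.
o1 = UncertainDisk((2.0, 6.0), 5.0)
o2 = UncertainDisk((4.0, 3.0), 2.0)
o3 = UncertainDisk((9.0, 7.0), 3.0)
q = (5.0, 4.0)
ranking = sorted([("o1", o1), ("o2", o2), ("o3", o3)], key=lambda t: math.dist(t[1].center, q))
print([name for name, _ in ranking])   # ['o2', 'o1', 'o3'] when only center points are kept
```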
If the database maintains the uncertainty regions \(R_{1}=(c_{1},r_{1})\), \(R_{2}=(c_{2},r_{2})\), and \(R_{3}=(c_{3},r_{3})\) for objects \(o_{1}\), \(o_{2}\), and \(o_{3}\), respectively, then the NN query returns all three \((o_{1},p_{1})\), \((o_{2},p_{2})\), \((o_{3},p_{3})\) as probable NNs for the query point \(q\), where \(p_{1}>p_{2}>p_{3}>0\) (see Figure 1). A Probabilistic Nearest Neighbor (PNN) query [4] is defined as follows:

**Definition 2.1**.: _(PNN) Given a set \(O\) of uncertain objects in a \(d\)-dimensional database, and a query point \(q\), a PNN query returns a set \(P\) of tuples \((o_{i},p_{i})\), where \(o_{i}\in O\) and \(p_{i}\) is the non-zero probability that the distance of \(o_{i}\) to \(q\) is the minimum among all objects in \(O\)._

The probability \(p(o_{i},q)\) (or simply \(p_{i}\)) of an object \(o_{i}\) being the NN to a query point \(q\) can be computed as follows. For any point \(u\in R_{i}\), where \(R_{i}\) is the uncertainty region of an object \(o_{i}\), we first find the probability of \(o_{i}\) being at \(u\), multiply it by the probabilities of all other objects being farther than \(u\) with respect to \(q\), and then sum up these products over all \(u\) to compute \(p_{i}\). Thus, \(p_{i}\) can be expressed as follows:

\[p_{i}=\int_{u\in R_{i}}f_{i}(u)(\prod_{j\neq i}\int_{v\in R_{j}}P(dist(v,q)>dist(u,q))dv)du, \tag{1}\]

where the function \(P(.)\) returns the probability that a point \(v\in R_{j}\) of \(o_{j}\) is farther from \(q\) than a point \(u\in R_{i}\) of \(o_{i}\).

Figure 1 shows a query point \(q\), and three objects \(o_{1}\), \(o_{2}\), and \(o_{3}\). Based on Equation 1, the probability \(p_{1}\) of object \(o_{1}\) being the NN to \(q\) can be computed as follows. In this example, we assume _a discrete space_ where the radii of the three objects are \(5\), \(2\), and \(3\) units, respectively, and the minimum distance of \(o_{1}\) to \(q\) is \(5\) units. Suppose that the dashed circles \((q,5)\), \((q,6)\), \((q,7)\), \((q,8)\), and \((q,9)\) centered at \(q\) with radii \(5\), \(6\), \(7\), \(8\), and \(9\) units, respectively, divide the uncertain region \(R_{1}\) of \(o_{1}\) into four sub-regions \(o_{1_{1}}\), \(o_{1_{2}}\), \(o_{1_{3}}\), and \(o_{1_{4}}\), where \(o_{1_{1}}=(c_{1},r_{1})\cap(q,6)\), \(o_{1_{2}}=(c_{1},r_{1})\cap(q,7)-o_{1_{1}}\), \(o_{1_{3}}=(c_{1},r_{1})\cap(q,8)-(o_{1_{1}}\cup o_{1_{2}})\), and \(o_{1_{4}}=(c_{1},r_{1})\cap(q,9)-(o_{1_{1}}\cup o_{1_{2}}\cup o_{1_{3}})\). Then \(p_{1}\) can be computed by summing: (i) the probability of \(o_{1}\) being within the sub-region \(o_{1_{1}}\) multiplied by the probabilities of \(o_{2}\) and \(o_{3}\) being outside the circular region \((q,6)\), (ii) the probability of \(o_{1}\) being within the sub-region \(o_{1_{2}}\) multiplied by the probabilities of \(o_{2}\) and \(o_{3}\) being outside the circular region \((q,7)\), (iii) the probability of \(o_{1}\) being within the sub-region \(o_{1_{3}}\) multiplied by the probabilities of \(o_{2}\) and \(o_{3}\) being outside the circular region \((q,8)\), and (iv) the probability of \(o_{1}\) being within the sub-region \(o_{1_{4}}\) multiplied by the probabilities of \(o_{2}\) and \(o_{3}\) being outside the circular region \((q,9)\).

As we have discussed in the introduction, in many applications a user may often be interested in the most probable nearest neighbor.
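Equation 1 can also be checked numerically before discussing the most probable NN further. The sketch below estimates the PNN probabilities \(p_i\) for circular uncertain objects by Monte Carlo sampling: in each trial every object is instantiated at a uniformly drawn point of its disk and the nearest instance wins. The object placements are illustrative assumptions, not the exact configuration of Figure 1.

```python
import math
import random

random.seed(0)

def sample_in_disk(c, r):
    """Draw a point uniformly from the disk with center c and radius r (rejection sampling)."""
    while True:
        dx, dy = random.uniform(-r, r), random.uniform(-r, r)
        if dx * dx + dy * dy <= r * r:
            return (c[0] + dx, c[1] + dy)

def pnn_probabilities(objects, q, trials=100_000):
    """Estimate p(o_i, q) of Equation 1 for uniform circular objects.

    objects: dict name -> (center, radius).
    """
    wins = {name: 0 for name in objects}
    for _ in range(trials):
        draws = {name: sample_in_disk(c, r) for name, (c, r) in objects.items()}
        nearest = min(draws, key=lambda n: math.dist(draws[n], q))
        wins[nearest] += 1
    return {name: wins[name] / trials for name in objects}

# Hypothetical configuration in which every object has some chance of being the NN of q.
objects = {"o1": ((0.0, 0.0), 5.0), "o2": ((8.0, 0.0), 4.0), "o3": ((3.0, 9.0), 6.0)}
q = (4.0, 2.0)
print(pnn_probabilities(objects, q))
```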
In such cases, a PNN query returns only the object with the highest probability of being the NN; such a query is also known as a _top-1-PNN query_. In this paper, we address the probabilistic moving NN query that continuously reports the most probable NN for each point of a moving query. From Equation 1, we see that finding the most probable NN to a static query point is expensive, as it involves costly integration and requires considering the uncertainty of other objects. Hence, a moving user that needs to be updated with the most probable answer continuously requires repetitive computation of the top object for every sampled location of the moving query. In this paper, we propose PVD-based approaches for evaluating a PMNN query, namely two techniques: a pre-computation approach and an incremental approach. Based on the nature of the application, one can choose whichever of these techniques suits her purpose best. Moreover, both of our techniques fit into either of the two most widely used query processing paradigms: the _centralized paradigm_ and the _client-server paradigm_. In the centralized paradigm, the query issuer and the processor reside in the same machine, and the total query processing cost is the main performance metric. On the other hand, in the client-server paradigm, a client issues a query to a server that processes the query, through wireless links such as mobile phone networks. Thus, in the client-server paradigm the performance metric includes both the communication cost and the query processing cost.

In the rest of the paper, we use the following functions: \(min(v_{1},v_{2},\ldots,v_{n})\) and \(max(v_{1},v_{2},\ldots,v_{n})\) return the minimum and the maximum, respectively, of a given set of values \(v_{1}\), \(v_{2}\), \(\ldots\), \(v_{n}\); \(dist(p_{1},p_{2})\) returns the Euclidean distance between two points \(p_{1}\) and \(p_{2}\); \(mindist(p,o)\) and \(maxdist(p,o)\) return the minimum and maximum Euclidean distances, respectively, between a point \(p\) and an uncertain object \(o\). We also use the following terminology. When the possible ranges of values of two uncertain objects overlap, we call them _overlapping objects_; otherwise they are called _non-overlapping objects_. If the ranges of two objects are of equal length, we call them _equi-range objects_; otherwise they are called _non-equi-range objects_.

Figure 1: An example of a PNN query

## 3 Background

In this section, we first give an overview of existing PNN query processing techniques on uncertain databases that are closely related to our work. Then we present existing work on Voronoi diagrams.

### Probabilistic Nearest Neighbor Processing

PNN queries on uncertain databases have received significant attention in recent years. In [4], Cheng et al. proposed a numerical integration based technique to evaluate a PNN query for one-dimensional sensor data. In [5], an I/O efficient technique based on numerical integration was developed for evaluating PNN queries on two-dimensional uncertain moving object data. In [7], the authors presented a sampling based technique to compute PNN, where both data and query objects are uncertain. Probabilistic threshold NN queries have been introduced in [13], where all objects with probabilities above a specified threshold are reported.
In [14], a PNN algorithm was presented where both data and query objects are static trajectories, where the algorithm finds objects that have non-zero probability of any sub-intervals of a given trajectory. Lian et al. [15] presented a technique for a group PNN query that minimizes the aggregate distance to a set of static query points. The PNN variant, top-\\(k\\)-PNN query reports top \\(k\\) objects which have higher probabilities of being the nearest than other objects in the database [8; 9; 10]. Among these works, techniques [9; 10] aim to reduce I/O and CPU costs independently. In [8], the authors proposed a unified cost model that allows interleaving of I/O and CPU costs while processing top-\\(k\\)-PNN queries. This method [8] uses lazy computational bounds for probability calculation which is found to be very efficient for finding top-\\(k\\)-PNN. Any existing methods for static PNN queries [4; 5; 7] or its variants [8; 9; 10] can be used for evaluating PMNN queries which process the PMNN query as a sequence of PNN queries at sampled locations on the query path. Since in this paper we are only interested in the most probable answer, we use the recent technique [8] to compute top-1-PNN for processing PMNN queries in a comparative sampling based approach and also for the probability calculation in the PVD. Some techniques [16; 17] have been proposed for answering PNN queries (including top-\\(k\\)-PNN) for existentially uncertain data, where objects are represented as points with associated membership probabilities. However, these techniques are not related to our work as they do not support uncertainty in objects' attributes. Our problem should also not be confused with maximum likelihood classifiers [18] where they use statistical decision rules to estimate the probability of an object being in a certain class, and assign the object to the class with the highest probability. All of the above mentioned schemes assume a static query point for PNN queries. Though, continuous processing of NN queries for a moving query point on a _point data set_ was also a topic of interest for many years [19], we are the first to address such queries on an _uncertain data set_. In this paper, we propose efficient techniques for probabilistic moving NN queries on an uncertain database, where we continuously report the most probable NN for a moving query point. ### Voronoi Diagrams The Voronoi diagram [11] is a popular approach for answering both static and continuous nearest neighbor queries for two-dimensional _point data_[20]. Voronoi diagrams for extended objects (e.g., circular objects) [21] have been proposed that use boundaries of objects, i.e., minimum distances to objects, to partition the space. However, these objects are not uncertain, and thus, [21] cannot be used for PNN queries. Voronoi diagrams for uncertain objects have been proposed that can divide the space for a set of sparsely distributed objects [6; 12]. Both of these approaches are based on the distance metric, where \\(mindist\\) and \\(maxdist\\) to objects are used to calculate the boundary of the Voronoi edges. The Voronoi diagram of [12] can be described as follows. Let \\(R_{1},R_{2}, ,R_{n}\\) be the regions of a set \\(O\\) of uncertain objects \\(o_{1},o_{2}, ,o_{n}\\), respectively. Then a set of sub-regions or cells \\(V_{1},V_{2}, ,V_{n}\\) in the data space can be determined such that a point in \\(V_{i}\\) must be closer to any point in \\(R_{i}\\) than to any point in any other object's region. 
For two objects \\(o_{i}\\) and \\(o_{j}\\), let \\(H(i,j)\\) be the set of points in the space that are at least as close to any point in \\(R_{i}\\) as any point in \\(R_{j}\\), i.e., \\[H(i,j)=\\{p\\|\\forall x\\in R_{i}\\forall y\\in R_{j}\\ dist(p,x)\\leq dist(p,y)\\},\\] where \\(p\\) is a point in the data space. Then, the cell \\(V_{i}\\) of object \\(o_{i}\\) can be defined as follows: \\[V_{i}=\\cap_{j\ eq i}H(i,j).\\] The boundary \\(B(i,j)\\) of \\(H(i,j)\\) can be defined as a set of points in \\(H(i,j)\\), where \\(p\\in B(i,j)\\) and \\(maxdist(p,o_{i})=mindist(p,o_{j})\\). If the regions are circular, the boundary of object \\(o_{i}\\) with \\(o_{j}\\) is a set of points \\(p\\) that holds the following condition: \\[dist(p,c_{i})+r_{i}=dist(p,c_{j})-r_{j},\\]where \\(c_{i}\\) and \\(c_{j}\\) are the centers and \\(r_{i}\\) and \\(r_{j}\\) are the radii of the regions for objects \\(o_{i}\\) and \\(o_{j}\\), respectively. Since \\(r_{i}\\) and \\(r_{j}\\) are constants, the points \\(p\\) that satisfy the above equation lie on the hyperbola (with foci \\(c_{i}\\) and \\(c_{j}\\)) arm closest to \\(o_{i}\\). Figure 2 shows an example of this Voronoi diagram for uncertain objects \\(o_{1}\\) and \\(o_{2}\\). The figure also shows the neutral region (the region between two hyperbolic arms) for which the NN cannot be defined by using this Voronoi diagram. Since this Voronoi diagram divides the space based on only the distances (i.e., _mindist_ and _maxdist_ of objects), there may not exist any partition of the space when there is no point such that _mindist_ of an object is equal to _maxdist_ of the other object, i.e., when the regions of objects overlap or too close to each other. In this approach, a Voronoi cell \\(V_{i}\\) only contains those points in the data space that have \\(o_{i}\\) as the nearest object with probability one. Thus, this diagram is called a guaranteed Voronoi diagram for a given set of uncertain objects. However, in our application domain, an uncertain database can contain objects with overlapping ranges or objects with close proximity (or densely populated) [4; 5; 8]. Hence a PNN query returns a set of objects (possibly more than one) which have the possibilities of being the NN to the query point. Having such a data distribution, the guaranteed Voronoi diagram cannot divide the space at all, and as a result the neutral regions cover most of the data space for which no nearest object can be determined. However, for an efficient PMNN query evaluation we need to continuously find the most probable nearest object for each point of the query path. We propose a Probabilistic Voronoi Diagram (PVD) that works for any distribution of data objects. Cheng et al. [6] also propose a Voronoi diagram for uncertain data, called Uncertain-Voronoi diagram (UV-diagram). The UV-diagram partitions the space based on the distance metric similar to the guaranteed Voronoi diagram [12]. For each uncertain object \\(o_{i}\\), the UV-diagram defines a region (or UV-cell) where \\(o_{i}\\) has a non-zero probability of being the NN for any point in this region. The main difference of the UV-diagram from the guaranteed Voronoi diagram is that the guaranteed Voronoi diagram concerns about finding the region for a object where the object is _guaranteed_ to be the NN for any point in this region, on the other hand UV-diagram concerns about defining a region for an object where the object has a _chance_ of being the NN for any point in this region. 
For example, in Figure 2, all points on the left side of the hyperbolic arm closest to \(o_{2}\) have a non-zero probability of \(o_{1}\) being the NN, and thus the region to the left of this hyperbolic arm (i.e., the arm closest to \(o_{2}\)) defines the UV-cell for object \(o_{1}\). Similarly, the region to the right of the hyperbolic arm closest to \(o_{1}\) defines the UV-cell for object \(o_{2}\). Since both the UV-diagram and the guaranteed Voronoi diagram are based on similar distance metrics, the UV-diagram suffers from similar limitations as the guaranteed Voronoi diagram (as discussed above) and is not suitable for our purpose.

Figure 2: A guaranteed Voronoi diagram

## 4 Probabilistic Voronoi Diagram

A Probabilistic Voronoi Diagram (PVD) is defined as follows:

**Definition 4.1**.: _(PVD) Let \(O\) be a set of uncertain objects in a \(d\)-dimensional data space. The probabilistic Voronoi diagram partitions the data space into a set of disjoint regions, called Probabilistic Voronoi Cells (PVCs). The PVC of an object \(o_{i}\in O\) is a region or a set of non-contiguous regions, denoted by \(PVC(o_{i})\), such that \(p(o_{i},q)>p(o_{j},q)\) for any point \(q\in PVC(o_{i})\) and for any object \(o_{j}\in O-\{o_{i}\}\), where \(p(o_{i},q)\) and \(p(o_{j},q)\) are the probabilities of \(o_{i}\) and \(o_{j}\) of being the NNs to \(q\)._

The basic idea of computing a PVD is to identify the PVCs of all objects. To find the PVC of an object, we need to find the boundaries of the PVC with all neighboring objects. The boundary line/curve that separates two neighboring PVCs is called the probabilistic bisector of the two corresponding objects, as both objects have equal probabilities of being the NNs for any point on the boundary. Let \(o_{i}\) and \(o_{j}\) be two uncertain objects, and let \(pb_{o_{i}o_{j}}\) be the probabilistic bisector of \(o_{i}\) and \(o_{j}\) that separates \(PVC(o_{i})\) and \(PVC(o_{j})\). Then, for any point \(q\in pb_{o_{i}o_{j}}\), \(p(o_{i},q)=p(o_{j},q)\); for any point \(q\in PVC(o_{i})\), \(p(o_{i},q)>p(o_{j},q)\); and for any point \(q\in PVC(o_{j})\), \(p(o_{i},q)<p(o_{j},q)\).

A naive approach to compute the PVD requires the processing of PNN queries using Equation 1 at every possible location in the data space to determine the PVCs based on the calculated probabilities. This approach is _prohibitively_ expensive in terms of computational cost and thus impractical. In this paper, we propose an efficient and practical solution for computing the PVD for uncertain objects. Next, we show how to efficiently compute PVDs, focusing on 1-dimensional (1D) and 2-dimensional (2D) spaces. We briefly discuss higher dimensional cases at the end of this section.

### Probabilistic Voronoi Diagram in a 1D Space

Applications such as environmental monitoring and feature extraction systems capture one-dimensional uncertain attributes, and store these values in a database. In this section, we derive the PVD for 1D uncertain objects. An uncertain 1D object \(o_{i}\) can be represented as a range \([l_{i},u_{i}]\), where \(l_{i}\) and \(u_{i}\) are the lower and upper bounds of the range. Let \(m_{i}\) and \(n_{i}\) be the midpoint and the length of the range \([l_{i},u_{i}]\), i.e., \(m_{i}=\frac{l_{i}+u_{i}}{2}\) and \(n_{i}=u_{i}-l_{i}\).
The probabilistic bisector \\(pb_{o_{i}o_{j}}\\) of two 1D objects \\(o_{i}\\) and \\(o_{j}\\) is a point \\(x\\) within the range \\([min(l_{i},l_{j}),max(u_{i},u_{j})]\\) such that \\(p(o_{i},x)=p(o_{j},x)\\), and \\(p(o_{i},x^{\\prime})>p(o_{j},x^{\\prime})\\) for any point \\(x^{\\prime}<x\\) and \\(p(o_{i},x^{\\prime\\prime})<p(o_{j},x^{\\prime\\prime})\\) for any point \\(x^{\\prime\\prime}>x\\). Since only the equality condition is not sufficient, other two conditions must also hold. In our proof for lemmas, we will show that a probabilistic bisector needs to satisfy all three conditions. For example, Figure 3(b) shows two uncertain objects \\(o_{1}\\) and \\(o_{2}\\), and their probabilistic bisector \\(pb_{o_{1}o_{2}}\\) as a point \\(x\\). In this example, the lengths of range for \\(o_{1}\\) and \\(o_{2}\\) are \\(n_{1}=8\\) and \\(n_{2}=4\\), respectively, and the minimum distances from \\(x\\) to \\(o_{1}\\) and \\(o_{2}\\) are \\(d_{1}=1\\) and \\(d_{2}=3\\), respectively. Then based on Equation 1, we can compute the probabilities of \\(o_{1}\\) and \\(o_{2}\\) of being the NN to \\(x\\) as follows: \\[p(o_{1},x)=\\frac{2}{8}\\cdot\\frac{4}{4}+\\frac{1}{8}\\cdot\\frac{3}{4}+\\frac{1}{8 }\\cdot\\frac{2}{4}+\\frac{1}{8}\\cdot\\frac{1}{4}=\\frac{14}{32},\\] and \\[p(o_{2},x)=\\frac{1}{4}\\cdot\\frac{5}{8}+\\frac{1}{4}\\cdot\\frac{4}{8}+\\frac{1}{4 }\\cdot\\frac{3}{8}+\\frac{1}{4}\\cdot\\frac{2}{8}=\\frac{14}{32}.\\] A naive approach for finding the \\(pb_{o_{i}o_{j}}\\) requires the computation of probabilities (using Equation 1) of \\(o_{i}\\) and \\(o_{j}\\) for every position within the range \\([min(l_{i},l_{j}),max(u_{i},u_{j})]\\). To avoid high computational overhead of this naive approach, in our method we show that for two equi-range objects (i.e., \\(n_{i}=n_{j}\\)), we can always directly compute the probabilistic bisector (see Lemma 4.1) by using the upper and lower bounds of two candidate objects. Similarly, we also show that for two non-equi-range objects, where \\(n_{i}\ eq n_{j}\\), we can directly compute the probabilistic bisector for certain scenarios shown in Lemmas 4.2-4.3, and for the remaining scenarios of non-equi-range objects we exploit these lemmas to find probabilistic bisectors at reduced computational cost. Next, we present the lemmas for 1D objects. Lemma 4.1 gives the probabilistic bisector of two equi-range objects, overlapping and non-overlapping. Figure 3(a) is an example of a non-overlapping case. (Note that if \\(l_{i}=l_{j}\\) and \\(u_{i}=u_{j}\\), then two objects \\(o_{i}\\) and \\(o_{j}\\) are assumed to be the same and no probabilistic bisector exists between them.) **Lemma 4.1**: _Let \\(o_{i}\\) and \\(o_{j}\\) be two objects where \\(m_{i}\ eq m_{j}\\). If \\(n_{i}=n_{j}\\), then the probabilistic bisector \\(pb_{o_{i}o_{j}}\\) of \\(o_{i}\\) and \\(o_{j}\\) is the bisector of \\(m_{i}\\) and \\(m_{j}\\)._ Let \\(o_{i}\\) and \\(o_{j}\\) be two equi-range objects, i.e., \\(n_{i}=n_{j}\\). Let \\(x\\) be the bisector of two midpoints \\(m_{i}\\) and \\(m_{j}\\), i.e., \\(x=\\frac{m_{i}+m_{j}}{2}\\). Then, by using Equation 1, we can calculate the probability of \\(o_{i}\\) being the NN to \\(x\\) as follows. \\[p(o_{i},x)=\\sum_{s=1}^{n_{i}-1}\\frac{1}{n_{i}}\\frac{n_{j}-s}{n_{j}}.\\]Similarly, we can calculate the probability of \\(o_{j}\\) being the NN to \\(x\\), as follows. 
\\[p(o_{j},x)=\\sum_{s=1}^{n_{j}-1}\\frac{1}{n_{j}}\\frac{n_{i}-s}{n_{i}}.\\] If we put \\(n_{i}=n_{j}\\) in the above two equations, we have \\(p(o_{i},x)=p(o_{j},x)\\). Thus, the probabilities of \\(o_{i}\\) and \\(o_{j}\\) of being the NN from the point \\(x\\) are equal. Now, let \\(x^{\\prime}=\\frac{n_{i}+n_{j}}{2}-\\epsilon\\) be a point on the left side of \\(x\\). Then we can calculate the probability of \\(o_{i}\\) of being the NN to \\(x^{\\prime}\\) \\[p(o_{i},x^{\\prime})=2\\epsilon\\frac{n_{j}}{n_{i}n_{j}}+\\sum_{s=1}^{n_{i}-2 \\epsilon}\\frac{1}{n_{i}}\\frac{n_{j}-s}{n_{j}}.\\] Similarly, we can calculate the probability of \\(o_{j}\\) being the NN to \\(x^{\\prime}\\), as follows. \\[p(o_{j},x^{\\prime})=\\sum_{s=2\\epsilon+1}^{n_{i}-1}\\frac{1}{n_{j}}\\frac{n_{i}-s }{n_{i}}.\\] Now, if we put \\(n_{i}=n_{j}\\) in the above two equations, then we have \\(p(o_{i},x^{\\prime})>p(o_{j},x^{\\prime})\\) at \\(x^{\\prime}\\). Similarly we can prove that \\(p(o_{i},x^{\\prime\\prime})<p(o_{j},x^{\\prime\\prime})\\) for a point \\(x^{\\prime\\prime}\\) on the right side of \\(x\\). Thus, we can conclude that \\(x\\) is the probabilistic bisector of \\(o_{i}\\) and \\(o_{j}\\), i.e., \\(pb_{o_{i}o_{j}}=x\\). The following lemma shows how to compute the probabilistic bisector of two non-equi-range objects that are non-overlapping (see Figure 3(b)). **Lemma 4.2**.: _Let \\(o_{i}\\) and \\(o_{j}\\) be two non-overlapping objects, where \\(n_{i}\ eq n_{j}\\). If there are no other objects within the range \\([min(l_{i},l_{j}),max(u_{i},u_{j})]\\), then the probabilistic bisector \\(pb_{o_{i}o_{j}}\\) of \\(o_{i}\\) and \\(o_{j}\\) is the bisector of \\(m_{i}\\) and \\(m_{j}\\)._ Figure 3: Scenarios of lemmas Proof. Let \\(n_{i}>n_{j}\\), and \\(x\\) be the bisector of two midpoints \\(m_{i}\\) and \\(m_{j}\\) of objects \\(o_{i}\\) and \\(o_{j}\\), respectively, i.e., \\(x=\\frac{m_{i}+m_{j}}{2}\\), and the minimum distances from \\(x\\) to \\(o_{i}\\) and \\(o_{j}\\) are \\(d_{i}\\) and \\(d_{j}\\), respectively. Then, by using Equation 1, we can calculate the probability of \\(o_{i}\\) being the NN to \\(x\\) as follows. \\[p(o_{i},x) =(d_{j}-d_{i})\\frac{1}{n_{i}}\\frac{n_{j}}{n_{j}}+\\sum_{s=1}^{n_{j }-1}\\frac{1}{n_{i}}\\frac{n_{j}-s}{n_{j}}\\] \\[=(d_{j}-d_{i})\\frac{n_{j}}{n_{i}n_{j}}+\\frac{n_{j}(n_{j}-1)}{2n_{ i}n_{j}}.\\] Similarly, we can calculate the probability of \\(o_{j}\\) being the NN to \\(x\\) as follows. \\[p(o_{j},x)=\\sum_{s=1}^{n_{j}}\\frac{1}{n_{j}}\\frac{n_{i}-(d_{j}-d_{i}+s)}{n_{i}}.\\] Since, we have \\(d_{j}-d_{i}=\\frac{n_{i}-n_{j}}{2}\\), i.e., \\(n_{i}=2(d_{j}-d_{i})+n_{j}\\). By replacing \\(n_{i}\\) in the numerator of \\(p(o_{j},x)\\), we can have the following, \\[p(o_{j},x) =\\sum_{s=1}^{n_{j}}\\frac{1}{n_{j}}\\frac{2(d_{j}-d_{i})+n_{j}-(d_{ j}-d_{i}+s)}{n_{i}}\\] \\[=(d_{j}-d_{i})\\frac{n_{j}}{n_{i}n_{j}}+\\frac{n_{j}(n_{j}-1)}{2n_{ i}n_{j}}.\\] Since \\(p(o_{i},x)=p(o_{j},x)\\), we have \\(pb_{o_{i}o_{j}}=x\\). On the other hand, let \\(x^{\\prime}=\\frac{m_{i}+m_{j}}{2}-\\epsilon\\) be a point on the left side of the probabilistic bisector. Then, by using Equation 1, we can calculate the probability of \\(o_{i}\\) being the NN to \\(x^{\\prime}\\) as follows. \\[p(o_{i},x^{\\prime})=(d_{j}-d_{i}+2\\epsilon)\\frac{n_{j}}{n_{i}n_{j}}+\\frac{n_{ j}(n_{j}-1)}{2n_{i}n_{j}}.\\] Similarly, we can calculate the probability of \\(o_{j}\\) being the NN to \\(x^{\\prime}\\), as follows. 
\\[p(o_{j},x^{\\prime})=(d_{j}-d_{i}-2\\epsilon)\\frac{n_{j}}{n_{i}n_{j}}+\\frac{n_{ j}(n_{j}-1)}{2n_{i}n_{j}}.\\] So, we can say \\(p(o_{i},x^{\\prime})>p(o_{j},x^{\\prime})\\) for a point \\(x^{\\prime}\\) on the left side of \\(pb_{o_{i}o_{j}}\\). Similarly we can prove that \\(p(o_{i},x^{\\prime\\prime})<p(o_{j},x^{\\prime\\prime})\\) for a point \\(x^{\\prime\\prime}\\) on the right side of \\(pb_{o_{i}o_{j}}\\). For two non-equi-range objects that are overlapping, the following lemma directly computes the probabilistic bisector for the scenarios where lower, upper, or mid-point values of two candidate objects are same (see Figure 3(c), (d), and (e)). **Lemma 4.3**.: _Let \\(o_{i}\\) and \\(o_{j}\\) be two overlapping objects, where \\(n_{i}\ eq n_{j}\\), \\(l_{i}\\leq l_{j}\\leq u_{j}\\leq u_{i}\\), and there are no other objects within the range \\([\\min(l_{i},l_{j}),max(u_{i},u_{j})]\\)._ 1. _If_ \\(l_{i}=l_{j}\\)_, then the probabilistic bisector_ \\(pb_{o_{i}o_{j}}\\) _of_ \\(o_{i}\\) _and_ \\(o_{j}\\) _is the bisector of_ \\(m_{i}\\) _and_ \\(u_{j}\\)_._ 2. _If_ \\(u_{i}=u_{j}\\)_, then the probabilistic bisector_ \\(pb_{o_{i}o_{j}}\\) _of_ \\(o_{i}\\) _and_ \\(o_{j}\\) _is the bisector of_ \\(m_{i}\\) _and_ \\(l_{j}\\)_._ 3. _If_ \\(m_{i}=m_{j}\\)_, then the probabilistic bisectors_ \\(pb_{o_{i}o_{j}}\\) _of_ \\(o_{i}\\) _and_ \\(o_{j}\\) _are the bisectors of_ \\(l_{i}\\) _and_ \\(l_{j}\\)_, and_ \\(u_{i}\\) _and_ \\(u_{j}\\)_._ Proof. Let \\(n_{i}>n_{j}\\), \\(l_{i}=l_{j}\\), \\(x=\\frac{m_{i}+l_{j}}{2}\\), and \\(d\\) be the distance from \\(x\\) to both \\(m_{i}\\) and \\(l_{j}\\). Then, by using Equation 1, we can calculate the probability of \\(o_{i}\\) being the NN to \\(x\\) as follows. \\[p(o_{i},x) =\\sum_{s=1}^{d}\\frac{2}{n_{i}}\\frac{n_{j}}{n_{j}}+\\sum_{s=1}^{n_{j} -1}\\frac{2}{n_{i}}\\frac{n_{j}-s}{n_{j}}\\] \\[=\\frac{2d}{n_{i}}+\\frac{n_{j}(n_{j}-1)}{n_{i}n_{j}}.\\] Similarly, we can calculate the probability of \\(o_{j}\\) being the NN to \\(x\\) as follows. \\[p(o_{j},x)=\\sum_{s=1}^{n_{j}}\\frac{1}{n_{j}}\\frac{n_{i}-(2d+2s)}{n_{i}}.\\] However, \\(\\frac{n_{i}}{2}-n_{j}=2d\\), that is \\(n_{i}=4d+2n_{j}\\). By replacing \\(n_{i}\\) in the numerator and simplifying the term, we can have the following, \\(p(o_{j},x)=\\frac{2d}{n_{i}}+\\frac{n_{j}(n_{i}-1)}{n_{i}n_{j}}\\). Since \\(p(o_{i},x)=p(o_{j},x)\\), \\(pb_{o_{i}o_{j}}=x\\). Similar to Lemma 4.2, we can prove that \\(p(o_{i},x^{\\prime})>p(o_{j},x^{\\prime})\\) for any point \\(x^{\\prime}\\) on the left, and \\(p(o_{i},x^{\\prime\\prime})<p(o_{j},x^{\\prime\\prime})\\) for any point \\(x^{\\prime\\prime}\\) on the right side of \\(pb_{o_{i}o_{j}}\\). Similarly, we can prove the case for \\(u_{i}=u_{j}\\). Let \\(m_{i}=m_{j}\\), \\(x_{1}=\\frac{\\left\\{i+l_{j}\\right\\}}{2}\\), and \\(d\\) be the distance from \\(x_{1}\\) to both \\(l_{i}\\) and \\(l_{j}\\). Then, by using Equation 1, we can calculate the probability of \\(o_{i}\\) being the NN to \\(x_{1}\\) as follows. \\[p(o_{i},x_{1}) =\\sum_{s=1}^{d}\\frac{2}{n_{i}}\\frac{n_{j}}{n_{j}}+\\sum_{s=1}^{n_{ j}-1}\\frac{1}{n_{i}}\\frac{n_{j}-s}{n_{j}}\\] \\[=\\frac{2d}{n_{i}}+\\frac{n_{j}(n_{j}-1)}{2n_{i}n_{j}}.\\] Similarly, we can calculate the probability of \\(o_{j}\\) being the NN to \\(x_{1}\\) as follows. \\[p(o_{j},x_{1})=\\sum_{s=1}^{n_{j}}\\frac{1}{n_{j}}\\frac{n_{i}-(2d+s)}{n_{i}}.\\] However, \\(\\frac{n_{i}}{2}-\\frac{n_{j}}{2}=2d\\), that is \\(n_{i}=4d+n_{j}\\). 
By replacing \\(n_{i}\\) in the numerator and simplifying the term, we can have the following, \\(p(o_{j},x_{1})=\\frac{2d}{n_{i}}+\\frac{n_{j}(n_{j}-1)}{2n_{i}n_{j}}\\). Since \\(p(o_{i},x^{\\prime})=p(o_{j},x^{\\prime})\\), we have \\(pb_{o_{i}o_{j}}=x_{1}\\). Similar to Lemma 4.2, we can prove that \\(p(o_{i},x^{\\prime})>p(o_{j},x^{\\prime})\\) for any point \\(x^{\\prime}\\) on the left, and \\(p(o_{i},x^{\\prime\\prime})<p(o_{j},x^{\\prime\\prime})\\) for any point \\(x^{\\prime\\prime}\\) on the right side of \\(pb_{o_{i}o_{j}}\\). Similarly, we can prove that the other probabilistic bisector exists at \\(x_{2}=\\frac{u_{i}+u_{j}}{2}\\), as the case is symmetric to that of \\(x_{1}\\). Note that, since \\(n_{i}>n_{j}\\) and \\(m_{i}=m_{j}\\), \\(o_{i}\\) completely contains \\(o_{j}\\). Thus the probability of \\(o_{j}\\) is higher than that of \\(o_{i}\\) around the mid-point (\\(m_{i}\\)), and the probability of \\(o_{i}\\) is higher than that of \\(o_{j}\\) towards the boundary points (\\(l_{i}\\) and \\(u_{i}\\)). Therefore in this case, we have two probabilistic bisectors between \\(o_{i}\\) and \\(o_{j}\\). Figures 3(c-e) show an example of three cases as described in Lemma 4.3. Figure 3(c) shows the first case for objects \\(o_{1}\\) and \\(o_{2}\\), where \\(l_{1}=l_{2}\\) and \\(pb_{o_{1}o_{2}}=\\frac{m_{1}+u_{2}}{2}\\). Similarly, Figure 3(d) shows an example of the second case for objects \\(o_{1}\\) and \\(o_{2}\\), where \\(u_{1}=u_{2}\\) and \\(pb_{o_{1}o_{2}}=\\frac{m_{1}+l_{2}}{2}\\). Finally, Figure 3(e) shows an example of the third case for objects \\(o_{1}\\) and \\(o_{2}\\), where \\(m_{1}=m_{2}\\), and \\(x_{1}=\\frac{l_{i}+u_{2}}{2}\\) and \\(x_{2}=\\frac{u_{1}+u_{2}}{2}\\) are two probabilistic bisectors. In such a case, two probabilistic bisectors, \\(x_{1}\\) and \\(x_{2}\\), divide the space into three subspaces. That means, the Voronoi cell of object \\(o_{1}\\) comprises of two disjoint subspaces. In Figure 3(e), the subspace left to \\(x_{1}\\) and the subspace right to \\(x_{2}\\) form the Voronoi cell of \\(o_{1}\\), and the subspace bounded by \\(x_{1}\\) and \\(x_{2}\\) forms the Voronoi cell of \\(o_{2}\\). Apart from the above mentioned scenarios, the remaining scenarios of two overlapping non-equi-range objects are shown in Figure 4, where it is not possible to compute the probabilistic bisector directly by using lower and upper bounds of two candidate objects. In these scenarios, Lemma 4.3 can be used for choosing a point, called the initial probabilistic bisector, which approximates the actual probabilistic bisector and thereby reducing the computational overhead. Figure 4 (a), (b), (c) show three scenarios, where three cases of Lemma 4.3 (1), (2), (3), are used to compute the initial probabilistic bisector, respectively, for our algorithm. We will see (in Algorithm 1) how to use our lemmas to find the probabilistic bisectors for these scenarios. So far we have assumed that no other objects exist within the ranges of two candidate objects. However, the probabilities of two candidate objects may change in the presence of any other objects within their ranges (as shown in Equation 1). Only the probabilistic bisector of two equi-range objects remains the same in the presence of any other object within their ranges. Let \\(o_{k}\\) be the third object that overlaps with the range \\([min(l_{i},l_{j}),max(u_{i},u_{j})]\\) for the case in Figure 3(a). 
Then, using Equation 1, we can calculate the NN probability of object \(o_{i}\) from \(x\) as follows.

\[p(o_{i},x)=\sum_{s=1}^{n_{i}-1}\frac{1}{n_{i}}\frac{n_{j}-s}{n_{j}}\frac{n_{k}-s}{n_{k}}.\]

Similarly, we can calculate the NN probability of object \(o_{j}\) from \(x\) as follows.

\[p(o_{j},x)=\sum_{s=1}^{n_{i}-1}\frac{1}{n_{j}}\frac{n_{i}-s}{n_{i}}\frac{n_{k}-s}{n_{k}}.\]

Since \(n_{i}=n_{j}\), we have \(p(o_{i},x)=p(o_{j},x)\) and \(pb_{o_{i}o_{j}}=x\). Therefore, the probabilistic bisector \(pb_{o_{i}o_{j}}\) does not change with the presence of a third object.

Therefore, except for the case when the two candidate objects are equi-range, when any other object exists within the ranges of the two candidate objects we again use one of Lemmas 4.1-4.3 to compute the initial probabilistic bisector, and then find the actual probabilistic bisector. For example, if two non-equi-range candidate objects do not overlap each other (see Figure 3(b)) and a third object, which is not shown in the figure, exists within the range of these two candidate objects, then we use Lemma 4.2 to find the initial probabilistic bisector. Similarly, we choose the corresponding lemmas for the other scenarios to compute initial probabilistic bisectors. Then we use these computed initial probabilistic bisectors to find the actual probabilistic bisectors.

The position of a probabilistic bisector depends on the relative positions and the uncertainty regions of the two candidate objects. We have shown that for some scenarios the probabilistic bisectors can be directly computed using the proposed lemmas. In some other scenarios, there is no straightforward way to compute probabilistic bisectors. For this latter case, the initial probabilistic bisector of two candidate objects is chosen based on the actual probabilistic bisector of the scenario that can be directly computed and has the most similarity (in terms of relative positions of candidate objects) with the two candidate objects. This ensures that the initial probabilistic bisector is essentially close to the actual probabilistic bisector.

Figure 4: Remaining scenarios

_Algorithms:_ Based on the above lemmas, Algorithm 1 summarizes the steps of computing the probabilistic bisector \(pb_{o_{i}o_{j}}\) for any two objects \(o_{i}\) and \(o_{j}\), where \(O\) is a given set of objects and \(o_{i},o_{j}\in O\). If \(o_{i}\) and \(o_{j}\) satisfy any of Lemmas 4.1-4.3, the algorithm directly computes \(pb_{o_{i}o_{j}}\) (Lines 1.2-1.3). Otherwise, i.e., if any other object exists within the range of the two candidate non-equi-range objects \(o_{i}\) and \(o_{j}\), or if the two candidate non-equi-range objects fall in any of the scenarios shown in Figure 4, the algorithm first computes an initial probabilistic bisector \(ipb\) using our lemmas, where the given scenario has the most similarity in terms of relative positions of candidate objects to the corresponding lemma. Then, the algorithm uses the function \(FindProbBisector1D\) to find \(pb_{o_{i}o_{j}}\) by using \(ipb\) as a base.
Otherwise, the algorithm decides in which direction from \\(ipb\\) it should continue the search for \\(pb_{o_{i}o_{j}}\\). Let \\(x=ipb\\). We also assume that \\(o_{i}\\) is left to \\(o_{j}\\). If \\(p(o_{i},x)\\) is smaller than \\(p(o_{j},x)\\), then \\(pb_{o_{i}o_{j}}\\) is to the left of \\(x\\) and within the range \\([min(l_{i},l_{j}),x]\\), otherwise \\(pb_{o_{i}o_{j}}\\) is to the right of \\(x\\) and within the range \\([x,max(l_{i},l_{j})]\\). Since using lemmas, we choose \\(ipb\\) as close as possible to \\(pb_{o_{i}o_{j}}\\), in most of the cases the probabilistic bisector is found very close to the position of \\(ipb\\). Thus, as an alternative to directly running a binary search within the range, one can perform a step-wise search first, by increasing (or decreasing) the value of \\(x\\) until the probability ranking of two objects swaps. Since the precision of probability measures affects the performance of the above search, we assume that the two probability measures are equal when the difference between them is smaller than a threshold. The value of the threshold can be found experimentally given an application domain. Finally, Algorithm 2 shows the steps for computing a PVD for a set of 1D uncertain objects \\(O\\). In 1D data space, the PVD contains a list of bisectors that divides the total data space into a set of Voronoi cells or 1D ranges. The basic idea of Algorithm 2 is that, once we have the probabilistic bisectors of all pairs of objects in a sorted list, a sequential scan of the list can find the candidate probabilistic bisectors that comprise the probabilistic Voronoi diagram in 1D space. To avoid computing probabilistic bisectors for all pairs of objects \\(o_{i},o_{j}\\in O\\), we use the following heuristic: **Heuristic 4.1**.: _Let \\(o_{i}\\) be an object in the ordered (in ascending order of \\(l_{i}\\)) list of objects \\(O\\), and \\(o_{j}\\) be the next object right to \\(o_{i}\\) in \\(O\\). Let \\(x=pb_{o_{i}o_{j}}\\), and \\(d=dist(x,l_{i})\\). Let \\(o_{k}\\) be an object in \\(O\\). If \\(dist(x,l_{k})>d\\), then the probabilistic bisector \\(pb_{o_{i}o_{k}}\\) of \\(o_{i}\\) and \\(o_{k}\\) is \\(x^{\\prime}\\), and \\(x^{\\prime}\\) is to the right of \\(x\\), i.e., \\(x^{\\prime}>x\\); therefore \\(pb_{o_{i}o_{k}}\\) does not need to be computed._ Algorithm 2 runs as follows. First, the algorithm sorts all objects in ascending order of their lower bounds (Line 2.3). Second, for each object \\(o_{i}\\), it computes probabilistic bisectors of \\(o_{i}\\) with the next object \\(o_{j}\\in O\\) and with a set \\(N\\) of objects returned by the function \\(getCandidateObjects\\) based on Heuristic 4.1 (Lines 2.4-2.8). \\(PBL\\) maintains the list all computed probabilistic bisectors. Third, the algorithm sorts the list \\(PBL\\) in ascending order of the position of probabilistic bisectors and assigns the sorted list to \\(SPBL\\) (Line 2.9). Finally, from \\(SPBL\\), the algorithm selects probabilistic bisectors that contribute to the PVD (Lines 2.10-2.19). For this final step, the algorithm first finds the most probable NN \\(o^{\\prime}\\) with respect to the starting position of the data space. Then for each \\(pb_{o_{i}o_{j}}\\in SPBL\\), the algorithm decides whether \\(pb_{o_{i}o_{j}}\\) is a candidate for the \\(PVD\\) (Lines 2.11-2.19). We assume that \\(o_{i}\\) is the left side object and \\(o_{j}\\) is the right side object of the probabilistic bisector. 
If \\(o^{\\prime}=o_{i}\\), then \\(pb_{o_{i}o_{j}}\\) is included in the \\(PVD\\), and \\(o^{\\prime}\\) is updated with the most probable object on the right region of \\(pb_{o_{i}o_{j}}\\) (Line 2.17). Otherwise, \\(pb_{o_{i}o_{j}}\\) is discarded (Line 2.19). This process continues until \\(SPBL\\) becomes empty, and the algorithm finally returns \\(PVD\\). The proof of correctness and the complexity of this algorithm are provided as follows. _Correctness_: Let \\(SPBL\\) be the list of probabilistic bisectors in ascending order of their positions. Let \\(o^{\\prime}\\) be the most probable NN with respect to the starting point \\(l\\) of the 1D data space. Let \\(pb_{o_{i}o_{j}}\\) be the next probabilistic bisector fetched from \\(SPBL\\). Now we can have the following two cases: (i) Case 1: \\(o^{\\prime}=o_{i}\\). The probability \\(p_{i}\\) of \\(o_{i}\\) being the nearest is the highest for all points starting from \\(l\\) to \\(pb_{o_{i}o_{j}}\\) and the probability \\(p_{j}\\) of \\(o_{j}\\) being the nearest is the highest for points on the right side of \\(pb_{o_{i}o_{j}}\\) until the next valid probabilistic bisector is found. Hence, \\(pb_{o_{i}o_{j}}\\) is a valid probabilistic bisector and is added to the PVD. Then the algorithm updates \\(o^{\\prime}\\) by \\(o_{j}\\) since \\(o_{j}\\) will be the most probable on the right of \\(pb_{o_{i}o_{j}}\\) and will be on the left region of the next valid probabilistic bisector. (ii) Case 2: \\(o^{\\prime}\ eq o_{i}\\). Let us assume that \\(p_{i}>p^{\\prime}\\) at \\(pb_{o_{i}o_{j}}\\). We already know that \\(p^{\\prime}>p_{i}\\) at the starting point \\(l\\). So there should be some point within the range \\([l,pb_{o_{i}o_{j}}]\\) where \\(p^{\\prime}=p_{i}\\), which is the position of the probabilistic bisector of \\(o^{\\prime}\\) and \\(o_{i}\\). Since no such bisector is found within this range, \\(p_{i}>p^{\\prime}\\) is not true at \\(pb_{o_{i}o_{j}}\\). Thus, \\(p^{\\prime}\\) is the highest even at \\(pb_{o_{i}o_{j}}\\), and will remain the highest until it fetches another \\(pb_{o_{i}o_{j}}\\) from \\(SPBL\\), where \\(o^{\\prime}=o_{i^{\\prime}}\\). The above process continues until the algorithm reaches the end of the data space. _Complexity_: The complexity of Algorithm 2 can be determined as follows. Let \\(C_{b}\\) be the cost of computing the probability of an object being the NN of a query point, and \\(C_{pb}\\) be the cost of finding the probabilistic bisector of two objects. The complexity of Algorithm 2 is dominated by the complexity of executing the Lines 2.4-2.8, which is \\(O(nNC_{pb})\\), where \\(n\\) is the total number of objects, and \\(N\\) is the expected number of probabilistic bisectors that need to be computed for each object in \\(O\\). For real data sets, \\(N\\) is found to be a small value since each object has a small number of surrounding objects (in the worst case it can be \\(n-1\\)). The cost of \\(C_{pb}=O(C_{b}\\log_{2}D)\\), where \\(D\\) is the expected distance between our initial probabilistic bisector \\(ipb\\) and the actual probabilistic bisector. This is because, the cost of finding a probabilistic bisector is \\(O(1)\\) for the cases when our algorithm can directly compute the probabilistic bisector, and for other cases our algorithm first finds \\(ipb\\) by \\(O(1)\\) and then searches for the actual probabilistic bisector using \\(FindProbBisector1D\\) by \\(O(\\log D)\\). 
### Probabilistic Voronoi Diagram in a 2D Space In location-based applications, locations of objects such as a passenger and a building, in a 2D space can be uncertain due to the imprecision of data capturing devices or the privacy concerns of users. In these applications, the location of an object \\(o_{i}\\) can be represented as a circular region \\(R_{i}=(c_{i},r_{i})\\), where \\(c_{i}\\) is the center and \\(r_{i}\\) is the radius of the region, and the actual location of \\(o_{i}\\) can be anywhere in \\(R_{i}\\). The area of \\(o_{i}\\) is expressed as \\(A_{i}=\\pi r_{i}^{2}\\). In this section, we derive the PVD for 2D uncertain objects. Similar to the 1D case, a naive approach to find the probabilistic bisector \\(pb_{o_{i}o_{j}}\\) of \\(o_{i}\\) and \\(o_{j}\\) requires an exhaustive computation of probabilities using Equation 1 for every position in a large area. In our approach, we first show that we can directly compute \\(pb_{o_{i}o_{j}}\\) as the bisector \\(bs_{c_{i}e_{j}}\\) of \\(c_{i}\\) and \\(c_{j}\\) when two candidate objects are equi-range (i.e., \\(r_{i}=r_{j}\\)). Next, we show that for two non-equi-range objects (i.e., \\(r_{i}\ eq r_{j}\\)), depending on radii and relative positions of objects \\(pb_{o_{i}o_{j}}\\) slightly shifts from \\(bs_{c_{i}e_{j}}\\). In this case, we use \\(bs_{c_{i}e_{j}}\\) to choose a line, called the initial probabilistic bisector, to approximate the actual probabilistic bisector \\(pb_{o_{i}o_{j}}\\). Although for simplicity of presentation, we will use examples where two candidate objects are non-overlapping, Lemmas 4.4-4.7 also hold for overlapping objects. For two equi-range uncertain circular objects \\(o_{i}\\) and \\(o_{j}\\), we have the following lemma: **Lemma 4.4**: _Let \\(o_{i}\\) and \\(o_{j}\\) be two circular uncertain objects with uncertain regions \\((c_{i},r_{i})\\) and \\((c_{j},r_{j})\\), respectively. If \\(r_{i}=r_{j}\\), then the probabilistic bisector \\(pb_{o_{i}o_{j}}\\) of \\(o_{i}\\) and \\(o_{j}\\) is the bisector \\(bs_{c_{i}e_{j}}\\) of \\(c_{i}\\) and \\(c_{j}\\)._ Let \\(x\\) be any point on \\(bs_{c_{i}e_{j}}\\), and \\(d=mindist(x,o_{i})(\\text{or}\\;mindist(x,o_{j}))\\). Let there be no other objects within the circular range centered at \\(x\\) with radius \\(d+2r_{i}\\). Suppose circles centered at \\(x\\) with radii \\(d+1\\) to \\(d+2r_{i}\\) partition \\(o_{i}\\) into \\(2r_{i}\\) sub-regions \\(o_{i_{1}},o_{i_{2}}, ,o_{i_{2r_{i}}}\\), such that \\(\\sum_{s=1}^{2r_{i}}\\frac{o_{i_{k}}}{A_{i}}=1\\). Similarly, \\(o_{j}\\) is divided into \\(2r_{i}\\) sub-regions \\(o_{j_{1}},o_{j_{2}}, ,o_{j_{2r_{i}}}\\), where \\(\\sum_{s=1}^{2r_{i}}\\frac{o_{i_{k}}}{A_{j}}=1\\). By using Equation 1, we can calculate the probability of \\(o_{i}\\) being the nearest from \\(x\\), as follows. \\[p(o_{i},x)=\\sum_{s=d+1}^{2r_{i}+d}\\frac{o_{i_{s-d}}}{A_{i}}(1-\\sum_{u=d+1}^{s }\\frac{o_{i_{s-d}}}{A_{j}}).\\] Similarly, we can calculate the probability of \\(o_{j}\\) being the nearest from \\(x\\), as follows. \\[p(o_{j},x)=\\sum_{s=d+1}^{2r_{i}+d}\\frac{o_{j_{s-d}}}{A_{j}}(1-\\sum_{u=d+1}^{s }\\frac{o_{i_{s-d}}}{A_{i}}).\\] Since, \\(r_{i}=r_{j}\\) and \\(o_{i_{s}}=o_{j_{s}}\\) for all \\(1\\leq s\\leq 2r_{i}\\), we have \\(p(o_{i},x)=p(o_{j},x)\\). The probabilistic bisector \\(pb_{o_{1}o_{2}}\\) of two equi-range objects \\(o_{1}\\) and \\(o_{2}\\) is shown in Figure 5. 
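Lemma 4.4 can be sanity-checked numerically: for equi-range disks, a point on the perpendicular bisector of the two centers should give (approximately) equal NN probabilities, while for unequal radii the balance tips towards the smaller object. The sketch below estimates both probabilities by Monte Carlo sampling; the geometry is an illustrative assumption.

```python
import math
import random

random.seed(1)

def sample_in_disk(c, r):
    while True:
        dx, dy = random.uniform(-r, r), random.uniform(-r, r)
        if dx * dx + dy * dy <= r * r:
            return (c[0] + dx, c[1] + dy)

def nn_probs_two_disks(o1, o2, x, trials=200_000):
    """Estimate (p(o1, x), p(o2, x)) for two uniform circular objects o = (center, radius)."""
    w1 = 0
    for _ in range(trials):
        p1 = sample_in_disk(*o1)
        p2 = sample_in_disk(*o2)
        if math.dist(p1, x) < math.dist(p2, x):
            w1 += 1
    return w1 / trials, 1 - w1 / trials

# Equi-range disks: the midpoint of the centers lies on bs_{c1 c2}.
o1, o2 = ((0.0, 0.0), 2.0), ((10.0, 0.0), 2.0)
mid = (5.0, 0.0)
print(nn_probs_two_disks(o1, o2, mid))        # close to (0.5, 0.5), as Lemma 4.4 predicts

# Non-equi-range disks: at the same midpoint the smaller object is favoured (cf. Lemmas 4.5-4.6).
o1_big = ((0.0, 0.0), 4.0)
print(nn_probs_two_disks(o1_big, o2, mid))    # first entry noticeably below 0.5 for the larger object
```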
Lemmas 4.5 and 4.6 show how the probabilistic bisector of two non-equi-range objects \\(o_{i}\\) and \\(o_{j}\\) is related to the bisector of \\(c_{i}\\) and \\(c_{j}\\) (Figure 6 and 7). Next, we will show in Lemma 4.5 that the shape of \\(pb_{o_{i}o_{j}}\\) for two non-equi-range circular objects \\(o_{i}\\) and \\(o_{j}\\) is a curve, and the distance of this curve from \\(bs_{c_{i}e_{j}}\\) is maximum on the line \\(\\overline{c_{i}e_{j}}\\). Figure 6 shows the bisector \\(bs_{c_{i}e_{2}}\\) and the probabilistic bisector \\(pb_{o_{1}o_{2}}\\) for \\(o_{1}\\) and \\(o_{2}\\). **Lemma 4.5**.: _Let \\(o_{i}\\) and \\(o_{j}\\) be two objects with non-equi-range uncertain circular regions \\((c_{i},r_{i})\\) and \\((c_{j},r_{j})\\), respectively, and \\(bs_{c_{i}c_{j}}\\) be the bisector of \\(c_{i}\\) and \\(c_{j}\\). Then the maximum distance between \\(bs_{c_{i}c_{j}}\\) and \\(pb_{o_{i}o_{j}}\\) occurs on the line \\(\\overline{c_{i}c_{j}}\\). This distance gradually decreases as we move towards positive or negative infinity along the bisector \\(bs_{c_{i}c_{j}}\\)._ Proof. Let \\(x=\\frac{c_{i}+c_{j}}{2}\\) be the intersection point of \\(bs_{c_{i}c_{j}}\\) and \\(\\overline{c_{i}c_{j}}\\). Suppose a circle centered at \\(x\\) with radius \\(\\frac{dist(c_{i},c_{j})}{2}\\) divides \\(o_{i}\\) into \\(o_{i_{1}}\\) and \\(o_{i_{2}}\\), where \\(\\frac{o_{i_{1}}}{A_{i}}+\\frac{o_{i_{2}}}{A_{i}}=1\\), and \\(o_{j}\\) into \\(o_{j_{1}}\\) and \\(o_{j_{2}}\\), where \\(\\frac{o_{j_{1}}}{A_{j}}+\\frac{o_{j_{2}}}{A_{j}}=1\\). According to curvature properties of circles, since \\(r_{i}>r_{j}\\), we have \\(\\frac{o_{j_{1}}}{A_{i}}<\\frac{o_{j_{1}}}{A_{j}}\\) (in Figure 6, \\(\\frac{o_{j_{1}}}{A_{1}}<\\frac{o_{j_{1}}}{A_{2}}\\)), which intuitively means, \\(o_{j}\\) is a more probable NN than \\(o_{i}\\) to \\(x\\), i.e., \\(p(o_{j},x)>p(o_{i},x)\\). Thus, \\(x\\) needs to be shifted to a point towards \\(c_{i}\\) (along the line \\(\\overline{xc_{i}}\\)), such that the probabilities of \\(o_{i}\\) and \\(o_{j}\\) being the NNs to the new point become equal. Suppose a point \\(x^{\\prime}\\) is on \\(bs_{c_{i}c_{j}}\\) at the positive infinity. If a circle centered at \\(x^{\\prime}\\) goes through the centers of both objects \\(o_{i}\\) and \\(o_{j}\\), then the curvature of the portion of the circle that falls inside an object (\\(o_{i}\\) or \\(o_{j}\\)) will become a straight line. This is because, in this case we consider a small portion of the curve of an infinitely large circle. This circle divides both objects \\(o_{i}\\) and \\(o_{j}\\) into two equal parts \\(o_{i_{1}}=o_{i_{2}}\\) and \\(o_{j_{1}}=o_{j_{2}}\\), respectively. Thus, the probabilities of \\(o_{i}\\) and \\(o_{j}\\) being the NNs will approach to being equal at positive infinity, i.e., \\(p(o_{j},x^{\\prime})\\approx p(o_{i},x^{\\prime})\\), for a large values of \\(dist(x^{\\prime},x)\\). Similarly, we can show the case for a point \\(x^{\\prime\\prime}\\) at the negative infinity on \\(bs_{c_{i}c_{j}}\\) (see Figure 6). Next, we show in Lemma 4.6 that \\(pb_{o_{i}o_{j}}\\) shifts from \\(bs_{c_{i}c_{j}}\\) towards the object with larger radius, and the distance of \\(pb_{o_{i}o_{j}}\\) from \\(bs_{c_{i}c_{j}}\\) widens with the increase of the ratio of two radii (i.e., \\(r_{i}\\) and \\(r_{j}\\)). Figure 7 shows an example of this case. 
**Lemma 4.6**.: _Let \\(o_{i}\\) and \\(o_{j}\\) be two objects with non-equi-range uncertain circular regions \\((c_{i},r_{i})\\) and \\((c_{j},r_{j})\\), respectively, and \\(x=\\frac{c_{i}+c_{j}}{2}\\) be the midpoint of the line segment \\(\\overline{c_{i}c_{j}}\\). If \\(r_{i}>r_{j}\\), then the probabilistic bisector \\(pb_{o_{i}o_{j}}\\) meets \\(\\overline{c_{i}c_{j}}\\) at point \\(x^{\\prime}\\), where \\(x^{\\prime}\\) lies between \\(x\\) and \\(c_{i}\\). If the circular range of \\(o_{i}\\) increases such that \\(r_{i}^{\\prime}>r_{i}\\), then the new probabilistic bisector \\(pb^{\\prime}_{o_{i}o_{j}}\\) meets \\(\\overline{c_{i}c_{j}}\\) at point \\(x^{\\prime\\prime}\\), where \\(x^{\\prime\\prime}\\) lies between \\(x\\) and \\(c_{i}\\), and \\(dist(x,x^{\\prime})<dist(x,x^{\\prime\\prime})\\)._ Proof.: Suppose a circle centered at \\(x\\) with radius \\(\\frac{dist(c_{i},c_{j})}{2}\\) divides \\(o_{i}\\) into \\(o_{i_{1}}\\) and \\(o_{i_{2}}\\), where \\(\\frac{o_{i_{1}}}{A_{i}}+\\frac{o_{i_{2}}}{A_{i}}=1\\), and \\(o_{j}\\) into \\(o_{j_{1}}\\) and \\(o_{j_{2}}\\), where \\(\\frac{o_{j_{1}}}{A_{j}}+\\frac{o_{j_{2}}}{A_{j}}=1\\). According to curvature properties of circles, since \\(r_{i}>r_{j}\\), we have \\(\\frac{o_{i_{1}}}{A_{i}}<\\frac{o_{j_{1}}}{A_{j}}\\) (in Figure 7, \\(\\frac{o_{1_{1}}}{A_{1}}<\\frac{o_{2_{1}}}{A_{2}}\\)), which intuitively means that \\(o_{j}\\) is a more probable NN than \\(o_{i}\\) to \\(x\\), i.e., \\(p(o_{j},x)>p(o_{i},x)\\). Thus, \\(x\\) needs to be shifted to a point \\(x^{\\prime}\\) towards \\(c_{i}\\), such that the probabilities of \\(o_{i}\\) and \\(o_{j}\\) being the NN to \\(x^{\\prime}\\) become equal. Let \\(o^{\\prime}_{i}\\) be an object such that \\(r^{\\prime}_{i}>r_{i}\\) and \\(c^{\\prime}_{i}=c_{i}\\). Then the circle centered at \\(x\\) with radius \\(\\frac{dist(c_{i},c_{j})}{2}\\) divides \\(o^{\\prime}_{i}\\) into \\(o^{\\prime}_{i_{1}}\\) and \\(o^{\\prime}_{i_{2}}\\), where \\(\\frac{o^{\\prime}_{i_{1}}}{A^{\\prime}_{i}}+\\frac{o^{\\prime}_{i_{2}}}{A^{\\prime}_{i}}=1\\). Now, we have \\(\\frac{o^{\\prime}_{i_{1}}}{A^{\\prime}_{i}}<\\frac{o_{i_{1}}}{A_{i}}<\\frac{o_{j_{1}}}{A_{j}}\\). Thus, \\(x\\) needs to be shifted to a point \\(x^{\\prime\\prime}\\) further towards \\(c_{i}\\), i.e., \\(dist(x,x^{\\prime})<dist(x,x^{\\prime\\prime})\\), such that the probabilities of \\(o^{\\prime}_{i}\\) and \\(o_{j}\\) being the NNs become equal at \\(x^{\\prime\\prime}\\). The next lemma shows the influence of a third object on the probabilistic bisector of two non-equi-range objects. (Note that the probabilistic bisector of two equi-range objects does not change with the influence of any other object; see Lemma 4.4.) Figure 8 shows an example, where object \\(o_{3}\\) influences the probabilistic bisector of objects \\(o_{1}\\) and \\(o_{2}\\). In this figure, the dotted circle centered at \\(s_{1}\\) with radius \\(dist(s_{1},c_{1})+r_{1}\\) encloses one candidate object, \\(o_{1}\\), but only touches the third object \\(o_{3}\\). Thus, the probability of \\(o_{3}\\) being the NN to \\(s_{1}\\) is zero. However, for any point between \\(s_{1}\\) and \\(s_{2}\\), \\(o_{3}\\) has a non-zero probability of being the NN of that point, and thus \\(o_{3}\\) influences \\(pb_{o_{1}o_{2}}\\).
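Before turning to the influence of other objects (Lemma 4.7 below), the shift described in Lemmas 4.5 and 4.6 can be located numerically: starting from the midpoint \\(x\\), move towards the center of the larger object until the two NN probabilities balance. The sketch below does this with a bisection over the segment from \\(c_{j}\\) to \\(c_{i}\\), reusing the Monte Carlo estimator nn_probability from the previous sketch; the bracketing interval, sample counts, and example radii are illustrative assumptions, and Monte Carlo noise limits the attainable precision.

```python
import numpy as np

def crossing_point(o_i, o_j, prob, iters=20, samples=50_000):
    """Bisection along c_j -> c_i for the point x' with p(o_i, x') = p(o_j, x')
    (Lemmas 4.5-4.6). Assumes r_i > r_j, so x' lies between the midpoint and c_i."""
    c_i, c_j = np.asarray(o_i[0], float), np.asarray(o_j[0], float)
    lo, hi = 0.5, 0.95                     # fraction of the way from c_j to c_i
    for _ in range(iters):
        t = 0.5 * (lo + hi)
        x = c_j + t * (c_i - c_j)
        p_i, p_j = prob(tuple(x), [o_i, o_j], samples=samples)
        if p_i < p_j:                      # o_j still more probable: move towards c_i
            lo = t
        else:
            hi = t
    return c_j + 0.5 * (lo + hi) * (c_i - c_j)

# The shift of x' from the midpoint grows with the ratio r_i / r_j (Lemma 4.6).
o_j = ((10.0, 0.0), 2.0)
for r_i in (3.0, 5.0, 8.0):
    o_i = ((-10.0, 0.0), r_i)
    x_prime = crossing_point(o_i, o_j, nn_probability)
    print(f"r_i = {r_i}: x' = {x_prime}, shift towards c_i = {abs(x_prime[0]):.2f}")
```

This mirrors the role the initial probabilistic bisector plays in the algorithms below: the bisector of the centers only provides the starting line, and the actual crossing is found by refinement.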
**Lemma 4.7**.: _Let \\(o_{i}\\) and \\(o_{j}\\) be two objects with non-equi-range uncertain circular regions \\((c_{i},r_{i})\\) and \\((c_{j},r_{j})\\), respectively, where \\(r_{i}<r_{j}\\), and \\(bs_{c_{i}c_{j}}\\) be the bisector of \\(c_{i}\\) and \\(c_{j}\\). An object \\(o_{k}\\) influences the probabilistic bisector \\(pb_{o_{i}o_{j}}\\) for the part of the segment \\([s_{1},s_{2}]\\) on the line \\(bs_{c_{i}c_{j}}\\) where \\(dist(s,c_{i})+r_{i}>dist(s,c_{k})-r_{k}\\) for \\(s\\in bs_{c_{i}c_{j}}\\)._ Proof.: Since \\(r_{i}<r_{j}\\), we have \\(maxdist(s,o_{i})<maxdist(s,o_{j})\\). Thus, if the minimum distance \\(mindist(s,o_{k})\\) of an object \\(o_{k}\\) from \\(s\\) is greater than the maximum distance \\(maxdist(s,o_{i})\\) of \\(o_{i}\\) from \\(s\\), i.e., \\(dist(s,c_{k})-r_{k}>dist(s,c_{i})+r_{i}\\), the object \\(o_{k}\\) cannot be the NN to the point \\(s\\); otherwise \\(o_{k}\\) has the possibility of being the NN to \\(s\\), and hence \\(o_{k}\\) influences \\(pb_{o_{i}o_{j}}\\). It is noted that when the centers of two non-equal objects coincide, the probability of the smaller object dominates the probability of the larger object. Therefore, in those cases, we only consider the object with the smaller radius, and the other object is discarded. Also, if two objects are equal and their centers coincide, no probabilistic bisector exists between them; thus, either of these two objects is considered for computing the PVD. _Algorithms:_ Based on the above lemmas, we propose algorithms to find the probabilistic bisector of any two uncertain 2D objects. We have shown in Lemma 4.4 that the probabilistic bisector of two circular uncertain objects is a straight line when the radii of the two objects are equal. On the other hand, Lemmas 4.5 and 4.6 show that the probabilistic bisector is a curve when the radii of the two objects are non-equal. However, to avoid the computational and maintenance costs, we maintain a bounding box (i.e., a quadrilateral) that encloses the actual probabilistic bisector of two objects. Hence, we name the probabilistic bisector of two circular objects the _Probabilistic Bisector Region_ (PBR). For example, the bounding box that encloses the curve in Figure 6 is the PBR for two objects \\(o_{1}\\) and \\(o_{2}\\). In our algorithm, we first create an ordinary Voronoi diagram by using the centers of all uncertain objects. Then, from each Voronoi edge \\(e_{ij}\\) (i.e., \\(bs_{c_{i}c_{j}}\\)) of two objects \\(o_{i}\\) and \\(o_{j}\\), we compute the PBR that encloses \\(pb_{o_{i}o_{j}}\\). Algorithm 3 computes the probabilistic bisector of two equi-range objects according to Lemma 4.4 (Line 32). Otherwise, it calls the function \\(FindProbBisector2D\\) to determine \\(pb_{o_{i}o_{j}}\\) for two non-equi-range objects \\(o_{i}\\) and \\(o_{j}\\).
Figure 7: Influence of objects' sizes on the probabilistic bisector
If \\(x^{\\prime}\\) is to the left of \\(e_{ij}\\), then _lval_ and _hval_ are set to \\(x^{\\prime}\\) and \\(x\\), respectively. On the other hand, if \\(x^{\\prime}\\) is to the right of \\(e_{ij}\\), then _lval_ and _hval_ are set to \\(x\\) and \\(x^{\\prime}\\), respectively. After that, the function \\(FindInfluencePart\\) finds a list \\(IL\\) that contains the different segments of the bisector \\(e_{ij}\\) where other objects influence \\(pb_{o_{i}o_{j}}\\) (see Lemma 4.7). The function returns \\(IL\\) as an empty list when no other object influences the probabilistic bisector. In that case, the current _lval_ and _hval_ define \\(pbr\\).
In Lemma 4.5, we have seen that the maximum distance of \\(pb_{o_{i}o_{j}}\\) from the bisector of \\(c_{i}\\) and \\(c_{j}\\) occurs on the line \\(\\overline{c_{i}c_{j}}\\). Thus, the initially computed \\(pbr\\) encloses the curve of \\(pb_{o_{i}o_{j}}\\). On the other hand, if \\(IL\\) is not empty, then for each line segment \\(ls\\in IL\\), the function \\(UpdatePBRBound\\) is called to update _lval_ and _hval_ based on the influence of other objects. As _lval_ and _hval_ represent the deviation of \\(pb_{o_{i}o_{j}}\\) from \\(e_{ij}\\), we need to compute the deviations for each line segment \\(ls\\), and then take the minimum of all _lval_s and the maximum of all _hval_s to compute the \\(pbr\\). To avoid a brute-force approach of computing _lval_ and _hval_ for every point of an \\(ls\\in IL\\), we compute _lval_ and _hval_ for the two extreme points and the mid-point of \\(ls\\). Finally, the algorithm returns \\(pbr\\) for \\(pb_{o_{i}o_{j}}\\). Algorithm 5 shows the steps of \\(ProbVoronoi2D\\), which computes the \\(PVD\\) for a given set \\(O\\) of 2D objects. In Line 5.2, the algorithm first creates a Voronoi diagram \\(VD\\) for all centers \\(c_{i}\\) of objects \\(o_{i}\\in O\\) using [22]. Then, for each Voronoi edge \\(e_{ij}\\) between two objects \\(o_{i}\\) and \\(o_{j}\\), the algorithm calls the function \\(ProbBisector2D\\) to compute the probabilistic bisector as a PBR between the two candidate objects, and finally it returns the PVD for the given set \\(O\\) of objects.
Figure 8: Influence of object \\(o_{3}\\) on the probabilistic bisector of \\(o_{1}\\) and \\(o_{2}\\)
Figure 9 shows the PVD for objects \\(o_{1}\\), \\(o_{2}\\), and \\(o_{3}\\). In this figure, \\(PVC(o_{1})\\), \\(PVC(o_{2})\\), and \\(PVC(o_{3})\\) represent the PVCs for objects \\(o_{1}\\), \\(o_{2}\\), and \\(o_{3}\\), respectively. The boundaries between PVCs, i.e., the PBRs of objects, \\(\\mathit{pbr}_{o_{1}o_{2}}\\), \\(\\mathit{pbr}_{o_{2}o_{3}}\\), and \\(\\mathit{pbr}_{o_{1}o_{3}}\\), are shown using grey bounded regions. For any point inside a PVC, the corresponding object is guaranteed to be the most probable NN. On the other hand, for any point inside a PBR, either of the two objects that share the PBR can be the most probable NN. If more than two PBRs intersect each other in a region, any object associated with these PBRs can be the most probable NN to a query point in that region. Figure 9 shows a dark grey region where \\(\\mathit{pbr}_{o_{1}o_{2}}\\), \\(\\mathit{pbr}_{o_{1}o_{3}}\\), and \\(\\mathit{pbr}_{o_{2}o_{3}}\\) meet. _Complexity:_ The complexity of Algorithm 5 can be estimated as follows. The complexity of creating a Voronoi diagram (Line 5.2) is \\(O(n\\log n)\\)[22], where \\(n\\) is the number of objects. The complexity of finding probabilistic bisectors (Lines 5.3-5.4) is \\(O(n_{e}C_{pb})\\), where \\(n_{e}\\) is the number of Voronoi edges and \\(C_{pb}\\) is the expected cost of computing the probabilistic bisector between two circular objects. For real data sets, the number of Voronoi edges per object is expected to be small, since an object has only a small number of surrounding objects. The total complexity of the algorithm is \\(O(n\\log n)+O(n_{e}C_{pb})\\). \\(C_{pb}\\) can be estimated as follows.
Let \\(C_{b}\\) be the cost of computing the probability of an object being the NN of a query point, \\(D\\) be the expected distance between the initial probabilistic bisector \\(ipb\\) and the actual probabilistic bisector, and \\(L\\) be the expected number of points on the bisector that need to be considered to find the upper and lower bounds of the probabilistic bisector. Then we have \\(C_{pb}=O(LC_{b}\\log_{2}D)\\). This is because the cost of finding a probabilistic bisector is \\(O(1)\\) for the cases when our algorithm can directly compute the probabilistic bisector; for the other cases, our algorithm first finds \\(ipb\\) in \\(O(1)\\) time and then searches for the actual probabilistic bisector using Algorithm 4 in \\(O(LC_{b}\\log D)\\) time. Note that, for both 1D and 2D, the run-time behavior of our algorithm is dominated by those cases for which there is no closed form for a given probabilistic bisector, i.e., the algorithm needs to search for the bisector by using the initial probabilistic bisector. ### Discussion _PVD for Other Distributions:_ In this paper, we assume uniform distributions for the pdfs of uncertain objects to illustrate the concept of the PVD. However, the pdf that describes the distribution of an object inside the uncertainty region can follow an arbitrary distribution, e.g., Gaussian. The concept of the PVD can be extended to any arbitrary distribution. For example, for an object with a Gaussian pdf having a circular uncertain region, the probability of the object being located around the center of the circular region is higher than that of being near the boundary of the circle.
Figure 9: The PVD of three objects \\(o_{1}\\), \\(o_{2}\\), and \\(o_{3}\\)
For such distributions, a straightforward approach to compute the probabilistic bisector between any pair of objects is as follows. First, we can use the bisector of the centroids of the two candidate objects as the initial probabilistic bisector. Then we can refine the initial probabilistic bisector to find the actual probabilistic bisector. Finding suitable initial probabilistic bisectors for the efficient computation of probabilistic bisectors (e.g., lemmas for different cases for 1D and 2D data sets, similar to the uniform pdf) for an arbitrary distribution is the scope of future investigation. _PVD for Higher Dimensions:_ We can compute the PVD for higher dimensional spaces, similar to 1D and 2D spaces. For example, in a 3D space, an uncertain object can be represented as a sphere instead of a circle in 2D. Then, the probabilistic bisector of two equal-size spheres will be the plane bisecting the centers of the two spheres. Using this as a basis, similar to 2D, we can compute the PVD for 3D objects. We omit a detailed discussion on PVDs in spaces of more than 2 dimensions. _Higher order PVDs:_ In this paper, we focus on the first order PVD. By using this PVD, we can find the NN for a given query point. Thus, the PVD can be used for continuously reporting the 1-NN for a moving query point. To generalize the concept for \\(k\\)NN queries, we need to develop the \\(k\\)-order PVD. The basic idea would be to find the probabilistic bisectors among size-\\(k\\) subsets of objects. The detailed investigation of higher order PVDs is a topic of future study. _Handling Updates:_ To handle updates on the data objects, like traditional Voronoi diagrams, a straightforward approach is to recompute the entire PVD. There are algorithms [23; 24] to incrementally update a traditional Voronoi diagram.
Similar ideas can be applied to the PVD to derive incremental update algorithms. We will defer such incremental update algorithms for future work. It is noted that, to avoid an expensive computation of the PVD for the whole data set and to cope with updates for the data objects, we propose an alternative approach based on the concept of local PVD (see Section 5.5.2). In this approach, only a subset of objects that fall within a specified range of the current position of the query is retrieved from the server and then the local PVD is created for these retrieved objects to answer PMNN queries. If there is any update inside the specified range, the process needs to be repeated. Since, this approach works only with the surrounding objects of a query, updates from objects that are outside the range do not affect the performance of the system. ## 5 Processing PMNN Queries In this section, based on the concept of PVD we propose two techniques: a pre-computation approach and an incremental approach for answering PMNN queries. In the pre-computation approach, we first create the PVD for the whole data set and then index the PVCs for answering PMNN queries. We name the pre-computation based technique for processing PMNN queries as P-PVD-PMNN. On the other hand, in the incremental approach, we retrieve a set of surrounding objects with respect the current query location and then create the local PVD for these retrieved data set, and finally use this local PVD to answer PMNN queries. We name this approach I-PVD-PMNN in this paper. ### Pre-computation Approach In the pre-computation approach, we first create the PVD for all objects in the database. After computing the PVD, we only need to determine the current Probabilistic Voronoi Cell (PVC), where the current query point is located. The query evaluation algorithm can be summarized as follows. Initially, the query issuer requests the most probable NN for the current query position \\(q\\). After receiving the PMNN request for \\(q\\), the server algorithm finds the current PVC to which the query point falls into using a function \\(IdentifyPVC\\) and updates \\(cpvc\\) with the current PVC. The algorithm reports the corresponding object \\(p\\) as the most probable NN and the cell \\(cpvc\\) to the query issuer. Next time when \\(q\\) is updated at the query issuer, if \\(q\\) falls inside \\(cpvc\\), no request is made to the server as the most probable NN has not been changed. Otherwise, the query issuer again sends the PMNN request to the server to determine the new PVC and the answer for the updated query position. As the PVD in a 1D space contains a set of non-overlapping ranges representing PVCs for objects, the algorithm returns a single object as the most probable NN for any query point. On the other hand, in a 2D space, the boundary between two PVCs is a region (i.e., PBR) rather than a line. When a query point falls inside a PBR, the algorithm can possibly return both objects that share the PBR as the most probable NNs, or preferably can decide the most probable NN by computing a top-1-PNN query. (Since, for a realistic setting a PBR is small region compared to that of PVCs,our approach incurs much less computational overhead than that of the sampling based approach for processing a PMNN query.) Figure 10(a) shows that when the query point is at \\(q^{\\prime}\\), PVD-PMNN returns \\(o_{3}\\) as the most probable NN as \\(q^{\\prime}\\) falls into PVC(\\(o_{3}\\)). 
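Before continuing the example of Figure 10, the client-side evaluation loop summarized above can be sketched as follows; `server.identify_pvc` stands in for the \\(IdentifyPVC\\) lookup on the pre-computed PVD and `pvc.contains` for a point-in-cell test, both of which are hypothetical names rather than the paper's interfaces.

```python
def p_pvd_pmnn(query_positions, server):
    """Client-side loop of P-PVD-PMNN: the server is contacted only when the
    query point leaves the cached PVC (cpvc); otherwise the cached answer is reused."""
    answers = []
    cpvc, current_obj = None, None
    for q in query_positions:
        if cpvc is None or not cpvc.contains(q):
            current_obj, cpvc = server.identify_pvc(q)   # one client-server round trip
        answers.append((q, current_obj))                  # most probable NN at q
    return answers
```

The number of round trips is therefore tied to the number of PVC boundaries crossed by the trajectory, rather than to the number of sampled positions as in Naive-PMNN.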
When the query point moves to \\(q^{\\prime\\prime}\\), the algorithm returns \\(o_{2}\\) as the answer. A naive approach of identifying the desired PVC (i.e., \\(IdentifyPVC\\) function) requires an exhaustive search of all the PVCs in a PVD, which is an expensive operation. Indexing Voronoi diagrams [25; 26; 27] is an well-known approach for efficient nearest neighbor search in high-dimensional spaces. Thus, for efficient search of the PVCs, we index the PVCs of the PVD using an \\(R^{*}\\)-tree [28], a variant of the \\(R\\)-tree [29; 27]. In a 1D space, each PVC is represented as a 1D range and is indexed using a 1D \\(R^{*}\\)-tree. Since there is no overlap among PVCs, a query point always falls inside a single PVC, where the corresponding object is the most probable NN to the query point. On the other hand, in a 2D space, each PVC cell is enclosed using a Minimum Bounding Rectangle (MBR), and is indexed using a 2D \\(R^{*}\\)-tree. Since the MBRs representing PVCs overlap each other, when a query point falls inside only a single MBR, the corresponding object is confirmed to be the most probable NN to the query point. However, when a query point falls inside the overlapping region of two or more MBRs, the actual most probable NN can be identified by checking the PVCs of all candidate MBRs. Figure 10(b) shows the MBRs \\([B_{1},B_{2},B_{3},B_{4}]\\), \\([B_{5},B_{6},B_{7},B_{8}]\\), and \\([B_{9},B_{10},B_{11},B_{12}]\\) for the PVCs of objects \\(o_{1}\\), \\(o_{2}\\), and \\(o_{3}\\), respectively. In this example, the query point \\(q^{\\prime}\\) intersects both \\([B_{5},B_{6},B_{7},B_{8}]\\) and \\([B_{9},B_{10},B_{11},B_{12}]\\), and the actual most probable NN \\(o_{3}\\) can be determined by checking the PVCs of \\(o_{3}\\) and \\(o_{2}\\); on the other hand, the query point \\(q^{\\prime\\prime\\prime}\\) only intersects a single MBR \\([B_{5},B_{6},B_{7},B_{8}]\\), so the corresponding object \\(o_{2}\\) is the most probable NN to \\(q^{\\prime\\prime\\prime}\\). Since the above approach only retrieves the current PVC of a moving query point, it needs to access the PVD using the \\(R^{*}\\)-tree as soon as the query leaves the current PVC. This may incur more I/O costs than what can be achieved. To further reduce I/O and improve the processing time, we use a buffer management technique, where instead of only retrieving the PVC that contains the given query point, we retrieve all PVCs whose MBRs intersect with a given range, called a buffer window, for a given query point. These PVCs are buffered and are used to answer subsequent query points of a moving query. This process is repeated for a PMNN query when the buffered cells cannot answer the query. Since the creation of the entire PVD is computationally expensive, the pre-computation based approach is justified when the PVD can be re-used which is the case for static data, or when the query spans over the whole data space. To avoid expensive pre-computation, next, we propose an incremental approach which is preferable when the query is confined to a small region of the data space or when there are frequent updates in the database. ### Incremental Approach In this section, we describe our incremental evaluation technique for processing a PMNN query based on the concept of known region and the local PVD. Next, we briefly discuss the concept of known region, and then present the detailed algorithm of our incremental approach. 
Figure 10: (a) The PVD, and (b) the MBRs of PVCs for objects \\(o_{1}\\)-\\(o_{3}\\)
**Known Region:** Intuitively, the _known region_ is an explored data space where the positions of all objects are known. We define the known region as a circular region that bounds the top-\\(k\\) probable NNs with respect to the current query point (i.e., the center point of the region). For a given point \\(q_{s}\\), the server expands the search space to incrementally access objects in the order of their \\(mindist\\) from \\(q_{s}\\) until it finds the top \\(k\\) probable nearest neighbors with respect to \\(q_{s}\\) (we use an existing algorithm [8] to find the top-\\(k\\) NNs). Then the known region is determined by a circular region centered at \\(q_{s}\\) that encloses all these \\(k\\) objects. Figure 11 shows the known circular region \\(K(q_{s},r)\\) using a dashed circle, where \\(k=3\\). The radius \\(r\\) of this known area is determined by \\(max(maxdist(q_{s},o_{1}),maxdist(q_{s},o_{2}),maxdist(q_{s},o_{3}))\\). In this example, the top-3 most probable nearest neighbors are \\(o_{1}\\), \\(o_{2}\\), and \\(o_{3}\\). The key idea of the incremental approach is to consider only a subset of objects surrounding the moving query point while evaluating a PMNN query. For example, in a client-server paradigm, the client first requests the server for objects and the known region by providing the starting point of the moving query path as a query point. Then the client locally creates a PVD based on the retrieved objects, and uses the local PVD for answering the PMNN query. This process needs to be repeated as soon as the user's request for the PMNN query cannot be satisfied by the already retrieved data at the client. Though this incremental approach applies to both centralized and client-server paradigms, without loss of generality, next we explain how to incrementally evaluate a PMNN query in the client-server paradigm. **Algorithm:** After retrieving a set of objects from the server, the client locally computes a PVD for those objects. Then, the client can use the local PVD to determine the most probable nearest neighbor among the objects inside the known region. However, since the client does not have any knowledge about objects that are outside of the known region, the most probable nearest neighbor based on the local PVD formed for objects inside the known region might not be the most probable nearest neighbor with respect to _all_ objects in the database. This is because a PVC of the local PVD determines the region where the corresponding object is the most probable NN with respect to objects inside the known region. However, certain locations of the PVC can have other non-retrieved objects, which are outside the known region, as the most probable NN. Thus, we need to determine a region in the PVC for which the query result is guaranteed. That is, all locations inside this guaranteed region will have the corresponding object as the most probable NN. To define the guaranteed region for an object, we have two conditions. Let \\(q\\) be a query point and \\(o_{i}\\) be an object inside the known region. If the query point \\(q\\) is inside a PVC cell of object \\(o_{i}\\) and the condition in the following equation (Equation 2) holds, then it is ensured that \\(o_{i}\\) is the most probable NN among all objects in the database. \\[maxdist(q,o_{i})\\leq r-dist(q,q_{s}). \\tag{2}\\]
The condition in Equation 2 ensures that no object outside the known region can be the nearest neighbor for the given query point. This is because, when a circle centered at \\(q\\) completely contains an object, all objects outside this circle have zero probability of being the NN to \\(q\\). To formally define a region based on the above inequality, we re-arrange Equation 2 as follows. \\[dist(q,c_{i})+r_{i}\\leq r-dist(q,q_{s})\\] \\[\\Rightarrow dist(q,c_{i})+dist(q,q_{s})\\leq r-r_{i}\\] We can see that the boundary of the region defined by the above inequality forms an ellipse in a 2D Euclidean space, where the two foci of the ellipse are \\(q_{s}\\) and \\(c_{i}\\); i.e., the sum of the distances from \\(q_{s}\\) and \\(c_{i}\\) to any point on the ellipse is \\(r-r_{i}\\). Figure 11 shows an example, where the elliptical region for object \\(o_{2}\\) is shown using a dashed border.
Figure 11: Known region and objects \\(o_{1}\\), \\(o_{2}\\), and \\(o_{3}\\)
Figure 11 shows that when the query point is at \\(q_{1}\\), the object \\(o_{2}\\) is confirmed to be the most probable nearest neighbor, as \\(dist(q_{1},c_{2})+r_{2}<r-dist(q_{1},q_{s})\\). From the above discussion, we see that for an object \\(o_{i}\\), the intersection of the PVC and the elliptical region for \\(o_{i}\\) forms a region where every point has \\(o_{i}\\) as the most probable NN. Figure 12 shows the PVD and elliptical regions for objects \\(o_{1}\\), \\(o_{2}\\), and \\(o_{3}\\), and a moving query path from \\(q^{\\prime}\\) to \\(q^{\\prime\\prime\\prime}\\). In this figure, since \\(q^{\\prime}\\) is inside the intersection region of \\(PVC(o_{3})\\) and the elliptical region of \\(o_{3}\\), \\(o_{3}\\) is guaranteed to be the most probable NN for \\(q^{\\prime}\\) with respect to all objects in the database. Similarly, \\(o_{2}\\) is the most probable NN when the query point moves to \\(q^{\\prime\\prime\\prime}\\). If a query point is outside the intersection region of a PVC and the corresponding elliptical region, but falls inside the PVC, there is still a possibility that the object associated with this PVC is the most probable NN for the query point. For example, in Figure 11, when the query point is at \\(q_{2}\\), the condition in Equation 2 fails. For this case, our algorithm relies on the lower bound of the probability of the object \\(o_{2}\\) being the nearest neighbor from the query point \\(q_{2}\\). We define the second condition based on the lower bound probability of an object. We can compute the lower bound of the probability, \\(lp(o_{i},q)\\), for object \\(o_{i}\\) being the NN from the query point \\(q\\) by using a pessimistic assumption. For computing the lower bound probability, we assume that a non-retrieved _virtual_ point object is located at the minimum distance from the query point and just outside the boundary surface of the known region. For example, in Figure 11, when the client is at \\(q_{2}\\), we assume that a point object exists at \\(d^{\\prime}\\). Then, we estimate the probability of the object \\(o_{2}\\) being the NN to \\(q_{2}\\), which gives us the lower bound of the probability. By using the lower bound, the client can determine whether there is a possibility of other non-retrieved objects being the most probable NN from the current query location.
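The geometric half of the guaranteed region reduces to two distance computations. The sketch below evaluates the known-region radius and the test of Equation 2 for a query point; the lower-bound condition of Equation 3, formalized next, would be layered on top of this check when it fails. Function names and the numeric example are ours.

```python
import math

def known_region_radius(q_s, top_k_objects):
    """Radius r of the known region K(q_s, r): the largest maxdist(q_s, o)
    over the retrieved top-k objects, each given as (center, radius)."""
    return max(math.dist(q_s, c) + rad for (c, rad) in top_k_objects)

def guaranteed_by_known_region(q, q_s, r, c_i, r_i):
    """Equation 2: o_i is guaranteed to be the most probable NN at q with respect
    to the whole database if the disk around q that fully contains o_i still lies
    inside K(q_s, r); equivalently, q lies inside the ellipse with foci q_s and c_i
    whose distance sum is r - r_i."""
    return math.dist(q, c_i) + r_i <= r - math.dist(q, q_s)

q_s = (0.0, 0.0)
top_k = [((10.0, 5.0), 3.0), ((-20.0, 8.0), 6.0), ((5.0, -30.0), 4.0)]
r = known_region_radius(q_s, top_k)
c_i, r_i = top_k[0]
print(guaranteed_by_known_region((8.0, 4.0), q_s, r, c_i, r_i))     # True for this example
print(guaranteed_by_known_region((30.0, 30.0), q_s, r, c_i, r_i))   # False for this example
```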
If the probability of the virtual point object \\(o_{v}\\), \\(p(o_{v},q)\\), is less than the lower bound probability of the candidate object \\(o_{i}\\), \\(lp(o_{i},q)\\), then it is ensured that there is no other object in the database that has a higher probability of being the NN of \\(q\\) than \\(o_{i}\\); otherwise there may exist other objects in the database with higher probabilities of being the NN of \\(q\\) than \\(o_{i}\\). Thus, our second condition for the guaranteed region can be defined as follows: \\[lp(o_{i},q)\\geq p(o_{v},q). \\tag{3}\\] Based on the above observations, we define a _probabilistic safe region_ for an object \\(o_{i}\\) as a region where \\(o_{i}\\) is guaranteed to be the most probable NN for every point inside that region. Thus, Equation 2 and Equation 3 form the guaranteed region for an object \\(o_{i}\\). We use the above two conditions and the local PVD to incrementally evaluate a PMNN query. The algorithm first retrieves a set of surrounding objects for the given query point \\(q\\), and creates a PVD, named \\(IPVD\\), for those objects. Then, the algorithm finds the PVC and the corresponding object \\(o_{i}\\) as the most probable nearest neighbor of the query point \\(q\\) with respect to the objects within the known region. If \\(q\\) is inside a PVC cell, the object \\(o_{i}\\) is returned as the most probable nearest neighbor if \\(q\\) satisfies Equation 2 or Equation 3. If neither of the above conditions holds, the algorithm requests a new set of objects with respect to the current query point \\(q\\), and repeats the above process on the newly retrieved set of objects.
Figure 12: The incremental approach
_Discussion:_ Our pre-computation based approach computes the PVD for all objects in the database and then indexes the PVD using an \\(R\\)-tree to efficiently process PMNN queries. Since the pre-computation of the PVD for the entire data set is computationally expensive, the pre-computation based approach is justified when the PVD can be re-used for a large number of queries, as the cost is amortized among the queries (e.g., [11; 20]). Thus, the pre-computation based approach is appropriate for the following settings: the data set largely remains static, there are a large number of queries in the system, and the query spans the whole data space. On the other hand, in our incremental approach, we retrieve a set of surrounding objects for the current query location, and then incrementally process PMNN queries based only on this retrieved set of objects. Only data close to the given query are accessed for query evaluation. As the evaluation of this approach depends on the location of the query, this approach is also called the query dependent approach, as opposed to the data dependent approach (e.g., the pre-computation based approach), where the locations of queries are not taken into account. This incremental approach is preferred for the cases when there are updates in the database or the query is confined to a small region of the data space. A comparative discussion between the pre-computation approach and the incremental approach for point data sets can be found in [30; 20]. ## 6 Experimental Study We compare our PVD based approaches for the PMNN query (P-PVD-PMNN and I-PVD-PMNN) with a sampling based approach (Naive-PMNN), which processes a PMNN query as a sequence of static PNN queries at sampled locations.
Though in Naive-PMNN we use the most recent technique of static top-1-PNN queries [8], any existing technique for static PNN queries [4; 5] can be used. Note that, by using the existing method in [6], for each uncertain object \\(o_{i}\\), we could only define a region (or UV-cell) where \\(o_{i}\\) has a non-zero probability of being the NN for any point in this region. Thus, this method cannot be used to determine whether an object has the highest probability of being the NN to a query point. Therefore, we compare our approach with a sampling based approach. In our experiments, we measure the query processing time, the I/O costs, and the communication costs as the number of communications between a client and a server. Note that while the processing and I/O costs are the performance measurement metric for both centralized and client-server paradigms, the communication cost only applies to the client-server paradigm. In this paper, we run the experiments in the centralized paradigm, where the query issuer and the processor reside in the same machine. Thus, we measure the communication cost as the number of times the query issuer communicates with the query processor while executing a PMNN query. ### Experimental Setup We present experimental results for both 1D and 2D data sets. For 2D data, we have used both synthetic and real data sets. We normalize the data space into a span of \\(10,000\\times 10,000\\) square units. We generated synthetic data sets with uniform (U) and Zipfian (Z) distributions, representing a wide range of real scenarios. For both uniform and Zipfian, we vary the data set size from 5K to 25K. To introduce uncertainty in data objects, we randomly choose the uncertainty range of an object between \\(5\\times 5\\) and \\(30\\times 30\\) square units, and approximated the selected range using a circle. For real data distributions, we use the data sets from Los Angeles (L) with 12K geographical objects described by ranges of longitudes and latitudes [31]. Note that, in both uniform and Zipfian distributions, objects can overlap each other. More importantly, in Zipfian distribution, most of the objects are concentrated within a small region in the space, thereby objects largely overlap with each other. Also, our real datasets include objects with large and overlapping regions. Thus, we do not present any sperate experimental results for overlapping objects. For 1D data, we have only used syntactic data sets. In this case, we generated synthetic data sets with uniform (U) and Zipfian (Z) distributions in the data space of 10,000 units. The uncertainty range of an object is chosen as any random value between 5 and 30 units. We also vary the data set size from 100 to 500. These values are comparable to 2D data set sizes and scenarios. For query paths, we have generated two different types of query trajectories, random (R) and directional (D), representing the query movement paths covering a large number of real scenarios. The default length of a trajectory is a fixed length of 1000 steps, and consecutive points are connected with a straight line of a length of 5 units. For each type of query path, we run the experiments for 20 different trajectories starting at random positions in the data space, and determine the average results. We present the processing time, I/O cost, and the communication cost for executing a complete trajectory (i.e., a PMNN query). 
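For concreteness, the following sketch shows one way the synthetic 2D workloads described above could be generated: uniformly placed centers in the normalized 10,000 x 10,000 space, uncertainty ranges of 5 to 30 units per side approximated by circles, and 1,000-step query trajectories with 5-unit segments. The Zipfian placement, the exact circle approximation of a square range, and the precise random/directional trajectory models used in the experiments are not specified in the text, so those details below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
SPACE = 10_000.0

def uniform_objects(n):
    """n uncertain objects: uniformly placed centers, square uncertainty ranges of
    side 5-30 units approximated by circles (radius = side / 2 is our assumption)."""
    centers = rng.uniform(0.0, SPACE, size=(n, 2))
    radii = rng.uniform(5.0, 30.0, size=n) / 2.0
    return list(zip(map(tuple, centers), radii))

def trajectory(steps=1000, step_len=5.0, directional=False):
    """Query path: 'steps' points with consecutive points 'step_len' apart.
    Directional paths keep one heading; random paths re-draw it at each step."""
    pos = rng.uniform(0.0, SPACE, size=2)
    heading = rng.uniform(0.0, 2.0 * np.pi)
    path = [tuple(pos)]
    for _ in range(steps - 1):
        if not directional:
            heading = rng.uniform(0.0, 2.0 * np.pi)
        pos = np.clip(pos + step_len * np.array([np.cos(heading), np.sin(heading)]),
                      0.0, SPACE)
        path.append(tuple(pos))
    return path

objects = uniform_objects(10_000)
random_path = trajectory(directional=False)
directional_path = trajectory(directional=True)
```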
In our experiments, since the trajectory of a moving query path is unknown, we use the generated trajectories as input, but do not provide these to the server in advance. We run the experiments on a desktop computer with Intel(R) Core(TM) 2 CPU 6600 at 2.40 GHz and 2 GB RAM. ### Performance Evaluation In this section, we evaluate our proposed techniques: pre-computation approach (P-PVD-PMNN) and incremental approach (I-PVD-PMNN) in Sections 6.2.1 and 6.2.2, respectively. It is well known that pre-computation based approach is suitable for settings when the PVD can be re-used (e.g., static data sets) for large number of queries or the query span the whole data space, and on the other hand the incremental or local approach is suitable for settings when the query is confined to a small space and there are frequent updates in the database (e.g., [30; 11; 20]). Since two approaches aim at two different environmental settings and also the parameters of these two techniques differ from each other, we independently evaluate them and compare them with the sampling based approach. #### 6.2.1 Pre-computation Approach In the pre-computation approach, we first create the PVD for the entire data set and use an \\(R^{*}\\)-tree to index the MBRs of PVCs. On the other hand, for Naive-PMNN we use an \\(R^{*}\\)-tree to index uncertain objects. In both cases, we use the page size of 1KB and the node capacity of 50 entries for the \\(R^{*}\\)-tree. _Experiments with 2D Data Sets:_ We vary the following parameters in our experiments: the length of a query trajectory, the data set size, and the size of the buffer window that determines the number of PVCs retrieved each time with respect to a query point. _Effect of the Length of a Query Trajectory_: In this set of experiments, we vary the length of moving queries from 1000 to 5000 units of the data space. We run the experiments for data sets U(10K), Z(10K), and L(12K). Since the real data set size is 12K, the data set sizes for U and Z are both set to 10K. Figures 13 show the processing time, I/O costs, and the number of communications required for a PMNN query of different query trajectory length. Figures 13(a)-(c) present the results for U data sets, where we can see that, for both P-PVD-PMNN and Naive-PMNN, the processing time, I/O costs, and the number of communications increase with the increase of the length of the query trajectory, which is expected. Figures also show that our P-PVD-PMNN approach outperforms the Naive-PMNN by at least an order of magnitude in all metrics. This is because, P-PVD-PMNN only needs to identify the current PVC rather than computing top-1-PNN for every sampled location of the moving query. The results for both Z (see Figures 13(d)-(f)) and L (see Figures 13(g)-(i)) data sets show similar trends with U data set as described above. _Effect of Data Set Size_: In this set of experiments, we vary the data set size from 5K to 25K and compare the performance of our P-PVD-PMNN with Naive-PMNN for both U (see Figures 14(a)-(c)) and Z (see Figures 14(d)-(f)) distributions. In these experiments, we set the trajectory length to 5000 units. Figures 14(a)-(f) show that, in general for P-PVD-PMNN, the processing time and I/O costs, and the number of communications increase with the increase of the data set size. The reason is as follows. For a larger data set, since the density of objects is high, we have smaller PVCs. 
Thus, for a larger data set, as the query point moves, it crosses the boundaries of PVCs more frequently than that of a smaller data set. This operation incurs extra computational overhead for a larger data set. On the other hand, for Naive-PMNN, the processing time, I/O costs, and the communication costs remain almost constant with the increase of the data set size. This is because, unless the \\(R^{*}\\)-tree has a new level due to the increase of the data set size, the processing costs for Naive-PMNN do not vary with increase of the data set size, which is the case in Figures 14(a)-(f)). Figures also show that our P-PVD-PMNN outperforms Naive-PMNN by an order of magnitude in processing time, 2 orders of magnitude in I/Os and number of communications for all data sets. The results also show that P-PVD-PMNN performs similar for both directional (D) and random (R) query movement paths. _Effect of Buffer Window_: In this set of experiments, we study the impact of introducing a buffer for processing a PMNN query. We vary the value of buffer window from 0 to 400 units of the data space, and then run the experiments for data sets U(10K), Z(10K), and L(12K). We set the trajectory length to 5000 units. In these experiments, all PVCs whose MBRs intersect with a buffer window centered at \\(q\\) having the length and width of the buffer window are retrieved from the \\(R^{*}\\)-tree and sent to the client. The client stores these PVCs in its buffer. When buffer window is 0, the algorithm only retrieves those PVCs whose MBRs contain the given query point. On the other hand, when buffer window is 100, all PVCs whose MBRs intersect with the buffer window centered at \\(q\\) having the length and width of 100 units (i.e., the buffer window covers \\(100\\times 100\\) square units in the data space) are retrieved. In this setting, we expect that the I/O costs will be reduced for a larger value of buffer window, because theFigure 14: The effect of the data set size in U (a-c), Z (d-f) Figure 13: The effect of the query trajectory length in U (a-c), Z (d-f), and L (g-i) server does not need to access the \\(R^{*}\\)-tree as long as these buffered PVCs can serve the subsequent query points of a moving query. Figures 15(a)-(c) show the processing time, the I/O costs, and the number of communications, respectively, for varying the size of the buffer window from 0 to 400 units for U data set. Figure 15(a) shows that for P-PVD-PMNN, in general the processing time increase with increase of buffer window. The reason is that for a very large buffer window, a large number of PVCs are buffered and the processing time increases as the algorithm needs to check these PVCs for a moving query. On the other hand, Figure 15(b) shows that for P-PVD-PMNN, I/O costs decrease with the increase of the buffer window. This is because, for a larger value of buffer window the algorithm fetches more PVCs at a time from the server, and thereby needs to access the PVD using the \\(R^{*}\\)-tree reduced number of times. The figure also shows that P-PVD-PMNN outperforms Naive-PMNN by an order of magnitude in processing time and 2 orders of magnitude in I/O. Figure 15(c) shows that the number of communications for P-PVD-PMNN continuously decreases with the increase of buffer window as the client fetches more PVCs at a time from the server. However, for Naive-PMNN, the client communicates with the server for each sampled location of the query, and thus the number of communications remain constant. 
The results on Z (see Figures 15(d)-(f)) and L (see Figures 15(g)-(i)) data sets show similar trends with U data set described above. _Experiments with 1D Data Sets:_ For 1D data, we have run a similar set of experiments to 2D ones, where we vary the length of the query trajectory, the data set size, and the size of the buffer window. _Effect of the Length of a Query Trajectory:_ In this set of experiments, we vary the query trajectory length from 1000 to 5000 units while evaluating a PMNN query for 1D data sets. We run the experiments for both U (see Figures 16(a) Figure 15: The effect of buffer window in U (a-c), Z (d-f), and L (g-i) (c)) and Z (see Figures 16(d)-(f)) data sets. The data set size is set to 100. We can see that, for both P-PVD-PMNN and Naive-PMNN, the processing time, I/O costs, and number of communications increase with the increase of the query trajectory length for 1D sets, which is expected. Figures also show that our P-PVD-PMNN outperforms the Naive-PMNN by at least an order of magnitude in terms of processing time, I/Os, and communication costs. _Effect of Data Set Size_: We also run the experiments with varying data set size (see Figures 17(a)-(c) for U and Figures 17(d)-(f) for Z data sets). In these experiments, the trajectory length is set to 5000 units. Figures show that, for P-PVD-PMNN, the processing time, I/O costs and number of communications increase with the increase of data set size for 1D sets. This is because, for a larger data set, we have smaller PVCs and thereby a moving query needs to check higher number of PBRs than that of a smaller data set. Figures 17(a)-(f) also show that our P-PVD-PMNN outperforms Naive-PMNN by at least an order of magnitude in all evaluation metrics. The results also show that Figure 16: The effect of the query trajectory length in U (a-c), Z (d-f) for 1D data Figure 17: The effect of the data set size in U (a-c), Z (d-f) for 1D data P-PVD-PMNN performs similar for both directional (D) and random (R) query movement paths. _Effect of Buffer Window_: In this set of experiments, we study the impact of introducing a buffer for processing a PMNN query. We vary the value of buffer window from 0 to 400 units, and then run the experiments for U (see Figures 18(a)-(c)) and Z (see Figures 18(d)-(f)) data sets. In these experiments, we set the data set size to 100 and the trajectory length to 5000 units. The experimental results show that P-PVD-PMNN outperforms Naive-PMNN by 1-2 orders of magnitude for I/O and processing costs, and 2-3 orders of magnitude in terms of communication costs. #### 6.2.2 Incremental Approach In the incremental approach, we use an \\(R^{*}\\)-tree to index the MBRs of uncertain objects, for both I-PVD-PMNN and Naive-PMNN. In both cases, we use the page size of 1KB and the node capacity of 50 entries for the \\(R^{*}\\)-tree. _Experiments with 2D Data Sets:_ We vary the following parameters in our experiments: the value of \\(k\\) (i.e., the number of objects retrieved at each step), the data set size, and the length of the query trajectory, and compare the performance of I-PVD-PMNN with Naive-PMNN. _Effect of \\(k\\):_ In this set of experiments, we study the impact of \\(k\\) in the performance measure for processing a PMNN query. We vary the value of \\(k\\) from 10 to 50, and then run the experiments for all available data sets (U, Z, and L). In these experiments, for both U and Z, we have set the data set size to 10K. 
Figures 19(a)-(c) show the processing time, the I/O costs, and the number of communications, respectively, for varying \\(k\\) from 10 to 50 for U data set. Figure 19(a) shows that the processing time almost remains constant for varying \\(k\\). The processing time of I-PVD-PMNN is on average 6 times less for directional (D) query paths than that of Naive-PMNN, and on average 13 times less for random (R) query paths than that of Naive-PMNN. On the other hand, Figures 19(b)-(c) show that I/O costs and the number of communications decrease with the increase of \\(k\\). This is because, for a larger value of \\(k\\), the client fetches more data at a time from the server, and thereby needs to communicate less number of times with the server. Figures also show that our I-PVD-PMNN outperforms the Naive-PMNN by 2-3 orders of magnitude for both I/O and communication costs. Figures 19(d)-(f) and (g)-(i) show the performance behaviors of Z and L data sets, respectively, which are similar to U data set. _Effect of Data Set Size_: In this set of experiments, we vary the data set size from 5K to 25K and compare the performance of our approach I-PVD-PMNN with Naive-PMNN. We set the trajectory length to 5000 units. Also, in these experiments, we have set the value of \\(k\\) to 30. Figures 20 (a)-(c) and (d)-(f) show the processing time, I/O costs, Figure 18: The effect of buffer window in U (a-c), Z (d-f) for 1D data Figure 19: The effect of (\\(k\\)) in U (a-c), Z (d-f), and L (g-i) Figure 20: The effect of the data set size in U (a-c), Z (d-f) and the number of communications for U and Z data sets, respectively. Figures also show that our I-PVD-PMNN outperforms Naive-PMNN by 1-3 orders of magnitude for all data sets. _Effect of the Length of a Query Trajectory_: We vary the length of moving queries from 1000 to 5000 units of the data space. In these experiments, for both U and Z, we have set the data set size to 10K. Also, in these experiments, we have set the value of \\(k\\) to 30. Figures 21 show that the processing time, I/O costs, and the number of communications increase with the increase of the length of the query trajectory for both U and Z data sets, which is expected. The processing time of I-PVD-PMNN is on average 5 times less for directional (D) query path and is on average 10 times less for random (R) query paths compared to Naive-PMNN. Also I-PVD-PMNN outperforms Naive-PMNN by at least an order of magnitude for both I/O and communication costs. _Experiments with 1D Data Sets:_ We also evaluate our incremental approach with 1D data sets by varying the following parameters: the value of \\(k\\), the data set size, and the length of the query trajectory. _Effect of \\(k\\):_ In this set of experiments, we study the impact of \\(k\\) in the performance measure of I-PVD-PMNN for 1D data sets. Figures 22(a)-(e) show the results of U and Z data sets, for varying \\(k\\) from 10 to 50. In these experiments, we have set the data set size to 100. Figure 22(a) shows that the processing time almost remains constant for varying \\(k\\). Moreover, the processing time of I-PVD-PMNN is on average 6 times less for directional (D) query paths than that of Naive-PMNN, and on average 10 times less for random (R) query paths than that of Naive-PMNN. Figures 22(b)-(c) show that the I/O costs and the number of communications decrease with the increase of \\(k\\). Figures also show that our I-PVD-PMNN outperforms Naive-PMNN by 2-3 orders of magnitude in terms of both I/O costs and communication costs. 
Figures 22(d)-(f) show the results for Z data set, which is similar to U data set. _Effect of Data Set Size:_ In this set of experiments, we vary the data set size from 100 to 500 and compare the performance of our approach I-PVD-PMNN with Naive-PMNN. In these experiments, we have set the value of \\(k\\) to 30 and the trajectory length to 5000 units. Figures 20 (a)-(c) and (d)-(f) show the processing time, I/O costs, and the number of communications for U and Z data sets, respectively. The results reveal that the processing time, I/O costs, and the communications costs increase with the increase of the data set size. Figures also show that our I-PVD-PMNN outperforms Naive-PMNN by at least an order of magnitude for all data sets. _Effect of the Length of a Query Trajectory:_ We also vary the length of the query trajectory for 1D data sets and the results (Figures 24) for 1D data sets exhibit similar behavior to 2D data sets. In these experiments, we vary the trajectory length from 1000 to 5000 units of the data space. Also, we have set the data set size to 100, and the value of \\(k\\) to 30. Figures 24 show that for both U and Z data sets, the processing time, I/O costs, and the communication costs increase with the increase of the trajectory length. Figures also show that our I-PVD-PMNN outperforms Naive-PMNN in all evaluation metrics. Our work on PVD opens new avenues for future work. Currently our approach finds the most probable NN for a moving query point; in the future we aim to extend it for top-_k_ most probable NNs. PVDs for other types of probability density functions such as normal distribution are to be investigated. We also plan to have a detailed investigation on PVDs of higher dimensional spaces. ## References * (1) G. Trajcevski, O. Wolfson, K. Hinrichs, S. Chamberlain, Managing uncertainty in moving objects databases, ACM TODS 29 (3) (2004) 463-507. * (2) S. Madden, M. J. Franklin, J. M. H. W. Hong, The design of an acquisitional query processor for sensor networks, in: SIGMOD, 2003, pp. 491-502. * (3) Q. Liu, W. Yan, H. Lu, S. Ma, Occlusion robust face recognition with dynamic similarity features, in: ICPR, 2006, pp. 544-547. * (4) R. Cheng, D. V. Kalashnikov, S. Prabhakar, Evaluating probabilistic queries over imprecise data, in: SIGMOD, 2003, pp. 551-562. * (5) R. Cheng, S. Prabhakar, D. V. Kalashnikov, Querying imprecise data in moving object environments, IEEE TKDE 16 (9) (2004) 1112-1127. * (6) R. Cheng, X. Xie, M. L. Yiu, J. Chen, L. Sun, UV-Diagram: A Voronoi diagram for uncertain data, in: ICDE, 2010. * (7) H.-P. Kriegel, P. Kunath, M. Renz, Probabilistic nearest-neighbor query on uncertain objects, in: DASFAA, 2007, pp. 337-348. * (8) G. Beskales, M. A. Soliman, I. F. Ilyas, Efficient search for the top-k probable nearest neighbors in uncertain databases, Proc. VLDB Endow. 1 (1) (2008) 326-339. * (9) C. Re, N. Dalvi, D. Suciu, Efficient top-k query evaluation on probabilistic data, in: ICDE, 2007, pp. 886-895. * (10) M. A. Soliman, I. F. Ilyas, Top-k query processing in uncertain databases, in: ICDE, 2007, pp. 896-905. * (11) A. Okabe, B. Boots, K. Sugihara, S. N. Chiu, Spatial Tessellations: Concepts and Applications of Voronoi Diagrams, John Wiley & Sons, Inc., 2000. * (12) W. Evans, J. Sember, Guaranteed voronoi diagrams of uncertain sites, in: CCCG, 2008. * (13) R. Cheng, J. Chen, M. F. Mokbel, C.-Y. Chow, Probabilistic verifiers: Evaluating constrained nearest-neighbor queries over uncertain data, in: ICDE, 2008, pp. 973-982. * (14) G. Trajcevski, R. 
Tamassia, H. Ding, P. Scheuermann, I. F. Cruz, Continuous probabilistic nearest-neighbor queries for uncertain trajectories, in: EDBT, 2009, pp. 874-885. * (15) X. Lian, L. Chen, Probabilistic group nearest neighbor queries in uncertain databases, IEEE TKDE 20 (6) (2008) 809-824. * (16) X. Dai, M. L. Yiu, N. Mamoulis, Y. Tao, M. Vaitis, Probabilistic spatial queries on existentially uncertain data, in: SSTD, 2005, pp. 400-417. * (17) M. L. Yiu, N. Mamoulis, X. Dai, Y. Tao, M. Vaitis, Efficient evaluation of probabilistic advanced spatial queries on existentially uncertain data, IEEE TKDE 21 (1) (2009) 108-122. * (18) T. Mitchell, Machine Learning, Mcgraw-Hill, 1997. * (19) Y. Tao, D. Papadias, Time-parameterized queries in spatio-temporal databases, in: SIGMOD, 2002, pp. 334-345. * (20) J. Zhang, M. Zhu, D. Papadias, Y. Tao, D. L. Lee, Location-based spatial queries, in: SIGMOD, 2003, pp. 443-454. * (21) M. I. Karavelas, Voronoi diagrams for moving disks and applications, in: WADS, 2001, pp. 62-74. * (22) S. Fortune, A sweepline algorithm for voronoi diagrams, Algorithmica 2 (1987) 153-174. * (23) M. de Berg, K. Dobrindt, O. Schwarzkopf, On lazy randomized incremental construction, in: STOC, 1994, pp. 105-114. Figure 24: The effect of the query length in U (a-c), Z (d-f) for 1D data * (24) M. A. Mostafavi, C. Gold, M. Dakovicz, Delete and insert operations in voronoidelaunay methods and applications, Computers & Geosciences 29 (4) (2003) 523-530. * (25) S. Berchtold, B. Ertl, D. A. Keim, H.-P. Kriegel, T. Seidl, Fast nearest neighbor search in high-dimensional space, in: ICDE, 1998, pp. 209-218. * (26) S. Berchtold, D. A. Keim, H.-P. Kriegel, T. Seidl, Indexing the solution space: A new technique for nearest neighbor search in high-dimensional space, IEEE TKDE 12 (1) (2000) 45-57. * (27) H. Samet, Foundations of Multidimensional and Metric Data Structures, Morgan Kaufmann, CA, 2006. * (28) N. Beckmann, H. Kriegel, R. Schneider, B. Seeger, The R*-Tree: an efficient and robust access method for points and rectangles, in: SIGMOD, 1990, pp. 322-331. * (29) A. Guttmann, R-trees: A dynamic index structure for spatial searching, in: SIGMOD, 1984, pp. 47-57. * (30) S. Nutanong, R. Zhang, E. Tanin, L. Kulik, The V*-diagram: a query-dependent approach to moving knn queries, VLDB 1 (1) (2008) 1095-1106. * (31) TIGER, [http://www.census.gov/geo/www/tiger/](http://www.census.gov/geo/www/tiger/).
A large spectrum of applications such as location based services and environmental monitoring demands efficient query processing on uncertain databases. In this paper, we propose the probabilistic Voronoi diagram (PVD) for processing moving nearest neighbor queries on uncertain data, namely probabilistic moving nearest neighbor (PMNN) queries. A PMNN query continuously finds the most probable nearest neighbor of a _moving query point_. To process PMNN queries efficiently, we provide two techniques: a pre-computation approach and an incremental approach. In the pre-computation approach, we develop an algorithm to efficiently evaluate PMNN queries based on the pre-computed PVD for the entire data set. In the incremental approach, we propose an incremental probabilistic safe region based technique that does not require pre-computing the whole PVD to answer the PMNN query. In this incremental approach, we exploit the knowledge of a known region to compute the lower bound of the probability of an object being the nearest neighbor. Experimental results show that our approaches significantly outperform a sampling based approach by orders of magnitude in terms of I/O, query processing time, and communication overheads.
Keywords: Voronoi diagrams, continuous queries, moving objects, uncertain data
# Free-space remote sensing of rotation at photon-counting level Wuhong Zhang Department of Physics, Jiujiang Research Institute and Collaborative Innovation Center for Optoelectronic Semiconductors and Efficient Devices, Xiamen University, Xiamen 361005, China Jingsong Gao Department of Physics, Jiujiang Research Institute and Collaborative Innovation Center for Optoelectronic Semiconductors and Efficient Devices, Xiamen University, Xiamen 361005, China Dongkai Zhang Department of Physics, Jiujiang Research Institute and Collaborative Innovation Center for Optoelectronic Semiconductors and Efficient Devices, Xiamen University, Xiamen 361005, China Yilin He Department of Physics, Jiujiang Research Institute and Collaborative Innovation Center for Optoelectronic Semiconductors and Efficient Devices, Xiamen University, Xiamen 361005, China Tianzhe Xu Department of Physics, Jiujiang Research Institute and Collaborative Innovation Center for Optoelectronic Semiconductors and Efficient Devices, Xiamen University, Xiamen 361005, China Robert Fickler [email protected] Department of Physics, Jiujiang Research Institute and Collaborative Innovation Center for Optoelectronic Semiconductors and Efficient Devices, Xiamen University, Xiamen 361005, China Department of Physics, University of Ottawa, 25 Templeton St., Ottawa, Ontario K1N 6N5, Canada Lixiang Chen [email protected] Department of Physics, Jiujiang Research Institute and Collaborative Innovation Center for Optoelectronic Semiconductors and Efficient Devices, Xiamen University, Xiamen 361005, China Department of Physics, Jiujiang Research Institute and Collaborative Innovation Center for Optoelectronic Semiconductors and Efficient Devices, Xiamen University, Xiamen 361005, China November 6, 2021 ## I I. Introduction The Doppler effect is a well-known phenomenon describing the frequency shift of a wave, such as sound waves and light waves. The frequency emitted by a moving source becomes higher as the source is approaching an observer, while it becomes lower as it is receding. This linear Doppler effect is widely used for sonar and radar systems to deduce the speed of a moving object [1]. In contrast to linear motion, a rarely encountered example is the angular version of the Doppler shift arising from rotation [2]. The rotational Doppler effect was first observed by Garetz and Arnold, who used rotating half-wave plates of angular velocity \\(\\Omega\\) to imprint a frequency shift \\(2\\Omega\\) to circularly polarized light [3]. Such a frequency shift is in essence associated with spin angular momentum of photons, and can be understood from the dynamically evolving geometric phase in light of the Poincare sphere [4]. More recently, the rotational Doppler effect was also verified in the second-harmonic generation with a spinning nonlinear crystal of three-fold rotational symmetry [5]. In addition to spin, a light beam with a twisted phase front of \\(\\exp(i\\ell\\phi)\\) carries \\(\\ell\\hbar\\) orbital angular momentum (OAM) per photon, where \\(\\phi\\) is the azimuthal angle and \\(\\ell\\) is an integer [6].It was first demonstrated by Courtial et al. that a rotating Dove prism could impart the OAM beams with a frequency shift of \\(\\ell\\Omega\\), where \\(\\Omega\\) denotes the angular velocity [7, 8]. Via the coherent interaction of OAM beams with atom samples, Barreiro et al. reported on the first spectroscopic observation of rotational Doppler shift [9]. Also, Korech et al. 
developed an intuitive method to observe molecular spinning based on the rotational Doppler effect [10]. Recently, considerable attention has been paid to exploiting the twisted light's rotational Doppler effect to detect rotating bodies. Lavery et al. demonstrated that the angular speed can be deduced by detecting the frequency shift of on-axis OAM components that are scattered from a spinning object with an optically rough surface [11]. They further employed an OAM-carrying white-light beam and only observed a single frequency shift within the same detection mode [12]. Rosales-Guzmán et al. showed that under different OAM-mode illumination the full three-dimensional movement of particles could be characterized by using both the translational and rotational Doppler effects [13]. Fang et al. experimentally demonstrated that both the rotational and linear Doppler effects actually share a common origin, such that one can use one effect to drive the other [14]. Zhao et al. extended the OAM illumination beam into the radio frequency domain and detected the speed of a rotor in a proof-of-concept experiment [15]. Based on the rotational Doppler effect, Zhou et al. devised an OAM-spectrum analyzer that enables simultaneous measurements of the power and phase distributions of OAM modes [16]. Although the real potential of the rotational Doppler effect lies in its practical application to non-contact remote sensing [17, 18], we note that thus far no experimental verification of the rotational Doppler effect has been conducted outside of the laboratory. The challenges for a long-distance implementation originate from the OAM mode spreading induced by atmospheric turbulence and the low photon collection efficiency due to beam divergence and misalignment [19]. Here we report the observation of the rotational Doppler effect over a 120-meter free-space link between the rooftops of two buildings across the Haiyun Campus of Xiamen University in a city environment. Our scheme works with extremely weak light illumination by employing a single-photon counting module. This work moves a step towards remote sensing applications in realistic environments with twisted light's rotational Doppler effect.

## II. Theoretical analysis and simulations

We assume that the rotating object is mathematically described by a complex function \\(\\psi(r,\\varphi)\\) in the cylindrical coordinates and is illuminated with a fundamental Gaussian mode. As the Laguerre-Gaussian (LG) modes form a complete and orthogonal basis, we can describe the light field reflected from the object as \\(\\psi(r,\\varphi)=\\sum_{\\ell,p}A_{\\ell,p}\\mathrm{LG}_{p}^{\\ell}(r,\\varphi)\\), where \\(\\mathrm{LG}_{p}^{\\ell}(r,\\varphi)\\) denotes the LG mode with azimuthal and radial indices \\(\\ell\\) and \\(p\\), respectively, and \\(A_{\\ell,p}=\\int\\int\\left[\\mathrm{LG}_{p}^{\\ell}(r,\\varphi)\\right]^{*}\\psi(r, \\varphi)rdrd\\varphi\\) is the overlap amplitude. The method of using a coherent superposition of LG modes to represent an object was called digital spiral imaging by Torner et al. [20], and has been found to be an effective technique to encode optical information and retrieve topographic information of an object, particularly useful for objects of high spatial symmetry [21; 22; 23]. We plot the LG mode spectra, \\(|A_{\\ell,p}|\\), for two typical objects, i.e., a three-leaf Clover and a five-petal Pentas, in Fig. 1. One can see from the top panel of Fig.
1 that the dominant LG modes are those with the azimuthal indices of \\(\\ell=0,\\pm 3,\\pm 6,\\cdots\\) (Fig. 1a) and \\(\\ell=0,\\pm 5,\\pm 10,\\cdots\\) (Fig. 1b), being associated with the three-fold and five-fold rotational symmetry of the Clover and Pentas, respectively. This symmetry can be illustrated more evidently by the pure OAM spectra characterized by \\(P_{\\ell}=\\sum_{p}|A_{\\ell,p}|^{2}\\), i.e., a sum of the mode weights over the radial index \\(p\\). In the lower panel of Fig. 1, we plot the pure OAM spectra for the three-leaf Clover (Fig. 1c) and five-petal Pentas (Fig. 1d). This follows the experimental situation more closely, because our detection technique uses a pure phase grating that only distinguishes between different \\(\\ell\\) modes irrespective of their radial structure. More details of the mode expansion can be found in our recent paper [24]. The framework of digital spiral imaging provides an intuitive understanding of the mechanism of the rotational Doppler effect. If the object is rotated at a constant angular speed \\(\\Omega\\) around its own axis, a time-varying phase shift of \\(\\ell\\Omega t\\) will be imparted to each OAM eigenmode. As a consequence, we rewrite the transmitted or reflected light fields as \\[\\psi(r,\\varphi,t)=\\sum_{\\ell,p}A_{\\ell,p}\\mathrm{LG}_{p}^{\\ell}(r,\\varphi) \\exp(i\\ell\\Omega t), \\tag{1}\\] which manifests the OAM-dependent frequency shift \\(\\ell\\Omega\\). By using specific OAM superposition modes \\(\\Phi(r,\\varphi)=[\\exp(i\\ell\\varphi)+\\exp(-i\\ell\\varphi)]/\\sqrt{2}\\) to detect the reflected light fields, we can obtain the signal as \\[I(t)= \\left|\\int\\int\\Phi^{*}(r,\\varphi)\\psi(r,\\varphi,t)rdrd\\varphi\\right|^{2}\\] \\[\\propto 2P_{\\ell}[\\cos(2\\ell\\Omega t)+1] \\tag{2}\\] assuming \\(P_{\\ell}=P_{-\\ell}\\). Hence, we find an intensity modulation of frequency \\(f_{mod}=\\frac{|2\\ell\\Omega|}{2\\pi}\\) for the detected superposition modes \\(\\Phi(r,\\varphi)\\). The maximum detected intensity modulation is only determined by the mode spectrum \\(P_{\\ell}\\) of the reflected light fields. It is obvious that one cannot detect an intensity modulation signal for OAM components with \\(|P_{\\ell}|=0\\). By scanning the intensity modulation signal over the entire OAM spectrum, the rotational symmetry of the object may also be deduced. Generally, as all of the constituent LG modes in Eq. (1) propagate individually in free space without coupling with each other, the mode weights \\(A_{\\ell,p}\\), i.e. the OAM spectra, will remain unchanged without any influence from atmospheric disturbance [25]. This indicates that the measurement of the rotational Doppler effect \\(f_{mod}\\) at the distant receiver will be equivalent to that at the transmitter.

## III. Experimental setup

We implement the 120-m free-space link between the rooftops of two buildings. Fig. 2 shows the transmitter and receiver locations. At the transmitter (top-left inset in Fig. 2), the fundamental Gaussian mode from a 633 nm Helium-Neon (HeNe) laser beam (Thorlabs HNL210L), after being collimated by the first telescope, illuminates a computer-controlled spatial light modulator (SLM). We prepare the desired holographic gratings with the SLM to mimic the rotating objects, e.g., a Clover and a Pentas. The first diffraction order, which is selected by a 4-f filtering system (\\(f_{1}\\)=500 mm, \\(f_{2}\\)=150 mm), then acquires the profile of the rotating objects.
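The spiral-spectrum picture underlying Eqs. (1) and (2) is straightforward to reproduce numerically. The sketch below (our own illustration, not the authors' simulation) builds a hypothetical three-fold binary "Clover-like" mask on a polar grid and computes its OAM power spectrum \\(P_{\\ell}=\\sum_{p}|A_{\\ell,p}|^{2}\\) through an azimuthal Fourier decomposition; by Parseval's relation the sum over the radial index reduces to a radial integral of the azimuthal coefficients, so the LG modes never need to be constructed explicitly. The petal shape and grid sizes are arbitrary choices.

```python
import numpy as np

# Hypothetical three-fold "Clover-like" binary mask on a polar grid
# (illustrative stand-in for the object used in the text).
nr, nphi = 400, 720
r = np.linspace(0.0, 1.0, nr)
phi = np.linspace(0.0, 2 * np.pi, nphi, endpoint=False)
R, PHI = np.meshgrid(r, phi, indexing="ij")
psi = ((np.cos(3 * PHI) > 0) & (R < 0.9)).astype(float)

# Azimuthal Fourier coefficients c_ell(r); P_ell = sum_p |A_{ell,p}|^2 equals
# the radial integral of |c_ell(r)|^2 (Parseval over the radial basis).
c = np.fft.fft(psi, axis=1) / nphi
ells = np.fft.fftfreq(nphi, d=1.0 / nphi).astype(int)
dr = r[1] - r[0]
P = (np.abs(c) ** 2 * r[:, None]).sum(axis=0) * dr
P /= P.sum()

order = np.argsort(P)[::-1][:7]
for k in order:
    print(f"ell = {ells[k]:+d}   relative weight = {P[k]:.3f}")
```

Because the mask is exactly three-fold symmetric, only \\(\\ell=0,\\pm 3,\\pm 6,\\cdots\\) carry weight, mirroring the dominant components in Fig. 1(c).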
The beam diameter on the image plane is about 2 mm, which is the diffraction-limited input beam diameter of the second telescope (Thorlabs GBE20-A). The telescope is used to further expand the beam to a diameter of about 40 mm, and is followed by the transmission of the rotating pattern over the 120-m free-space link to the receiver.

Figure 1: Spiral spectrum of the image gives an intuitive indication of the object symmetry. The top panel shows the peak-normalized LG mode spectra while the bottom shows the pure OAM spectra. (a), (c): a three-leaf Clover. (b), (d): a five-petal Pentas.

Due to diffraction of the image, we get a beam diameter of about 50 mm at the receiver. There, another 4-f filtering system with a collection lens of 100 mm in diameter (\\(f_{3}\\)=500 mm) and a re-imaging lens (\\(f_{4}\\)=75 mm) is used to demagnify the collected beam to a diameter of \\(\\sim\\) 7.5 mm. Considering the limited aperture of the lens, we estimate that approximately 95 % of the incoming light is collected to illuminate the second SLM. A holographic grating with the desired superposition of \\(\\pm\\ell^{\\prime}\\) OAM modes is displayed on another SLM and, together with a third 4-f filtering system, is used to select the first diffraction order and image the SLM plane onto the single-mode fiber. The grating flattens the phase of one specific superposition mode. Because only modes with a plane phase front are coupled into a single-mode fiber, this arrangement acts as a mode filter and we experimentally measure the overlap probability as described by Eq. (2). In the experiment, we start by using the full power of the HeNe laser (21 mW) to align the whole optical path. Then we use a series of neutral-density filters to attenuate the laser beam to a very faint level and demonstrate the ability of our scheme to work at the photon-counting level. We employ a single-photon counting module (SPCM, Excelitas) to detect single photons that are phase-flattened and coupled into the single-mode fiber. The single-photon events are monitored by a Digital Phosphor Oscilloscope (DPO3012, Tektronix) over a measurement time of a few minutes. An exemplary measured data set of single-photon detections can be seen in the right-bottom inset of Fig. 2. Each blue line in the graph represents a detected photon, and the degree of sparsity represents the number of detected photons varying with time. By subsequently applying a fast Fourier transform (FFT) to the time-varying photon-count sequence, we extract the frequency shifts directly. It is noted that the optical alignment between the receiver and transmitter is very important, as both lateral displacement and angular deflection could cause severe modal crosstalk [26; 27]. To minimize possible misalignments, we couple a second HeNe laser at the receiver into the fiber before each measurement and send the light back to the transmitter to make sure the forward and backward propagating beams are well overlapped at both stations. Moreover, to minimize vibrations induced by people walking around in the building of the sender, we performed all experiments after midnight.

## IV. Experimental Results

We first investigate the propagation features of the Clover and Pentas images transmitted through the 120-m free-space link. We use a low-noise electron-multiplying CCD camera (EMCCD, E2V, \\(768\\times 576\\) pixels) to record the light fields at both the transmitter and receiver. Figs. 3(a) and 3(b) display the Clover and Pentas images at the transmitter, respectively.
After propagating through the 120-m free-space link we see in Figs. 3(c) and 3(d) that both the Clover and Pentas at the receiver aperture become barely recognizable, mainly as a consequence of free-space diffraction. We estimate that the photon flux recorded in each of these images is \\(\\sim 10^{5}\\) s\\({}^{-1}\\) based on the sender power of the light field. At the receiver, we use holographic gratings for measuring specific OAM superpositions, with a diffraction efficiency of \\(\\sim\\) 20 %. The fiber-optic coupling efficiency is \\(\\sim\\) 80 % and the SPCM detection efficiency is \\(\\sim\\) 60 % at 633 nm. Thus we record approximately \\(10^{3}\\) single-photon events per second. Without loss of generality, we restrict our first set of measurements to the Clover and Pentas rotating at an angular velocity \\(\\Omega_{1}=90^{\\circ}\\) s\\({}^{-1}\\), corresponding to a rotational frequency of 0.25 Hz. According to Eq. (2), we measure OAM superpositions ranging from \\(\\ell^{\\prime}=\\pm 1\\) to \\(\\ell^{\\prime}=\\pm 7\\) for the Clover and up to \\(\\ell^{\\prime}=\\pm 11\\) for the Pentas, respectively.

Figure 2: 120-m free-space optical link implemented from building to building in the Haiyun Campus of Xiamen University. Left-top inset: optical setup of the transmitting terminal; right-bottom inset: optical setup of the receiving terminal. SLM: spatial light modulator; OSC: oscilloscope; SPCM: single-photon counting module.

Figure 3: Weak-light images of the rotating objects. (a) and (b) show the gray-scale images of the Clover and Pentas recorded by the EMCCD camera at the transmitter, respectively. (c) and (d) show the false-color images of the Clover and Pentas at the receiver, respectively.

After applying a fast Fourier transform (FFT) directly to the time-varying single-photon events, we obtain the experimental results in Fig. 4. The peak power frequencies are observed at \\(\\ell^{\\prime}=\\pm 3\\) with \\(f=\\)1.526 Hz and \\(\\ell^{\\prime}=\\pm 6\\) with \\(f=\\)3.052 Hz for the Clover in Fig. 4(a), and at \\(\\ell^{\\prime}=\\pm 5\\) with \\(f=\\)2.527 Hz and \\(\\ell^{\\prime}=\\pm 10\\) with \\(f=\\)5.102 Hz for the Pentas in Fig. 4(b), which again manifests the three-fold and five-fold rotational symmetry of the Clover and the Pentas. It is noted that the full width at half maximum (FWHM) of the peak frequency is very narrow, which shows the accuracy of our measurements. We use a Gaussian fitting curve to calculate the Gaussian RMS width \\(\\sigma\\) of each peak frequency; one example, \\(\\sigma\\approx 0.0235\\) for \\(\\ell^{\\prime}=\\pm 3\\), is shown in the inset of Fig. 4(a). According to \\(f_{\\,\\rm mod}=|2\\ell\\Omega|/2\\pi\\), we deduce the rotating velocities \\(\\Omega_{Clover}=(91.44^{\\circ}\\pm 1.4^{\\circ})\\) s\\({}^{-1}\\) and \\(\\Omega_{Pentas}=(90.97^{\\circ}\\pm 0.97^{\\circ})\\) s\\({}^{-1}\\). We further set both the Clover and Pentas to rotate at \\(\\Omega_{2}=135^{\\circ}\\) s\\({}^{-1}\\), corresponding to a rotational frequency of 0.375 Hz, to test our setup. From the measured power spectra in Fig. 5(a), we find that peak frequencies of \\(f=\\)2.289 Hz are detected in modes \\(\\ell^{\\prime}=\\pm 3\\), and \\(f=\\)4.578 Hz in modes \\(\\ell^{\\prime}=\\pm 6\\), for the rotating Clover. For the Pentas object we find peak frequencies \\(f=\\)3.815 Hz in modes \\(\\ell^{\\prime}=\\pm 5\\), and \\(f=\\)7.629 Hz in modes \\(\\ell^{\\prime}=\\pm 10\\) (see Fig. 5(b)).
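The conversion from a measured modulation peak to an angular speed follows directly from \\(f_{mod}=|2\\ell\\Omega|/2\\pi\\). The minimal sketch below (our own illustration, not the authors' processing chain; the sampling rate, record length and photon flux are assumed, illustrative values) simulates the photon-count modulation of Eq. (2) for a three-fold object, locates the FFT peak, and inverts it back to an angular speed.

```python
import numpy as np

# Illustrative simulation of Eq. (2): modulated photon counts -> FFT peak -> rotation speed.
ell = 3                            # dominant OAM order of a three-fold object
omega = np.deg2rad(90.0)           # "true" rotation speed: 90 deg/s in rad/s

fs, T = 50.0, 120.0                # assumed sampling rate (Hz) and record length (s)
t = np.arange(0.0, T, 1.0 / fs)
rate = 1.0 + np.cos(2 * ell * omega * t)          # Eq. (2) up to a constant factor
counts = np.random.poisson(200 * rate)            # hypothetical mean counts per bin + shot noise

spec = np.abs(np.fft.rfft(counts - counts.mean())) ** 2
freqs = np.fft.rfftfreq(counts.size, d=1.0 / fs)
f_peak = freqs[np.argmax(spec[1:]) + 1]           # expected near |2*ell*omega|/(2*pi) = 1.5 Hz

omega_est = np.rad2deg(np.pi * f_peak / ell)      # invert f = 2*ell*omega/(2*pi)
print(f"FFT peak at {f_peak:.3f} Hz  ->  Omega ~ {omega_est:.2f} deg/s")
```

Applying the same inversion to the reported Clover peak of 1.526 Hz at \\(\\ell^{\\prime}=\\pm 3\\) gives roughly \\(91.5^{\\circ}\\) s\\({}^{-1}\\), consistent with the fitted values quoted next.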
The analogous data processing gives the deduced rotating velocities \\(\\Omega_{Clover}=(137.34^{\\circ}\\pm 1.32^{\\circ})\\) s\\({}^{-1}\\) and \\(\\Omega_{Pentas}=(137.34^{\\circ}\\pm 1.18^{\\circ})\\) s\\({}^{-1}\\). Hence, from the obtained results the rotation speed can be deduced with very high accuracy, which demonstrates the validity of our 120-m free-space sensing of rotation at the photon-counting level. Moreover, in the absence of any information about the detected rotating object, the peak power of the scanned OAM frequency spectrum also provides a way to analyze the object's rotational symmetry. In addition to the expected signal from the rotating pattern, we also detected the frequencies \\(f=\\)3.052 Hz, 4.578 Hz in the main mode \\(\\ell^{\\prime}=\\pm 3\\) for the Clover and \\(f=\\)5.102 Hz, 7.629 Hz in \\(\\ell^{\\prime}=\\pm 5\\) for the Pentas under a rotation speed \\(\\Omega_{1}=90^{\\circ}\\) s\\({}^{-1}\\), as denoted by the blue text in Fig. 4(a) and Fig. 4(b). Similarly, under a different rotation speed \\(\\Omega_{2}=135^{\\circ}\\) s\\({}^{-1}\\), the same effect can be observed with \\(f=\\)4.578 Hz, 6.867 Hz in the main mode \\(\\ell^{\\prime}=\\pm 3\\) for the Clover and \\(f=\\)7.629 Hz in \\(\\ell^{\\prime}=\\pm 5\\) for the Pentas, as labeled by the blue text in Fig. 5(a) and Fig. 5(b), respectively.

Figure 4: Measured power spectra of rotational Doppler shifts. a, Clover. b, Pentas. Both are set to rotate at the same angular rate \\(\\Omega_{1}=90^{\\circ}\\) s\\({}^{-1}\\). The obtained frequencies for each OAM mode are represented with different colors. The frequency value labeled in red text denotes the main mode's frequency, which is used to deduce the rotation speed. The inset of (a) shows the data processing for one of the main modes \\(\\ell^{\\prime}=\\pm 3\\). The black text denotes the undesired frequencies, which are mainly caused by slight misalignment. The blue text beside the main modes denotes the additional frequencies that might be caused by the slight deviation of the photon counts from a standard sinusoid.

Figure 5: Measured power spectra of rotational Doppler shifts. a, the Clover. b, the Pentas. Both are set to rotate at the same angular rate \\(\\Omega_{2}=135^{\\circ}\\) s\\({}^{-1}\\). The different colors are used for the same purpose as described in the caption of Fig. 4.

According to Eq. (2), the photon counts over time should appear as a sinusoidal oscillation for \\(\\ell^{\\prime}=\\pm 3\\) and \\(\\ell^{\\prime}=\\pm 5\\), and the corresponding Fourier transform should only give one peak for each. However, in our practical detection the signal was obtained over a few minutes. Over that time frame, any slight misalignment caused by tiny vibrations of the transmitter leads to a deviation from the expected sinusoidal recording. Such slight deviations of the photon counts from the standard sinusoid will cause higher harmonic peaks in the frequency spectrum. As mentioned before, we should observe no modulation of the counts, i.e., \\(f=\\)0, for those OAM modes with \\(|P_{\\ell}|=0\\), as in the simulation results in Figs. 1(c) and 1(d). However, in contrast to this theoretical expectation we also find the frequencies \\(f=\\)1.526 Hz, \\(f=\\)2.289 Hz in the adjacent modes with \\(\\ell^{\\prime}=\\pm 2,\\pm 4\\), and \\(f=\\)3.052 Hz, \\(f=\\)4.578 Hz in \\(\\ell^{\\prime}=\\pm 5,\\pm 7\\), for the different rotation frequencies of the Clover object, as labeled by the black text in Figs. 4(a) and 5(a).
This effect is more dominant for the Pentas at the different rotation speeds: \\(f=\\)2.527 Hz, \\(f=\\)3.815 Hz in modes \\(\\ell^{\\prime}=\\pm 4,\\pm 6\\), and \\(f=\\)5.102 Hz, \\(f=\\)7.629 Hz in modes \\(\\ell^{\\prime}=\\pm 9,\\pm 11\\). We attribute this effect to energy spreading accompanied by the transfer of frequency shifts from the dominant modes to the adjacent ones. If we have prior knowledge of an object's symmetry, these could be identified as erroneously measured frequency shifts, such that the rotation speed can still simply be deduced from the dominant modes. However, if there is no information about the object, one needs to measure different OAM components \\(\\ell^{\\prime}\\) and deduce the symmetry from the measured spectrum to determine the dominant mode. In this case, if we measure \\(\\ell^{\\prime}\\), it is possible to obtain a mixing of frequency shifts from different OAM modes \\(\\Delta\\omega=\\{2\\ell^{\\prime}_{1}\\omega,2\\ell^{\\prime}_{2}\\omega,2\\ell^{ \\prime}_{3}\\omega,\\cdots\\}\\). One reasonable assumption might be that with an energy spread to adjacent modes of more than 50 % a correct discrimination of the main mode is not easily possible anymore, such that the real frequency shifts and the symmetry of the object cannot be accurately deduced. Hence, the main contributions to the mode spreading phenomenon will be carefully analyzed in the following paragraphs. Firstly, it might be caused by slight misalignments between the grating and detection pattern in the experiment. Lavery et al. have demonstrated that the desired LG mode would expand into its adjacent modes with about 25 %–35 % efficiency when there is a lateral displacement of \\(\\Delta x_{0}=0.5w_{0}\\), or a tilt angle of \\(\\Delta\\alpha=0.5\\lambda/w_{0}\\), between the grating and the detection pattern on the laboratory scale [28]. In our experimental setting a small tilt-angle vibration \\(\\Delta\\theta\\) will cause a magnified displacement \\(\\Delta d=\\tan(\\Delta\\theta)\\cdot L\\approx\\Delta\\theta\\cdot L\\) between the received pattern and the grating, where \\(L\\) is the distance between the sender and receiver. In this case, our detected signal should have an overlap probability: \\[I(t) = \\left|\\int\\int\\Phi^{*}(x,y)\\psi(x+\\Delta d,y+\\Delta d,t)dxdy\\right|^{2} \\tag{3}\\] \\[\\propto \\left|\\sum_{\\ell}\\left(B_{\\ell^{\\prime},\\ell}+B_{-\\ell^{\\prime},\\ell}\\right)\\exp(i\\ell\\Omega t)\\right|^{2},\\] Note that we describe the beam in Cartesian coordinates here, as this is more convenient in our misalignment considerations. \\(B_{\\ell^{\\prime},\\ell}=\\sum_{p^{\\prime},p}\\int\\int[\\text{LG}^{\\ell^{\\prime}}_ {p^{\\prime}}(x,y)]^{*}\\text{LG}^{\\ell}_{p}(x+\\Delta d,y+\\Delta d)dxdy\\) denotes the total coupling efficiency from the OAM mode \\(\\ell\\) to \\(\\ell^{\\prime}\\) due to the misalignment of the receiver pattern and the grating. If there is no mode spreading, i.e., \\(B_{\\ell^{\\prime},\\ell}=\\delta_{\\ell^{\\prime},\\ell}\\), we have \\(I(t)\\propto\\left|A_{\\ell^{\\prime}}\\exp(i\\ell^{\\prime}\\Omega t)+A_{-\\ell^{ \\prime}}\\exp(-i\\ell^{\\prime}\\Omega t)\\right|^{2}\\), which is the trivial case without misalignment. In our experiment, the beam waist \\(w_{0}\\) on the grating is about 7.5 mm. As an exemplary simulation to show the mode spreading of the Clover and Pentas light fields, we plot \\(P^{\\prime}_{\\ell^{\\prime}}=\\sum_{\\ell}\\left|B_{\\ell^{\\prime},\\ell}\\right|^{2}\\) in Fig.
6(a) and 6(b) with \\(\\Delta d=0.2w_{0}\\), which corresponds to a small tilt-angle vibration of \\(\\Delta\\theta\\approx 0.000013^{\\circ}\\) at the sender. Such tiny swing angles are very likely caused by slight misalignments of the telescope, the SLM or the laser due to mechanical instabilities. To focus on the contribution of the main modes coupling to the adjacent ones, we define a spreading efficiency as \\(\\xi_{clover}=(P^{\\prime}_{|\\ell|=2}+P^{\\prime}_{|\\ell|=4})/2P^{\\prime}_{|\\ell| =3}\\) for the Clover and \\(\\xi_{pentas}=(P^{\\prime}_{|\\ell|=4}+P^{\\prime}_{|\\ell|=6})/2P^{\\prime}_{|\\ell| =5}\\) for the Pentas, respectively.

Figure 6: Influence of the misalignment on the mode spectrum. Simulation results of the mode spreading effect when there is a small misalignment between the receiver pattern and the grating with \\(\\Delta d=0.2w_{0}\\) for (a): the Clover object, (b): the Pentas object. (c) The relationship between the spreading efficiency and the vibration angle at the sender for both the three-leaf Clover object and the five-leaf Pentas object. Here the spreading efficiency is defined as \\(\\xi_{clover}=(P^{\\prime}_{|\\ell|=2}+P^{\\prime}_{|\\ell|=4})/2P^{\\prime}_{|\\ell| =3}\\) for the Clover, while \\(\\xi_{pentas}=(P^{\\prime}_{|\\ell|=4}+P^{\\prime}_{|\\ell|=6})/2P^{\\prime}_{|\\ell| =5}\\) for the Pentas.

One can see (Fig. 6(a)) that the dominant modes \\(\\ell^{\\prime}=\\pm 3\\) spread to the modes \\(\\ell^{\\prime}=\\pm 2\\) and \\(\\ell^{\\prime}=\\pm 4\\) with an efficiency of nearly 24 %. This effect is even stronger for the Pentas pattern, where a spreading to the next neighbouring modes of up to 48 % can be observed (see Fig. 6(b)). It seems that more complex objects lead to a stronger mode spreading effect. This observation becomes clearer when we plot the spreading efficiency with respect to the vibration angle \\(\\Delta\\theta\\) at the sender (see Fig. 6(c)). Under the same vibration angle, the five-leaf object always has a higher spreading efficiency than the three-leaf object. From these investigations we see that minimizing vibrations at the sender is crucial, especially when longer measurement times are needed. Secondly, for a practical free-space optical link, the atmospheric turbulence can lead to random variations in the refractive index such that the phase front of the propagating light is inevitably distorted [29]. This is particularly important for the detection of the rotational Doppler effect, as its measurement is very sensitive to the optical filtering of suitable OAM superpositions [11]. Here we adopt the model developed by von Karman to describe the influence of turbulence [30; 31] and represent the atmospheric turbulence link as several turbulent phase screens, each separated by some distance of propagation [32; 33]. After propagating through the turbulence over a distance \\(Z\\), the modified light field at the receiver can be written as \\(\\psi^{\\prime}(r,\\varphi,Z,t)\\). We first perform a numerical simulation with \\(Z=120\\) m for the Clover and Pentas images under different air turbulence strengths \\(C_{n}^{2}\\) in Fig. 7(a). For comparison, we also show the experimental light field captured at the receiver, as shown in Fig. 7(b). One can see that most of the deformation of the image comes from diffraction rather than from air turbulence. So it is reasonable to say that our air turbulence strength should be on the order of \\(10^{-15}-10^{-14}\\).
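Both spreading mechanisms boil down to overlap integrals between the received field and the detection modes. As a simple illustration of the misalignment case analyzed above, the sketch below (our own, not the authors' simulation) displaces a single \\(p=0\\) LG mode of order \\(\\ell=3\\) by \\(\\Delta d=0.2w_{0}\\) and computes how its power redistributes over azimuthal orders \\(\\ell^{\\prime}\\); summing the coupling over all radial indices reduces, by Parseval's relation, to an azimuthal decomposition, so no explicit LG overlaps are needed. Because only one input mode is used instead of the full object spectrum, the numbers will not exactly match Fig. 6, but the leakage into the neighbouring orders is clearly visible.

```python
import numpy as np

# Illustrative sketch: power spreading of a displaced LG_{p=0}^{ell=3} mode
# into neighbouring azimuthal orders ell' (single input mode only).
w0 = 7.5e-3                      # beam waist on the grating, ~7.5 mm as in the text
ell_in = 3                       # dominant order of the Clover
delta = 0.2 * w0                 # lateral displacement between pattern and grating

nr, nphi = 600, 512
r = np.linspace(0.0, 4 * w0, nr)
phi = np.linspace(0.0, 2 * np.pi, nphi, endpoint=False)
R, PHI = np.meshgrid(r, phi, indexing="ij")

def lg0(x, y):                   # un-normalised LG_{p=0}^{ell_in} field
    rr, th = np.hypot(x, y), np.arctan2(y, x)
    return (np.sqrt(2) * rr / w0) ** abs(ell_in) * np.exp(-rr**2 / w0**2 + 1j * ell_in * th)

dr, dphi = r[1] - r[0], 2 * np.pi / nphi
u = lg0(R * np.cos(PHI) - delta, R * np.sin(PHI))       # displaced mode on a polar grid
u /= np.sqrt(np.sum(np.abs(u) ** 2 * R) * dr * dphi)    # normalise to unit total power

c = np.fft.fft(u, axis=1) / nphi                        # azimuthal coefficients c_{ell'}(r)
P = 2 * np.pi * np.sum(np.abs(c) ** 2 * r[:, None], axis=0) * dr
ells = np.fft.fftfreq(nphi, d=1.0 / nphi).astype(int)
P = {int(k): p for k, p in zip(ells, P)}

xi = (P[2] + P[4]) / (2 * P[3])                         # spreading efficiency as defined above
print(f"kept in ell'=3: {P[3]:.3f},  leaked to ell'=2,4: {P[2]:.3f}, {P[4]:.3f},  xi ~ {xi:.2f}")
```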
To further simulate the effect of atmospheric turbulence on the rotating light field, we use a similar mode expansion method as described earlier: \\(A_{\\ell,p}^{\\prime}=\\int\\int[\\mathrm{LG}_{p}^{\\ell}(r,\\varphi,\\mathrm{Z})]^{ \\ast}\\psi^{\\prime}(r,\\varphi,Z,t)rdrd\\varphi\\). When taking the effect of atmospheric turbulence with such a strength of \\(C_{n}^{2}=7.5\\ast 10^{-15}\\) into account, we only find a very small influence on the pure OAM spectra characterized by \\(P_{\\ell}^{\\prime}=\\sum_{p}|A^{\\prime}_{\\ell,p}|^{2}\\) (see Fig. 7(c)). For the Clover, the power of the dominant single OAM modes, i.e., \\(\\ell^{\\prime}=\\pm 3\\), is slightly spread to the adjacent modes with around 1 % efficiency. For the five-leaf Pentas, the power of the dominant modes \\(\\ell^{\\prime}=\\pm 5\\) spreads to the adjacent modes with around 1.2 % efficiency. Again, this demonstrates that the more complex the pattern, the more severe the mode coupling. To see under which turbulence conditions the 120-m link might not have worked, we do a further simulation and investigate the relationship between the strength of turbulence (\\(C_{n}^{2}\\)) and the mode spreading efficiency (\\(\\xi\\)) (see Fig. 7(d)). Only strong turbulence (stronger than \\(5\\ast 10^{-14}\\)) will cause \\(\\xi>50\\) % and may lead to incorrect discrimination of the main mode from the adjacent ones. Thus we believe that in our situation the atmospheric turbulence only contributed very weakly to the mode spreading phenomena observed in our results. This can be clearly seen in the frequency spectrum in Fig. 5(a), where nearly no signal can be detected for the modes \\(\\ell^{\\prime}=\\pm 4\\) and \\(\\ell^{\\prime}=\\pm 5\\), while during another measurement run (shown in Fig. 4(a)) we observe stronger coupling to neighboring modes. This asymmetric phenomenon gives additional evidence that the mode spreading mainly comes from the misalignment rather than from the turbulence. However, irrespective of where this mode spreading comes from, misalignment or turbulence, one may always deduce the rotation speed and the rotational symmetry of an unknown rotating object as long as one can clearly discriminate the dominant modes from the spread modes. For longer-distance sensing of a rotating object, one will need to address both of the analyzed effects. Here, adaptive optics and machine learning-based pattern recognition may be introduced to compensate for the effect of turbulence or misalignment [34].

Figure 7: Influence of the atmospheric turbulence on received patterns. (a): simulation results of the Clover and Pentas light fields at the receiver under different turbulence strengths. (b): experimentally obtained results recorded by an EMCCD at the receiver. (c): simulation results of the slightly disturbed pure OAM spectra with a turbulence strength of \\(C_{n}^{2}=7.5\\ast 10^{-15}\\), which we estimated to resemble our experimental conditions. (d): simulation results of the variation of the spreading efficiency with respect to different turbulence strengths over 120 m.

## V. Conclusion

In conclusion, we have conducted an outdoor experiment of measuring the rotational Doppler effect by building a 120-m free-space optical link in a realistic city environment. Our experimental results with two typical rotating objects, i.e., Clover and Pentas patterns, demonstrate that long-distance remote sensing of spinning bodies is practically feasible, particularly for those objects possessing a high spatial symmetry.
Despite the appearance of frequency shifts in adjacent modes caused by slight misalignments and the influence of atmospheric turbulence, we can still observe a clearly distinguishable peak of frequency shifts at the desired OAM detection modes, which is associated with the object's rotational symmetry. The effect of the energy spread accompanying the transfer of frequency shifts from the dominant OAM modes to their adjacent ones was carefully analysed, which might offer a useful reference for further exploration of this field. The natural extension of our scheme is to implement the free-space link over a longer distance to detect rotating bodies [35; 36; 37]. Moreover, the ability of our scheme to work in the photon-counting regime suggests the potential to combine the rotational Doppler effect with quantum entangled light sources for long-distance entanglement-enhanced remote sensing techniques [38]. In addition, our feasibility study with extremely low light intensities may pave the way towards applications, such as covert imaging and biological sensing, where a low photon flux is essential as a high photon flux might have detrimental effects [39].

###### Acknowledgements.

This work is supported by the National Natural Science Foundation of China (NSFC) (11474238, 91636109), the Fundamental Research Funds for the Central Universities at Xiamen University (20720160040), the Natural Science Foundation of Fujian Province of China for Distinguished Young Scientists (2015J06002), and the program for New Century Excellent Talents in University of China (NCET-13-0495). R.F. is thankful for financial support by the Banting postdoctoral fellowship of the Natural Sciences and Engineering Research Council of Canada (NSERC).

## References

* Raymond [1984]M. Raymond, _Laser remote sensing: fundamentals and applications_ (Krieger Publishing Company, 1984). * Padgett [2006]M. Padgett, Nature **443**, 924 (2006). * Garetz and Arnold [1979]B. A. Garetz and S. Arnold, Optics Communications **31**, 1 (1979). * Simon _et al._ [1988]R. Simon, H. Kimble, and E. Sudarshan, Physical review letters **61**, 19 (1988). * Li _et al._ [2016]G. Li, T. Zentgraf, and S. Zhang, Nature Physics **12**, 736 (2016). * Allen _et al._ [1992]L. Allen, M. W. Beijersbergen, R. Spreeuw, and J. Woerdman, Physical Review A **45**, 8185 (1992). * Courtial _et al._ [1998]J. Courtial, K. Dholakia, D. Robertson, L. Allen, and M. Padgett, Physical review letters **80**, 3217 (1998). * Courtial _et al._ [1998]J. Courtial, D. Robertson, K. Dholakia, L. Allen, and M. Padgett, Physical review letters **81**, 4828 (1998). * Barreiro _et al._ [2006]S. Barreiro, J. W. R. Tabosa, H. Failache, and A. Lezama, Physical review letters **97**, 113601 (2006). * Korech _et al._ [2013]O. Korech, U. Steinitz, R. J. Gordon, I. S. Averbukh, and Y. Prior, Nature Photonics **7**, 711 (2013). * Lavery _et al._ [2013]M. P. Lavery, F. C. Speirits, S. M. Barnett, and M. J. Padgett, Science **341**, 537 (2013). * Lavery _et al._ [2014]M. P. Lavery, S. M. Barnett, F. C. Speirits, and M. J. Padgett, Optica **1**, 1 (2014). * Rosales-Guzman _et al._ [2014]C. Rosales-Guzman, N. Hermosa, A. Belmonte, and J. P. Torres, Optics express **22**, 16504 (2014). * Fang _et al._ [2017]L. Fang, M. J. Padgett, and J. Wang, Laser & Photonics Reviews **11** (2017). * Zhao _et al._ [2016]M. Zhao, X. Gao, M. Xie, W. Zhai, W. Xu, S. Huang, and W. Gu, Optics letters **41**, 2549 (2016). * Zhou _et al._ [2017]H.-L. Zhou, D.-Z. Fu, J.-J. Dong, P. Zhang, D.-X.
Chen, X.-L. Cai, F.-L. Li, and X.-L. Zhang, Light: Science & Applications **6**, e16251 (2017). * Padgett [2014]M. Padgett, Physics Today **67**, 58 (2014). * Marrucci [2013]L. Marrucci, Science **341**, 464 (2013). * Willner _et al._ [2015]A. E. Willner, H. Huang, Y. Yan, Y. Ren, N. Ahmed, G. Xie, C. Bao, L. Li, Y. Cao, Z. Zhao, _et al._, Advances in Optics and Photonics **7**, 66 (2015). * Torner _et al._ [2005]L. Torner, J. P. Torres, and S. Carrasco, Optics express **13**, 873 (2005). * Molina-Terriza _et al._ [2007]G. Molina-Terriza, L. Rebane, J. P. Torres, L. Torner, and S. Carrasco, Journal of the European Optical Society-Rapid Publications **2** (2007). * Petrov _et al._ [2012]D. Petrov, N. Rahael, G. Molina-Terriza, and L. Torner, Optics letters **37**, 869 (2012). * Chen _et al._ [2014]L. Chen, J. Lei, and J. Romero, Light: Science & Applications **3**, e153 (2014). * Zhang and Chen [2016]W. Zhang and L. Chen, Optics Letters **41**, 2843 (2016). * Zhang _et al._ [2016]W. Zhang, Z. Wu, J. Wang, and L. Chen, Chinese Optics Letters **14**, 110501 (2016). * Vasnetsov _et al._ [2005]M. Vasnetsov, V. Pas'Ko, and M. Soskin, New Journal of Physics **7**, 46 (2005). * Lavery _et al._ [2017]M. P. Lavery, C. Peuntinger, K. Gunthner, P. Banzer, D. Elser, R. W. Boyd, M. J. Padgett, C. Marquardt, and G. Leuchs, Science Advances **3**, e1700552 (2017). * Lavery _et al._ [2011]M. P. Lavery, G. C. Berkhout, J. Courtial, and M. J. Padgett, Journal of Optics **13**, 064006 (2011). * Paterson [2005]C. Paterson, Physical review letters **94**, 153901 (2005). * Lane _et al._ [1992]R. Lane, A. Glindemann, J. Dainty, _et al._, Waves in random media **2**, 209 (1992). * Ostashev _et al._ [1998]V. Ostashev, B. Brahler, V. Mellett, and G. Goedecke, The Journal of the Acoustical Society of America **104**, 727 (1998). * Fu and Gao [2016]S. Fu and C. Gao, Photonics Research **4**, B1 (2016). * Zhao _et al._ [2012]S. Zhao, J. Leach, L. Gong, J. Ding, and B. Zheng, Optics express **20**, 452 (2012). * Ren _et al._ [2014]Y. Ren, G. Xie, H. Huang, N. Ahmed, Y. Yan, L. Li, C. Bao, M. P. Lavery, M. Tur, M. A. Neifeld, _et al._, Optica **1**, 376 (2014). * Krenn _et al._ [2014]M. Krenn, R. Fickler, M. Fink, J. Handsteiner, M. Malik, T. Scheidl, R. Ursin, and A. Zeilinger, New Journal of Physics **16**, 113028 (2014). * Krenn _et al._ [2016]M. Krenn, J. Handsteiner, M. Fink, R. Fickler, R. Ursin, M. Malik, and A. Zeilinger, Proceedings of the National Academy of Sciences **113**, 13648 (2016). * Tamburini _et al._ [2011]F. Tamburini, B. Thide, G. Molina-Terriza, and G. Anzolin, Nature Physics **7**, 195 (2011). * Krenn _et al._ [2015]M. Krenn, J. Handsteiner, M. Fink, R. Fickler, and A. Zeilinger, Proceedings of the National Academy of Sciences **112**, 14197 (2015). * Morris _et al._ [2015]P. A. Morris, R. S. Aspden, J. E. Bell, R. W. Boyd, and M. J. Padgett, Nature communications **6**, 5913 (2015).
The rotational Doppler effect associated with light's orbital angular momentum (OAM) has been found to be a powerful tool to detect rotating bodies. However, this method has so far only been demonstrated experimentally on the laboratory scale under well-controlled conditions, while its real potential lies in practical applications in the field of remote sensing. We have established a 120-meter-long free-space link between the rooftops of two buildings and show that both the rotation speed and the rotational symmetry of objects can be identified from the detected rotational Doppler frequency shift signal at the photon-count level. Effects of possible slight misalignments and atmospheric turbulence are quantitatively analyzed in terms of mode power spreading to the adjacent modes as well as the transfer of rotational frequency shifts. Moreover, our results demonstrate that with prior knowledge of the object's rotational symmetry one may always deduce the rotation speed no matter how strong the coupling to neighboring modes is. Without any information about the rotating object, the object's symmetry and rotational speed may still be deduced as long as the mode spreading efficiency does not exceed 50 %. Our work supports the feasibility of a practical sensor to remotely detect both the speed and symmetry of rotating bodies.
# Maximum Volume Inscribed Ellipsoid: A New Simplex-Structured Matrix Factorization Framework via Facet Enumeration and Convex Optimization

Chia-Hsiang Lin, Ruiyuan Wu, Wing-Kin Ma, Chong-Yung Chi, and Yue Wang

Instituto de Telecomunicacoes, Instituto Superior Tecnico, Universidade de Lisboa, Lisbon, Portugal. Email: [email protected]

Department of Electronic Engineering, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong. Emails: [email protected], [email protected]

Institute of Communications Engineering, National Tsing-Hua University, Hsinchu, Taiwan 30013, R.O.C. Email: [email protected]

Department of Electrical and Computer Engineering, Virginia Polytechnic Institute and State University, VA, USA. Email: [email protected]

## 1 Introduction

Consider the following problem. Let \\(\\mathbf{X}\\in\\mathbb{R}^{M\\times L}\\) be a given data matrix. The data matrix \\(\\mathbf{X}\\) adheres to a low-rank model \\(\\mathbf{X}=\\mathbf{AS}\\), where \\(\\mathbf{A}\\in\\mathbb{R}^{M\\times N},\\mathbf{S}\\in\\mathbb{R}^{N\\times L}\\) with \\(N\\leq\\min\\{M,L\\}\\). The goal is to recover \\(\\mathbf{A}\\) and \\(\\mathbf{S}\\) from \\(\\mathbf{X}\\), with the aid of some known or hypothesized structures on \\(\\mathbf{A}\\) and/or \\(\\mathbf{S}\\). Such a problem is called _structured matrix factorization (SMF)_. In this paper we focus on a specific type of SMF called _simplex-SMF (SSMF)_, where the columns of \\(\\mathbf{S}\\) are assumed to lie in the unit simplex. SSMF has been found to be elegant and powerful--as shown by more than a decade of research on _hyperspectral unmixing_ (HU) in geoscience and remote sensing [8, 43], and more recently, by research in areas such as computer vision, machine learning, text mining and optimization [30]. To describe SSMF and its underlying significance, it is necessary to mention two key research topics from which important SSMF techniques were developed. The first is HU, a main research topic in hyperspectral remote sensing. The task of HU is to decompose a remotely sensed hyperspectral image into endmember spectral signatures and the corresponding abundance maps, and SSMF plays the role of tackling such a decomposition. A widely accepted assumption in HU is that \\(\\mathbf{S}\\) has columns lying in the unit simplex; or, some data pre-processing may be applied to make the aforementioned assumption hold [8, 16, 30, 44]. Among the many SSMF techniques established within the hyperspectral remote sensing community, we should mention pure-pixel search and minimum volume enclosing simplex (MVES) [9, 14, 20, 39, 40, 46]--they are insightful and have recently been shown to be theoretically sound [15, 31, 41]. The second topic in which SSMF has shown impact is topic discovery for text mining--which has recently received much interest in machine learning. In this context, the so-called separable NMF techniques have attracted considerable attention [1, 2, 21, 22, 24, 25, 29, 48]. Separable NMF falls into the scope of SSMF as it also assumes that the columns of \\(\\mathbf{S}\\) lie in the unit simplex. Separable NMF is very closely related to, if not exactly the same as, pure-pixel search developed earlier in HU; the two use essentially the same model assumption.
However, separable NMF offers new twists not seen in traditional HU, such as convex optimization solutions and robustness analysis in the noisy case; see the aforementioned references for details. Some recent research also considers more relaxed techniques than separable NMF, such as subset-separable NMF [28] and MVES [37]. Furthermore, it is worth noting that other than HU and topic discovery, SSMF also finds applications in various areas such as gene expression data analysis, dynamic biomedical imaging, and analytical chemistry [17, 42, 50]. The beauty of the aforementioned SSMF frameworks lies in how they utilize the geometric structures of the SSMF model to pin down sufficient conditions for exact recovery, and to build algorithms with good recovery performance. We will shed some light onto those geometric insights when we review the problem in the next section, and we should note that recent theoretical breakthroughs in SSMF have played a key role in understanding the fundamental nature of SSMF better and in designing better algorithms. Motivated by such exciting advances, in this paper we explore a new theoretical direction for SSMF. Our idea is still geometrical, but we take a different route, namely, by considering the maximum volume ellipsoid inscribed in a data-constructed convex hull; the intuition will be elucidated later. As the main contribution of this paper, we will show a sufficient condition under which this maximum volume inscribed ellipsoid (MVIE) framework achieves exact recovery. The sufficient recovery condition we prove is arguably not hard to satisfy in practice and is much more relaxed than that of pure-pixel search and separable NMF, and coincidentally it is the same as that of MVES--which is a powerful SSMF framework for non-separable problem instances. In addition, our development will reveal that MVIE can be practically realized by solving a facet enumeration problem, and then by solving a convex optimization problem in the form of log-determinant maximization. This shows a very different flavor from the MVES framework, in which we are required to solve a non-convex problem. While we should point out that our MVIE solution may not be computed in polynomial time because facet enumeration is NP-hard in general [5, 10], it still brings a new perspective to the SSMF problem. In particular, for instances where facet enumeration can be efficiently computed, the remaining problem with MVIE is to solve a convex problem in which local minima are no longer an issue. We will provide numerical results to show the potential of the MVIE framework. The organization of this paper is as follows. We succinctly review the SSMF model and some existing frameworks in Section 2. The MVIE framework is described in Section 3. Section 4 provides the proof of the main theoretical result in this paper. Section 5 develops an MVIE algorithm and discusses computational issues. Numerical results are provided in Section 6, and we conclude this work in Section 7. Our notations are standard, and some of them are specified as follows.
Boldface lowercase and capital letters, like \\(\\mathbf{a}\\) and \\(\\mathbf{A}\\), represent vectors and matrices, respectively (resp.); unless specified, \\(\\mathbf{a}_{i}\\) denotes the \\(i\\)th column of \\(\\mathbf{A}\\); \\(\\mathbf{e}_{i}\\) denotes a unit vector with \\([\\mathbf{e}_{i}]_{i}=1\\) and \\([\\mathbf{e}_{i}]_{j}=0\\) for \\(j\\neq i\\); \\(\\mathbf{1}\\) denotes an all-one vector; \\(\\mathbf{a}\\geq\\mathbf{0}\\) means that \\(\\mathbf{a}\\) is element-wise non-negative; the pseudo-inverse of a given matrix \\(\\mathbf{A}\\) is denoted by \\(\\mathbf{A}^{\\dagger}\\); \\(\\|\\cdot\\|\\) denotes the Euclidean norm (for both vectors and matrices); given a set \\(\\mathcal{C}\\) in \\(\\mathbb{R}^{n}\\), aff \\(\\mathcal{C}\\) and conv \\(\\mathcal{C}\\) denote the affine hull and convex hull of \\(\\mathcal{C}\\), resp.; the dimension of a set \\(\\mathcal{C}\\) is denoted by \\(\\dim\\mathcal{C}\\); \\(\\text{int }\\mathcal{C},\\text{ri }\\mathcal{C},\\text{bd }\\mathcal{C}\\) and rbd \\(\\mathcal{C}\\) denote the interior, relative interior, boundary and relative boundary of the given set \\(\\mathcal{C}\\), resp.; vol \\(\\mathcal{C}\\) denotes the volume of a measurable set \\(\\mathcal{C}\\); \\(\\mathcal{B}_{n}=\\{\\mathbf{x}\\in\\mathbb{R}^{n}\\ |\\ \\|\\mathbf{x}\\|\\leq 1\\}\\) denotes the \\(n\\)-dimensional unit Euclidean-norm ball, or simply unit ball; \\(\\mathbb{S}^{n}\\) and \\(\\mathbb{S}^{n}_{+}\\) denote the sets of all \\(n\\times n\\) symmetric and symmetric positive semidefinite matrices, resp.; \\(\\lambda_{\\min}(\\mathbf{X})\\) and \\(\\lambda_{\\max}(\\mathbf{X})\\) denote the smallest and largest eigenvalues of \\(\\mathbf{X}\\), resp.

## 2 Data Model and Related Work

In this section we describe the background of SSMF.

### Model

As mentioned in the Introduction, we consider a low-rank data model \\[\\mathbf{X}=\\mathbf{A}\\mathbf{S},\\] where \\(\\mathbf{A}\\in\\mathbb{R}^{M\\times N},\\mathbf{S}\\in\\mathbb{R}^{N\\times L}\\) with \\(N\\leq\\min\\{M,L\\}\\). The model can be written in a column-by-column form as \\[\\mathbf{x}_{i}=\\mathbf{A}\\mathbf{s}_{i},\\quad i=1,\\ldots,L,\\] where each \\(\\mathbf{s}_{i}\\) is assumed to lie in the unit simplex. We call this problem simplex-structured matrix factorization, or SSMF in short. We will focus only on the recovery of \\(\\mathbf{A}\\); once \\(\\mathbf{A}\\) is retrieved, the factor \\(\\mathbf{S}\\) can simply be recovered by solving the inverse problems \\[\\min_{\\mathbf{s}_{i}\\geq\\mathbf{0},\\mathbf{1}^{T}\\mathbf{s}_{i}=1}\\ \\|\\mathbf{x}_{i}-\\mathbf{A}\\mathbf{s}_{i} \\|^{2},\\qquad i=1,\\ldots,L.\\] SSMF finds many important applications as we reviewed in the Introduction, and one can find an enormous amount of literature--from remote sensing, signal processing, machine learning, computer vision, optimization, etc.--on the wide variety of techniques for SSMF or related problems. Here we selectively and concisely describe two mainstream frameworks.

### Pure-Pixel Search and Separable NMF

The first framework to be reviewed is pure-pixel search in HU in remote sensing [43] or separable NMF in machine learning [30]. Both assume that for every \\(k\\in\\{1,\\ldots,N\\}\\), there exists an index \\(i_{k}\\in\\{1,\\ldots,L\\}\\) such that \\[\\mathbf{s}_{i_{k}}=\\mathbf{e}_{k}.\\] The above assumption is called the pure-pixel assumption in HU or the separability assumption in separable NMF.
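Returning briefly to the inverse problems above: once \\(\\mathbf{A}\\) is known, each column of \\(\\mathbf{S}\\) is obtained from a least-squares problem over the unit simplex. A minimal sketch of one possible solver (a projected-gradient iteration of our own, not code from the paper) is given below; any off-the-shelf simplex-constrained least-squares routine would serve equally well.

```python
import numpy as np

# Minimal sketch: recover each column of S from  min_{s >= 0, 1^T s = 1} ||x - A s||^2
# by projected gradient with the classical sorting-based simplex projection.
def project_simplex(v):
    """Euclidean projection of v onto the unit simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / np.arange(1, v.size + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def recover_S(X, A, iters=2000):
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant of the gradient
    S = np.full((A.shape[1], X.shape[1]), 1.0 / A.shape[1])
    for _ in range(iters):
        S = S - step * (A.T @ (A @ S - X))          # gradient step on the least-squares cost
        S = np.apply_along_axis(project_simplex, 0, S)
    return S

# toy check on synthetic, noise-free data
rng = np.random.default_rng(0)
M, N, L = 10, 4, 50
A = rng.random((M, N))
S_true = rng.dirichlet(np.ones(N), size=L).T        # columns lie in the unit simplex
S_hat = recover_S(A @ S_true, A)
print("max abs error:", np.max(np.abs(S_hat - S_true)))
```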
Figure 1(a) illustrates the geometry of \\(\\mathbf{s}_{1},\\ldots,\\mathbf{s}_{L}\\) under the pure-pixel assumption, where we see that the pure pixels \\(\\mathbf{s}_{i_{1}},\\ldots,\\mathbf{s}_{i_{N}}\\) are the vertices of the convex hull \\(\\operatorname{conv}\\{\\mathbf{s}_{1},\\ldots,\\mathbf{s}_{L}\\}\\). This suggests that some kind of vertex search can lead to recovery of \\(\\mathbf{A}\\)--the key insight of almost all algorithms in this framework. The beauty of pure-pixel search or separable NMF is that under the pure-pixel assumption, SSMF can be accomplished either via simple algorithms [1, 25] or via convex optimization [21, 22, 24, 29, 48]. Also, as shown in the aforementioned references, some of these algorithms are supported by theoretical analyses in terms of guarantees on recovery accuracies. To give insights into how the geometry of the pure-pixel case can be utilized for SSMF, we briefly describe a pure-pixel search framework based on _maximum volume inscribed simplex_ (MVIS) [14, 46]. The MVIS framework considers the following problem \\[\\max_{\\mathbf{b}_{1},\\ldots,\\mathbf{b}_{N}\\in\\mathbb{R}^{M}} \\operatorname{vol}(\\operatorname{conv}\\{\\mathbf{b}_{1},\\ldots,\\mathbf{b}_{N}\\}) \\tag{1}\\] \\[\\text{s.t. }\\operatorname{conv}\\{\\mathbf{b}_{1},\\ldots,\\mathbf{b}_{N}\\} \\subseteq\\operatorname{conv}\\{\\mathbf{x}_{1},\\ldots,\\mathbf{x}_{L}\\},\\] where we seek to find a simplex \\(\\operatorname{conv}\\{\\mathbf{b}_{1},\\ldots,\\mathbf{b}_{N}\\}\\) such that it is inscribed in the data convex hull \\(\\operatorname{conv}\\{\\mathbf{x}_{1},\\ldots,\\mathbf{x}_{L}\\}\\) and its volume is the maximum; see Figure 2 for an illustration. Intuitively, it seems true that the vertices of the MVIS, under the pure-pixel assumption, should be \\(\\mathbf{a}_{1},\\ldots,\\mathbf{a}_{N}\\). In fact, this can be shown to be valid:

**Theorem 1**: _[_14_]_ _The optimal solution to the MVIS problem (1) is \\(\\mathbf{a}_{1},\\ldots,\\mathbf{a}_{N}\\) or their permutations if and only if the pure-pixel assumption holds._

It should be noted that the above theorem also reveals that the MVIS cannot correctly recover \\(\\mathbf{a}_{1},\\ldots,\\mathbf{a}_{N}\\) for no-pure-pixel or non-separable problem instances. Readers are also referred to [14] for details on how the MVIS problem is handled in practice.

Figure 2: Geometrical illustration of MVIS. The instance shown satisfies the pure-pixel assumption. The way we visualize is similar to that in Figure 1, where we project the data points \\(\\mathbf{x}_{1},\\ldots,\\mathbf{x}_{L}\\) onto the affine hull \\(\\operatorname{aff}\\{\\mathbf{a}_{1},\\mathbf{a}_{2},\\mathbf{a}_{3}\\}\\). The solid dark dots are the data points \\(\\mathbf{x}_{1},\\ldots,\\mathbf{x}_{L}\\). The subfigure in (a) depicts a simplex inscribed in the data convex hull \\(\\operatorname{conv}\\{\\mathbf{x}_{1},\\ldots,\\mathbf{x}_{L}\\}\\). The outer triangle represents \\(\\operatorname{conv}\\{\\mathbf{x}_{1},\\ldots,\\mathbf{x}_{L}\\}\\), while the inner triangle represents the inscribed simplex. The subfigure in (b) depicts the MVIS. The vertices of the MVIS, marked by “\\(\\times\\)”, are seen to be \\(\\mathbf{a}_{1},\\mathbf{a}_{2},\\mathbf{a}_{3}\\).

### Minimum Volume Enclosing Simplex

While SSMF under the pure-pixel assumption gives many benefits, the assumption of having pure pixels in the data is somewhat strong. A question that has previously puzzled researchers is whether recovery of \\(\\mathbf{A}\\) is possible without the pure-pixel assumption. This leads to another framework that hinges on minimum volume enclosing simplex (MVES)--a notion conceived first by Craig in the
HU context [20] and an idea that can be traced back to the 1980's [27]. The idea is to solve an MVES problem \\[\\min_{\\mathbf{b}_{1},\\ldots,\\mathbf{b}_{N}\\in\\mathbb{R}^{M}} \\operatorname{vol}(\\operatorname{conv}\\{\\mathbf{b}_{1},\\ldots,\\mathbf{b}_{N}\\}) \\tag{2}\\] \\[\\text{s.t. }\\mathbf{x}_{i}\\in\\operatorname{conv}\\{\\mathbf{b}_{1},\\ldots,\\mathbf{b}_{N}\\},\\quad i=1,\\ldots,L,\\] or its variants (see, e.g., [7, 23]). As can be seen in (2) and as illustrated in Figure 3, the goal is to find a simplex that encloses the data points and has the minimum volume. The vertices of the MVES, which are the solution \\(\\mathbf{b}_{1},\\ldots,\\mathbf{b}_{N}\\) to Problem (2), then serve as the estimate of \\(\\mathbf{A}\\). MVES is more commonly seen in HU, and most recently the idea has made its way to machine learning [26, 37]. Empirically it has been observed that MVES can achieve good recovery accuracies in the absence of pure pixels, and MVES-based algorithms are often regarded as tools for resolving instances of "heavily mixed pixels" in HU [45]. Recently, the mystery of whether MVES can provide exact recovery _theoretically_ has been answered:

**Theorem 2**: _[_41_]_ _Define_ \\[\\gamma=\\max\\left\\{r\\leq 1\\ |\\ (\\operatorname{conv}\\{\\mathbf{e}_{1},\\ldots,\\mathbf{e }_{N}\\})\\cap(r\\mathcal{B}_{N})\\subseteq\\operatorname{conv}\\{\\mathbf{s}_{1},\\ldots,\\mathbf{s}_{L}\\}\\right\\}, \\tag{3}\\] _which is called the uniform pixel purity level. If \\(N\\geq 3\\) and_ \\[\\gamma>\\frac{1}{\\sqrt{N-1}},\\] _then the optimal solution to the MVES problem (2) must be given by \\(\\mathbf{a}_{1},\\ldots,\\mathbf{a}_{N}\\) or their permutations._

The uniform pixel purity level has elegant geometric interpretations. To give readers some feeling, Figure 1(b) illustrates an instance for which \\(\\gamma>1/\\sqrt{N-1}\\) holds, but the pure-pixel assumption does not. Also, note that \\(\\gamma=1\\) corresponds to the pure-pixel case. Interested readers are referred to [41] for more explanations of \\(\\gamma\\), and to [23, 26, 37] for concurrent and more recent results on theoretical MVES recovery. Loosely speaking, the premise in Theorem 2 should have a high probability of being satisfied in practice as long as the data points are reasonably well spread. While MVES is appealing in its recovery guarantees, the pursuit of SSMF frameworks is arguably not over. The MVES problem (2) is non-convex and NP-hard in general [47]. Our numerical experience is that the convergence of an MVES algorithm to a good result could depend on the starting point. Hence, it is interesting to study alternative frameworks that can also go beyond the pure-pixel or separability case and can bring a new perspective to the no-pure-pixel case--and this is the motivation for our development of the MVIE framework in the next section.

## 3 Maximum Volume Inscribed Ellipsoid

Let us first describe some facts and our notation for ellipsoids. Any \\(n\\)-dimensional ellipsoid \\(\\mathcal{E}\\) in \\(\\mathbb{R}^{m}\\) may be characterized as \\[\\mathcal{E}=\\mathcal{E}(\\mathbf{F},\\mathbf{c})\\triangleq\\{\\mathbf{F}\\mathbf{\\alpha}+\\mathbf{c}\\ |\\ \\|\\mathbf{\\alpha}\\|\\leq 1\\},\\] for some full column-rank \\(\\mathbf{F}\\in\\mathbb{R}^{m\\times n}\\) and \\(\\mathbf{c}\\in\\mathbb{R}^{m}\\).
The volume of an \\(n\\)-dimensional ellipsoid \\(\\mathcal{E}(\\mathbf{F},\\mathbf{c})\\) is given by \\[\\operatorname{vol}(\\mathcal{E}(\\mathbf{F},\\mathbf{c}))=\\rho_{n}(\\det(\\mathbf{F}^{T}\\mathbf{F}))^{1/2},\\] where \\(\\rho_{n}\\) denotes the volume of the \\(n\\)-dimensional unit ball [11]. We are interested in an MVIE problem whose aim is to find a maximum volume ellipsoid contained in the convex hull of the data points. For convenience, denote \\[\\mathcal{X}=\\operatorname{conv}\\{\\boldsymbol{x}_{1},\\ldots,\\boldsymbol{x}_{L}\\}\\] to be the convex hull of the data points. As a basic result one can show that \\[\\dim\\mathcal{X}=\\dim(\\operatorname{aff}\\{\\boldsymbol{x}_{1},\\ldots, \\boldsymbol{x}_{L}\\})=\\dim(\\operatorname{aff}\\{\\boldsymbol{a}_{1},\\ldots, \\boldsymbol{a}_{N}\\})=N-1; \\tag{4}\\] note that the second equality is due to \\(\\operatorname{aff}\\{\\boldsymbol{x}_{1},\\ldots,\\boldsymbol{x}_{L}\\}= \\operatorname{aff}\\{\\boldsymbol{a}_{1},\\ldots,\\boldsymbol{a}_{N}\\}\\) under (A3), which was proved in [14, 16]. Hence we also restrict the dimension of the ellipsoid to be \\(N-1\\), and the MVIE problem is formulated as \\[\\begin{array}{rl}\\max_{\\boldsymbol{F},\\boldsymbol{c}}&\\det(\\boldsymbol{F}^ {T}\\boldsymbol{F})\\\\ &\\text{s.t.}&\\mathcal{E}(\\boldsymbol{F},\\boldsymbol{c})\\subseteq\\mathcal{X}, \\end{array} \\tag{5}\\] where \\(\\boldsymbol{F}\\in\\mathbb{R}^{M\\times(N-1)},\\boldsymbol{c}\\in\\mathbb{R}^{M}\\).1 It is interesting to note that the MVIE formulation above is similar to the MVIS formulation (1); the inscribed simplex in MVIS is replaced by an ellipsoid. However, the pursuit of MVIE leads to significant differences from that of MVIS. To see it, consider the illustration in Figure 4. We observe that the MVIE and the data convex hull \\(\\mathcal{X}\\) have contact points on their relative boundaries. Since those contact points are also on the "appropriate" facets of \\(\\operatorname{conv}\\{\\boldsymbol{a}_{1},\\ldots,\\boldsymbol{a}_{N}\\}\\) (for the instance in Figure 4), they may provide clues on how to recover \\(\\boldsymbol{a}_{1},\\ldots,\\boldsymbol{a}_{N}\\).

Footnote 1: Notice that we do not constrain \\(\\boldsymbol{F}\\) to be of full column rank in Problem (5).

Figure 3: Geometrical illustration of MVES. The instance shown does not satisfy the pure-pixel assumption. The way we visualize is the same as that in Figure 2. The solid dark dots are the data points \\(\\boldsymbol{x}_{1},\\ldots,\\boldsymbol{x}_{L}\\), the dashed line outlines \\(\\operatorname{conv}\\{\\boldsymbol{a}_{1},\\boldsymbol{a}_{2},\\boldsymbol{a}_{3}\\}\\), the solid line inside \\(\\operatorname{conv}\\{\\boldsymbol{a}_{1},\\boldsymbol{a}_{2},\\boldsymbol{a}_{3}\\}\\) shows the relative boundary of the data convex hull \\(\\operatorname{conv}\\{\\boldsymbol{x}_{1},\\ldots,\\boldsymbol{x}_{L}\\}\\), and the solid line outside \\(\\operatorname{conv}\\{\\boldsymbol{a}_{1},\\boldsymbol{a}_{2},\\boldsymbol{a}_{3}\\}\\) shows the relative boundary of a data-enclosing simplex \\(\\operatorname{conv}\\{\\boldsymbol{b}_{1},\\boldsymbol{b}_{2},\\boldsymbol{b}_{3}\\}\\). From this illustration it seems likely that the minimum volume data-enclosing simplex would be \\(\\operatorname{conv}\\{\\boldsymbol{a}_{1},\\boldsymbol{a}_{2},\\boldsymbol{a}_{3}\\}\\) itself.

The following theorem describes the main result of this paper.

**Theorem 3**: _Suppose that \\(N\\geq 3\\) and \\(\\gamma>1/\\sqrt{N-1}\\)._
The MVIE, or the optimal ellipsoid of Problem (5), is uniquely given by_ \\[\\mathcal{E}^{\\star}=\\mathcal{E}\\left(\\tfrac{1}{\\sqrt{N(N-1)}}\\mathbf{A}\\mathbf{C},\\bar{ \\mathbf{a}}\\right), \\tag{6}\\] _where \\(\\mathbf{C}\\in\\mathbb{R}^{N\\times(N-1)}\\) is any semi-unitary matrix such that \\(\\mathbf{C}^{T}\\mathbf{1}=\\mathbf{0}\\), and \\(\\bar{\\mathbf{a}}=\\frac{1}{N}\\sum_{i=1}^{N}\\mathbf{a}_{i}\\). Also, there are exactly \\(N\\) contact points between \\(\\mathcal{E}^{\\star}\\) and \\(\\mathrm{rbd}\\ \\mathcal{X}\\), that is,_ \\[\\mathcal{E}^{\\star}\\cap(\\mathrm{rbd}\\ \\mathcal{X})=\\{\\mathbf{q}_{1},\\ldots,\\mathbf{q}_{N }\\}, \\tag{7}\\] _and those contact points are given by_ \\[\\mathbf{q}_{i}=\\frac{1}{N-1}\\sum_{j\ eq i}\\mathbf{a}_{j}. \\tag{8}\\] Theorem 3 gives a vital implication on a condition under which we can leverage MVIE to exactly recover \\(\\mathbf{A}\\). Consider the following corollary as a direct consequence of Theorem 3. **Corollary 1**: _Under the premises of \\(N\\geq 3\\) and \\(\\gamma>1/\\sqrt{N-1}\\), we can exactly recover \\(\\mathbf{A}\\) by solving the MVIE problem (5), finding the contact points \\(\\mathbf{q}_{i}\\)'s in (7), and reconstructing \\(\\mathbf{a}_{i}\\)'s either via_ \\[\\mathbf{a}_{i}=N\\bar{\\mathbf{a}}-(N-1)\\mathbf{q}_{i},\\quad i=1,\\ldots,N,\\] _or via_ \\[\\mathbf{a}_{i}=\\sum_{j=1}^{N}\\mathbf{q}_{j}-(N-1)\\mathbf{q}_{i},\\quad i=1,\\ldots,N.\\] Figure 4: Geometrical illustration of MVIE. The instance shown does not satisfy the pure-pixel assumption. The way we visualize is the same as that in Figure 2. In the subfigure (a), the circle depicts an ellipsoid inscribed in the data convex hull \\(\\mathrm{conv}\\{\\mathbf{x}_{1},\\ldots,\\mathbf{x}_{L}\\}\\). The subfigure in (b) shows a possible scenario for which the MVIE has contact points with \\(\\mathrm{conv}\\{\\mathbf{x}_{1},\\ldots,\\mathbf{x}_{L}\\}\\); those contact points are marked by “\\(\\times\\)”. Hence, we have shown a new and provably correct SSMF framework via MVIE. Coincidentally and beautifully, the sufficient exact recovery condition of this MVIE framework is the same as that of the MVES framework (cf. Theorem 2)--which suggests that MVIE should be as powerful as MVES. In the next section we will describe the proof of Theorem 3. We will also develop an algorithm for implementing MVIE, and then testing it through numerical experiments; these will be considered in Sections 5-6. ## 4 Proof of Theorem 3 Before we give the full proof of Theorem 3, we should briefly mention the insight behind. At the heart of our proof is John's theorem for MVIE characterization, which is described as follows. **Theorem 4**: _[_36_]_ _Let \\(\\mathcal{T}\\subset\\mathbb{R}^{n}\\) be a compact convex set with non-empty interior. The following two statements are equivalent._ _(a) The_ \\(n\\)_-dimensional ellipsoid of maximum volume contained in_ \\(\\mathcal{T}\\) _is uniquely given by_ \\(\\mathcal{B}_{n}\\)_._ _(b)_ \\(\\mathcal{B}_{n}\\subseteq\\mathcal{T}\\) _and there exist points_ \\(\\boldsymbol{u}_{1},\\ldots,\\boldsymbol{u}_{r}\\in\\mathcal{B}_{n}\\cap(\\mathrm{bd} \\ \\mathcal{T})\\)_, with_ \\(r\\geq n+1\\)_, such that_ \\[\\sum_{i=1}^{r}\\lambda_{i}\\boldsymbol{u}_{i}=\\boldsymbol{0},\\qquad\\sum_{i=1}^{ r}\\lambda_{i}\\boldsymbol{u}_{i}\\boldsymbol{u}_{i}^{T}=\\boldsymbol{I},\\] _for some_ \\(\\lambda_{1},\\ldots,\\lambda_{r}>0\\)_._ There are however challenges to be overcome. 
First, John's theorem cannot be directly applied to our MVIE problem (5) because \(\mathcal{X}\) does not have an interior (although \(\mathcal{X}\) has non-empty relative interior). Second, John's theorem does not tell us how to identify the contact points \(\boldsymbol{u}_{i}\); we will have to find them ourselves. Third, our result in Theorem 3 is stronger in the sense that we characterize the set of _all_ the contact points, and this will require some extra work. The proof of Theorem 3 is divided into three parts and described in the following subsections. Before we proceed, let us define some specific notation that will be used throughout the proof. We will denote an affine set by \[\mathcal{A}(\boldsymbol{\Phi},\boldsymbol{b})\triangleq\{\boldsymbol{\Phi}\boldsymbol{\alpha}+\boldsymbol{b}\ |\ \boldsymbol{\alpha}\in\mathbb{R}^{n}\},\] for some \(\boldsymbol{\Phi}\in\mathbb{R}^{m\times n}\) and \(\boldsymbol{b}\in\mathbb{R}^{m}\). In fact, any affine set \(\mathcal{A}\) in \(\mathbb{R}^{m}\) of \(\dim\mathcal{A}=n\) may be represented by \(\mathcal{A}=\mathcal{A}(\boldsymbol{\Phi},\boldsymbol{b})\) for some full column rank \(\boldsymbol{\Phi}\in\mathbb{R}^{m\times n}\) and \(\boldsymbol{b}\in\mathbb{R}^{m}\). Also, we let \(\boldsymbol{C}\in\mathbb{R}^{N\times(N-1)}\) denote any matrix such that \[\boldsymbol{C}^{T}\boldsymbol{C}=\boldsymbol{I},\quad\boldsymbol{C}^{T}\boldsymbol{1}=\boldsymbol{0}, \tag{9}\] and we let \[\boldsymbol{d}=\tfrac{1}{N}\boldsymbol{1}\in\mathbb{R}^{N}. \tag{10}\] ### Dimensionality Reduction Our first task is to establish an equivalent MVIE transformation result. **Proposition 1**: _Represent the affine hull \(\operatorname{aff}\{\mathbf{x}_{1},\ldots,\mathbf{x}_{L}\}\) by_ \[\operatorname{aff}\{\mathbf{x}_{1},\ldots,\mathbf{x}_{L}\}=\mathcal{A}(\mathbf{\Phi},\mathbf{b}) \tag{11}\] _for some full column rank \(\mathbf{\Phi}\in\mathbb{R}^{M\times(N-1)}\) and \(\mathbf{b}\in\mathbb{R}^{M}\). Let_ \[\mathbf{x}^{\prime}_{i}=\mathbf{\Phi}^{\dagger}(\mathbf{x}_{i}-\mathbf{b}),\ i=1,\ldots,L,\qquad\mathcal{X}^{\prime}=\operatorname{conv}\{\mathbf{x}^{\prime}_{1},\ldots,\mathbf{x}^{\prime}_{L}\}\subset\mathbb{R}^{N-1}.\] _The MVIE problem (5) is equivalent to_ \[\max_{\mathbf{F}^{\prime},\mathbf{c}^{\prime}}\ |\det(\mathbf{F}^{\prime})|^{2}\quad\text{s.t.}\ \mathcal{E}(\mathbf{F}^{\prime},\mathbf{c}^{\prime})\subseteq\mathcal{X}^{\prime}, \tag{12}\] _where \(\mathbf{F}^{\prime}\in\mathbb{R}^{(N-1)\times(N-1)}\), \(\mathbf{c}^{\prime}\in\mathbb{R}^{N-1}\).
In particular, the following properties hold:_ _(a) If_ \\((\\mathbf{F},\\mathbf{c})\\) _is a feasible (resp., optimal) solution to Problem (_5_), then_ \\[(\\mathbf{F}^{\\prime},\\mathbf{c}^{\\prime})=(\\mathbf{\\Phi}^{\\dagger}\\mathbf{F},\\mathbf{\\Phi}^{\\dagger }(\\mathbf{c}-\\mathbf{b})) \\tag{13}\\] _is a feasible (resp., optimal) solution to Problem (_12_)._ _(b) If_ \\((\\mathbf{F}^{,\\prime},\\mathbf{c}^{\\prime})\\) _is a feasible (resp., optimal) solution to Problem (_12_), then_ \\[(\\mathbf{F},\\mathbf{c})=(\\mathbf{\\Phi}\\mathbf{F}^{\\prime},\\mathbf{\\Phi}\\mathbf{c}^{\\prime}+\\mathbf{b}) \\tag{14}\\] _is a feasible (resp., optimal) solution to Problem (_5_)._ _(c) The set_ \\(\\mathcal{X}^{\\prime}\\) _has non-empty interior._ _(d) Let_ \\((\\mathbf{F},\\mathbf{c})\\) _be a feasible solution to Problem (_5_), and let_ \\((\\mathbf{F}^{\\prime},\\mathbf{c}^{\\prime})\\) _be given by (_13_); or, let_ \\((\\mathbf{F}^{\\prime},\\mathbf{c}^{\\prime})\\) _be a feasible solution to Problem (_12_), and let_ \\((\\mathbf{F},\\mathbf{c})\\) _be given by (_14_). Denote_ \\(\\mathcal{E}=\\mathcal{E}(\\mathbf{F},\\mathbf{c})\\) _and_ \\(\\mathcal{E}^{\\prime}=\\mathcal{E}(\\mathbf{F}^{\\prime},\\mathbf{c}^{\\prime})\\)_. Then_ \\[\\mathbf{q}\\in\\mathcal{E}\\cap(\\operatorname{rbd}\\ \\mathcal{X}) \\Longrightarrow\\quad\\mathbf{q}^{\\prime}=\\mathbf{\\Phi}^{\\dagger}(\\mathbf{q}-\\bm {b})\\in\\mathcal{E}^{\\prime}\\cap(\\operatorname{bd}\\ \\mathcal{X}^{\\prime}),\\] \\[\\mathbf{q}^{\\prime}\\in\\mathcal{E}^{\\prime}\\cap(\\operatorname{bd}\\ \\mathcal{X}^{\\prime}) \\Longrightarrow\\quad\\mathbf{q}=\\mathbf{\\Phi}\\mathbf{q}^{\\prime}+\\mathbf{b}\\in \\mathcal{E}\\cap(\\operatorname{rbd}\\ \\mathcal{X}).\\] The above result is a dimensionality reduction (DR) result where we equivalently transform the MVIE problem from a higher dimension space (specifically, \\(\\mathbb{R}^{M}\\)) to a lower dimensional space (specifically, \\(\\mathbb{R}^{N-1}\\)). It has the same flavor as the so-called affine set fitting result in [14, 16], which is also identical to principal component analysis. This DR result will be used again when we develop an algorithm for MVIE in later sections. We relegate the proof of Proposition 1 to Appendix A. Now, we construct an equivalent MVIE problem via a specific choice of \\((\\mathbf{\\Phi},\\mathbf{b})\\). It has been shown that under (A3), \\[\\operatorname{aff}\\{\\mathbf{x}_{1},\\ldots,\\mathbf{x}_{L}\\}=\\operatorname{aff}\\{\\mathbf{a}_ {1},\\ldots,\\mathbf{a}_{N}\\}; \\tag{15}\\] see [14, 16]. Also, consider the following fact. **Fact 1**: _[_41_]_ _The affine hull of all unit vectors \\(\\mathbf{e}_{1},\\ldots,\\mathbf{e}_{N}\\) in \\(\\mathbb{R}^{N}\\) can be characterized as_ \\[\\mathrm{aff}\\{\\mathbf{e}_{1},\\ldots,\\mathbf{e}_{N}\\}=\\mathcal{A}(\\mathbf{C},\\mathbf{d}),\\] _where \\(\\mathbf{C}\\) and \\(\\mathbf{d}\\) have been defined in (9) and (10), resp._ Applying Fact 1 to (15) yields \\[\\mathrm{aff}\\{\\mathbf{x}_{1},\\ldots,\\mathbf{x}_{L}\\}=\\mathcal{A}(\\mathbf{AC},\\mathbf{Ad}).\\] By choosing \\((\\mathbf{\\Phi},\\mathbf{b})=(\\mathbf{AC},\\mathbf{Ad})\\) and applying Proposition 1, we obtain an equivalent MVIE problem in (12) that has \\[\\mathbf{x}_{i}=\\mathbf{AC}\\mathbf{x}_{i}^{\\prime}+\\mathbf{Ad},\\quad i=1,\\ldots,L.\\] The above equation can be simplified. 
By plugging the model \\(\\mathbf{x}_{i}=\\mathbf{As}_{i}\\) into the above equation, we get \\(\\mathbf{s}_{i}=\\mathbf{C}\\mathbf{x}_{i}^{\\prime}+\\mathbf{d}\\); and using the properties \\(\\mathbf{C}^{T}\\mathbf{C}=\\mathbf{I}\\) and \\(\\mathbf{C}^{T}\\mathbf{d}=\\mathbf{0}\\) we further get \\(\\mathbf{x}_{i}^{\\prime}=\\mathbf{C}^{T}\\mathbf{s}_{i}\\). By changing the notation \\(\\mathcal{X}^{\\prime}\\) to \\(\\mathcal{S}^{\\prime}\\), and \\(\\mathbf{x}_{i}^{\\prime}\\) to \\(\\mathbf{s}_{i}^{\\prime}\\), we rewrite the equivalent MVIE problem (12) as \\[\\max_{\\mathbf{F}^{\\prime},\\mathbf{c}^{\\prime}} \\mid\\det(\\mathbf{F}^{\\prime})|^{2}\\] (16) s.t. \\[\\mathcal{E}(\\mathbf{F}^{\\prime},\\mathbf{c}^{\\prime})\\subseteq\\mathcal{S}^ {\\prime},\\] where we again have \\(\\mathbf{F}^{\\prime}\\in\\mathbb{R}^{(N-1)\\times(N-1)}\\), \\(\\mathbf{c}^{\\prime}\\in\\mathbb{R}^{N-1}\\); \\(\\mathcal{S}^{\\prime}\\) is given by \\(\\mathcal{S}^{\\prime}=\\mathrm{conv}\\{\\mathbf{s}_{1}^{\\prime},\\ldots,\\mathbf{s}_{L}^{ \\prime}\\}\\) with \\[\\mathbf{s}_{i}^{\\prime}=\\mathbf{C}^{T}\\mathbf{s}_{i},\\quad i=1,\\ldots,L.\\] Furthermore, note that \\(\\mathcal{S}^{\\prime}\\) has non-empty interior; cf. Statement (c) of Proposition 1. ### Solving the MVIE via John's Theorem Next, we apply John's theorem to the equivalent MVIE problem in (16). It would be helpful to first describe the outline of our proof. For convenience, let \\[\\beta=\\frac{1}{\\sqrt{N(N-1)}}\\] and \\[\\mathbf{q}_{i}^{\\prime}=\\frac{1}{N-1}\\sum_{j\ eq i}\\mathbf{C}^{T}\\mathbf{e}_{j},\\quad i=1,\\ldots,N.\\] We will show that the optimal ellipsoid to Problem (16) is uniquely given by \\(\\beta\\mathcal{B}_{N-1}\\), and that \\(\\mathbf{q}_{1}^{\\prime},\\ldots,\\mathbf{q}_{N}^{\\prime}\\) lie in \\((\\beta\\mathcal{B}_{N-1})\\cap(\\mathrm{bd}\\ \\mathcal{S}^{\\prime})\\); the underlying premise is \\(\\gamma\\geq 1/\\sqrt{N-1}\\). Subsequently, by the equivalence properties in Proposition 1, and by \\(\\beta\\mathcal{B}_{N-1}=\\mathcal{E}(\\beta\\mathbf{I},\\mathbf{0})\\), we have \\[\\mathcal{E}(\\beta\\mathbf{AC},\\mathbf{Ad})=\\mathcal{E}^{\\star} \\tag{17}\\] as the optimal ellipsoid of our original MVIE problem (5); also, we have \\[\\mathbf{q}_{i}=\\mathbf{AC}\\mathbf{q}_{i}^{\\prime}+\\mathbf{Ad}\\in\\mathcal{E}^{\\star}\\cap( \\mathrm{rbd}\\ \\mathcal{X}),\\quad i=1,\\ldots,N.\\] Furthermore, it will be shown that \\(\\mathbf{q}_{i}\\) can be reduced to \\(\\mathbf{q}_{i}=\\frac{1}{N-1}\\sum_{j\ eq i}\\mathbf{a}_{j}\\). Hence, except for the claim \\(\\{\\mathbf{q}_{1},\\ldots,\\mathbf{q}_{N}\\}=\\mathcal{E}^{\\star}\\cap(\\mathrm{rbd}\\ \\mathcal{X})\\), we see all the results in Theorem 3. Now, we show the more detailed parts of the proof. _Step 1:_ Let us assume \\(\\beta\\mathcal{B}_{N-1}\\subseteq\\mathcal{S}^{\\prime}\\) and \\(\\boldsymbol{q}_{i}^{\\prime}\\in(\\beta\\mathcal{B}_{N-1})\\cap(\\text{bd }\\mathcal{S}^{\\prime})\\) for all \\(i\\); we will come back to this later. The aim here is to verify that \\(\\beta\\mathcal{B}_{N-1}\\) and \\(\\boldsymbol{q}_{1}^{\\prime},\\ldots,\\boldsymbol{q}_{N}^{\\prime}\\) satisfy the MVIE conditions in John's theorem. 
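A quick numerical check of this claim (a numpy sketch; the QR-based construction of \(\mathbf{C}\), the multipliers \(\lambda_{i}=(N-1)^{2}\) used in the verification below, and the choice \(N=5\) are illustrative) is given first; the algebraic verification then follows.
```
import numpy as np

N = 5
Q, _ = np.linalg.qr(np.column_stack([np.ones(N), np.eye(N)[:, :N - 1]]))
C = Q[:, 1:]                                  # semi-unitary, C^T C = I, C^T 1 = 0

beta = 1.0 / np.sqrt(N * (N - 1))
E = np.eye(N)
# q_i' = (1/(N-1)) * sum_{j != i} C^T e_j, stacked as columns of an (N-1) x N array.
Qp = np.column_stack([C.T @ (np.ones(N) - E[:, i]) / (N - 1) for i in range(N)])
lam = (N - 1) ** 2

assert np.allclose(lam * Qp.sum(axis=1), 0)            # sum_i lambda_i u_i = 0
assert np.allclose(lam * (Qp @ Qp.T), np.eye(N - 1))   # sum_i lambda_i u_i u_i^T = I
assert np.allclose(np.linalg.norm(Qp, axis=0), beta)   # all q_i' lie on bd(beta * B_{N-1})
```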
Since \\(\\boldsymbol{C}^{T}\\boldsymbol{1}=\\boldsymbol{0}\\), we can simplify \\(\\boldsymbol{q}_{i}^{\\prime}\\) to \\[\\boldsymbol{q}_{i}^{\\prime}=\\frac{1}{N-1}\\boldsymbol{C}^{T}(\\boldsymbol{1}- \\boldsymbol{e}_{i})=-\\frac{1}{N-1}\\boldsymbol{C}^{T}\\boldsymbol{e}_{i}.\\] Consequently, one can verify that \\[(N-1)^{2}\\sum_{i=1}^{N}\\boldsymbol{q}_{i}^{\\prime} =-(N-1)\\boldsymbol{C}^{T}\\boldsymbol{1}=\\boldsymbol{0},\\] \\[(N-1)^{2}\\sum_{i=1}^{N}(\\boldsymbol{q}_{i}^{\\prime})(\\boldsymbol {q}_{i}^{\\prime})^{T} =\\boldsymbol{C}^{T}\\left(\\sum_{i=1}^{N}\\boldsymbol{e}_{i} \\boldsymbol{e}_{i}^{T}\\right)\\boldsymbol{C}=\\boldsymbol{C}^{T}\\boldsymbol{I} \\boldsymbol{C}=\\boldsymbol{I},\\] which are the MVIE conditions of John's theorem; see Statement (b) of Theorem 4, with \\(\\boldsymbol{u}_{i}=\\boldsymbol{q}_{i}^{\\prime}\\), \\(\\lambda_{i}=(N-1)^{2}\\), \\(i=1,\\ldots,N\\). Hence, \\(\\beta\\mathcal{B}_{N-1}\\) is the unique maximum volume ellipsoid contained in \\(\\mathcal{S}^{\\prime}\\). _Step 2:_ We verify that \\(\\beta\\mathcal{B}_{N-1}\\subseteq\\mathcal{S}^{\\prime}\\) if \\(\\gamma\\geq 1/\\sqrt{N-1}\\). The verification requires another equivalent MVIE problem, given as follows: \\[\\begin{split}\\max_{\\boldsymbol{F},\\boldsymbol{c}}& \\text{det}(\\boldsymbol{F}^{T}\\boldsymbol{F})\\\\ \\text{s.t.}&\\mathcal{E}(\\boldsymbol{F},\\boldsymbol{c })\\subseteq\\mathcal{S},\\end{split} \\tag{18}\\] where \\[\\mathcal{S}=\\text{conv}\\{\\boldsymbol{s}_{1},\\ldots,\\boldsymbol{s}_{L}\\},\\] and with a slight abuse of notations we redefine \\(\\boldsymbol{F}\\in\\mathbb{R}^{N\\times(N-1)}\\), \\(\\boldsymbol{c}\\in\\mathbb{R}^{N}\\). Using the same result in the previous subsection, it can be readily shown that Problem (18) is equivalent to Problem (16) under \\((\\boldsymbol{\\Phi},\\boldsymbol{b})=(\\boldsymbol{C},\\boldsymbol{d})\\). Let \\[\\mathcal{E}=\\mathcal{E}\\left(\\beta\\boldsymbol{C},\\boldsymbol{d}\\right),\\qquad \\mathcal{E}^{\\prime}=\\mathcal{E}\\left(\\beta\\boldsymbol{I},\\boldsymbol{0} \\right)=\\beta\\mathcal{B}_{N-1}.\\] From Statement (a) of Proposition 1, we have \\(\\mathcal{E}\\subseteq\\mathcal{S}\\Longrightarrow\\mathcal{E}^{\\prime}\\subseteq \\mathcal{S}^{\\prime}\\); thus, we turn to proving \\(\\mathcal{E}\\subseteq\\mathcal{S}\\). Recall from the definition of \\(\\gamma\\) in (3) that \\[(\\text{conv}\\{\\boldsymbol{e}_{1},\\ldots,\\boldsymbol{e}_{N}\\})\\cap(\\gamma \\mathcal{B}_{N})\\subseteq\\mathcal{S}. \\tag{19}\\] For \\(\\gamma\\geq 1/\\sqrt{N-1}\\), (19) implies \\[(\\text{conv}\\{\\boldsymbol{e}_{1},\\ldots,\\boldsymbol{e}_{N}\\})\\cap\\left(\\tfrac{ 1}{\\sqrt{N-1}}\\mathcal{B}_{N}\\right)\\subseteq\\mathcal{S}. \\tag{20}\\] Consider the following fact. **Fact 2**: _[_41_]_ _The following results hold.__(a)_ \\((\\mathrm{aff}\\{\\mathbf{e}_{1},\\ldots,\\mathbf{e}_{N}\\})\\cap(r\\mathcal{B}_{N})=\\mathcal{E} \\left(\\sqrt{r^{2}-\\frac{1}{N}\\mathbf{C},\\mathbf{d}}\\right)\\) _for_ \\(r\\geq\\frac{1}{\\sqrt{N}}\\)_;_ _(b)_ \\((\\mathrm{conv}\\{\\mathbf{e}_{1},\\ldots,\\mathbf{e}_{N}\\})\\cap(r\\mathcal{B}_{N})=\\mathrm{aff }\\{\\mathbf{e}_{1},\\ldots,\\mathbf{e}_{N}\\}\\cap(r\\mathcal{B}_{N})\\) _for_ \\(\\frac{1}{\\sqrt{N}}<r\\leq\\frac{1}{\\sqrt{N-1}}\\)_._ Applying Fact 2 to the left-hand side of (20) yields \\[(\\mathrm{conv}\\{\\mathbf{e}_{1},\\ldots,\\mathbf{e}_{N}\\})\\cap\\left(\\frac{1}{\\sqrt{N-1}} \\mathcal{B}_{N}\\right)=\\mathcal{E}\\left(\\beta\\mathbf{C},\\mathbf{d}\\right). 
\\tag{21}\\] Hence, we have \\(\\mathcal{E}=\\mathcal{E}\\left(\\beta\\mathbf{C},\\mathbf{d}\\right)\\subseteq\\mathcal{S}\\), which implies that \\(\\beta\\mathcal{B}_{N-1}=\\mathcal{E}^{\\prime}\\subseteq\\mathcal{S}^{\\prime}\\). _Step 3:_ We verify that \\(\\mathbf{q}_{i}^{\\prime}\\in(\\beta\\mathcal{B}_{N-1})\\cap(\\mathrm{bd}~{}\\mathcal{S}^ {\\prime})\\) for all \\(i\\). Again, the verification is based on the equivalence of Problem (18) and Problem (16) used in Step 2. Let \\[\\mathbf{w}_{i}=\\frac{1}{N-1}\\sum_{j\ eq i}\\mathbf{e}_{j},\\quad i=1,\\ldots,N, \\tag{22}\\] and let \\(\\mathbf{w}_{i}^{\\prime}=\\mathbf{C}^{T}(\\mathbf{w}_{i}-\\mathbf{d})\\) for all \\(i\\). By Statement (d) of Proposition 1, we have \\(\\mathbf{w}_{i}\\in\\mathcal{E}\\cap(\\mathrm{rbd}~{}\\mathcal{S})\\Longrightarrow\\mathbf{w} _{i}^{\\prime}\\in\\mathcal{E}^{\\prime}\\cap(\\mathrm{bd}~{}\\mathcal{S}^{\\prime})\\). Also, owing to \\(\\mathbf{C}^{T}\\mathbf{d}=\\mathbf{0}\\), we see that \\(\\mathbf{w}_{i}^{\\prime}=\\mathbf{C}^{T}(\\frac{1}{N-1}\\sum_{j\ eq i}\\mathbf{e}_{j})=\\mathbf{q}_ {i}^{\\prime}\\). Hence, we can focus on showing \\(\\mathbf{w}_{i}\\in\\mathcal{E}\\cap(\\mathrm{rbd}~{}\\mathcal{S})\\). Since \\(\\mathbf{w}_{i}\\in\\mathrm{aff}\\{\\mathbf{e}_{1},\\ldots,\\mathbf{e}_{N}\\}=\\mathcal{A}(\\mathbf{C}, \\mathbf{d})\\) (cf. Fact 1), we can represent \\(\\mathbf{w}_{i}\\) by \\[\\mathbf{w}_{i}=\\mathbf{C}\\mathbf{w}_{i}^{\\prime}+\\mathbf{d}. \\tag{23}\\] Using (22), \\(\\mathbf{C}^{T}\\mathbf{C}=\\mathbf{I}\\) and \\(\\mathbf{C}^{T}\\mathbf{d}=\\mathbf{0}\\), one can verify that \\[\\frac{1}{N-1}=\\|\\mathbf{w}_{i}\\|^{2}=\\|\\mathbf{C}\\mathbf{w}_{i}^{\\prime}\\|^{2}+\\|\\mathbf{d}\\|^ {2}=\\|\\mathbf{w}_{i}^{\\prime}\\|^{2}+\\frac{1}{N},\\] which is equivalent to \\(\\|\\mathbf{w}_{i}^{\\prime}\\|=\\beta\\). We thus have \\(\\mathbf{w}_{i}\\in\\mathcal{E}(\\beta\\mathbf{C},\\mathbf{d})=\\mathcal{E}\\). Since \\(\\mathcal{E}\\subseteq\\mathcal{S}\\) (which is shown in Step 2), we also have \\(\\mathbf{w}_{i}\\in\\mathcal{S}\\). The vector \\(\\mathbf{w}_{i}\\) has \\([\\mathbf{w}_{i}]_{i}=0\\), and as a result \\(\\mathbf{w}_{i}\\) must not lie in \\(\\mathrm{ri}~{}\\mathcal{S}\\). It follows that \\(\\mathbf{w}_{i}\\in\\mathrm{rbd}~{}\\mathcal{S}\\). _Step 4:_ Steps 1-3 essentially prove all the key components of the big picture proof described in the beginning of this subsection. In this last step, we show the remaining result, namely, \\(\\mathbf{q}_{i}=\\mathbf{ACq}_{i}^{\\prime}+\\mathbf{Ad}=\\frac{1}{N-1}\\sum_{j\ eq i}\\mathbf{a}_{j}\\). In Step 3, we see from \\(\\mathbf{w}_{i}^{\\prime}=\\mathbf{q}_{i}^{\\prime}\\) and (22)-(23) that \\(\\mathbf{C}\\mathbf{q}_{i}^{\\prime}+\\mathbf{d}=\\frac{1}{N-1}\\sum_{j\ eq i}\\mathbf{e}_{j}\\). Plugging this result into \\(\\mathbf{q}_{i}\\) yields the desired result. ### On the Number of Contact Points Our final task is to prove that \\(\\{\\mathbf{q}_{1},\\ldots,\\mathbf{q}_{N}\\}=\\mathcal{E}^{\\star}\\cap(\\mathrm{rbd}~{} \\mathcal{X})\\); note that the previous proof allows us only to say that \\(\\{\\mathbf{q}_{1},\\ldots,\\mathbf{q}_{N}\\}\\subseteq\\mathcal{E}^{\\star}\\cap(\\mathrm{rbd}~ {}\\mathcal{X})\\). We use the equivalent MVIE problem (18) to help us solve the problem. Again, let \\(\\mathcal{E}=\\mathcal{E}(\\beta\\mathbf{C},\\mathbf{d})\\) for convenience. 
The crux is to show that \\[\\mathbf{w}\\in\\mathcal{E}\\cap(\\mathrm{rbd}~{}\\mathcal{S})\\quad\\Longrightarrow\\quad \\mathbf{w}=\\mathbf{w}_{i}\\text{ for some }i\\in\\{1,\\ldots,N\\}, \\tag{24}\\] where \\(\\mathbf{w}_{i}\\)'s have been defined in (22); the premise is \\(\\gamma>1/\\sqrt{N-1}\\). By following the above development, especially, the equivalence results of Problems (18) and (16) and those of Problems (5) and (16), it can be verified that (24) is equivalent to \\[\\mathbf{q}\\in\\mathcal{E}^{\\star}\\cap(\\mathrm{rbd}~{}\\mathcal{X})\\quad\\Longrightarrow \\quad\\mathbf{q}=\\mathbf{q}_{i}\\text{ for some }i\\in\\{1,\\ldots,N\\},\\]which completes the proof of \\(\\{\\mathbf{q}_{1},\\ldots,\\mathbf{q}_{N}\\}=\\mathcal{E}^{*}\\cap(\\mbox{rbd }\\mathcal{X})\\). We describe the proof of (24) as follows. _Step 1:_ First, we show the following implication under \\(\\gamma>1/\\sqrt{N-1}\\): \\[\\mathbf{w}\\in\\mathcal{E}\\cap(\\mbox{rbd }\\mathcal{S})\\quad\\Longrightarrow\\quad\\mathbf{w} \\in\\mathcal{E}\\cap(\\mbox{rbd}(\\mbox{conv}\\{\\mathbf{e}_{1},\\ldots,\\mathbf{e}_{N}\\})). \\tag{25}\\] The proof is as follows. Let \\[\\mathcal{R}(\\gamma)=(\\mbox{conv}\\{\\mathbf{e}_{1},\\ldots,\\mathbf{e}_{N}\\})\\cap(\\gamma \\mathcal{B}_{N}),\\] and note from (19)-(21) that \\[\\mathcal{E}\\subseteq\\mathcal{R}(\\gamma)\\subseteq\\mathcal{S} \\tag{26}\\] holds for \\(\\gamma\\geq 1/\\sqrt{N-1}\\). It can be seen or easily verified from the previous development that \\[\\mbox{aff }\\mathcal{E}=\\mbox{aff }\\mathcal{S}=\\mbox{aff}(\\mbox{conv}\\{\\mathbf{e}_{1},\\ldots,\\mathbf{e}_{N}\\})=\\mbox{aff}\\{\\mathbf{e}_{1},\\ldots,\\mathbf{e}_{N}\\}=\\mathcal{A}( \\mathbf{C},\\mathbf{d}). \\tag{27}\\] Also, by applying (27) to (26), we get \\(\\mbox{aff}(\\mathcal{R}(\\gamma))=\\mathcal{A}(\\mathbf{C},\\mathbf{d})\\). It is then immediate that \\[\\mbox{ri}(\\mathcal{R}(\\gamma))\\subseteq\\mbox{ri }\\mathcal{S}. \\tag{28}\\] From (26)-(28) we observe that \\[\\mathbf{w}\\in\\mathcal{E},\\ \\mathbf{w}\\in\\mbox{rbd }\\mathcal{S}\\quad\\Longrightarrow\\quad\\mathbf{w} \\in\\mathcal{R}(\\gamma),\\ \\mathbf{w}\ otin\\mbox{ri}(\\mathcal{R}(\\gamma))\\quad\\Longrightarrow\\quad\\mathbf{w} \\in\\mbox{rbd}(\\mathcal{R}(\\gamma)). \\tag{29}\\] Let us further examine the right-hand side of the above equation. For \\(\\gamma>1/\\sqrt{N}\\), we can write \\[\\mathcal{R}(\\gamma) =(\\mbox{conv}\\{\\mathbf{e}_{1},\\ldots,\\mathbf{e}_{N}\\})\\cap(\\mbox{aff}\\{ \\mathbf{e}_{1},\\ldots,\\mathbf{e}_{N}\\}\\cap(\\gamma\\mathcal{B}_{N}))\\] \\[=(\\mbox{conv}\\{\\mathbf{e}_{1},\\ldots,\\mathbf{e}_{N}\\})\\cap\\left(\\mathcal{ E}\\left(\\sqrt{\\gamma^{2}-\\frac{1}{N}}\\mathbf{C},\\mathbf{d}\\right)\\right),\\] where the second equality is due to Fact 2.(a). It follows that \\[\\mathbf{w}\\in\\mbox{rbd}(\\mathcal{R}(\\gamma))\\quad\\Longrightarrow\\quad\\mathbf{w}\\in \\mbox{rbd}(\\mbox{conv}\\{\\mathbf{e}_{1},\\ldots,\\mathbf{e}_{N}\\})\\mbox{ or }\\mathbf{w}\\in\\mbox{rbd}\\left(\\mathcal{E}\\left(\\sqrt{\\gamma^{2}-\\frac{1}{N}}\\mathbf{C},\\mathbf{d}\\right)\\right). \\tag{30}\\] However, for \\(\\gamma>1/\\sqrt{N-1}\\), we have \\[\\mathbf{w}\\in\\mathcal{E}=\\mathcal{E}(\\beta\\mathbf{C},\\mathbf{d})=\\mathcal{E}\\left(\\sqrt{ \\frac{1}{N-1}-\\frac{1}{N}}\\mathbf{C},\\mathbf{d}\\right)\\quad\\Longrightarrow\\quad\\mathbf{w} \ otin\\mbox{rbd}\\left(\\mathcal{E}\\left(\\sqrt{\\gamma^{2}-\\frac{1}{N}}\\mathbf{C}, \\mathbf{d}\\right)\\right). \\tag{31}\\] By combining (29), (30) and (31), we obtain (25). 
_Step 2:_ Second, we show that \\[\\mathbf{w}\\in\\mathcal{E}\\cap(\\mbox{rbd}(\\mbox{conv}\\{\\mathbf{e}_{1},\\ldots,\\mathbf{e}_{N }\\}))\\quad\\Longrightarrow\\quad\\mathbf{w}=\\mathbf{w}_{i}\\mbox{ for some }i\\in\\{1,\\ldots,N\\}. \\tag{32}\\] The proof is as follows. The relative boundary of \\(\\mbox{conv}\\{\\mathbf{e}_{1},\\ldots,\\mathbf{e}_{N}\\}\\) can be expressed as \\[\\mbox{rbd}(\\mbox{conv}\\{\\mathbf{e}_{1},\\ldots,\\mathbf{e}_{N}\\})=\\bigcup_{i=1}^{N} \\mathcal{F}_{i}\\]where \\[\\mathcal{F}_{i}=\\{\\mathbf{s}\\in\\mathbb{R}^{N}\\ |\\ \\mathbf{s}\\geq\\mathbf{0},\\mathbf{1}^{T}\\mathbf{s}=1,s_ {i}=0\\}. \\tag{33}\\] It follows that \\[\\mathbf{w}\\in\\mathcal{E}\\cap(\\text{rbd}(\\text{conv}\\{\\mathbf{e}_{1},\\ldots,\\mathbf{e}_{N}\\}) )\\quad\\Longrightarrow\\quad\\mathbf{w}\\in\\mathcal{E}\\cap\\mathcal{F}_{i}\\ \\text{for some}\\ i\\in\\{1,\\ldots,N\\}.\\] Recall \\(\\mathbf{w}_{i}=\\frac{1}{N-1}\\sum_{j\ eq i}\\mathbf{e}_{j}\\). By the Cauchy-Schwartz inequality, any \\(\\mathbf{w}\\in\\mathcal{F}_{i}\\) must satisfy \\[\\|\\mathbf{w}\\|=\\sqrt{N-1}\\|\\mathbf{w}_{i}\\|\\|\\mathbf{w}\\|\\geq\\sqrt{N-1}\\mathbf{w}_{i}^{T}\\mathbf{w }=\\frac{1}{\\sqrt{N-1}}.\\] Also, the above equality holds (for \\(\\mathbf{w}\\in\\mathcal{F}_{i}\\)) if and only if \\(\\mathbf{w}=\\mathbf{w}_{i}\\). On the other hand, it can be verified that any \\(\\mathbf{w}\\in\\mathcal{E}\\) must satisfy \\(\\|\\mathbf{w}\\|\\leq 1/\\sqrt{N-1}\\); see (26). Hence, any \\(\\mathbf{w}\\in\\mathcal{E}\\cap\\mathcal{F}_{i}\\) must be given by \\(\\mathbf{w}=\\mathbf{w}_{i}\\), and applying this result to (33) leads to (32). Finally, by (25) and (32), the desired result in (24) is obtained. ## 5 An SSMF Algorithm Induced from MVIE In this section we use the MVIE framework developed in the previous sections to derive an SSMF algorithm. We follow the recovery procedure in Corollary 1, wherein the main problem is to solve the MVIE problem in (5). To solve Problem (5), we first consider DR. The required tool has been built in Proposition 1: If we can find a 2-tuple \\((\\mathbf{\\Phi},\\mathbf{b})\\in\\mathbb{R}^{M\\times(N-1)}\\times\\mathbb{R}^{M}\\) such that \\(\\text{aff}\\{\\mathbf{x}_{1},\\ldots,\\mathbf{x}_{L}\\}=\\mathcal{A}(\\mathbf{\\Phi},\\mathbf{b})\\), then the MVIE problem (5) can be equivalently transformed to Problem (12), restated here for convenience as follows: \\[\\max_{\\mathbf{F}^{\\prime},\\mathbf{c}^{\\prime}} \\ |\\det(\\mathbf{F}^{\\prime})|^{2}\\] (34) s.t. \\[\\mathcal{E}(\\mathbf{F}^{\\prime},\\mathbf{c}^{\\prime})\\subseteq\\mathcal{X} ^{\\prime}=\\text{conv}\\{\\mathbf{x}^{\\prime}_{1},\\ldots,\\mathbf{x}^{\\prime}_{L}\\},\\] where \\((\\mathbf{F}^{\\prime},\\mathbf{c}^{\\prime})\\in\\mathbb{R}^{(N-1)\\times(N-1)}\\times \\mathbb{R}^{N-1}\\), and \\(\\mathbf{x}^{\\prime}_{i}=\\mathbf{\\Phi}^{\\dagger}(\\mathbf{x}_{i}-\\mathbf{b}),i=1,\\ldots,L\\) are the dimensionality-reduced data points. 
Specifically, recall that if \((\mathbf{F}^{\prime},\mathbf{c}^{\prime})\) is an optimal solution to Problem (34), then \((\mathbf{F},\mathbf{c})=(\mathbf{\Phi}\mathbf{F}^{\prime},\mathbf{\Phi}\mathbf{c}^{\prime}+\mathbf{b})\) is an optimal solution to Problem (5); and if \(\mathbf{q}^{\prime}\in(\mathcal{E}(\mathbf{F}^{\prime},\mathbf{c}^{\prime}))\cap(\text{bd}\ \mathcal{X}^{\prime})\), then \(\mathbf{q}=\mathbf{\Phi}\mathbf{q}^{\prime}+\mathbf{b}\in(\mathcal{E}(\mathbf{F},\mathbf{c}))\cap(\text{rbd}\ \mathcal{X})\) is one of the desired contact points in (8). The problem is to find one such \((\mathbf{\Phi},\mathbf{b})\) from the data. According to [14], we can extract \((\mathbf{\Phi},\mathbf{b})\) from the data by affine set fitting: take \(\mathbf{b}=\frac{1}{L}\sum_{n=1}^{L}\mathbf{x}_{n}\) and take the columns of \(\mathbf{\Phi}\) to be the first \(N-1\) principal left-singular vectors of the matrix \([\ \mathbf{x}_{1}-\mathbf{b},\ldots,\mathbf{x}_{L}-\mathbf{b}\ ]\). Next, we show how Problem (34) can be recast as a convex problem. To do so, we represent \(\mathcal{X}^{\prime}\) in polyhedral form, that is, \[\mathcal{X}^{\prime}=\bigcap_{i=1}^{K}\{\mathbf{x}\ |\ \mathbf{g}_{i}^{T}\mathbf{x}\leq h_{i}\},\] for some positive integer \(K\) and for some \((\mathbf{g}_{i},h_{i})\in\mathbb{R}^{N-1}\times\mathbb{R}\), \(i=1,\ldots,K\), with \(\|\mathbf{g}_{i}\|=1\) without loss of generality. Such a conversion is called facet enumeration in the literature [12], and in practice \((\mathbf{g}_{i},h_{i})_{i=1}^{K}\) may be obtained by calling an off-the-shelf algorithm such as QuickHull [4]. Using the polyhedral representation of \(\mathcal{X}^{\prime}\), Problem (34) can be reformulated as a log-determinant maximization problem subject to second-order cone (SOC) constraints [11]. Without loss of generality, assume that \(\mathbf{F}^{\prime}\) is symmetric and positive semidefinite. By noting \(\det(\mathbf{F}^{\prime})\geq 0\) and the equivalence \[\begin{split}\mathcal{E}(\mathbf{F}^{\prime},\mathbf{c}^{\prime})\subseteq\bigcap_{i=1}^{K}\{\mathbf{x}\ |\ \mathbf{g}_{i}^{T}\mathbf{x}\leq h_{i}\}&\iff\sup_{\|\mathbf{\alpha}\|\leq 1}\mathbf{g}_{i}^{T}(\mathbf{F}^{\prime}\mathbf{\alpha}+\mathbf{c}^{\prime})\leq h_{i},\ i=1,\ldots,K,\\ &\iff\|\mathbf{F}^{\prime}\mathbf{g}_{i}\|+\mathbf{g}_{i}^{T}\mathbf{c}^{\prime}\leq h_{i},\ i=1,\ldots,K\end{split} \tag{35}\] (see, e.g., [11]), Problem (34) can be rewritten as \[\begin{split}\max_{\mathbf{F}^{\prime}\in\mathbb{S}_{+}^{N-1},\mathbf{c}^{\prime}\in\mathbb{R}^{N-1}}&\ \log\det(\mathbf{F}^{\prime})\\ \text{s.t.}&\ \|\mathbf{F}^{\prime}\mathbf{g}_{i}\|+\mathbf{g}_{i}^{T}\mathbf{c}^{\prime}\leq h_{i},\ i=1,\ldots,K.\end{split} \tag{36}\] The above problem is convex and can be readily solved by calling general-purpose convex optimization software such as CVX [33]. We also custom-derive a fast first-order algorithm for handling Problem (36); the algorithm is described in Appendix B. This completes the MVIE optimization aspect. However, we should also mention how we obtain the contact points \(\mathbf{q}_{1},\ldots,\mathbf{q}_{N}\) in (7)-(8), as they play the main role in reconstructing \(\mathbf{a}_{1},\ldots,\mathbf{a}_{N}\) (cf. Corollary 1).
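Before doing so, we remark that the whole recovery procedure (summarized as Algorithm 1 below) can be sketched compactly in Python. The sketch is illustrative only: it assumes numpy, scipy's ConvexHull (a Qhull/QuickHull wrapper) for facet enumeration, and CVXPY in place of CVX, rather than the MATLAB implementation used in our experiments; its contact-point step relies on the characterization (37) derived next.
```
import numpy as np
import cvxpy as cp
from scipy.spatial import ConvexHull

def mvie_recover(X, N, tol=1e-6):
    """Illustrative sketch of the MVIE-based recovery of A from X (M x L)."""
    # Step 2: affine set fitting (dimensionality reduction to R^{N-1}).
    b = X.mean(axis=1, keepdims=True)
    U = np.linalg.svd(X - b, full_matrices=False)[0]
    Phi = U[:, :N - 1]                       # orthonormal columns, so Phi^dagger = Phi^T
    Xp = Phi.T @ (X - b)                     # dimensionality-reduced data, (N-1) x L

    # Step 3: facet enumeration of conv{x_1',...,x_L'} via Qhull.
    eq = ConvexHull(Xp.T).equations          # rows [g_i, -h_i]; g_i^T x <= h_i inside
    G, h = eq[:, :-1], -eq[:, -1]
    s = np.linalg.norm(G, axis=1)
    G, h = G / s[:, None], h / s             # normalize so that ||g_i|| = 1

    # Step 4: the convex MVIE problem (36).
    F = cp.Variable((N - 1, N - 1), PSD=True)
    c = cp.Variable(N - 1)
    constr = [cp.norm(F @ G[i]) + G[i] @ c <= h[i] for i in range(len(h))]
    cp.Problem(cp.Maximize(cp.log_det(F)), constr).solve()
    Fv, cv = F.value, c.value

    # Steps 5-6: contact points from the facets active at the optimum, cf. (37).
    act = np.where(np.abs(np.linalg.norm(G @ Fv, axis=1) + G @ cv - h) < tol)[0]
    Qp = np.column_stack([Fv @ (Fv @ G[i]) / np.linalg.norm(Fv @ G[i]) + cv for i in act])
    Qc = Phi @ Qp + b                        # back to the original space (ideally N points)

    # Step 7: reconstruct A via Corollary 1 (assumes exactly N contact points were found;
    # otherwise they may be clustered into N points, see the discussion following Algorithm 1).
    return Qc.sum(axis=1, keepdims=True) - (N - 1) * Qc
```
On clean data satisfying the recovery condition of Theorem 3, the matrix returned by this sketch coincides with \(\mathbf{A}\) up to column ordering, within the numerical accuracy of the solver and of the active-facet detection.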
It can be further shown from (35) that \[\mathbf{q}^{\prime}\in(\mathcal{E}(\mathbf{F}^{\prime},\mathbf{c}^{\prime}))\cap(\text{bd}\ \mathcal{X}^{\prime})\iff\mathbf{q}^{\prime}=\mathbf{F}^{\prime}\left(\frac{\mathbf{F}^{\prime}\mathbf{g}_{i}}{\|\mathbf{F}^{\prime}\mathbf{g}_{i}\|}\right)+\mathbf{c}^{\prime},\ \|\mathbf{F}^{\prime}\mathbf{g}_{i}\|+\mathbf{g}_{i}^{T}\mathbf{c}^{\prime}=h_{i},\ \text{for some }i=1,\ldots,K. \tag{37}\] Hence, after solving Problem (36), we can use the condition on the right-hand side of (37) to identify the collection of all contact points \(\mathbf{q}_{1}^{\prime},\ldots,\mathbf{q}_{N}^{\prime}\). Then, we use the relation \(\mathbf{q}_{i}=\mathbf{\Phi}\mathbf{q}_{i}^{\prime}+\mathbf{b}\) to construct \(\mathbf{q}_{1},\ldots,\mathbf{q}_{N}\). Our MVIE algorithm is summarized in Algorithm 1. ```
1: Given a data matrix \(\mathbf{X}\in\mathbb{R}^{M\times L}\) and a model order \(N\leq\min\{M,L\}\).
2: Obtain the dimension-reduced data \(\mathbf{x}_{i}^{\prime}=\mathbf{\Phi}^{\dagger}(\mathbf{x}_{i}-\mathbf{b}),i=1,\ldots,L\), where \((\mathbf{\Phi},\mathbf{b})\) is obtained by affine set fitting [14].
3: Use QuickHull [4] or some other off-the-shelf algorithm to enumerate the facets of \(\operatorname{conv}\{\mathbf{x}_{1}^{\prime},\ldots,\mathbf{x}_{L}^{\prime}\}\), i.e., find \((\mathbf{g}_{i},h_{i})_{i=1}^{K}\) such that \(\operatorname{conv}\{\mathbf{x}_{1}^{\prime},\ldots,\mathbf{x}_{L}^{\prime}\}=\cap_{i=1}^{K}\{\mathbf{x}\ |\ \mathbf{g}_{i}^{T}\mathbf{x}\leq h_{i}\}\).
4: Solve Problem (36) either via CVX [33] or via Algorithm 2, and store the optimal solution obtained as \((\mathbf{F}^{\prime},\mathbf{c}^{\prime})\).
5: Compute the contact points \(\mathbf{q}_{1}^{\prime},\ldots,\mathbf{q}_{N}^{\prime}\) by identifying the facets that satisfy the condition on the right-hand side of (37).
6: Compute the contact points \(\mathbf{q}_{i}=\mathbf{\Phi}\mathbf{q}_{i}^{\prime}+\mathbf{b},i=1,\ldots,N\).
7: Reconstruct \(\mathbf{a}_{i}=\sum_{j=1}^{N}\mathbf{q}_{j}-(N-1)\mathbf{q}_{i},i=1,\ldots,N\).
8: Output \(\mathbf{A}=[\ \mathbf{a}_{1},\ldots,\mathbf{a}_{N}\ ]\).
``` **Algorithm 1** An MVIE Algorithm for Blind Recovery of \(\mathbf{A}\) Some discussions are as follows. 1. As can be seen, the two key steps of the proposed MVIE algorithm are to perform facet enumeration and to solve a convex optimization problem. Let us first discuss issues arising from facet enumeration. Facet enumeration is a well-studied problem in the context of computational geometry [12, 13], and one can find off-the-shelf algorithms, such as QuickHull [4] and VERT2CON\({}^{2}\), to perform facet enumeration. However, it is important to note that facet enumeration is known to be NP-hard in general [5, 10]. Such computational intractability was identified by finding a purposely constructed problem instance [3], which is reminiscent of the carefully constructed Klee-Minty cube for showing the worst-case complexity of the simplex method for linear programming [38]. In practice, one would argue that such worst-case instances do not happen too often. Moreover, the facet enumeration problem is polynomial-time solvable under certain sufficient conditions, such as the so-called \"balance condition\" [4, Theorem 3.2] and the case of \(N=3\) [19]. Footnote 2: [https://www.mathworks.com/matlabcentral/fileexchange/7895-vert2con-vertices-to-constraints](https://www.mathworks.com/matlabcentral/fileexchange/7895-vert2con-vertices-to-constraints) 2.
While the above discussion suggests that MVIE may not be solved in polynomial time, it is based on convex optimization and thus does not suffer from local minima. In comparison, MVES--which enjoys the same sufficient recovery condition as MVIE--may have such issues as we will see in the numerical results in the next section. 3. We should also discuss a minor issue, namely, that of finding the contact points in Step 5 of Algorithm 1. In practice, there may be numerical errors with the MVIE solution, e.g., due to finite number of iterations or approximations involved in the algorithm. Also, data in reality are often noisy. Those errors may result in identification of more than \\(N\\) contact points as our experience suggests. When such instances happen, we mend the problem by clustering the obtained contact points into \\(N\\) points by standard \\(k\\)-means clustering. ## 6 Numerical Simulation and Discussion In this section we use numerical simulations to show the viability of the MVIE framework. ### Simulation Settings The application scenario is HU in remote sensing. The data matrix \\(\\mathbf{X}=\\mathbf{AS}\\) is synthetically generated by following the procedure in [14]. Specifically, the columns \\(\\mathbf{a}_{1},\\ldots,\\mathbf{a}_{N}\\) of \\(\\mathbf{A}\\) are randomly selected from a library of endmember spectral signatures called the U.S. geological survey (USGS) library [18]. The columns \\(\\mathbf{s}_{1},\\ldots,\\mathbf{s}_{L}\\) of \\(\\mathbf{S}\\) are generated by the following way: We generate a large pool of Dirichlet distributed random vectors with concentration parameter \\(\\mathbf{1}/N\\), and then choose \\(\\mathbf{s}_{1},\\ldots,\\mathbf{s}_{L}\\) as a subset of those random vectors whose Euclidean norms are less than or equal to a pre-specified number \\(r\\). The above procedure numerically controls the pixel purity in accordance with \\(r\\), and therefore we will call \\(r\\) the numerically controlled pixel purity level in the sequel. Note that \\(r\\) is not the uniform pixel purity level \\(\\gamma\\) in (3), although \\(r\\) should closely approximate \\(\\gamma\\) when \\(L\\) is large. Also, we should mention that it is not feasible to control the pixel purity in accordance with \\(\\gamma\\) in our numerical experiments because verifying the value of \\(\\gamma\\) is computationally intractable [34] (see also [41]). We set \\(M=224\\) and \\(L=1,000\\). Our main interest is to numerically verify whether the MVIE framework can indeed lead to exact recovery, and to examine to what extent the numerical recovery results match with our theoretical claim in Theorem 3. We measure the recovery performance by the root-mean-square (RMS) angle error \\[\\phi=\\min_{\\mathbf{\\pi}\\in\\Pi_{N}}\\sqrt{\\frac{1}{N}\\sum_{i=1}^{N}\\left[\\arccos\\left( \\frac{\\mathbf{a}_{i}^{T}\\hat{\\mathbf{a}}_{\\pi_{i}}}{\\|\\mathbf{a}_{i}\\|\\cdot\\|\\hat{\\mathbf{a}}_{ \\pi_{i}}\\|}\\right)\\right]^{2}},\\] where \\(\\Pi_{N}\\) denotes the set of all permutations of \\(\\{1,\\ldots,N\\}\\), and \\(\\hat{\\mathbf{A}}\\) denotes an estimate of \\(\\mathbf{A}\\) by an algorithm. We use 200 independently generated realizations to evaluate the average RMS angle errors. Two versions of the MVIE implementations in Algorithm 1 are considered. 
The first calls the general-purpose convex optimization software CVX to solve the MVIE problem, while the second applies the custom-derived algorithm in Algorithm 2 (with \(\rho=150\), \(\epsilon=2.22\times 10^{-16}\), \(\alpha=2\), \(\beta=0.6\)) to solve the MVIE problem approximately. For convenience, the former and the latter will be called \"MVIE-CVX\" and \"MVIE-FPGM\", resp. We also tested some other algorithms for benchmarking, namely, the successive projection algorithm (SPA) [31], SISAL [7] and MVES [14]. SPA is a fast pure-pixel search, or separable NMF, algorithm. SISAL and MVES are non-convex optimization-based algorithms under the MVES framework. Following the original works, we initialize SISAL by vertex component analysis (a pure-pixel search algorithm) [46] and initialize MVES by the solution of a convex feasibility problem [14, Problem (43)]. All the algorithms are implemented under Mathworks Matlab R2015a, and they were run on a computer with a Core-i7-4790K CPU (3.6 GHz CPU speed) and 16GB RAM. ### Recovery Performance Figure 5 plots the average RMS angle errors of the various algorithms versus the (numerically controlled) pixel purity level \(r\). As a supplementary result for Figure 5, the precise values of the averages and standard deviations of the RMS angle errors are further shown in Table 1. Let us first examine the cases of \(3\leq N\leq 5\). MVIE-CVX achieves essentially perfect recovery performance when the pixel purity level \(r\) is larger than \(1/\sqrt{N-1}\) by a margin of 0.025. This corroborates our sufficient recovery condition in Theorem 3. We also see from Figure 5 that MVIE-FPGM has similar performance trends. However, upon a closer look at the numbers in Table 1, MVIE-FPGM is seen to have slightly higher RMS angle errors than MVIE-CVX. This is because MVIE-FPGM employs an approximate solver for the MVIE problem (Algorithm 2) to trade accuracy for better runtime; the runtime performance will be illustrated later. Let us also compare the MVIE algorithms and the other benchmarked algorithms, again for \(3\leq N\leq 5\). SPA has its recovery performance deteriorating as the pixel purity level \(r\) decreases. This is expected, as separable NMF or pure-pixel search is based on the separability or pure-pixel assumption, which corresponds to \(r=1\) in our simulations (with high probability). SISAL and MVES, on the other hand, are seen to give perfect recovery for a range of values of \(r\). However, when we observe the transition points from perfect recovery to imperfect recovery, SISAL and MVES appear not as resistant to lower pixel purity levels as MVIE-CVX and MVIE-FPGM. The main reason for this is that SISAL and MVES can suffer from convergence to local minima. To support our argument, Figure 6 gives an additional numerical result where we use slightly perturbed versions of the groundtruth \(\mathbf{a}_{1},\ldots,\mathbf{a}_{N}\) as the initialization and see if SISAL and MVES would converge to a different solution. \"SISAL-cheat\" and \"MVES-cheat\" refer to SISAL and MVES run under such cheat initializations, resp.; \"SISAL\" and \"MVES\" refer to the original SISAL and MVES. We see from Figure 6 that the two can have significant gaps, which verifies that SISAL and MVES can be sensitive to initializations. Next, we examine the cases of \(6\leq N\leq 8\) in Figure 5. For these cases we did not test MVIE-CVX because it runs slowly for large \(N\).
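For reference, the synthetic-data generation and the RMS angle error described above can be sketched as follows. This is an illustrative Python sketch assuming numpy and itertools: random nonnegative columns stand in for the USGS signatures, acceptance sampling is one way to realize the pool-and-select procedure for controlling the purity level \(r\), and reporting the error in degrees is an assumption made for readability.
```
import itertools
import numpy as np

def generate_data(M, N, L, r, rng):
    """Columns of S: Dirichlet(1/N) vectors kept only if ||s|| <= r (requires r >= 1/sqrt(N))."""
    A = rng.random((M, N))                   # stand-in for USGS endmember signatures
    S = np.empty((N, 0))
    while S.shape[1] < L:
        cand = rng.dirichlet(np.ones(N) / N, size=10 * L).T
        keep = np.linalg.norm(cand, axis=0) <= r
        S = np.hstack([S, cand[:, keep]])
    S = S[:, :L]
    return A, S, A @ S

def rms_angle_error(A, A_hat):
    """phi = min over permutations of the RMS angle between matched columns, in degrees."""
    N = A.shape[1]
    ang = np.empty((N, N))
    for i in range(N):
        for j in range(N):
            cosv = A[:, i] @ A_hat[:, j] / (np.linalg.norm(A[:, i]) * np.linalg.norm(A_hat[:, j]))
            ang[i, j] = np.arccos(np.clip(cosv, -1.0, 1.0))
    best = min(np.sqrt(np.mean([ang[i, p[i]] ** 2 for i in range(N)]))
               for p in itertools.permutations(range(N)))
    return np.degrees(best)
```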
By comparing the transition points from perfect recovery to imperfect recovery, we observe that MVIE-FPGM is better than SISAL and MVES for \\(N=6\\), on a par with SISAL and MVES for \\(N=7\\), and worse than SISAL and MVES for \\(N=8\\); the gaps are nevertheless not significant. The MVIE framework we established assumes the noiseless case. Having said so, it is still interesting to evaluate how MVIE performs in the noisy case. Figure 6 plots the RMS angle error performance of the various algorithms versus the signal-to-noise ratio (SNR), with \\(N=5\\). Specifically, we add independent and identically distributed mean-zero Gaussian noise to the data, and the SNR is defined as \\(\\text{SNR}=(\\sum_{i=1}^{L}\\|\\mathbf{x}_{i}\\|^{2})/(\\sigma^{2}ML)\\) where \\(\\sigma^{2}\\) is the noise variance. We observe Figure 5: Recovery performance of the SSMF algorithms with respect to the numerically controlled pixel purity level \\(r\\). \\(M=224,L=1,000\\), the noiseless case. that MVIE-CVX performs better than SISAL and MVES when \\(r=0.55\\) and SNR \\(\\geq 25\\)dB; MVIE-FPGM does not work as good as MVIE-CVX but still performs better than SISAL and MVES when \\(r=0.55\\) and SNR \\(\\geq 35\\)dB. This suggests that MVIE may work better for lower pixel purity levels. ### Runtime Performance We now turn our attention to runtime performance. Table 2 shows the runtimes of the various algorithms for various \\(N\\) and \\(r\\). Our observations are as follows. First, we see that MVIE-CVX \\begin{table} \\begin{tabular}{|c|c|c|c|c|c|c|} \\hline \\(N\\) & \\(r\\) & SPA & SISAL & MVES & MVIE-CVX & MVIE-FPGM \\\\ \\hline & 0.72 & 4.081\\(\\pm\\)0.538 & 3.601\\(\\pm\\)2.270 & 3.286\\(\\pm\\)2.433 & 0.001\\(\\pm\\)0.001 & 0.161\\(\\pm\\)0.376 \\\\ 3 & 0.85 is slow especially for larger \\(N\\). The reason is that CVX calls an interior-point algorithm to solve the MVIE problem, and second-order methods such as interior-point methods are known to be less efficient when dealing with problems with many constraints. Second, MVIE-FPGM, which uses an approximate MVIE solver based on first-order methodology, runs much faster than MVIE-CVX. Third, MVIE-FPGM is faster than MVES for \\(N\\leq 7\\) and SISAL for \\(N\\leq 4\\), but is slower than the latters otherwise. 
\\begin{table} \\begin{tabular}{|c|c|c|c|c|c|c|} \\hline \\(N\\) & \\(r\\) & SPA & SISAL & MVES & MVIE-CVX & MVIE-FPGM \\\\ \\hline & 0.72 & 0.008\\(\\pm\\)0.008 & 0.288\\(\\pm\\)0.011 & 0.285\\(\\pm\\)0.244 & 0.613\\(\\pm\\)0.044 & 0.031\\(\\pm\\)0.016 \\\\ 3 & 0.85 & 0.008\\(\\pm\\)0.008 & 0.282\\(\\pm\\)0.009 & 0.803\\(\\pm\\)0.569 & 0.466\\(\\pm\\)0.039 & 0.034\\(\\pm\\)0.028 \\\\ & 1 & 0.006\\(\\pm\\)0.008 & 0.273\\(\\pm\\)0.009 & 1.506\\(\\pm\\)0.848 & 0.314\\(\\pm\\)0.034 & 0.041\\(\\pm\\)0.031 \\\\ \\hline & 0.595 & 0.009\\(\\pm\\)0.008 & 0.323\\(\\pm\\)0.010 & 0.766\\(\\pm\\)0.759 & 4.112\\(\\pm\\)0.213 & 0.106\\(\\pm\\)0.048 \\\\ 4 & 0.7 & 0.009\\(\\pm\\)0.008 & 0.316\\(\\pm\\)0.010 & 3.327\\(\\pm\\)1.593 & 3.579\\(\\pm\\)0.202 & 0.042\\(\\pm\\)0.019 \\\\ & 1 & 0.006\\(\\pm\\)0.008 & 0.301\\(\\pm\\)0.009 & 5.305\\(\\pm\\)1.015 & 1.378\\(\\pm\\)0.176 & 0.046\\(\\pm\\)0.040 \\\\ \\hline & 0.525 & 0.010\\(\\pm\\)0.008 & 0.371\\(\\pm\\)0.009 & 2.228\\(\\pm\\)1.825 & 33.115\\(\\pm\\)2.362 & 0.514\\(\\pm\\)0.105 \\\\ 5 & 0.7 & 0.012\\(\\pm\\)0.005 & 0.359\\(\\pm\\)0.009 & 10.528\\(\\pm\\)1.955 & 32.642\\(\\pm\\)3.149 & 0.441\\(\\pm\\)0.180 \\\\ & 1 & 0.009\\(\\pm\\)0.004 & 0.339\\(\\pm\\)0.008 & 11.859\\(\\pm\\)1.185 & 10.012\\(\\pm\\)1.651 & 0.340\\(\\pm\\)0.071 \\\\ \\hline & 0.48 & 0.016\\(\\pm\\)0.003 & 0.444\\(\\pm\\)0.010 & 5.303\\(\\pm\\)3.920 & - & 2.354\\(\\pm\\)0.150 \\\\ 6 & 0.7 & 0.014\\(\\pm\\)0.007 & 0.396\\(\\pm\\)0.009 & 19.825\\(\\pm\\)1.737 & - & 2.229\\(\\pm\\)0.321 \\\\ & 1 & 0.009\\(\\pm\\)0.008 & 0.371\\(\\pm\\)0.008 & 20.033\\(\\pm\\)1.973 & - & 1.220\\(\\pm\\)0.130 \\\\ \\hline & 0.45 & 0.018\\(\\pm\\)0.007 & 0.489\\(\\pm\\)0.013 & 11.504\\(\\pm\\)6.392 & - & 10.648\\(\\pm\\)1.113 \\\\ 7 & 0.7 & 0.017\\(\\pm\\)0.005 & 0.426\\(\\pm\\)0.011 & 33.706\\(\\pm\\)1.946 & - & 19.331\\(\\pm\\)0.830 \\\\ & 1 & 0.011\\(\\pm\\)0.009 & 0.402\\(\\pm\\)0.009 & 34.006\\(\\pm\\)2.790 & - & 7.321\\(\\pm\\)0.876 \\\\ \\hline & 0.44 & 0.021\\(\\pm\\)0.008 & 0.549\\(\\pm\\)0.021 & 32.663\\(\\pm\\)6.465 & - & 77.600\\(\\pm\\)8.446 \\\\ 8 & 0.7 & 0.023\\(\\pm\\)0.008 & 0.468\\(\\pm\\)0.012 & 67.577\\(\\pm\\)2.001 & - & 157.313\\(\\pm\\)5.637 \\\\ & 1 & 0.015\\(\\pm\\)0.010 & 0.435\\(\\pm\\)0.010 & 60.882\\(\\pm\\)4.502 & - & 57.613\\(\\pm\\)8.386 \\\\ \\hline \\end{tabular} \\end{table} Table 2: Runtimes (sec.) of the various algorithms. The simulation settings are the same as those in Figure 5. Figure 7: Recovery performance of the SSMF algorithms with respect to the SNR. \\(M=224,N=5,L=1,000\\). In the previous section we discussed the computational bottleneck of facet enumeration in MVIE. To get some ideas on the situation in practice, we show the runtime breakdown of MVIE-FPGM in Table 3. We see that facet enumeration takes only about 10% to 33% of the total runtime in MVIE-FPGM. But there is a caveat: Facet enumeration can output a large number of facets \\(K\\), and from Table 3 we observe that this is particularly true when \\(N\\) increases. Since \\(K\\) is the number of SOC constraints of the MVIE problem (36), solving the MVIE problem for larger \\(N\\) becomes more difficult computationally. While the main contribution of this paper is to introduce a new theoretical SSMF framework through MVIE, as a future direction it would be interesting to study how the aforementioned issue can be mitigated. ## 7 Conclusion In this paper we have established a new SSMF framework through analyzing an MVIE problem. 
As the main contribution, we showed that the MVIE framework can admit exact recovery beyond separable or pure-pixel problem instances, and that its exact recovery condition is as good as that of the MVES framework. However, unlike MVES which requires one to solve a non-convex problem, the MVIE framework suggests a two-step solution, namely, facet enumeration and convex optimization. The viability of the MVIE framework was shown by numerical results, and it was illustrated that MVIE exhibits stable performance over a wide range of pixel purity levels. Furthermore, we should mention three open questions arising from the current investigation: \\begin{table} \\begin{tabular}{|c|c||c|c||c||c|} \\hline \\multirow{2}{*}{\\(N\\)} & \\multirow{2}{*}{\\(r\\)} & \\multicolumn{3}{c||}{Runtime} & \\multicolumn{1}{c|}{Number of facets \\(K\\)} \\\\ \\cline{3-6} & & MVIE-FPGM & Facet enumeration & FPGM+Others & by facet enumeration \\\\ \\hline \\multirow{3}{*}{3} & 0.72 & 0.031\\(\\pm\\)0.016 & 0.007\\(\\pm\\)0.002 & 0.024\\(\\pm\\)0.014 & 44.03\\(\\pm\\)3.48 \\\\ & 0.85 & 0.034\\(\\pm\\)0.028 & 0.007\\(\\pm\\)0.002 & 0.027\\(\\pm\\)0.026 & 29.91\\(\\pm\\)3.98 \\\\ & 1 & 0.041\\(\\pm\\)0.031 & 0.007\\(\\pm\\)0.002 & 0.035\\(\\pm\\)0.030 & 16.12\\(\\pm\\)3.12 \\\\ \\hline \\multirow{3}{*}{4} & 0.595 & 0.106\\(\\pm\\)0.048 & 0.022\\(\\pm\\)0.005 & 0.084\\(\\pm\\)0.043 & 365.68\\(\\pm\\)17.64 \\\\ & 0.7 & 0.042\\(\\pm\\)0.019 & 0.020\\(\\pm\\)0.003 & 0.022\\(\\pm\\)0.016 & 318.01\\(\\pm\\)18.26 \\\\ & 1 & 0.046\\(\\pm\\)0.040 & 0.012\\(\\pm\\)0.004 & 0.034\\(\\pm\\)0.035 & 114.62\\(\\pm\\)18.49 \\\\ \\hline \\multirow{3}{*}{5} & 0.525 & 0.514\\(\\pm\\)0.105 & 0.109\\(\\pm\\)0.006 & 0.405\\(\\pm\\)0.100 & 2208.76\\(\\pm\\)101.54 \\\\ & 0.7 & 0.441\\(\\pm\\)0.180 & 0.112\\(\\pm\\)0.005 & 0.329\\(\\pm\\)0.174 & 2055.93\\(\\pm\\)88.57 \\\\ & 1 & 0.340\\(\\pm\\)0.071 & 0.052\\(\\pm\\)0.006 & 0.288\\(\\pm\\)0.065 & 764.00\\(\\pm\\)102.10 \\\\ \\hline \\multirow{3}{*}{6} & 0.48 & 2.354\\(\\pm\\)0.150 & 0.663\\(\\pm\\)0.039 & 1.691\\(\\pm\\)0.111 & 11901.32\\(\\pm\\)699.30 \\\\ & 0.7 & 2.229\\(\\pm\\)0.321 & 0.760\\(\\pm\\)0.028 & 1.469\\(\\pm\\)0.293 & 13064.35\\(\\pm\\)511.29 \\\\ & 1 & 1.220\\(\\pm\\)0.130 & 0.345\\(\\pm\\)0.036 & 0.875\\(\\pm\\)0.094 & 4982.35\\(\\pm\\)611.11 \\\\ \\hline \\multirow{3}{*}{7} & 0.45 & 10.648\\(\\pm\\)1.113 & 2.906\\(\\pm\\)0.311 & 7.742\\(\\pm\\)0.801 & 49377.95\\(\\pm\\)4454.29 \\\\ & 0.7 & 19.331\\(\\pm\\)0.830 & 5.947\\(\\pm\\)0.211 & 13.384\\(\\pm\\)0.619 & 81631.50\\(\\pm\\)3398.41 \\\\ & 1 & 7.321\\(\\pm\\)0.876 & 2.541\\(\\pm\\)0.268 & 4.780\\(\\pm\\)0.608 & 29448.52\\(\\pm\\)4109.01 \\\\ \\hline \\multirow{3}{*}{8} & 0.44 & 77.600\\(\\pm\\)8.446 & 19.226\\(\\pm\\)2.171 & 58.374\\(\\pm\\)6.276 & 279720.40\\(\\pm\\)29481.38 \\\\ & 0.7 & 157.313\\(\\pm\\)5.637 & 51.648\\(\\pm\\)1.772 & 105.665\\(\\pm\\)3.865 & 495624.59\\(\\pm\\)18868.73 \\\\ \\cline{1-1} & 1 & 57.613\\(\\pm\\)8.386 & 22.914\\(\\pm\\)3.042 & 34.700\\(\\pm\\)5.344 & 161533.59\\(\\pm\\)24957.12 \\\\ \\hline \\end{tabular} \\end{table} Table 3: Detailed runtimes (sec.) of MVIE-FPGM. The simulation settings are the same as those in Figure 5. * How can we make facet enumeration more efficient in the sense of generating less facets, thereby improving the efficiency of computing the MVIE? In this direction it is worthwhile to point out the subset-separable NMF work [28] which considers a similar facet identification problem but operates on rather different sufficient recovery conditions. 
* How can we handle the MVIE computations efficiently when the number of facets, even with a better facet enumeration procedure, is still very large? One possibility is to consider the active set strategy, which was found to be very effective in dealing with the minimum volume covering ellipsoid (MVCE) problem [32, 49]. While the MVCE problem is not identical to the MVIE problem, it will be interesting to investigate how the insights in the aforementioned references can be used in our problem at hand. * How should we modify the MVIE formulation in the noisy case such that it may offer better robustness to noise--both practically and provably? We hope this new framework might inspire more theoretical and practical results in tackling SSMF. ## Appendix A Proof of Proposition 1 We will use the following results. **Fact 3**: _Let \\(f(\\mathbf{\\alpha})=\\mathbf{\\Phi}\\mathbf{\\alpha}+\\mathbf{b}\\) where \\((\\mathbf{\\Phi},\\mathbf{b})\\in\\mathbb{R}^{m\\times n}\\times\\mathbb{R}^{m}\\) and \\(\\mathbf{\\Phi}\\) has full column rank. The following results hold._ _(a) Let_ \\(\\mathcal{C}\\) _be a non-empty set in_ \\(\\mathbb{R}^{m}\\) _with_ \\(\\mathcal{C}\\subseteq\\mathcal{A}(\\mathbf{\\Phi},\\mathbf{b})\\)_. Then_ \\[\\operatorname{rbd}(f^{-1}(\\mathcal{C}))=f^{-1}(\\operatorname{rbd}\\,\\mathcal{ C}).\\] _(b) Let_ \\(\\mathcal{C}_{1},\\mathcal{C}_{2}\\) _be sets in_ \\(\\mathbb{R}^{m}\\) _with_ \\(\\mathcal{C}_{1},\\mathcal{C}_{2}\\subseteq\\mathcal{A}(\\mathbf{\\Phi},\\mathbf{b})\\)_. Then_ \\[\\mathcal{C}_{1}\\subseteq\\mathcal{C}_{2}\\quad\\Longleftrightarrow\\quad f^{-1}( \\mathcal{C}_{1})\\subseteq f^{-1}(\\mathcal{C}_{2}).\\] The results in the above fact may be easily deduced or found in textbooks. First, we prove the feasibility results in Statements (a)-(b) of Proposition 1. Let \\((\\mathbf{F},\\mathbf{c})\\) be a feasible solution to Problem (5). Since \\[\\mathcal{E}(\\mathbf{F},\\mathbf{c})\\subseteq\\mathcal{X}\\subseteq\\operatorname{aff}\\{ \\mathbf{x}_{1},\\ldots,\\mathbf{x}_{L}\\}=\\mathcal{A}(\\mathbf{\\Phi},\\mathbf{b}),\\] it holds that \\[\\mathbf{f}_{i}+\\mathbf{c}=\\mathbf{\\Phi}\\mathbf{\\alpha}_{i}+\\mathbf{b},\\ i=1,\\ldots,N,\\qquad\\mathbf{c}= \\mathbf{\\Phi}\\mathbf{c}^{\\prime}+\\mathbf{b},\\] for some \\(\\mathbf{\\alpha}_{1},\\ldots,\\mathbf{\\alpha}_{N},\\mathbf{c}^{\\prime}\\in\\mathbb{R}^{N-1}.\\) By letting \\(\\mathbf{f}^{\\prime}_{i}=\\mathbf{\\alpha}_{i}-\\mathbf{c}^{\\prime},i=1,\\ldots,N\\), one can show that \\(\\mathbf{F}^{\\prime}=[\\ \\mathbf{f}^{\\prime}_{1},\\ldots,\\mathbf{f}^{\\prime}_{N}\\ ]\\) and \\(\\mathbf{c}^{\\prime}\\) are uniquely given by \\((\\mathbf{F}^{\\prime},\\mathbf{c}^{\\prime})=(\\mathbf{\\Phi}^{\\dagger}\\mathbf{F},\\mathbf{\\Phi}^{ \\dagger}(\\mathbf{c}-\\mathbf{b}))\\). Also, by letting \\(f(\\mathbf{\\alpha})=\\mathbf{\\Phi}\\mathbf{\\alpha}+\\mathbf{b}\\), it can be verified that \\[f^{-1}(\\mathcal{E}(\\mathbf{F},\\mathbf{c}))=\\mathcal{E}(\\mathbf{F}^{\\prime},\\mathbf{c}^{\\prime}).\\] Similarly, for \\(\\mathcal{X}\\), we have \\(\\mathbf{x}_{i}\\in\\mathcal{X}\\subseteq\\mathcal{A}(\\mathbf{\\Phi},\\mathbf{b})\\). This means that \\(\\mathbf{x}_{i}\\) can be expressed as \\(\\mathbf{x}_{i}=\\mathbf{\\Phi}\\mathbf{x}^{\\prime}_{i}+\\mathbf{b}\\) for some \\(\\mathbf{x}^{\\prime}_{i}\\in\\mathbb{R}^{N-1}\\), and it can be verified that \\(\\mathbf{x}^{\\prime}_{i}\\) is uniquely given by \\(\\mathbf{x}^{\\prime}_{i}=\\mathbf{\\Phi}^{\\dagger}(\\mathbf{x}_{i}-\\mathbf{b})\\). 
Subsequently it can be further verified that \\[f^{-1}(\\mathcal{X})=\\mathcal{X}^{\\prime}.\\]Hence, by using Fact 3.(b) via setting \\(\\mathcal{C}_{1}=\\mathcal{E}(\\mathbf{F},\\mathbf{c}),\\mathcal{C}_{2}=\\mathcal{X}\\), we get \\(\\mathcal{E}(\\mathbf{F}^{\\prime},\\mathbf{c}^{\\prime})\\subseteq\\mathcal{X}^{\\prime}\\). Thus, \\((\\mathbf{F}^{\\prime},\\mathbf{c}^{\\prime})\\) is a feasible solution to Problem (12), and we have proven the feasibility result in Statement (a) of Proposition 1. The proof of the feasibility result in Statement (b) of Proposition 1 follows the same proof method, and we omit it for brevity. Second, we prove the optimality results in Statements (a)-(b) of Proposition 1. Let \\((\\mathbf{F},\\mathbf{c})\\) be an optimal solution to Problem (5), \\((\\mathbf{F}^{\\prime},\\mathbf{c}^{\\prime})\\) be equal to \\((\\mathbf{\\Phi}^{\\dagger}\\mathbf{F},\\mathbf{\\Phi}^{\\dagger}(\\mathbf{c}-\\mathbf{b}))\\) which is feasible to Problem (12), and \\(v_{\\mathrm{opt}}\\) be the optimal value of Problem (5). Then we have \\[v_{\\mathrm{opt}}=\\det(\\mathbf{F}^{T}\\mathbf{F})=\\det((\\mathbf{F}^{\\prime})^{T}\\mathbf{\\Phi}^{ T}\\mathbf{\\Phi}\\mathbf{F})=|\\det(\\mathbf{F}^{\\prime})|^{2}\\det(\\mathbf{\\Phi}^{T}\\mathbf{\\Phi}) \\geq v^{\\prime}_{\\mathrm{opt}}\\det(\\mathbf{\\Phi}^{T}\\mathbf{\\Phi}),\\] where \\(v^{\\prime}_{\\mathrm{opt}}\\) denotes the optimal value of Problem (12). Conversely, by redefining \\((\\mathbf{F}^{\\prime},\\mathbf{c}^{\\prime})\\) as an optimal solution to Problem (12) and \\((\\mathbf{F},\\mathbf{c})=(\\mathbf{\\Phi}\\mathbf{F}^{\\prime},\\mathbf{\\Phi}\\mathbf{c}^{\\prime}+\\mathbf{b})\\) (which is feasible to Problem (5)), we also get \\[v^{\\prime}_{\\mathrm{opt}}=|\\det(\\mathbf{F}^{\\prime})|^{2}=\\frac{1}{\\det(\\mathbf{\\Phi} ^{T}\\mathbf{\\Phi})}\\det(\\mathbf{F}^{T}\\mathbf{F})\\geq\\frac{1}{\\det(\\mathbf{\\Phi}^{T}\\mathbf{\\Phi})} v_{\\mathrm{opt}}.\\] The above two equations imply \\(v_{\\mathrm{opt}}=v^{\\prime}_{\\mathrm{opt}}\\det(\\mathbf{\\Phi}^{T}\\mathbf{\\Phi})\\), and it follows that the optimal solution results in Statements (a)-(b) of Proposition 1 are true. Third, we prove Statement (c) of Proposition 1. Recall from (4) that \\(\\dim\\mathcal{X}=N-1\\) (also recall that the result is based on the premise of (A2)-(A3)). From the development above, one can show that \\[\\mathcal{X}=\\{\\mathbf{\\Phi}\\mathbf{x}^{\\prime}+\\mathbf{b}\\ |\\ \\mathbf{x}^{\\prime}\\in\\mathcal{X}^{ \\prime}\\}.\\] It can be further verified from the above equation and the full column rank property of \\(\\mathbf{\\Phi}\\) that \\(\\dim\\mathcal{X}^{\\prime}=\\dim\\mathcal{X}=N-1\\) must hold. In addition, as a basic convex analysis result, a convex set \\(\\mathcal{C}\\) in \\(\\mathbb{R}^{m}\\) has non-empty interior if \\(\\dim\\mathcal{C}=m\\). This leads us to the conclusion that \\(\\mathcal{X}^{\\prime}\\) has non-empty interior. Finally, we prove Statement (d) of Proposition 1. The results therein are merely applications of Fact 3; e.g., \\(\\mathcal{C}_{1}=\\{\\mathbf{q}\\},\\mathcal{C}_{2}=\\mathcal{E}\\) for \\(\\mathbf{q}\\in\\mathcal{E}\\Longrightarrow\\mathbf{q}^{\\prime}\\in\\mathcal{E}^{\\prime}\\), \\(\\mathcal{C}_{1}=\\{\\mathbf{q}\\},\\mathcal{C}_{2}=\\mathrm{rbd}\\ \\mathcal{X}\\) for \\(\\mathbf{q}\\in\\mathrm{rbd}\\ \\mathcal{X}\\Longrightarrow\\mathbf{q}^{\\prime}\\in\\mathrm{rbd}(f^{-1}( \\mathcal{X}))=\\mathrm{bd}\\ \\mathcal{X}^{\\prime}\\), and so forth. 
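The determinant identity used in the optimality argument above can also be checked numerically; the following is a small numpy sketch with an arbitrary full-column-rank \(\mathbf{\Phi}\) and an arbitrary \(\mathbf{F}^{\prime}\) (the sizes chosen are illustrative).
```
import numpy as np

rng = np.random.default_rng(1)
M, n = 8, 3                                   # n plays the role of N-1
Phi = rng.standard_normal((M, n))             # full column rank with probability one
Fp = rng.standard_normal((n, n))              # F'
F = Phi @ Fp                                  # F = Phi F'

lhs = np.linalg.det(F.T @ F)
rhs = np.linalg.det(Fp) ** 2 * np.linalg.det(Phi.T @ Phi)
assert np.isclose(lhs, rhs)                   # det(F^T F) = |det F'|^2 det(Phi^T Phi)
```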
## Appendix B Fast Proximal Gradient Algorithm for Handling Problem (36) In this appendix we derive a fast algorithm for handling the MVIE problem in (36). Let us describe the formulation used. Instead of solving Problem (36) directly, we employ an approximate formulation as follows \\[\\min_{\\mathbf{F}^{\\prime}\\in\\mathbb{S}_{+}^{N-1},\\mathbf{c}^{\\prime}\\in\\mathbb{R}^{N- 1}}\\ -\\log\\det(\\mathbf{F}^{\\prime})+\\rho\\sum_{i=1}^{K}\\psi(\\|\\mathbf{F}^{\\prime}\\mathbf{g}_{i}\\| +\\mathbf{g}_{i}^{T}\\mathbf{c}^{\\prime}-h_{i}), \\tag{38}\\] for a pre-specified constant \\(\\rho>0\\) and for some convex differentiable function \\(\\psi:\\mathbb{R}\\to\\mathbb{R}\\) such that \\(\\psi(x)=0\\) for \\(x\\leq 0\\) and \\(\\psi(x)>0\\) for \\(x>0\\); specifically our choice of \\(\\psi\\) is the one-sided Huber function, i.e., \\[\\psi(z)=\\begin{cases}0,&z<0,\\\\ \\frac{1}{2}z^{2},&0\\leq z\\leq 1,\\\\ z-\\frac{1}{2},&z>1.\\end{cases}\\]Our approach is to use a penalized, or \"soft-constrained\", convex formulation in place of Problem (36), whose SOC constraints may not be easy to deal with as \"hard constraints\". Problem (38) has a nondifferentiable and unbounded-above objective function. To facilitate our algorithm design efforts later, we further approximate the problem by \\[\\min_{\\mathbf{F}^{\\prime}\\in\\mathcal{W},\\mathbf{c}^{\\prime}\\in\\mathbb{R}^{N-1}}\\ -\\log\\det(\\mathbf{F}^{ \\prime})+\\rho\\sum_{i=1}^{K}\\psi(\\sqrt{\\|\\mathbf{F}^{\\prime}\\mathbf{g}_{i}\\|^{2}+\\epsilon }+\\mathbf{g}_{i}^{T}\\mathbf{c}^{\\prime}-h_{i}), \\tag{39}\\] for some small constant \\(\\epsilon>0\\), where \\(\\mathcal{W}\\triangleq\\{\\mathbf{W}\\in\\mathbb{S}^{N-1}\\mid\\lambda_{\\min}(\\mathbf{W}) \\geq\\epsilon\\}\\). Now we describe the algorithm. We employ the fast proximal gradient method (FPGM) or FISTA [6], which is known to guarantee a convergence rate of \\(\\mathcal{O}(1/k^{2})\\) under certain premises; here, \\(k\\) is the iteration number. For notational convenience, let us denote \\(n=N-1\\), \\(\\mathbf{W}=\\mathbf{F}^{\\prime}\\), \\(\\mathbf{y}=\\mathbf{c}^{\\prime}\\), and rewrite Problem (39) as \\[\\min_{\\begin{subarray}{c}\\mathbf{W}\\in\\mathbb{R}^{n\\times n}\\\\ \\mathbf{y}\\in\\mathbb{R}^{n}\\end{subarray}}\\underbrace{\\sum_{i=1}^{K}\\psi\\left( \\sqrt{\\|\\mathbf{W}\\mathbf{g}_{i}\\|^{2}+\\epsilon}+\\mathbf{g}_{i}^{T}\\mathbf{y}-h_{i}\\right)}_{ \\triangleq f(\\mathbf{W},\\mathbf{y})}+\\underbrace{I_{\\mathcal{W}}(\\mathbf{W})-\\frac{1}{ \\rho}\\log\\det(\\mathbf{W})}_{\\triangleq g(\\mathbf{W})}, \\tag{40}\\] where \\(I_{\\mathcal{W}}(\\cdot)\\) is the indicator function of \\(\\mathcal{W}\\). By applying FPGM to the formulation in (40), we obtain Algorithm 2. In the algorithm, the notation \\(\\langle\\cdot,\\cdot\\rangle\\) stands for the inner product, \\(\\|\\cdot\\|\\) still stands for the Euclidean norm, \\(\\psi^{\\prime}\\) is the differentiation of \\(\\psi\\), and \\(\\operatorname{prox}_{f}(\\mathbf{z})=\\operatorname*{arg\\,min}_{\\mathbf{x}}\\frac{1}{2} \\|\\mathbf{z}-\\mathbf{x}\\|^{2}+f(\\mathbf{x})\\) is the proximal mapping of \\(f\\). The algorithm requires computations of the proximal mapping \\(\\operatorname{prox}_{tg}(\\mathbf{W}-t\ abla_{\\mathbf{W}}f)\\). The solution to our proximal mapping is described in the following fact. **Fact 4**: _Consider the proximal mapping \\(\\operatorname{prox}_{tg}(\\mathbf{V})\\) where the function \\(g\\) has been defined in (40) and \\(t>0\\). 
Let \\(\\mathbf{V}_{\\operatorname{sym}}=\\frac{1}{2}(\\mathbf{V}+\\mathbf{V}^{T})\\), and let \\(\\mathbf{V}_{\\operatorname{sym}}=\\mathbf{U}\\mathbf{\\Lambda}\\mathbf{U}^{T}\\) be the symmetric eigendecomposition of \\(\\mathbf{V}_{\\operatorname{sym}}\\) where \\(\\mathbf{U}\\in\\mathbb{R}^{n\\times n}\\) is orthogonal and \\(\\mathbf{\\Lambda}\\in\\mathbb{R}^{n\\times n}\\) is diagonal with diagonal elements given by \\(\\lambda_{1},\\ldots,\\lambda_{n}\\). We have_ \\[\\operatorname{prox}_{tg}(\\mathbf{V})=\\mathbf{U}\\mathbf{D}\\mathbf{U}^{T}\\] _where \\(\\mathbf{D}\\in\\mathbb{R}^{n\\times n}\\) is diagonal with diagonal elements given by \\(d_{i}=\\max\\left\\{\\frac{\\lambda_{i}+\\sqrt{\\lambda_{i}^{2}+4t/\\rho}}{2},\\epsilon\\right\\}\\), \\(i=1,\\ldots,n\\)._ The proof of the above fact will be given in Appendix B.1. Furthermore, we should mention convergence. FPGM is known to have a \\(\\mathcal{O}(1/k^{2})\\) convergence rate if the problem is convex and \\(f\\) has a Lipschitz continuous gradient. In Appendix B.2, we show that \\(f\\) has a Lipschitz continuous gradient. ### Proof of Fact 4 It can be verified that for any symmetric \\(\\mathbf{W}\\), we have \\(\\|\\mathbf{V}-\\mathbf{W}\\|^{2}=\\|\\mathbf{V}_{\\operatorname{sym}}-\\mathbf{W}\\|^{2}+\\|\\frac{1}{2 }(\\mathbf{V}-\\mathbf{V}^{T})\\|^{2}\\). Thus, the proximal mapping \\(\\operatorname{prox}_{tg}(\\mathbf{V})\\) can be written as \\[\\operatorname{prox}_{tg}(\\mathbf{V})=\\operatorname*{arg\\,min}_{\\mathbf{W}\\in\\mathcal{ W}}\\frac{1}{2}\\|\\mathbf{V}_{\\operatorname{sym}}-\\mathbf{W}\\|^{2}-\\frac{t}{\\rho}\\log \\det(\\mathbf{W}) \\tag{41}\\]Let \\(\\mathbf{V}_{\\text{sym}}=\\mathbf{U}\\mathbf{\\Lambda}\\mathbf{U}^{T}\\) be the symmetric eigendecomposition of \\(\\mathbf{V}_{\\text{sym}}\\). Also, let \\(\\tilde{\\mathbf{W}}=\\mathbf{U}^{T}\\mathbf{W}\\mathbf{U}\\), and note that \\(\\mathbf{W}\\in\\mathcal{W}\\) implies \\(\\tilde{\\mathbf{W}}\\in\\mathcal{W}\\). We have the following inequality for any \\(\\mathbf{W}\\in\\mathcal{W}\\): \\[\\frac{1}{2}\\|\\mathbf{V}_{\\text{sym}}-\\mathbf{W}\\|^{2}-\\frac{t}{\\rho}\\log \\det(\\mathbf{W}) =\\frac{1}{2}\\|\\mathbf{\\Lambda}-\\tilde{\\mathbf{W}}\\|^{2}-\\frac{t}{\\rho}\\log \\det(\\tilde{\\mathbf{W}})\\] \\[\\geq\\sum_{i=1}^{n}\\frac{1}{2}(\\lambda_{i}-\\tilde{w}_{ii})^{2}- \\frac{t}{\\rho}\\log(\\tilde{w}_{ii})\\] \\[\\geq\\sum_{i=1}^{n}\\min_{\\tilde{w}_{ii}\\geq\\epsilon}\\left[\\frac{1 }{2}(\\lambda_{i}-\\tilde{w}_{ii})^{2}-\\frac{t}{\\rho}\\log(\\tilde{w}_{ii})\\right] \\tag{42}\\] where the first equality is due to rotational invariance of the Euclidean norm and determinant; the second inequality is due to \\(\\|\\mathbf{\\Lambda}-\\tilde{\\mathbf{W}}\\|^{2}\\geq\\sum_{i=1}^{n}(\\lambda_{i}-\\tilde{w}_{ ii})^{2}\\) and the Hadamard inequality \\(\\det(\\tilde{\\mathbf{W}})\\leq\\prod_{i=1}^{n}\\tilde{w}_{ii}\\); the third inequality is due to the fact that \\(\\lambda_{\\min}(\\tilde{\\mathbf{W}})\\leq\\tilde{w}_{ii}\\) for all \\(i\\). One can readily show that the optimal solution to the problem in (42) is \\(\\tilde{w}_{ii}^{\\star}=\\max\\left\\{\\left(\\lambda_{i}+\\sqrt{\\lambda_{i}^{2}+4t/ \\rho}\\right)/2,\\epsilon\\right\\}\\). Furthermore, by letting \\(\\mathbf{W}^{\\star}=\\mathbf{U}\\mathbf{D}\\mathbf{U}^{T}\\), \\(\\mathbf{D}=\\text{Diag}(\\tilde{w}_{11}^{\\star},\\ldots,\\tilde{w}_{nn}^{\\star})\\), the equalities in (42) are attained. 
Since \\(\\mathbf{W}^{\\star}\\) also lies in \\(\\mathcal{W}\\), we conclude that \\(\\mathbf{W}^{\\star}\\) is the optimal solution to the problem in (41). ### Lipschitz Continuity of the Gradient of \\(f\\) In this appendix we show that the function \\(f\\) in (40) has a Lipschitz continuous gradient. To this end, define \\(\\mathbf{z}=[(\\text{vec}(\\mathbf{W}))^{T},\\,\\mathbf{y}^{T}]^{T}\\) and \\[\\phi_{i}(\\mathbf{z})=\\sqrt{\\|\\mathbf{C}_{i}\\mathbf{z}\\|^{2}+\\epsilon}+\\mathbf{d}_{i}^{T}\\mathbf{z} -h_{i},\\quad i=1,\\ldots,K,\\]where \\(\\mathbf{C}_{i}=[(\\mathbf{g}_{i}^{T}\\otimes\\mathbf{I}),\\,\\mathbf{0}]\\) (here \"\\(\\otimes\\)\" denotes the Kronecker product) and \\(\\mathbf{d}_{i}=[\\mathbf{0}^{T},\\,\\mathbf{g}_{i}^{T}]^{T}\\). Then, \\(f\\) can be written as \\(f(\\mathbf{W},\\mathbf{y})=\\sum_{i=1}^{K}\\psi(\\phi_{i}(\\mathbf{z}))\\). From the above equation, we see that \\(f\\) has a Lipschitz continuous gradient if every \\(\\psi(\\phi_{i}(\\mathbf{z}))\\) has a Lipschitz continuous gradient. Hence, we seek to prove the latter. Consider the following fact. **Fact 5**: _Let \\(\\psi:\\mathbb{R}\\to\\mathbb{R}\\), \\(\\phi:\\mathbb{R}^{n}\\to\\mathbb{R}\\) be functions that satisfy the following properties:_ _(a)_ \\(\\psi^{\\prime}\\) _is bounded on_ \\(\\mathbb{R}\\) _and_ \\(\\psi\\) _has a Lipschitz continuous gradient on_ \\(\\mathbb{R}\\)_;_ _(b)_ \\(\ abla\\phi\\) _is bounded on_ \\(\\mathbb{R}^{n}\\) _and_ \\(\\phi\\) _has a Lipschitz continuous gradient on_ \\(\\mathbb{R}^{n}\\)_._ _Then, \\(\\psi(\\phi(\\mathbf{z}))\\) has a Lipschitz continuous gradient on \\(\\mathbb{R}^{n}\\)._ As Fact 5 can be easily proved from the definition of Lipschitz continuity, its proof is omitted here for conciseness. Recall that for our problem, \\(\\psi\\) is the one-sided Huber function. One can verify that the one-sided Huber function has bounded \\(\\psi^{\\prime}\\) and Lipschitz continuous gradient. As for \\(\\phi_{i}\\), let us first evaluate its gradient and Hessian \\[\ abla\\phi_{i}(\\mathbf{z}) =\\frac{\\mathbf{C}_{i}^{T}\\mathbf{C}_{i}\\mathbf{z}}{\\sqrt{\\|\\mathbf{C}_{i}\\mathbf{z}\\| ^{2}+\\epsilon}}+\\mathbf{d}_{i},\\] \\[\ abla^{2}\\phi_{i}(\\mathbf{z}) =\\frac{\\mathbf{C}_{i}^{T}\\mathbf{C}_{i}}{\\sqrt{\\|\\mathbf{C}_{i}\\mathbf{z}\\|^{2}+ \\epsilon}}-\\frac{(\\mathbf{C}_{i}^{T}\\mathbf{C}_{i}\\mathbf{z})(\\mathbf{C}_{i}^{T}\\mathbf{C}_{i}\\mathbf{ z})^{T}}{(\\|\\mathbf{C}_{i}\\mathbf{z}\\|^{2}+\\epsilon)^{3/2}}.\\] We have \\[\\|\ abla\\phi_{i}(\\mathbf{z})\\|\\leq\\|\\mathbf{d}_{i}\\|+\\frac{\\|\\mathbf{C}_{i}^{T}\\mathbf{C}_{i} \\mathbf{z}\\|}{\\sqrt{\\|\\mathbf{C}_{i}\\mathbf{z}\\|^{2}+\\epsilon}}\\leq\\|\\mathbf{d}_{i}\\|+\\frac{ \\sigma_{\\max}(\\mathbf{C}_{i})\\|\\mathbf{C}_{i}\\mathbf{z}\\|}{\\sqrt{\\|\\mathbf{C}_{i}\\mathbf{z}\\|^{2}+ \\epsilon}}\\leq\\|\\mathbf{d}_{i}\\|+\\sigma_{\\max}(\\mathbf{C}_{i}),\\] where \\(\\sigma_{\\max}(\\mathbf{X})\\) denotes the largest singular value of \\(\\mathbf{X}\\). Hence, \\(\ abla\\phi_{i}(\\mathbf{z})\\) is bounded. Moreover, recall that a function has a Lipschitz continuous gradient if its Hessian is bounded. 
Since \\[\\|\ abla^{2}\\phi_{i}(\\mathbf{z})\\|\\leq\\sqrt{n+n^{2}}\\lambda_{\\max}(\ abla^{2}\\phi_ {i}(\\mathbf{z}))\\leq\\sqrt{n+n^{2}}\\lambda_{\\max}\\left(\\frac{\\mathbf{C}_{i}^{T}\\mathbf{C}_ {i}}{\\sqrt{\\|\\mathbf{C}_{i}\\mathbf{z}\\|^{2}+\\epsilon}}\\right)\\leq\\frac{\\sqrt{n+n^{2}} \\lambda_{\\max}(\\mathbf{C}_{i}^{T}\\mathbf{C}_{i})}{\\sqrt{\\epsilon}},\\] the function \\(\\phi_{i}\\) has a Lipschitz continuous gradient. The desired result is therefore proven. ## References * [1]S. Arora, R. Ge, Y. Halpern, D. Mimno, A. Moitra, D. Sontag, Y. Wu, and M. Zhu, _A practical algorithm for topic modeling with provable guarantees_, in Proc. International Conference on Machine Learning, 2013, pp. 280-288. * [2]S. Arora, R. Ge, R. Kannan, and A. Moitra, _Computing a nonnegative matrix factorization--Provably_, in Proc. 44th Annual ACM Symposium on Theory of Computing, 2012, pp. 145-162. * [3]D. Avis, D. Bremner, and R. Seidel, _How good are convex hull algorithms?_, Computational Geometry, 7 (1997), pp. 265-301. * [4]C. B. Barber, D. P. Dobkin, and H. Hundanpaa, _The quickhull algorithm for convex hulls_, ACM Trans. Mathematical Software, 22 (1996), pp. 469-483. * [5]S. Barot and J. A. Taylor, _A concise, approximate representation of a collection of loads described by polytopes_, International Journal of Electrical Power & Energy Systems, 84 (2017), pp. 55-63. * [6]A. Beck and M. Teboulle, _A fast iterative shrinkage-thresholding algorithm for linear inverse problems_, SIAM Journal on Imaging Sciences, 2 (2009), pp. 183-202. * [7]J. Bioucas-Dias, _A variable splitting augmented Lagrangian approach to linear spectral unmixing_, in Proc. IEEE Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing, Aug. 2009. * [8]J. Bioucas-Dias, A. Plaza, N. Dobigeon, M. Parente, Q. Du, P. Gader, and J. Chanussot, _Hyperspectral unmixing overview: Geometrical, statistical, and sparse regression-based approaches_, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 5 (2012), pp. 354-379. * [9]J. W. Boardman, F. A. Kruse, and R. O. Green, _Mapping target signatures via partial unmixing of AVIRIS data_, in Proc. 5th Annual JPL Airborne Earth Science Workshop, 1995, pp. 23-26. * [10]E. Boros, K. Elbassioni, V. Gurvich, and K. Makino, _Generating vertices of polyhedra and related problems of monotone generation_, Centre de Recherches Mathematiques, 49 (2009), pp. 15-43. * [11]S. Boyd and L. Vandenberghe, _Convex Optimization_, Cambridge University Press, 2004. * [12]D. Bremner, K. Fukuda, and A. Marzetta, _Primal-dual methods for vertex and facet enumeration_, Discrete & Computational Geometry, 20 (1998), pp. 333-357. * [13]D. D. Bremner, _On the complexity of vertex and facet enumeration for convex polytopes_, PhD thesis, Citeseer, 1997. * [14]T.-H. Chan, C.-Y. Chi, Y.-M. Huang, and W.-K. Ma, _A convex analysis based minimum-volume enclosing simplex algorithm for hyperspectral unmixing_, IEEE Trans. Signal Processing, 57 (2009), pp. 4418-4432. * [15]T.-H. Chan, W.-K. Ma, A. Ambikapathi, and C.-Y. Chi, _A simplex volume maximization framework for hyperspectral endmember extraction_, IEEE Trans. Geoscience and Remote Sensing, 49 (2011), pp. 4177-4193. * [16]T.-H. Chan, W.-K. Ma, C.-Y. Chi, and Y. Wang, _A convex analysis framework for blind separation of non-negative sources_, IEEE Trans. Signal Processing, 56 (2008), pp. 5120-5134. * [17]L. Chen, P. L. Choyke, T.-H. Chan, C.-Y. Chi, G. Wang, and Y. 
Wang, _Tissue-specific compartmental analysis for dynamic contrast-enhanced MR imaging of complex tumors_, IEEE Trans. Medical Imaging, 30 (2011), pp. 2044-2058. * [18]R. Clark, G. Swayze, R. Wise, E. Livo, T. Hoefen, R. Kokaly, and S. Sutley, _USGS digital spectral library splib06a: U.S. Geological Survey, Digital Data Series 231_. [http://speclab.cr.usgs.gov/spectral.lib06](http://speclab.cr.usgs.gov/spectral.lib06), 2007. * [19]T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, _Introduction to Algorithms_, The MIT Press (2nd Edition), 2001. * [20]M. D. Craig, _Minimum-volume transforms for remotely sensed data_, IEEE Trans. Geoscience and Remote Sensing, 32 (1994), pp. 542-552. * [21]E. Elhamifar, G. Sapiro, and R. Vidal, _See all by looking at a few: Sparse modeling for finding representative objects_, in Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2012, pp. 1600-1607. * [22]E. Esser, M. Moller, S. Osher, G. Sapiro, and J. Xin, _A convex model for nonnegative matrix factorization and dimensionality reduction on physical space_, IEEE Trans. Image Processing, 21 (2012), pp. 3239-3252. * [23]X. Fu, K. Huang, B. Yang, W.-K. Ma, and N. D. Sidiropoulos, _Robust volume minimization-based matrix factorization for remote sensing and document clustering_, IEEE Trans. Signal Processing, 64 (2016), pp. 6254-6268. * [24]X. Fu and W.-K. Ma, _Robustness analysis of structured matrix factorization via self-dictionary mixed-norm optimization_, IEEE Signal Processing Letters, 23 (2016), pp. 60-64. * [25]X. Fu, W.-K. Ma, T.-H. Chan, and J. M. Bioucas-Dias, _Self-dictionary sparse regression for hyperspectral unmixing: Greedy pursuit and pure pixel search are related_, IEEE Journal of Selected Topics in Signal Processing, 9 (2015), pp. 1128-1141. * [26]X. Fu, W.-K. Ma, K. Huang, and N. D. Sidiropoulos, _Blind separation of quasi-stationary sources: Exploiting convex geometry in covariance domain_, IEEE Trans. Signal Processing, 63 (2015), pp. 2306-2320. * [27]W. E. Full, R. Ehrlich, and J. E. Klovan, _EXTENDED QMODEL--objective definition of external endmembers in the analysis of mixtures_, Mathematical Geology, 13 (1981), pp. 331-344. * [28]R. Ge and J. Zou, _Intersecting faces: Non-negative matrix factorization with new guarantees_, in Proc. International Conference on Machine Learning, 2015, pp. 2295-2303. * [29]N. Gillis, _Robustness analysis of hottopixx, a linear programming model for factoring nonnegative matrices_, SIAM Journal on Matrix Analysis and Applications, 34 (2013), pp. 1189-1212. * [30]N. Gillis, _The why and how of nonnegative matrix factorization_, in Regularization, Optimization, Kernels, and Support Vector Machines, Chapman and Hall/CRC, 2014, pp. 257-291. * [31]N. Gillis and S. A. Vavasis, _Fast and robust recursive algorithms for separable nonnegative matrix factorization_, IEEE Trans. Pattern Analysis and Machine Intelligence, 36 (2014), pp. 698-714. * [32]N. Gillis and S. A. Vavasis, _Semidefinite programming based preconditioning for more robust near-separable nonnegative matrix factorization_, SIAM Journal on Optimization, 25 (2015), pp. 677-698. * [33]M. Grant, S. Boyd, and Y. Ye, _CVX: Matlab software for disciplined convex programming_, 2008. * [34]P. Gritzmann and V. Klee, _On the complexity of some basic problems in computational convexity: I. containment problems_, Discrete Mathematics, 136 (1994), pp. 129-174. * [35]M. Grotschel, L. Lovasz, and A. Schrijver, _Geometric Algorithms and Combinatorial Optimization_, vol. 
2, Springer Science & Business Media, 2012. * [36]P. M. Gruber and F. E. Schuster, _An arithmetic proof of John's ellipsoid theorem_, Archiv der Mathematik, 85 (2005), pp. 82-88. * [37]K. Huang, X. Fu, and N. D. Sidiropoulos, _Anchor-free correlated topic modeling: Identifiability and algorithm_, in Proc. Advances in Neural Information Processing Systems, 2016, pp. 1786-1794. * [38]V. Klee and G. J. Minty, _How good is the simplex algorithm?_, tech. report, DTIC Document, 1970. * [39]J. Li and J. Bioucas-Dias, _Minimum volume simplex analysis: A fast algorithm to unmix hyperspectral data_, in Proc. IEEE International Geoscience and Remote Sensing Symposium, Aug. 2008. * [40]C.-H. Lin, C.-Y. Chi, Y.-H. Wang, and T.-H. Chan, _A fast hyperplane-based minimum-volume enclosing simplex algorithm for blind hyperspectral unmixing_, IEEE Trans. Signal Processing, 64 (2016), pp. 1946-1961. * [41]C.-H. Lin, W.-K. Ma, W.-C. Li, C.-Y. Chi, and A. Ambikapathi, _Identifiability of the simplex volume minimization criterion for blind hyperspectral unmixing: The no-pure-pixel case_, IEEE Trans. Geoscience and Remote Sensing, 53 (2015), pp. 5530-5546. * [42]M. B. Lopes, J. C. Wolff, J. Bioucas-Dias, and M. Figueiredo, _NIR hyperspectral unmixing based on a minimum volume criterion for fast and accurate chemical characterisation of counterfeit tablets_, Analytical Chemistry, 82 (2010), pp. 1462-1469. * [43]W.-K. Ma, J. M. Bioucas-Dias, T.-H. Chan, N. Gillis, P. Gader, A. J. Plaza, A. Ambikapathi, and C.-Y. Chi, _A signal processing perspective on hyperspectral unmixing_, IEEE Signal Processing Magazine, 31 (2014), pp. 67-81. * [44]W.-K. Ma, T.-H. Chan, C.-Y. Chi, and Y. Wang, _Convex analysis for non-negative blind source separation with application in imaging_, in Convex Optimization in Signal Processing and Communications, D. P. Palomar and Y. C. Eldar, eds., Cambridge, UK: Cambridge Univ. Press, 2010. * [45]L. Miao and H. Qi, _Endmember extraction from highly mixed data using minimum volume constrained nonnegative matrix factorization_, IEEE Trans. Geoscience and Remote Sensing, 45 (2007), pp. 765-777. * [46]J. M. Nascimento and J. M. Dias, _Vertex component analysis: A fast algorithm to unmix hyperspectral data_, IEEE Trans. Geoscience and Remote Sensing, 43 (2005), pp. 898-910. * [47]A. Packer, _NP-hardness of largest contained and smallest containing simplices for V- and H-polytopes_, Discrete and Computational Geometry, 28 (2002), pp. 349-377. * [48]B. Recht, C. Re, J. Tropp, and V. Bittorf, _Factoring nonnegative matrices with linear programs_, in Proc. Advances in Neural Information Processing Systems, 2012, pp. 1214-1222. * [49]P. Sun and R. M. Freund, _Computation of minimum-volume covering ellipsoids_, Operations Research, 52 (2004), pp. 690-706. * [50]N. Wang, E. P. Hoffman, L. Chen, L. Chen, Z. Zhang, C. Liu, G. Yu, D. M. Herrington, R. Clarke, and Y. Wang, _Mathematical modelling of transcriptional heterogeneity identifies novel markers and subpopulations in complex tissues_, Scientific Reports, 6 (2016), p. 18909.
Consider a structured matrix factorization model in which one factor is restricted to have its columns lying in the unit simplex. This simplex-structured matrix factorization (SSMF) model and the associated factorization techniques have attracted much interest across different areas, such as hyperspectral unmixing in remote sensing and topic discovery in machine learning. In this paper we develop a new theoretical SSMF framework whose idea is to study a maximum volume ellipsoid inscribed in the convex hull of the data points. This maximum volume inscribed ellipsoid (MVIE) idea has not been attempted in the prior literature, and we show a sufficient condition under which the MVIE framework guarantees exact recovery of the factors. The sufficient recovery condition we show for MVIE is much more relaxed than that of separable non-negative matrix factorization (or pure-pixel search); coincidentally, it is also identical to that of minimum volume enclosing simplex, which is known to be a powerful SSMF framework for non-separable problem instances. We also show that MVIE can be practically implemented by performing facet enumeration and then solving a convex optimization problem. The potential of the MVIE framework is illustrated by numerical results. **Index Terms:** maximum volume inscribed ellipsoid, simplex, structured matrix factorization, facet enumeration, convex optimization
# Effect of horizontal divergence on estimates of firn-air content

Annika N. Horlings, Knut Christianson, Nicholas Holschuh, C. Max Stevens and Edwin D. Waddington

Department of Earth and Space Sciences, University of Washington, Seattle, WA 98195, USA

## Introduction

Many outlet glaciers of the polar ice sheets have accelerated and thinned markedly in the last 25 years (Joughin and others, 2012; Mouginot and others, 2014; Smith and others, 2020). While horizontal divergence is low in the ice-sheet interior, outlet glaciers often have substantial spatially and temporally evolving horizontal divergence rates. Investigating many fundamental glaciological problems depends on accurately estimating changes in firn-air content (FAC; the volume of air in a firn column of unit cross-sectional area), including the determination of mass loss from thinning due to marine ice-sheet instability, an important component of the land-ice contribution to sea-level rise (Shepherd and others, 2012; Depoorter and others, 2013; Shepherd and others, 2019; Smith and others, 2020). Despite its importance, most firn-compaction models lack a method to account for horizontal divergence in estimates of FAC in regions of fast flow. Here, we formulate a simple kinematic model scheme that is an accessible and easily applicable alternative to a material-specific constitutive relation (Gagliardini and Meyssonnier, 1997; Luthi and Funk, 2000). We use this scheme to account for horizontal divergence within the firn and to show the importance of the effect of horizontal divergence on FAC estimates in regions of the ice sheet with high and rapidly changing horizontal divergence rates.

### Background

Firn compaction occurs through a variety of microphysical mechanisms, such as grain-boundary sliding, sintering and bubble compression. These mechanisms respond to overburden stresses and temperature gradients (Maeno and Ebinuma, 1983; Burr and others, 2019), to processes related to melting and refreezing (Reeh and others, 2005), and to ice-flow stresses (Alley and Bentley, 1988). These processes vary spatially and temporally, resulting in variable firn-density profiles. Model estimates of firn-column thickness, FAC and the rate at which these evolve often have substantial uncertainty, partly because not all physical processes are included or accurately captured within the current generation of firn-compaction models. For example, thinning in the firn column due to horizontal divergence in the underlying ice is often assumed to be negligible even in regions with high horizontal divergence rates (Kuipers Munneke and others, 2015), despite observational evidence of its impact on firn-density structure (Christianson and others, 2014; Morris and others, 2017; Riverman and others, 2019).

### Firn-air content

Estimating mass change of the ice sheets with repeat satellite-altimetry observations requires model-derived estimates of changes in FAC (Shepherd and others, 2012; Depoorter and others, 2013; Shepherd and others, 2019).
FAC, also known as depth-integrated porosity or DIP, is defined as the porosity integrated over depth \\(z\\), from the surface to the depth \\(z_{\\text{i}}\\) where ice density \\(\\rho_{\\text{i}}\\) is attained: \\[\\text{FAC}=\\int_{0}^{z_{\\text{i}}}\\left(\\frac{\\rho_{\\text{i}}-\\rho(z)}{\\rho_{ \\text{i}}}\\right)\\text{d}z. \\tag{1}\\] The change in mass of the ice sheet \\(\\Delta m\\) can be calculated in terms of the observed surface height change \\(\\Delta h\\), the change in FAC \\(\\Delta\\)FAC, ice density \\(\\rho_{\\text{i}}\\) and area \\(A\\): \\[\\Delta m=(\\Delta h-\\Delta\\text{FAC})\\rho_{\\text{i}}A. \\tag{2}\\] Improving firn-compaction models, and specifically producing more accurate estimates of FAC, is an essential step in reducing uncertainty in altimetry-derived mass-balance products. Currently, model estimates of FAC can have large uncertainty partly because firn-compaction models have been calibrated only to, and therefore are appropriate only for, a limited range of climate and ice-dynamic settings (Lundin and others, 2017). Most firn-compaction models also are compatible with the suggestions by Robin (1958) under steady-state conditions. Robin (1958) suggested that, in steady-state conditions, the change in density with depth \\(\\text{d}\\rho(z)/\\text{d}z\\) (and thus FAC) is proportional to the change in overburden stress. This requires that no horizontal divergence occurs within the firn column (Morris and others, 2017): \\[\\frac{\\text{d}\\rho(z)}{\\text{d}z}=\\rho(z)\\big{(}\\rho_{\\text{i}}-\\rho(z)\\big{)}. \\tag{3}\\] However, observations collected over the last 60 years indicate that large deviatoric stresses in the underlying solid ice affect the firn by increasing the rate of firm-density change with depth (Zumberge, 1960; Crary and Charles, 1961; Gow, 1968; Kirchner and others, 1979; Alley and Bentley, 1988; Christianson and others, 2014; Valleonga and others, 2014; Riverman and others, 2019; Morris and others, 2017). ### Previous observations and modeling work Active-source seismic and radar surveys across the Northeast Greenland Ice Stream show that the firn column is 30 m thinner in the shear margins than outside the margins (Christianson and others, 2014; Vallelonga and others, 2014; Riverman and others, 2019). Riverman and others (2019) inferred that this spatial variability of firm density is due to both horizontal divergence and strain softening (i.e., an increased firn-compaction rate due to the acceleration of time-dependent microphysical processes from increased ice-flow stresses). Morris and others (2017) accounted for horizontal divergence in the firn along the iSTAR traverse on Pine Island Glacier by using a layer-thinning scheme similar to the one we propose here (see Section 2 of the Supplementary Material). Morris and others (2017) showed that the negative ratio of the vertical densification rate to the density-corrected volumetric strain rate is not equal to the mean-annual accumulation rate in some cases; they attributed this to a non-negligible horizontal divergence, and illustrated that the steady-state suggestion by Robin (1958) does not hold in those cases. Some modeling studies have attempted to incorporate the impact of horizontal ice-flow stresses on firn within a constitutive formulation using a generalized form of Glen's Law (Gagliardini and Meyssonnier, 1997; Luthi and Funk, 2000). 
This approach has not been widely adopted by the firn community because it is difficult to integrate into commonly used firn-compaction models. To our knowledge, no current firn-compaction model used for inferring ice-sheet mass balance from repeat-altimetry observations (e.g., Ligtenberg and others, 2011; Li and Zwally, 2015) accounts for horizontal divergence. In this study, we incorporate a layer-thinning scheme to account for horizontal divergence in the Community Firn Model (CFM) (Stevens and others, 2020), and systematically investigate the effects of horizontal divergence on FAC. We first describe the specifics of our implementation of the layer-thinning scheme in the CFM. Then, we test the scheme through a series of idealized conceptual runs for climate conditions representative of West Antarctica. We subsequently apply the layer-thinning scheme to a firn column along two flowlines on Thwaites Glacier and Pine Island Glacier, Antarctica, where dynamic ice-sheet thinning due to accelerating ice flow from marine ice-sheet instability is occurring (Joughin and others, 2014; Rignot and others, 2014). We then quantify how much of recent observed thinning results from FAC changes due to horizontal divergence for lower Thwaites Glacier. Finally, we map where horizontal divergence should be considered in FAC estimates across the entire Antarctic Ice Sheet.

### Horizontal divergence in the Community Firn Model

The CFM is an open-source, modular model framework that is designed to simulate the evolution of firn properties, including density, compaction rate and temperature (Stevens and others, 2020). It utilizes a suite of 13 published snow- and firn-compaction models. The CFM uses a one-dimensional Lagrangian framework to track the properties of firn parcels as they advect from the surface into the underlying ice sheet. Users stipulate the surface-boundary conditions, including accumulation rate, surface temperature, surface-snow density and other parameters necessary for the chosen firn- or snow-compaction model. To simulate layer-thickness changes within the CFM due to horizontal divergence, we adopt a kinematic two-part layer-thinning scheme for firn compaction (Fig. 1). During each time step, the firn first densifies via the equations of the user-specified firn-compaction model:

\\[\\lambda_{\\text{part1}}=\\lambda_{\\text{old}}(1+\\dot{\\epsilon}_{\\text{zz}}\\,\\Delta t), \\tag{4}\\]

where \\(\\lambda_{\\text{old}}\\) is the firn-parcel thickness at the previous time step, \\(\\lambda_{\\text{part1}}\\) is the firn-parcel thickness after densification from the firn-compaction model, \\(\\lambda_{\\text{part2}}\\) is the firn-parcel thickness after thinning from horizontal divergence, and \\(\\Delta t\\) is the time step. \\(\\dot{\\epsilon}_{\\text{zz}}\\) is the vertical strain rate due to densification of the firn, provided by the chosen model physics, and commonly implemented in models as

\\[\\dot{\\epsilon}_{\\text{zz}}=\\frac{1}{\\rho(z)}\\frac{\\text{d}\\rho(z)}{\\text{d}t}. \\tag{5}\\]

Equation (4) (part one) is identical to the procedure in a conventional firn-compaction model that neglects horizontal divergence. When the firn parcels thin due to horizontal divergence (Eqn (6); part two), during the same time step as densification due to Eqn (4), the firn is horizontally stretched using a prescribed horizontal divergence rate \\(\\dot{\\epsilon}_{\\text{h}}\\):

\\[\\lambda_{\\text{part2}}=\\lambda_{\\text{part1}}(1-\\dot{\\epsilon}_{\\text{h}}\\,\\Delta t). \\tag{6}\\]
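To make the two-part update of Eqns (4) and (6) concrete, the following is a minimal sketch in Python of how one time step might be applied to an array of parcel thicknesses. The function and variable names are ours for illustration and are not the CFM's internal names; the vertical strain rate is assumed to be supplied by whichever compaction model is in use.

```python
import numpy as np

def layer_thinning_step(lam_old, eps_zz, eps_h, dt):
    """One model time step of the two-part layer-thinning scheme.

    lam_old : 1-D array of firn-parcel thicknesses at the previous step [m]
    eps_zz  : vertical strain rate(s) from the chosen firn-compaction
              physics, as in Eqn (5) [1/a]; negative while a parcel is
              compacting, so part one thins the parcel
    eps_h   : prescribed horizontal divergence rate [1/a]
    dt      : time step [a]
    """
    # Part one (Eqn (4)): densification by the user-specified compaction model.
    lam_part1 = lam_old * (1.0 + eps_zz * dt)
    # Part two (Eqn (6)): horizontal stretching at the prescribed divergence
    # rate; the parcel thins further but its density is left unchanged.
    lam_part2 = lam_part1 * (1.0 - eps_h * dt)
    return lam_part2

# Example: a uniform column of 0.1 m parcels over one annual step.
lam = np.full(100, 0.1)
lam = layer_thinning_step(lam, eps_zz=-0.02, eps_h=1e-3, dt=1.0)
```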
Individual firn parcels stretch horizontally as a result of thinning due to horizontal divergence. Density and FAC calculations for the individual firn parcels in the CFM are produced per unit cross-sectional area of the firn parcels. Eqn (6) does not consider density changes of individual firn parcels. This is because the scheme is solely kinematic, and it is assumed that the material properties of a given parcel of firn do not change with horizontal stretching. However, Eqn (6) reduces the depth at which any given density appears in the firn column.

### Choice of firn-compaction models

In this study, we run the CFM using the firn-compaction equations from the Herron and Langway (1980) model (HL) and the Ligtenberg and others (2011) model (LIG). Multiple firn-compaction models have been developed in the last 40 years. We use HL because it is still seen as a benchmark firn-compaction model, and most models of polar firn compaction are based on its general framework and assumptions (Li and Zwally, 2011; Ligtenberg and others, 2011; Morris and Wingham, 2014). HL used Antarctic and Greenlandic firn depth-density profiles to derive empirical equations describing the firn-compaction rate in stage one and stage two of the firn column. Stage one is where density \\(\\rho<550\\) kg m\\({}^{-3}\\), and represents the shallowest portion of the firn column. Stage two occurs deeper and extends to bubble close-off, where densities range from \\(550\\) kg m\\({}^{-3}<\\rho<830\\) kg m\\({}^{-3}\\). Robin (1958) assumed that the change in firn-pore volume is proportional to the change in the overburden pressure, a steady-state assumption. This assumption allows HL to parametrize the overburden pressure using the mean annual snow accumulation. In addition, the HL densification rate includes an Arrhenius-type temperature dependence, which represents densification via temperature-dependent microphysical mechanisms, such as grain growth (Gow, 1969). We also use LIG because it is the firn-compaction model included in the subsurface processes of the regional climate model RACMO (Van Wessem and others, 2018), one of the most common reanalysis products used in Antarctic mass-balance calculations. LIG has been used to estimate FAC changes in multiple ice-sheet mass-balance studies (Shepherd and others, 2012; Gardner and others, 2013; McMillan and others, 2016; Shepherd and others, 2019). LIG used 48 depth-density profiles from Antarctica to tune the firn-compaction model of Arthern and others (2010) for general applicability in Antarctica. Arthern and others (2010) measured vertical strain rates in the firn at four sites in Antarctica and used those data to derive a semi-empirical compaction model based on rate equations for Nabarro-Herring creep and grain growth (Coble, 1970; Gow, 1969), with a form similar to that of HL.

### Model inputs

#### Total horizontal divergence rates from ice velocities

We use the Mouginot and others (2019) Antarctic ice-velocity map to compute mean horizontal divergence rates from 1996 to 2018 at a spatial resolution of 450 m. We also use the Mouginot and others (2017) ice-velocity time series to compute the annual horizontal divergence rates from 2007 to 2016 in the Amundsen Sea Embayment with a spatial resolution of 1 km. To compute the total horizontal divergence rate from the ice velocities, we implement the logarithmic strain-rate formulation from Alley and others (2018) to produce continent-wide horizontal divergence rates.
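As a concrete illustration of this step, here is a minimal sketch of how a horizontal divergence field might be computed from a gridded velocity product. It uses a simple nominal (small-strain) finite-difference formulation with hypothetical array names, rather than the logarithmic formulation of Alley and others (2018) that we actually apply; the formal definitions follow below.

```python
import numpy as np

def nominal_horizontal_divergence(u, v, dx, dy):
    """Nominal horizontal divergence, du/dx + dv/dy, on a regular grid.

    u, v   : 2-D arrays of the x- and y-components of ice velocity [m/a],
             indexed as (row = y, column = x) on a polar stereographic grid
    dx, dy : grid spacing in x and y [m]
    """
    du_dx = np.gradient(u, dx, axis=1)   # derivative along x (columns)
    dv_dy = np.gradient(v, dy, axis=0)   # derivative along y (rows)
    return du_dx + dv_dy                 # divergence rate [1/a]
```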
Designating \\(u\\) and \\(v\\) as the \\(x\\) and \\(y\\) components of the velocity field, respectively, in a polar stereographic coordinate system (EPSG: 3031), the two-dimensional strain-rate tensor is

\\[\\dot{\\mathbf{\\epsilon}}=\\left[\\begin{array}{cc}\\frac{\\partial u}{\\partial x}&\\frac{1}{2}\\left(\\frac{\\partial u}{\\partial y}+\\frac{\\partial v}{\\partial x}\\right)\\\\ \\frac{1}{2}\\left(\\frac{\\partial u}{\\partial y}+\\frac{\\partial v}{\\partial x}\\right)&\\frac{\\partial v}{\\partial y}\\end{array}\\right]. \\tag{7}\\]

If the strain-rate tensor is aligned with the local ice-flow direction, the total horizontal divergence rate \\(\\dot{\\epsilon}_{\\rm h}\\) can be computed as the trace of \\(\\dot{\\mathbf{\\epsilon}}\\), or the sum of the longitudinal and transverse strain rates:

\\[\\dot{\\epsilon}_{\\rm h}=\\frac{\\partial u}{\\partial x}+\\frac{\\partial v}{\\partial y}. \\tag{8}\\]

If the strain-rate tensor is not aligned with the local ice-flow direction, the longitudinal and transverse strain rates can be reoriented to calculate the horizontal divergence rate \\(\\dot{\\epsilon}_{\\rm h}\\) (see Alley and others (2018)). Errors can arise when generating the strain-rate tensor from a satellite-derived velocity field in areas of high strain when a nominal strain formulation is used (Alley and others, 2018). Therefore, we apply the logarithmic formulation from Alley and others (2018), which compares the change in length with the previous length, rather than the original length, to account for the history of strain experienced by that region of the ice sheet.

Figure 1: Our layer-thinning scheme that accounts for horizontal divergence rates in the CFM. At each time step, the firn first compresses vertically and densifies (Eqn (4)) following the equations of the user-specified firn-compaction model (part one). Then the firn stretches horizontally without further density change, as determined by the prescribed horizontal divergence rate \\(\\dot{\\epsilon}_{\\rm h}\\) in Eqn (6) (part two).

We specify the accumulation rate, surface temperature and surface-snow density as the boundary conditions of the CFM. We force the CFM with a modified MERRA-2 climate reanalysis product (Smith and others, 2020; Brooke Medley, personal communication, 3 March 2020) at 5-day temporal resolution and 12.5 km spatial resolution. All accumulation rates we use in the model runs are in ice-equivalent units. Additionally, for all runs, we prescribe a constant surface-snow density of 400 kg m\\({}^{-3}\\). Fausto and others (2018) discuss uncertainty arising from surface-snow density for firn-density calculations. While variable surface-snow density affects FAC estimates, we suspect it does not affect the relative change in FAC estimates from different horizontal divergence rates enough to alter our conclusions.

## Results

### Idealized runs

To provide quantitative insight into the impact of horizontal divergence on FAC, we conduct idealized simulations using the HL and LIG models with constant climate forcing under six horizontal-divergence-rate scenarios (Experiment 1; Table 1 and Fig. 2).
We choose surface-boundary conditions representative of central West Antarctica: a constant accumulation rate of 0.30 m a\\({}^{-1}\\), and a constant temperature of \\(-20^{\\circ}\\)C (Kaspari and others, 2004; Steig and others, 2005; Medley and others, 2013; Fudge and others, 2016). We spin the model up for 600 years to steady state under this constant climate with no horizontal divergence rate. Then, we run the model for 600 additional years, starting at time \\(t\\!=\\!0\\) years, using the same constant climate. We apply a step-change in horizontal divergence rate to the steady-state firm column at \\(t\\!=\\!100\\) years and track how the simulated FAC evolves for the next 500 years. We run this routine using HL and LIG for six horizontal-divergence-rate scenarios (Table 1 and Fig. 2). Figure 2 shows the evolution of FAC through time predicted by LIG, including the evolution of the depth-density profile and bubble close-off (BCO) depth (i.e. density horizon of 830 kg m\\({}^{-3}\\)). After an initial adjustment period, the simulated firm column reaches a new steady state approximately 500 years following the onset of the horizontal divergence rate for all runs. In this new steady state, the firm parcels have thinned, and the FAC has correspondingly decreased (Fig. 3 and Table 1). Adding a horizontal divergence rate of \\(10^{-3}\\) a\\({}^{-1}\\) with the LIG (HL) model results in a FAC that is 4% (6%) less than for a model with no horizontal divergence (Experiment 1; Table 1 and Fig. 3). Imposing a horizontal divergence rate of \\(10^{-2}\\) a\\({}^{-1}\\) reduces the FAC by 31% (36%) compared to the no-horizontal-divergence scenario. HL estimates larger decreases in FAC than LIG for all idealized runs, and the difference between the models is larger for higher horizontal divergence rates (Fig. 3). ### Spatial variability: flowline runs To determine the impact of horizontal divergence on FAC in realistic climate and ice-flow conditions, we apply the layer-thinning scheme to two flowlines on Thwaites (THW - Experiment 2) and Pine Island (PIG - Experiment 3) Glaciers, West Antarctica (Fig. 4). We choose these flowlines because ice-surface speeds have been monotonically increasing in this area during the satellite record (since 1992; Wingham and others, 1998; Mouginot and others, 2014), and the region is thinning rapidly (Schroder and \\begin{table} \\begin{tabular}{l c c} \\hline \\hline Run & Step in horizontal divergence (a\\({}^{-3}\\)) & \\% Decrease in FAC \\\\ \\hline 1 & 0 to 1\\(\\times 10^{-4}\\) & 0.4 (0.5) \\\\ 2 & 0 to 1\\(\\times 10^{-3}\\) & 4.0 (6.1) \\\\ 3 & 0 to 2.5\\(\\times 10^{-3}\\) & 9.6 (12.8) \\\\ 4 & 0 to 5\\(\\times 10^{-3}\\) & 17.9 (22.3) \\\\ 5 & 0 to 7.5\\(\\times 10^{-3}\\) & 25.0 (30.0) \\\\ 6 & 0 to 1\\(\\times 10^{-2}\\) & 31.1 (36.3) \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 1: Steps in horizontal divergence rates used in the layer-thinning scheme. These encompass the range of horizontal divergence rates commonly observed on the ice sheets. The model was forced with an accumulation rate of 0.30 m ice o, a\\({}^{-1}\\), surface temperature of \\(-20^{\\circ}\\)C and surface-snow density of 400 kg m\\({}^{-3}\\). Percent decrease values are shown using LIG; values from HL are shown in parentheses. Figure 3: Estimated firm-air content (FAC) using the layer-thinning scheme to account for horizontal divergence with the HL, and UIG firm-compaction models. 
The greater the step-change in horizontal divergence rate, the greater the decrease in the FAC after the step change. others, 2019). This makes mass-balance estimates from the region sensitive to the treatment of horizontal divergence in firm-compaction models. The surface temperature and accumulation rate increase non-linearly along both flowlines, as they approach the lower-elevation coast. Along the THW flowline, the surface temperature increases from \\(-27\\)degC to \\(-18\\)degC, and the accumulation rate increases from 0.5 to 0.9 m a\\({}^{-1}\\). Along the PIG flowline, the surface temperature increases from \\(-26\\)degC to \\(-17\\)degC, and the accumulation rate increases from 0.4 to 0.9 m a\\({}^{-1}\\). The ice speed along each flowline increases as the ice enters streaming flow, from speeds \\(<\\)10 m a\\({}^{-1}\\) to speeds \\(>\\)1000 m a\\({}^{-1}\\), with corresponding increases in along-flow horizontal divergence rates (from 0 to \\(>10^{-3}\\) a\\({}^{-1}\\)). The starting points of the flowlines were chosen so that the flowlines would extend through the main trunks of the glaciers. We again spin up the CFM for 600 years with the mean 1980-2019 climate (accumulation rate and temperature) for the head of the flowline, and no horizontal divergence rate. We run the model twice for each flowline, with and without an imposed horizontal divergence rate. The firm column advects through the flowline based on the temporally static, but spatially variable 1996-2018 mean ice velocities from Mouginot and others (2019). We map the temporally evolving firm column to its position along-flow, and plot the associated FAC for each position along the flowline in Figures 5 and 6. Figure 5 shows model results with and without including horizontal divergence using LIG for the THW flowline. Horizontal divergence along the THW flowline results in a mean 3.7% (4.0%) difference for the entire flowline between the no-divergence and divergence runs using LIG (HL). However, the effect of horizontal divergence on modeled FAC increases toward the terminus of the glacier and influences FAC most within the last 150-200 km. At the end of the flowline, horizontal divergence causes the FAC to be up to 9.6 m (9.2 m) less than with a method that does not account for horizontal divergence (41% (40%) difference). For the PIG flowline (Fig. 6), horizontal divergence results in an average 0.68% (0.81%) difference over the entire flowline. Like THW, horizontal divergence has the greatest impact on FAC estimates for the last 150-200 km of the flowline, where the FAC is 4.4 m (4.8 m) less while using a model with horizontal divergence compared to a conventional firm-compaction model (18% (19%) difference). Experiments 2 and 3 indicate that horizontal divergence becomes most important to consider in estimating the FAC in the most coastal 150-200 km of THW and PIG. Horizontal divergence rates increase at approximately 150-200 km before the end of the THW and PIG flowlines, and the firm column begins to thin there (Experiments 2 and 3; Figs 5 and 6). The thickness of the entire firm layer along the flowline can be seen in Figures (b)b and (b)b, where the black line shows the 830 kg m\\({}^{-3}\\) density horizon (BCO depth). The firm thins non-uniformly along the flowline as a result of the variability of the horizontal divergence rates, and thinning or thickening induced by the spatially variable accumulation rate and temperature. 
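The FAC curves plotted along the flowlines in Figures 5 and 6 are depth integrals of porosity as in Eqn (1). A discretized sketch of that integral, and of the mass-change relation in Eqn (2), is given below; the array names and the ice-density value of 917 kg m\\({}^{-3}\\) are our illustrative assumptions, not values stated in the text.

```python
import numpy as np

RHO_ICE = 917.0  # assumed ice density [kg m^-3]

def firn_air_content(z, rho, rho_ice=RHO_ICE):
    """Discretized Eqn (1): integrate porosity from the surface down to the
    depth where the density profile first reaches ice density.

    z   : depths below the surface [m], increasing downward
    rho : firn density at those depths [kg m^-3]
    """
    reached = np.nonzero(rho >= rho_ice)[0]
    i_end = reached[0] + 1 if reached.size else len(z)   # index of z_i
    porosity = (rho_ice - rho[:i_end]) / rho_ice
    return np.trapz(porosity, z[:i_end])                 # FAC [m]

def mass_change(dh, dfac, area, rho_ice=RHO_ICE):
    """Eqn (2): mass change from observed height change and the change in FAC."""
    return (dh - dfac) * rho_ice * area
```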
Note that higher accumulation rates increase the FAC, whereas higher temperatures decrease the FAC. Thus, higher accumulation rates will offset the effect of horizontal divergence on the FAC, whereas higher temperatures will reinforce the net thinning effect. Results from using HL for both flowlines on THW and PIG are qualitatively similar and therefore are shown in Section 5 of the Supplementary Material.

### Temporal variability: static location on lower Thwaites Glacier

To investigate the effect of temporal variability of horizontal divergence on FAC estimates during the satellite record, we next consider the time evolution of FAC (\\(\\Delta\\)FAC) from 2007 to 2016 for a fixed location on lower THW (Experiment 4; black star in Fig. 4). We choose this location because the time series of ice speed is annually continuous, and mean thinning rates here are characteristic of a large portion of lower THW and of PIG. For spin up of the CFM, we use a constant horizontal divergence rate of 0.015 a\\({}^{-1}\\), based on observations from the beginning of the time series. We spin up the CFM for 600 years using a climate forcing randomly generated from the normal distribution of the 1980-2007 mean climate. We perform four model runs, all of which use the 2007-2016 temperature and accumulation-rate fields from MERRA-2, with annual time steps (Smith and others, 2020; Medley, personal communication, 3 March 2020). The accumulation-rate, temperature and divergence-rate boundary conditions are shown in Figure 7. Descriptions of each run are as follows:

1. A baseline conventional firn-compaction-model run, which entails running the CFM with the evolving temperature and surface-accumulation rate but no horizontal divergence rate.
2. A run with the climate from (1), and a constant horizontal divergence rate of 0.015 a\\({}^{-1}\\) through the entire spin up and model run.
3. A run with the climate from (1); a constant horizontal divergence rate of 0.015 a\\({}^{-1}\\) through the spin up until 2007; then the horizontal divergence rate evolves based on the 2007-2016 ice-velocity time series.
4. A run with the climate from (1); a constant horizontal divergence rate of 0.015 a\\({}^{-1}\\) through the spin up until 1997; a linear ramp up to a horizontal divergence rate of 0.04 a\\({}^{-1}\\) at 2007; then the horizontal divergence rate evolves based on the 2007-2016 ice-velocity time series.

We choose these runs to demonstrate (a) the impact of horizontal divergence on FAC estimates through time, and (b) the effects of initializing the runs using firn columns with different initial conditions, which will result in different FACs due to the initial-state dependence of the firn-compaction rate (Fig. 7).

Figure 4: Location of Experiment 2 (THW) and Experiment 3 (PIG) on a map of mean thinning rate for 1978-2018 (Schroder and others, 2019). The black star represents the location on lower Thwaites used in Experiment 4. Map is superimposed on Reference Elevation Model of Antarctica (REMA) ice-sheet surface elevation (Howat and others, 2019). Inset shows the location of the figure domain in Antarctica. The projection is polar stereographic (EPSG: 3031).

The lower panel in Figure 7 shows the model-predicted FAC for the four runs. Horizontal divergence applied over longer histories has a greater influence on the FAC. Run 1 (no imposed horizontal divergence rate) consistently predicts a FAC that is 7-8 m greater than the other runs.
Runs 3 and 4 address the influence of temporal variability of horizontal divergence on FAC estimates, which is important for assessing ice-sheet mass balance from repeat satellite-altimetry observations. These runs indicate a marked decrease in FAC associated with greater horizontal divergence rates. \\begin{table} \\begin{tabular}{l c c c} \\hline Run & \\(\\Delta\\)FAC (m) & \\% decrease in FAC & \\% of observed thinning \\\\ \\hline 1 & 0.20 (0.17) & 0.77 (0.66) & \\(-1.18\\) (\\(-1.04\\)) \\\\ 2 & 0.08 (0.12) & 0.42 (0.68) & \\(-0.46\\) (\\(-0.73\\)) \\\\ 3 & \\(-2.66\\) (\\(-2.71\\)) & \\(-15.30\\) (\\(-15.55\\)) & \\(16.12\\) (\\(16.50\\)) \\\\ 4 & \\(-1.83\\) (\\(-1.91\\)) & \\(-11.66\\) (\\(-12.08\\)) & \\(11.12\\) (\\(11.57\\)) \\\\ \\hline \\end{tabular} \\end{table} Table 2: Summary of results from temporally varying the horizontal divergence rate for the location on Thwaites Glacier from 2007 to 2016. Results using the 10 ftm-compaction model in the layer-thinning scheme are shown, with results using the full. ftm-compaction model in parentheses. Figure 5: Results from the layer-thinning scheme for the flow-line on Thwaites Glacier using the U46 finm-compaction model (Experiment 2. (a) Horizontal divergence rates for the flowfront divergence rates were derived from Mouginot and others (2018) following the approach of Alley and others (2018), and exclude compression. (b) The fin depth-density profile along the flowline for the model that accounts for horizontal divergence. Black line indicates the BCO depth. Contour interval is \\(50\\,\\mathrm{kgm^{-1}}\\), (c) FAC results from model runs including the horizontal divergence rates shown in (a) (dotted line) and from a model without the horizontal divergence rates (dashed line). Figure 6: Results from the layer-thinning scheme for a flowline on Pine Island Glacier using the Literpene and others (2011) finm-compaction model (Experiment 3). (a) Horizontal divergence rates for the flowline. Horizontal divergence rates were derived from Mouginot and others (2018) following the approach of Alley and others (2018), and exclude compression. (b) The fin depth-density profile along the flowline for the model that accounts for horizontal divergence. Black line indicates the BCO depth. Contour interval is \\(50\\,\\mathrm{kgm^{-1}}\\), (c) FAC results from model runs with the horizontal divergence rates shown in (a) (dotted line) and from a model without horizontal divergence rates (dashed line). Temporally increasing horizontal divergence rates reduce the FAC substantially (Table 2 and Fig. 7). In Run 1, the FAC slightly increases by 0.20 m (0.17 m), or 0.77% (0.66%), almost a negligible change, from 2007 to 2016. In Run 2, the FAC also slightly increases by 0.08 m (0.12 m) or 0.42% (0.68%). In contrast, in Run 3, the FAC is reduced by 2.66 m (2.71 m), or 15.30% (15.55%). In Run 4, the FAC is reduced by 1.83 m (1.91 m), or 11.66% (12.08%). Even the short-term application of greater and time-variable horizontal divergence rates initialized in 2007 (Runs 3 and 4) produces a substantial difference in predicted FAC compared to the constant horizontal-divergence case (Run 2). FAC in 2016 is estimated to be 3.04 m (3.14 m) less in Run 3 than in Run 2 (18.75% (19.29%) difference) and is 3.86 m (4.00 m) less in Run 4 than in Run 2 (24.38% (25.21%) difference). 
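To put these numbers in the context of Eqn (2): for a fixed observed surface-height change, a \\(\\Delta\\)FAC of the Run-3 magnitude changes the inferred mass loss per unit area by roughly the amount computed below. The ice density of 917 kg m\\({}^{-3}\\) is our illustrative assumption, and the arithmetic is only a back-of-the-envelope check.

```python
RHO_ICE = 917.0      # assumed ice density [kg m^-3]
dFAC_run3 = -2.66    # LIG change in FAC for Run 3, 2007-2016 [m]

# Eqn (2) per unit area: dm/A = (dh - dFAC) * rho_i.  Relative to a
# calculation that neglects the FAC change (dFAC = 0), the inferred mass
# loss is smaller in magnitude by |dFAC| * rho_i.
bias_per_area = abs(dFAC_run3) * RHO_ICE
print(f"Neglecting this FAC change overstates mass loss by ~{bias_per_area:.0f} kg m^-2")
# -> about 2440 kg m^-2 (roughly 2.4 m water equivalent) over 2007-2016
```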
The FAC estimated in Runs 3 and 4 begins to differ when the Run-4 horizontal divergence rate ramps up in 1997, ultimately resulting in a slightly lower FAC in 2016 (0.81 m (0.86 m); 5.70% (6.0%) difference) because the FAC in 1997 is already less in Run 4 than in Run 3.

## Discussion

In the following sections, we address the central questions motivating this study:

1. What is the importance of horizontal divergence in controlling FAC?
2. How do estimates of the time-evolution of FAC (\\(\\Delta\\)FAC) change by including horizontal divergence in the calculations?
3. Where does horizontal divergence matter for estimating FAC on the Antarctic Ice Sheet?

Through investigating these questions, we find that, firstly, neglecting horizontal divergence where divergence rates exceed \\(10^{-4}\\) a\\({}^{-1}\\) will lead to an overestimate in FAC. This is because a firn column in regions with horizontal divergence is stretched horizontally and is thinner, and thus has less air content per unit volume than a firn column in regions without horizontal divergence. Second, accounting for horizontal divergence in FAC estimates results in a smaller calculated mass loss for regions of increasing horizontal divergence through time. This is because, for a given change in surface elevation \\(\\Delta h\\), the time-evolution of FAC (\\(\\Delta\\)FAC) is greater for regions with increasing horizontal divergence through time, which implies that the interpreted \\(\\Delta m\\) is less than estimates that neglect horizontal divergence (Eqn (2)). Lastly, we find that horizontal divergence should be included in FAC estimates in regions entering and within the outlet glaciers and ice shelves of the Antarctic Ice Sheet. Horizontal divergence should be accounted for in estimates of the time-evolution of FAC in these regions, as speeds and thinning rates are currently increasing (Joughin and others, 2012; Mouginot and others, 2014; Smith and others, 2020).

### How do FAC estimates needed for altimetry studies change by including time-evolving horizontal divergence?

Results from our transient runs suggest that including horizontal divergence makes a substantial difference in the calculated \\(\\Delta\\)FAC. Decadal-scale climate variability (Run 1) from 2007 to 2016 results in a \\(\\Delta\\)FAC of 0.20 m (0.17 m), a 0.77% (0.66%) increase. By comparison, \\(\\Delta\\)FAC for Run 2 is 0.08 m (0.12 m), a 0.42% (0.68%) increase. The only difference between Runs 1 and 2 is that Run 2 includes a constant horizontal divergence rate, which makes the total FAC of Run 2 lower than that of Run 1 and means that FAC changes for these two runs are solely a result of the variable climate.

Figure 7: Surface boundary conditions, horizontal divergence rates and estimated FAC using the layer-thinning scheme with the LIG firn-compaction model for a location on lower Thwaites (Experiment 4). The model spin up from 1980 to 2007 is shown. Run 1 represents a conventional firn-compaction model run with no horizontal divergence. A constant horizontal divergence rate of 0.015 a\\({}^{-1}\\) is used in Run 2. For Runs 3 and 4, after spin up with a constant divergence rate of 0.015 a\\({}^{-1}\\), the model is run from 2007 to 2016 with temporally variable horizontal divergence rates derived from the Mouginot and others (2017) velocity time series. Run 4 also includes a linear ramp between the 1997 and 2007 horizontal divergence rates.

Runs 1
and 2 contrast with Runs 3 and 4, which include the time-variable horizontal divergence and show substantial decreases in FAC through time (Table 2). These changes in FAC for Runs 3 and 4 constitute 16% and 11% of the observed thinning (27 m; Schroder and others, 2019), respectively, for this location on lower THW from 2007 to 2016. Surface lowering on a glacier is due to both mass loss and firn thinning or thickening; for a given observed elevation change \\(\\Delta h\\), if there is more firn thinning and thus a larger \\(\\Delta\\)FAC, then there is less mass loss \\(\\Delta m\\).

### Where do horizontal divergence rates matter for estimates of FAC on the Antarctic Ice Sheet?

To identify where horizontal divergence is important to consider in estimates of FAC, we first determine where the horizontal divergence-rate magnitude is comparable to the vertical strain-rate magnitude. We consider horizontal divergence negligible when the ratio of the magnitude of the horizontal divergence rate to that of the vertical strain rate is \\(<0.1\\). Figure 8 shows the Antarctic-wide ratio \\(R\\) of the total horizontal divergence rate \\(\\dot{\\epsilon}_{\\mathrm{h}}\\), as calculated from velocity data from Rignot and others (2017) following Alley and others (2018), to the depth-averaged vertical strain rate \\(\\overline{\\dot{\\epsilon}}_{\\mathrm{zz}}\\) within the firn column, as calculated with the analytic model from Herron and Langway (1980):

\\[R=\\left|\\frac{\\dot{\\epsilon}_{\\mathrm{h}}}{\\overline{\\dot{\\epsilon}}_{\\mathrm{zz}}}\\right|. \\tag{9}\\]

Not unexpectedly, \\(R\\geq 0.1\\) occurs dominantly (1) in regions where ice enters the outlet glaciers along the margins of the Antarctic Ice Sheet, such as Pine Island Glacier, Thwaites Glacier and other glaciers in the Amundsen Sea sector, and (2) in regions entering the ice shelves. Regions in the interior of the ice sheet have relatively low values of \\(R\\). Based on this analysis, horizontal divergence can be neglected in broad interior regions of the ice sheet. Some high horizontal divergence rates on ice shelves are associated with ongoing rifting, and may not reflect changes in horizontal divergence rates that are of interest in this study.

### Future work

We show, with a simple kinematic layer-thinning scheme in the CFM, that horizontal divergence must be accounted for in estimates of FAC in regions of dynamic ice flow. However, the effects of several additional processes not included in our model require further study: (1) the role of compressive strain in firn-thickness change, (2) densification during horizontal stretching, and (3) the role of brittle failure in reducing the effect of ductile thinning. To include compressive strain rates in our treatment of layer thinning, either a better theoretical treatment of the effects of compressional ice-flow stresses on firn compaction or intentional model calibration including high ice-flow-stress sites is needed. Most of the horizontal divergence rates in Experiments 2 and 3 are not compressive because these glaciers are not confined outlet glaciers and do not experience significant lateral compression associated with downstream narrowing of the glacier extent. For these reasons, we chose to set any compressional strain rates in our runs to zero (a short sketch of this screening and clamping follows).
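The sketch below uses hypothetical array names; the threshold of 0.1 matches the criterion applied to Eqn (9), and negative (compressive) divergence rates are simply set to zero, as described above.

```python
import numpy as np

def screen_divergence(eps_h, eps_zz_mean, threshold=0.1):
    """Apply the Eqn (9) ratio as a significance mask and clamp compression.

    eps_h       : horizontal divergence rate field [1/a]
    eps_zz_mean : depth-averaged vertical strain rate in the firn [1/a]
    """
    R = np.abs(eps_h) / np.abs(eps_zz_mean)          # Eqn (9)
    significant = R >= threshold                     # where divergence matters
    eps_h_used = np.where(eps_h > 0.0, eps_h, 0.0)   # drop compressive rates
    return eps_h_used, significant
```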
We suspect that compressional stresses would not offset reduction in the FAC due to horizontal divergence, but would have an additive effect to increase firm density and lower FAC, because some microphysical mechanisms (e.g., power-law creep) are dependent on the square of the effective stress (Maeno and Ebinuma, 1983; Alley and Bentley, 1988). A dynamic treatment of horizontal extensional stresses in the firm would also contribute to densification during thinning and result in a further decrease in FAC for similar reasons. Previous studies refer to this effect as strain softening, and identify this process as having a greater impact on the firm-density profile than horizontal divergence in some locations (Riverman and others, 2019). Additionally, theoretical and observational evidence suggests that a decrease in the porosity will occur with an increase in the stress state due to increased damage and strain around bubbles (Chawla and Deng, 2005; Alley and Fitzpatrick, 1999). Finally, we ignore the effects of brittle failure in near-surface creussing, which must accommodate some of the horizontal divergence in especially high strain-rate areas (Dududu and Waisman, 2012). Including a constitutive relation for density changes in response to horizontal stresses is an obvious next development. However, a dynamic treatment of horizontal stresses should be created Figure 8: Ratio of the vertical and horizontal divergence rates (R) across the Antarctic ice Sheet. Higher values of \\(R\\) show where horizontal divergence rates are significant in calculations of firm-air content. along with a dynamic treatment of the vertical forces and vertical compression in firm. Additional experimental and field data are necessary for formulating such a model, and therefore is clearly beyond the scope of this paper. Future work should therefore (1) collect more field measurements in regions of high horizontal ice-flow stresses; (2) intentionally calibrate models to high ice-flow-stress sites in an empirical framework; and/or (3) perform lab work investigating how density, viscosity and microstructures change under horizontal ice-flow stresses. Firm-compaction models have been developed using depth-density profiles that are assumed to have negligible thinning due to horizontal ice-flow stresses. This is true for many of those profiles, but the model development processes have not necessarily checked the validity of this assumption for each core; nor have data from high-stress regions been sought out. As a result, firm-compaction models assume there is no dependence on ice-flow stresses due to their functional form. In regions that may indeed have ice-flow stresses, calibrating the models using data from firm density there could lead to error in the calibrated coefficient estimates. However, calibrating additional coefficients in existing models using firm data from identified high ice-flow-stress regions could accommodate some of the impacts of horizontal ice-flow stresses on FAC. Also, new geophysical techniques for measuring time series of firm thickness and density, such as GNSS interferometric reflectometry (Larson and others, 2009; Gutmann and others, 2012) and autonomous phase sensitive radio echo sounding (Corr and others, 2002; Jenkins and others, 2006; Nicholls and others, 2015), are promising low-cost methods for gathering additional firm depth-density data in regions of dynamic ice flow. 
Work is ongoing to formulate a dynamic, time-dependent expression encompassing the effects of horizontal ice-flow stresses on firm-compaction processes, and new measurements (e.g., micro-CT scans) may provide the necessary data on firm-grain evolution to construct a model based on microstructure evolution. Micro-CT technology is starting to be applied to measure firm properties (Adolph and Albert, 2014; Gregory and others, 2014; Keegan and others, 2019). Further work to characterize the relative roles of ductile and brittle behavior of the firm will also allow for better characterization of firm in high-stress environments. ## Conclusions Estimates of spatially and temporally variable FAC are needed in calculations of ice-sheet mass balance derived from repeat-altimetry observations. Here, we introduced a method that accounts for firm-layer thinning from horizontal divergence into the CFM (Stevens and others, 2020). This scheme consists of (1) densification via an existing firm-compaction model, and (2) thinning of the firm via horizontal stretching due to horizontal divergence. We assessed the spatial and temporal variability of changes in FAC due to horizontal divergence separately. Horizontal divergence becomes most impactful on FAC estimates in the last 100-150 km of the flowlines on Thwaites and Pine Island Glaciers, where horizontal divergence rates can reach \\(10^{-2}\\) a\\({}^{-1}\\) and higher. At the end of the Thwaites flowline, horizontal divergence causes the FAC to be 41% less than FAC estimates from a conventional firm-compaction model with no horizontal divergence. At the end of the PIG flowline, horizontal divergence leads to a 18% less FAC compared to results from a conventional firm-compaction model. For a representative location on lower Thwaites Glacier, a 15% decrease in FAC occurs from 2007 to 2016 due to horizontal divergence, which corresponds to 16% of the observed surface-elevation change. This contrasts output from a conventional firm-compaction model with no horizontal divergence, which estimates a 0.77% increase in FAC, due to climate variability alone. Horizontal divergence is most important to include in FAC estimates in outlet glaciers in the Amundsen Sea Embayment, and in regions entering and within the ice shelves of Antarctica. We find that horizontal divergence is important for both (1) estimates of the steady-state FAC and (2) estimates of FAC variability in time. Neglecting horizontal divergence in FAC estimates where horizontal divergence rates exceed \\(10^{-4}\\) a\\({}^{-1}\\) will lead to an overestimate in the steady-state and time-evolving FAC. Improved FAC estimates will produce better calculations of mass change from repeat surface-elevation observations, as well as more accurate estimates of basal-melt rates that depend on a hydrostatic assumption. Including horizontal divergence in FAC estimates will become more important as regions of the Antarctic Ice Sheet, such as the Amundsen Sea Embayment, continue to experience substantial increases in ice speed (Mouginot and others, 2014). Neglecting horizontal divergence within FAC estimates used in altimetry-derived mass-change calculations in these areas will lead to an overestimate in mass loss. Our work highlights the importance of accounting for horizontal divergence in estimates of FAC, and is a first step toward improving firm products that may consider adding the effects of strain thinning to produce better FAC outputs. 
## Supplementary material

The supplementary material for this article can be found at https://doi.org/10.1017/jog.2020.105.

## Code and data availability

The CFM code is publicly available at https://github.com/UWGlaciology/CommunityFirnModel. Documentation for the CFM is online at https://communityfirnmodel.readthedocs.io. Model output and scripts used to make the figures will be available on the University of Washington ResearchWorks Archive.

## Acknowledgments

We thank Brooke Medley for her modified MERRA-2 climate reanalysis data. We also thank Tyler J. Fudge and Michelle Koutnik for their comments on earlier versions of the manuscript. This work was supported by NASA grant NNX16AM01G and NSF grant 0968391.

## Author contributions

ANH, KC, EDW and CMS designed the study. ANH implemented the layer-thinning procedure, ran the model experiments and led writing of the paper. CMS aided implementation of the layer-thinning scheme in the CFM and performed some model runs. ANH and KC contributed to analysis of ice-velocity data. All authors contributed to paper writing and editing.

## References

* **Adolph A and Albert M** (2014) Gas diffusivity and permeability through the firn column at Summit, Greenland: measurements and comparison to microstructural properties. _The Cryosphere_ **8**, 319-328.
* **Alley KE and 5 others** (2018) Continent-wide estimates of Antarctic strain rates from Landsat 8-derived velocity grids. _Journal of Glaciology_ **64**(244), 321-332.
* **Alley RB and Bentley CR** (1988) Ice-core analysis on the Siple Coast of West Antarctica. _Annals of Glaciology_ **11**, 1-7.
* **Alley RB and Fitzpatrick H** (1999) Conditions for bubble elongation in cold ice-sheet ice. _Journal of Glaciology_ **45**(149), 147-153.
* **Arthern RJ, Vaughan DG, Rankin AM, Mulvaney R and Thomas ER** (2010) In situ measurements of Antarctic snow compaction compared with predictions of models. _Journal of Geophysical Research: Earth Surface_ **115**(F3), F03011.
* **Burr A, Lhuissier P, Martin CL and Philip A** (2019) In situ X-ray tomography densification of firn: the role of mechanics and diffusion processes. _Acta Materialia_ **167**, 210-220.
* **Chawla N and Deng X** (2005) Microstructure and mechanical behavior of porous sintered steels. _Materials Science and Engineering A_ **390**(1-2), 98-112.
* **Christianson K and 7 others** (2014) Dilatant till facilitates ice-stream flow in northeast Greenland. _Earth and Planetary Science Letters_ **401**, 57-69.
* **Coble RL** (1970) Diffusion models for hot pressing with surface energy and pressure effects as driving forces. _Journal of Applied Physics_ **41**(12), 4798-4807.
* **Corr HF, Jenkins A, Nicholls KW and Doake C** (2002) Precise measurement of changes in ice-shelf thickness by phase-sensitive radar to determine basal melt rates. _Geophysical Research Letters_ **29**(6), 73-71.
* **Carry A and Charles RW** (1961) Formation of 'blue' glacier ice by horizontal compressive forces. _Journal of Glaciology_ **3**(30), 1045-1050.
* **Depoorter MA and 6 others** (2013) Calving fluxes and basal melt rates of Antarctic ice shelves. _Nature_ **520**(7469), 89.
* **Duddu R and Waisman H** (2012) A temperature-dependent creep damage model for polycrystalline ice. _Mechanics of Materials_ **46**, 23-41.
* **Fausto RS and 9 others** (2018) A snow density dataset for improving surface boundary conditions in Greenland ice sheet firn modeling. _Frontiers in Earth Science_ **6**, 51.
* **Fudge T and others** (2016) Variable relationship between accumulation and temperature in West Antarctica for the past 31 000 years. _Geophysical Research Letters_ **43**(8), 3795-3803.
* **Gagliardini O and Meyssonnier J** (1997) Flow simulation of a firn-covered cold glacier. _Annals of Glaciology_ **24**, 242-248.
* **Gardner AS and 9 others** (2013) A reconciled estimate of glacier contributions to sea level rise: 2003 to 2009. _Science_ **340**(6134), 852-857.
* **Gow AJ** (1968) Bubbles and bubble pressures in Antarctic glacier ice. _Journal of Glaciology_ **7**(50), 167-182.
* **Gow AJ** (1969) On the rates of growth of grains and crystals in South Polar firn. _Journal of Glaciology_ **8**(53), 241-252.
* **Gregory S, Albert M and Baker I** (2014) Impact of physical properties and accumulation rate on pore close-off in layered firn. _The Cryosphere_ **8**, 91-105.
* **Gutmann ED, Larson KM, Williams MW, Nievinski FG and Zavorotny V** (2012) Snow measurement by GPS interferometric reflectometry: an evaluation at Niwot Ridge, Colorado. _Hydrological Processes_ **26**(19), 2951-2961.
* **Herron MM and Langway CC** (1980) Firn densification: an empirical model. _Journal of Glaciology_ **25**(93), 373-385.
* **Howat IM, Porter C, Smith BE, Noh MJ and Morin P** (2019) The Reference Elevation Model of Antarctica. _The Cryosphere_ **13**(2), 665-674.
* **Jenkins A, Corr HF, Nicholls KW, Stewart CL and Doake CS** (2006) Interactions between ice and ocean observed with phase-sensitive radar near an Antarctic ice-shelf grounding line. _Journal of Glaciology_ **52**(178), 325-346.
* **Joughin I and 6 others** (2012) Seasonal to decadal scale variations in the surface velocity of Jakobshavn Isbrae, Greenland: observation and model-based analysis. _Journal of Geophysical Research: Earth Surface_ **117**(F2), F02030.
* **Joughin I, Smith BE and Medley B** (2014) Marine ice sheet collapse potentially under way for the Thwaites Glacier Basin, West Antarctica. _Science_ **344**(6185), 735-738.
* **Kaspari S and 6 others** (2004) Climate variability in West Antarctica derived from annual accumulation-rate records from ITASE firn/ice cores. _Annals of Glaciology_ **39**, 585-594.
* **Keegan K, Albert M, McConnell J and Baker I** (2019) Climate effects on firn permeability are preserved within a firn column. _Journal of Geophysical Research: Earth Surface_ **124**(3), 380-387.
* **Kirchner JF, Bentley CR and Robertson JD** (1979) Lateral density differences from seismic measurements at a site on the Ross Ice Shelf, Antarctica. _Journal of Glaciology_ **24**(90), 309-312.
* **Kuipers Munneke P and 9 others** (2015) Elevation change of the Greenland Ice Sheet due to surface mass balance and firn processes, 1960-2014. _The Cryosphere_ **9**(6), 2009-2025.
* **Larson KM and 5 others** (2009) Can we measure snow depth with GPS receivers? _Geophysical Research Letters_ **36**(17), L17502.
* **Li J and Zwally HJ** (2011) Modeling of firn compaction for estimating ice-sheet mass change from observed ice-sheet elevation change. _Annals of Glaciology_ **52**(59), 1-7.
* **Li J and Zwally HJ** (2015) Response times of ice-sheet surface heights to changes in the rate of Antarctic firn compaction caused by accumulation and temperature variations. _Journal of Glaciology_ **61**(230), 1037-1047.
* **Ligtenberg SRM, Helsen MM and Van den Broeke MR** (2011) An improved semi-empirical model for the densification of Antarctic firn. _The Cryosphere_ **5**, 809-819.
* **Lundin JM and 9 others** (2017) Firn Model Intercomparison Experiment (FirnMICE). _Journal of Glaciology_ **63**(239), 401-422.
* **Lüthi M and Funk M** (2000) Dating ice cores from a high Alpine glacier with a flow model for cold firn. _Annals of Glaciology_ **31**, 69-79.
* **Maeno N and Ebinuma T** (1983) Pressure sintering of ice and its implication to the densification of snow at polar glaciers and ice sheets. _The Journal of Physical Chemistry_ **87**(21), 4103-4110.
* **McMillan M and 9 others** (2016) A high-resolution record of Greenland mass balance. _Geophysical Research Letters_ **43**(13), 7002-7010.
* **Medley B and 9 others** (2013) Airborne-radar and ice-core observations of annual snow accumulation over Thwaites Glacier, West Antarctica confirm the spatiotemporal variability of global and regional atmospheric models. _Geophysical Research Letters_ **40**(10), 3649-3654.
* **Morris E and 9 others** (2017) Snow densification and recent accumulation along the iSTAR traverse, Pine Island Glacier, Antarctica. _Journal of Geophysical Research: Earth Surface_ **122**(12), 2284-2301.
* **Morris EM and Wingham DJ** (2014) Densification of polar snow: measurements, modeling, and implications for altimetry. _Journal of Geophysical Research: Earth Surface_ **119**(2), 349-365.
* **Mouginot J, Rignot E and Scheuchl B** (2014) Sustained increase in ice discharge from the Amundsen Sea Embayment, West Antarctica, from 1973 to 2013. _Geophysical Research Letters_ **41**(5), 1576-1584.
* **Mouginot J, Rignot E and Scheuchl B** (2019) Continent-wide, interferometric SAR phase, mapping of Antarctic ice velocity. _Geophysical Research Letters_ **46**(16), 9710-9718.
* **Mouginot J, Rignot E, Scheuchl B and Millan R** (2017) Comprehensive annual ice sheet velocity mapping using Landsat-8, Sentinel-1, and RADARSAT-2 data. _Remote Sensing_ **9**(4), 364.
* **Nicholls KW and 5 others** (2015) A ground-based radar for measuring vertical strain rates and time-varying basal melt rates in ice sheets and shelves. _Journal of Glaciology_ **61**(230), 1079-1087.
* **Reeh N, Fisher DA, Koerner RM and Clausen HB** (2005) An empirical firn-densification model comprising ice lenses. _Annals of Glaciology_ **42**, 101-106.
* **Rignot E, Mouginot J, Morlighem M, Seroussi H and Scheuchl B** (2014) Widespread, rapid grounding line retreat of Pine Island, Thwaites, Smith, and Kohler glaciers, West Antarctica, from 1992 to 2011. _Geophysical Research Letters_ **41**(10), 3502-3509.
* **Riverman K and 7 others** (2019) Enhanced firn densification in high-accumulation shear margins of the NE Greenland Ice Stream. _Journal of Geophysical Research: Earth Surface_ **124**(2), 365-382.
* **Robin GdeQ** (1958) _Seismic Shooting and Related Investigations, Glaciology III. Norwegian-British-Swedish Antarctic Expedition, 1949-52, Scientific Results_, vol. 5, 134. Oslo: Norsk Polarinstitutt.
* **Schröder L and 5 others** (2019) Four decades of Antarctic surface elevation changes from multi-mission satellite altimetry. _The Cryosphere_ **13**(2), 427-449.
* **Shepherd A and 9 others** (2012) A reconciled estimate of ice-sheet mass balance. _Science_ **338**(6111), 1183-1189.
* **Shepherd A and 9 others** (2019) Trends in Antarctic Ice Sheet elevation and mass. _Geophysical Research Letters_ **46**(14), 8174-8183.
* **Smith B and 9 others** (2020) Pervasive ice sheet mass loss reflects competing ocean and atmosphere processes. _Science_ **368**(6496), 1239-1242.
* **Steig EJ and 9 others** (2005) High-resolution ice cores from US ITASE (West Antarctica): development and validation of chronologies and determination of precision and accuracy. _Annals of Glaciology_ **41**, 77-84.
* **Stevens CM and 6 others** (2020) The Community Firn Model (CFM) v1.0. _Geoscientific Model Development_.
Ice-sheet mass-balance estimates derived from repeat satellite-altimetry observations require accurate calculation of spatiotemporal variability in firn-air content (FAC). However, firn-compaction models remain a large source of uncertainty within mass-balance estimates. In this study, we investigate one process that is neglected in FAC estimates derived from firn-compaction models: enhanced layer thinning due to horizontal divergence. We incorporate a layer-thinning scheme into the Community Firn Model. At every time step, firn layers first densify according to a firn-compaction model and then thin further due to an imposed horizontal divergence rate without additional density changes. We find that horizontal divergence on Thwaites (THW) and Pine Island Glaciers can reduce local FAC by up to 41% and 18%, respectively. We also assess the impact of temporal variability of horizontal divergence on FAC. We find a 15% decrease in FAC between 2007 and 2016 due to horizontal divergence at a location that is characteristic of lower THW. This decrease accounts for 16% of the observed surface lowering, whereas climate variability alone causes negligible changes in FAC at this location. Omitting transient horizontal divergence in estimates of FAC leads to an overestimation of ice loss via satellite-altimetry methods in regions of dynamic ice flow.

Polar firn; ice-sheet mass balance; snow/ice surface processes; snow physics

**Author for correspondence:** Annika N. Horlings, E-mail: [email protected]
# Accuracy and Stability of The Continuous-Time 3DVAR Filter for The Navier-Stokes Equation

D. Blömker, K.J.H. Law, A.M. Stuart, K. C. Zygalakis

## 1 Introduction

Data assimilation is the problem of estimating the state variables of a dynamical system, given observations of the output variables. It is a challenging and fundamental problem area, of importance in a wide range of applications. A natural framework for approaching such problems is that of Bayesian statistics, since it is often the case that the underlying model and/or the data are uncertain. However, in many real world applications, the dimensionality of the underlying model and the vast amount of available data make the investigation of the Bayesian posterior distribution of the model state given data computationally infeasible in on-line situations. An example of such an application is global weather prediction: the computational models currently used involve on the order of \(\mathcal{O}(10^{8})\) unknowns, while a large number of partial observations of the atmosphere, currently on the order of \(\mathcal{O}(10^{6})\) per day, are used to compensate for both the uncertainty in the model and in the initial conditions. In situations like this, practitioners typically employ some form of approximation based on both physical insight and computational expediency. There are two competing methodologies for data assimilation which are widely implemented in practice, the first being _filters_ [23] and the second being _variational methods_ [3]. In this paper we focus on the filtering approach. Many of the filtering algorithms implemented in practice are _ad hoc_ and, besides some very special cases, the theoretical understanding of their ability to accurately and reliably estimate the state variables is under-developed. Our goal here is to contribute towards such theoretical understanding. We concentrate on the 3DVAR filter which has its origin in weather forecasting [25] and is prototypical of more sophisticated filters used today. The idea behind filtering is to update the posterior distribution of the system state sequentially at each observation time. This may be performed exactly for linear systems subject to Gaussian noise: the Kalman filter [21]. For the case of non-linear and non-Gaussian scenarios the particle filter [13] can be used and provably approximates the desired probability distribution as the number of particles increases [2]. Nevertheless, standard implementations of this method perform poorly in high dimensional systems [31]. Thus the development of practical filtering algorithms for high dimensional dynamical systems is an active research area and for further insight into this subject the reader may consult [36, 15, 38, 20, 26, 7, 39, 4] and references within. Many of the methods used invoke some form of _ad hoc_ Gaussian approximation and the 3DVAR method which we analyze here is perhaps the simplest example of this idea. These _ad hoc_ filters, 3DVAR included, may also be viewed within the framework of nonlinear control theory and thereby derived directly, without reference to the Bayesian probabilistic interpretation; indeed this is primarily how the algorithms were conceived. In this paper we will study accuracy and stability for the 3DVAR filter. The term _accuracy_ refers to establishing closeness of the filter to the true signal underlying the data, and _stability_ is concerned with studying the distance between two filters, initialized differently, but driven by the same noisy data.
Proving filter accuracy and stability results for control systems has a long history and the paper [32] is a fundamental contribution to the subject with results closely related to those developed here. However, as indicated above, the high dimensionality of the problems arising in data assimilation is a significant challenge in the area. In order to confront this challenge we work in an infinite dimensional setting, thereby ensuring that our results are not sensitive to dimensionality. We focus on dissipative dynamical systems, and take the two dimensional Navier-Stokes equation as a prototype model in this area. Furthermore, we study a data assimilation setting in which data arrives continuously in time, which is a natural setting in which to study high time-frequency data subject to significant uncertainty. The study of accuracy and stability of filters for data assimilation has been a developing area over the last few years and the paper [6] contains finite dimensional theory, together with numerical experiments in a variety of finite and discretized infinite dimensional systems which extend the conclusions of the theory. The paper [37] highlights the principle that, roughly speaking, unstable directions must be observed and assimilated into the estimate and, more subtly, that accuracy can be improved by avoiding assimilation of stable directions. In particular the papers [6, 37] both explicitly identify the importance of observing the unstable components of the dynamics, leading to the notion of AUS: _assimilation in the unstable subspace_. The paper [5] describes a theoretical analysis of 3DVAR applied to the Navier-Stokes equation, when the data arrives in discrete time, and in this paper we address similar questions in the continuous time setting; both papers include the possibility of only partial observations in Fourier space. Taken together, the current paper and [5] provide a significant generalization of the theory in [32] to dissipative infinite dimensional dynamical systems prototypical of the high dimensional problems to which filters are applied in practice; furthermore, through studying partial observations, they give theoretical insight into the idea of AUS as developed in [6, 37]. The infinite dimensional nature of the problem brings fundamental mathematical issues into the problem, not addressed in previous finite dimensional work. We make use of the _squeezing property_ of many dissipative dynamical systems [8, 34], including the Navier-Stokes equation, which drives many theoretical results in this area, such as the ergodicity studies pioneered by Mattingly [27, 19]. In particular our infinite dimensional analysis is motivated by the theory developed in [28] and [22], which are the first papers to study data assimilation directly through PDE analysis, using ideas from the theory of determining modes in infinite dimensional dynamical systems. However, in contrast to those papers, here we allow for noisy observations, and provide a methodology that opens up the possibility of studying more general Gaussian approximate filters such as the Ensemble and the Extended Kalman filter (EnKF and ExKF). Our point of departure for analysis is an ordinary differential equation (ODE) in a Banach space. Working in the limit of high frequency observations we formally derive continuous time filters. This leads to a stochastic differential equation for state estimation, combining the original dynamics with extra terms inducing mean reversion to the noisily observed signal.
In the particular case of the Navier-Stokes equation we get a stochastic PDE (SPDE) with additional mean-reversion term, driven by spatially-correlated time-white noise. This SPDE is central to our analysis as it is used to prove accuracy and stability results for the 3DVAR filter. In particular, in the case when enough of the low modes of the Navier-Stokes equation are observed and the model has larger uncertainty than the data in these low modes, a situation known to practitioners as variance inflation, then the filter can lock on to a small neighbourhood of the true signal, recovering from the initial error, if the error in the observed modes is small. The results are formulated in terms of the theory of random and stochastic dynamical systems [1], and both forward and pullback type results are proved, leading to a variety of probabilistic accuracy and stability results, in the mean square, probability and almost sure senses. The paper is organised as follows. In Section 2 we derive the continuous-time limit of the 3DVAR filter applied to a general ODE in a Banach space, by considering the limit of high frequency observations. In Section 3, we focus on the 2D Navier-Stokes equations and present the continuous time 3DVAR filter within this setting. Sections 4 and 5 are devoted, respectively, to results concerning forward accuracy and stability as well as pullback accuracy and stability, for the filter when applied to the Navier-Stokes equation. In Section 6, we present various numerical investigations that corroborate our theoretical results. Finally in Section 7 we present conclusions. ## 2 Continuous-Time Limit of 3DVAR Consider \\(u\\) satisfying the following ODE in a Banach space \\(X:\\) \\[\\frac{du}{dt}=\\mathcal{F}(u),\\quad u(0)=u_{0}\\;. \\tag{1}\\] Our aim is to study online filters which combine knowledge of this dynamical system with noisy observations of \\(u_{n}=u(nh)\\) to estimate the state of the system. This is particularly important in applications where \\(u_{0}\\) is not known exactly, and the noisy data can be used to compensate for this lack of initial knowledge of the system state. In this section we study approximate Gaussian filters in the high frequency limit, leading to stochastic differential equations which combine the dynamical system with data to estimate the state. As the formal derivation of continuous time filters in this section is independent of the precise model under consideration, we employ the general framework of (1). We make some general observations, relating to a broad family of approximate Gaussian filters, but focus mainly on 3DVAR. In subsequent sections, where we study stability and accuracy of the filter, we focus exclusively on 3DVAR, and work in the context of the 2D incompressible Navier-Stokes equation, as this is prototypical of dissipative semilinear partial differential equations. ### Set Up - The Filtering Problem We assume that \\(u_{0}\\sim N(\\widehat{m}_{0},\\widehat{C}_{0})\\) so that the initial data is only known statistically. The objective is to update the estimate of the state of the system sequentially in time, based on data received sequentially in time. We define the flow-map \\(\\Psi:X\\times\\mathbb{R}^{+}\\to X\\) so that the solution to (1) is \\(u(t)=\\Psi(u_{0};t)\\). Let \\(H\\) denote a linear operator from \\(X\\) into another Banach space \\(Y\\), and assume that we observe \\(Hu\\) at equally spaced time intervals: \\[y_{n}=H\\Psi(u_{0};nh)+\\eta_{n}. 
\\tag{2}\\] Here \\(\\{\\eta_{n}\\}_{n\\in\\mathbb{N}}\\) is an i.i.d sequence, independent of \\(u_{0}\\), with \\(\\eta_{1}\\sim N(0,\\Gamma).\\) If we write \\(u_{n}=\\Psi(u_{0};nh),\\) then \\[u_{n+1}=\\Psi(u_{n};h), \\tag{3}\\] and \\[y_{n}|u_{n}\\sim N\\big{(}Hu_{n},\\Gamma). \\tag{4}\\] We denote the accumulated data up to the time \\(n\\) by \\[Y_{n}=\\{y_{i}\\}_{i=1}^{n}.\\] Our aim is to find \\(\\mathbb{P}(u_{n}|Y_{n}).\\) We will make the Gaussian ansatz that \\[\\mathbb{P}(u_{n}|Y_{n})\\simeq N(\\widehat{m}_{n},\\widehat{C}_{n}). \\tag{5}\\] The key question in designing an approximate Gaussian filter, then, is to find an update rule of the form \\[(\\widehat{m}_{n},\\widehat{C}_{n})\\mapsto(\\widehat{m}_{n+1},\\widehat{C}_{n+1}) \\tag{6}\\]Because of the linear form of the observations in (2), together with the fact that the noise is mean zero-Gaussian, this update rule is determined directly if we impose a further Gaussian ansatz, now on the distribution of \\(u_{n+1}\\) given \\(Y_{n}:\\) \\[u_{n+1}|Y_{n}\\sim N(m_{n+1},C_{n+1}) \\tag{7}\\] With this in mind, the update (6) is usually split into two parts. The first, _prediction_ (or forecast), step is the map \\[(\\widehat{m}_{n},\\widehat{C}_{n})\\mapsto(m_{n+1},C_{n+1}) \\tag{8}\\] The second, _analysis_, step is \\[(m_{n+1},C_{n+1})\\mapsto(\\widehat{m}_{n+1},\\widehat{C}_{n+1}). \\tag{9}\\] For the prediction step we will simply _impose_ the approximation (7) with \\[m_{n+1}=\\Psi(\\widehat{m}_{n};h), \\tag{10}\\] while the choice of \\(C_{n+1}\\) will depend on the choice of the specific filter. For the analysis step, assumptions (4), (7) imply that \\[u_{n+1}|Y_{n+1}\\sim N(\\widehat{m}_{n+1},\\widehat{C}_{n+1}) \\tag{11}\\] and an application of Bayes rule, as applied in the standard Kalman filter update [21], and using (10), gives us the nonlinear map (6) in the form \\[\\widehat{C}_{n+1} = C_{n+1}-C_{n+1}H^{*}(\\Gamma+HC_{n+1}H^{*})^{-1}HC_{n+1} \\tag{12a}\\] \\[\\widehat{m}_{n+1} = \\Psi(\\widehat{m}_{n};h)+C_{n+1}H^{*}(\\Gamma+HC_{n+1}H^{*})^{-1}(y _{n+1}-Hm_{n+1}) \\tag{12b}\\] The mean \\(\\widehat{m}_{n+1}\\) is an element of the Banach space \\(X\\), and \\(\\widehat{C}_{n+1}\\) is a linear symmetric and non-negative operator from \\(X\\) into itself. ### Derivation of The Continuous-Time Limit Together equations (10) and (12), which are generic for _any_ approximate Gaussian filter, specify the update for the mean once the equation determining \\(C_{n+1}\\) is defined. We proceed to derive a continuous-time limit for the mean, in this general setting, assuming that \\(C_{n}\\) arises as an approximation of a continuous process \\(C(t)\\) evaluated at \\(t=nh\\), so that \\(C_{n}\\approx C(nh)\\), and that \\(h\\ll 1\\). Throughout we will assume that \\(\\Gamma=h^{-1}\\Gamma_{0}\\). This scaling implies that the noise variance is inversely proportional to the time between observations and is the relationship which gives a nontrivial stochastic limit as \\(h\\to 0\\). 
With these scaling assumptions equation (12b) becomes \\[\\widehat{m}_{n+1}=\\Psi(\\widehat{m}_{n};h)+hC_{n+1}H^{*}(\\Gamma_{0}+hHC_{n+1}H^ {*})^{-1}\\big{(}y_{n+1}-H\\Psi(\\widehat{m}_{n};h)\\big{)}.\\] Thus \\[\\frac{\\widehat{m}_{n+1}-\\widehat{m}_{n}}{h}=\\frac{\\Psi(\\widehat{m}_{n};h)- \\widehat{m}_{n}}{h}+C_{n+1}H^{*}(\\Gamma_{0}+hHC_{n+1}H^{*})^{-1}\\big{(}y_{n+1 }-H\\Psi(\\widehat{m}_{n};h)\\big{)}.\\] If we define the sequence \\(\\{z_{n}\\}_{n\\in\\mathbb{Z}^{+}}\\) by \\[z_{n+1}=z_{n}+hy_{n+1},\\quad z_{0}=0\\;,\\]then we can rewrite the previous equation as \\[\\frac{\\widehat{m}_{n+1}-\\widehat{m}_{n}}{h}= \\frac{\\Psi(\\widehat{m}_{n};h)-\\widehat{m}_{n}}{h} \\tag{13}\\] \\[\\qquad+C_{n+1}H^{*}(\\Gamma_{0}+hHC_{n+1}H^{*})^{-1}\\left(\\frac{z_{n +1}-z_{n}}{h}-H\\Psi(\\widehat{m}_{n};h)\\right).\\] Note that \\[\\Psi(\\widehat{m}_{n};h)=\\widehat{m}_{n}+h\\mathcal{F}(\\widehat{m}_{n})+ \\mathcal{O}(h^{2}).\\] This is an Euler-Maruyama-like discretization of a stochastic differential equation which, if we pass to the limit of \\(h\\to 0\\) in (13), noting that we have assumed that \\(C_{n}\\approx C(nh)\\) for some continuous covariance process, is seen to be \\[\\frac{d\\widehat{m}}{dt}=\\mathcal{F}(\\widehat{m})+CH^{*}\\Gamma_{0}^{-1}\\left( \\frac{dz}{dt}-H\\widehat{m}\\right),\\quad\\widehat{m}(0)=\\widehat{m}_{0}. \\tag{14}\\] Equation (14) is similar to the observer equation in the nonlinear control literature [32]. Our objective in this paper is to study the stability and accuracy properties of this stochastic model. Here stability refers to the contraction of two different trajectories of the filter (14), started at two different points, but driven by the same observed data; and accuracy refers to estimating the difference between the true trajectory of (1) which underlies the data, and the output of the filter (14). Similar questions are studied in finite dimensions in [32]. However, the infinite dimensional nature of our problem, coupled with the fact that we study situations where the state is only partially observed (\\(H\\) is not invertible on \\(X\\)) mean that new techniques of analysis are required, building on the theory of semilinear dissipative PDEs and infinite dimensional dynamical systems. We now express the observation signal \\(z\\) in terms of the truth \\(u\\) in order to facilitate study of filter stability and accuracy. In particular, we have that \\[\\left(\\frac{z_{n+1}-z_{n}}{h}\\right)=y_{n+1}=Hu_{n+1}+\\frac{\\sqrt{\\Gamma_{0}}} {\\sqrt{h}}\\Delta w_{n+1},\\] where \\(\\{\\Delta w_{n}\\}_{n\\in\\mathbb{N}}\\) is an i.i.d sequence and \\(\\Delta w_{1}\\sim N(0,I)\\) in \\(Y\\). This corresponds to the Euler-Maruyama discretization of the SDE \\[\\frac{dz}{dt}=Hu+\\sqrt{\\Gamma_{0}}\\frac{dW}{dt},\\quad z(0)=0. \\tag{15}\\] Expressed in terms of the true signal \\(u\\), equation (14) becomes \\[\\frac{d\\widehat{m}}{dt}=\\mathcal{F}(\\widehat{m})+CH^{*}\\Gamma_{0}^{-1}H\\left( u-\\widehat{m}\\right)+CH^{*}\\Gamma_{0}^{-1/2}\\frac{dW}{dt}. \\tag{16}\\] We complete the study of the continuous limit with the specific example of 3DVAR. This is the simplest filter of all in which the prediction step is found by simply setting \\(C_{n+1}=\\widehat{C}\\) for some _fixed_ covariance operator \\(\\widehat{C}\\), independent of \\(n\\). Then equation (12) shows that \\(\\widehat{C}_{n+1}=\\widehat{C}+\\mathcal{O}(h)\\) and we deduce that the limiting covariance is simply constant: \\(C(t)=\\widehat{C}(t)=\\widehat{C}\\) for all \\(t\\geq 0\\). 
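For intuition before specializing to the Navier-Stokes equation, the following Python sketch integrates the limiting mean equation (14), with a constant covariance as in 3DVAR, by an Euler-Maruyama scheme, generating the observation path according to (15). The chaotic toy dynamics, the partial-observation operator and all numerical values below are our own illustrative choices and play no role in the analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

def F(u, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Toy chaotic dynamics (Lorenz-63) standing in for the vector field in (1)."""
    x, y, z = u
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

dt = 1e-3                                    # small inter-observation time h
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])              # observe two of the three components
sigma0, omega = 0.1, 20.0                    # omega > 1: variance inflation
Gamma0 = sigma0 ** 2 * np.eye(2)
C_hat = omega * sigma0 ** 2 * np.eye(3)      # constant 3DVAR covariance
gain = C_hat @ H.T @ np.linalg.inv(Gamma0)   # C H* Gamma0^{-1} appearing in (14)

u = np.array([1.0, 1.0, 1.0])                # truth
m = np.array([10.0, -5.0, 30.0])             # filter initialized far from the truth

for _ in range(int(10.0 / dt)):
    dW = np.sqrt(dt) * rng.standard_normal(2)
    dz = H @ u * dt + sigma0 * dW                  # observation increment, cf. (15)
    m = m + F(m) * dt + gain @ (dz - H @ m * dt)   # Euler-Maruyama step for (14)
    u = u + F(u) * dt                              # truth evolves under (1)

print("final error |m - u| =", np.linalg.norm(m - u))
```

With this amount of nudging of the observed components the estimate typically recovers from the large initialization error; the remainder of the paper quantifies this type of behaviour for the Navier-Stokes equation.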
The present work will focus on this case and hence study (16) in the case where \(C=\widehat{C}\), a constant in time.

## 3 Continuous-Time 3DVAR for Navier-Stokes

In this section we describe application of the 3DVAR algorithm to the two dimensional Navier-Stokes equation. This will form the focus of the remainder of the paper. In subsection 3.1 we describe the forward model itself, namely we specify equation (1), and then in subsection 3.2 we describe how data is incorporated into the model, and specify equation (16), and the choices of the (constant in time) operators \(C=\widehat{C}\) and \(\Gamma_{0}\) which appear in it.

### Forward Model

Let \(\mathbb{T}^{2}\) denote the two-dimensional torus of side \(L:[0,L)\times[0,L)\) with periodic boundary conditions. We consider the equations

\[\begin{array}{rcl}\partial_{t}u(x,t)-\nu\Delta u(x,t)+u(x,t)\cdot\nabla u(x,t)+\nabla p(x,t)&=&f(x)\\ \nabla\cdot u(x,t)&=&0\\ u(x,0)&=&u_{0}(x)\end{array}\]

for all \(x\in\mathbb{T}^{2}\) and \(t\in(0,\infty)\). Here \(u\colon\mathbb{T}^{2}\times(0,\infty)\to\mathbb{R}^{2}\) is a time-dependent vector field representing the velocity, \(p\colon\mathbb{T}^{2}\times(0,\infty)\to\mathbb{R}\) is a time-dependent scalar field representing the pressure and \(f\colon\mathbb{T}^{2}\to\mathbb{R}^{2}\) is a vector field representing the forcing which we take as time-independent for simplicity. The parameter \(\nu\) represents the viscosity. We assume throughout that \(u_{0}\) and \(f\) have average zero over \(\mathbb{T}^{2}\); it then follows that \(u(\cdot,t)\) has average zero over \(\mathbb{T}^{2}\) for all \(t>0\).

Define

\[\mathsf{T}:=\left\{\operatorname{trigonometric\,polynomials}u:\mathbb{T}^{2}\to\mathbb{R}^{2}\,\Big{|}\,\nabla\cdot u=0,\,\int_{\mathbb{T}^{2}}u(x)\,\mathrm{d}x=0\right\}\]

and \(\mathcal{H}\) as the closure of \(\mathsf{T}\) with respect to the norm in \((L^{2}(\mathbb{T}^{2}))^{2}=L^{2}(\mathbb{T}^{2},\mathbb{R}^{2})\). We let \(P:(L^{2}(\mathbb{T}^{2}))^{2}\to\mathcal{H}\) denote the Leray-Helmholtz orthogonal projector. Given \(k=(k_{1},k_{2})^{\mathrm{T}}\), define \(k^{\perp}:=(k_{2},-k_{1})^{\mathrm{T}}\). Then an orthonormal basis for \(\mathcal{H}\) is given by \(\psi_{k}\colon\mathbb{R}^{2}\to\mathbb{R}^{2}\), where

\[\psi_{k}(x):=\frac{k^{\perp}}{|k|}\exp\Bigl{(}\frac{2\pi ik\cdot x}{L}\Bigr{)} \tag{17}\]

for \(k\in\mathbb{Z}^{2}\setminus\{0\}\). Thus for \(u\in\mathcal{H}\) we may write

\[u(x,t)=\sum_{k\in\mathbb{Z}^{2}\setminus\{0\}}u_{k}(t)\psi_{k}(x)\]

where, since \(u\) is a real-valued function, we have the reality constraint \(u_{-k}=-\overline{u_{k}}.\) We define the projection operators \(P_{\lambda}:\mathcal{H}\to\mathcal{H}\) and \(Q_{\lambda}:\mathcal{H}\to\mathcal{H}\) for \(\lambda\in\mathbb{N}\cup\{\infty\}\) by

\[P_{\lambda}u(x,t)=\sum_{|2\pi k|^{2}<\lambda L^{2}}u_{k}(t)\psi_{k}(x),\quad Q_{\lambda}=I-P_{\lambda}.\]

Below we will choose the observation operator \(H\) to be \(P_{\lambda}\).
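The projection \(P_{\lambda}\) is a sharp cutoff in the basis (17) and is straightforward to apply with the FFT. The following Python sketch (our own illustration; the grid size and the wavenumbers are arbitrary) assumes the input velocity field is already divergence-free and mean-zero, so that only the Fourier truncation is needed.

```python
import numpy as np

def P_lambda(u, lam, L=2 * np.pi):
    """Sharp spectral cutoff: keep Fourier modes with |2*pi*k|^2 < lam * L^2.

    u is a real velocity field of shape (2, N, N) sampled on the periodic box
    [0, L)^2; it is assumed to be divergence-free and mean-zero already, so the
    Leray projection is not applied here.
    """
    N = u.shape[-1]
    k = np.fft.fftfreq(N, d=1.0 / N)                 # integer wavenumbers
    k1, k2 = np.meshgrid(k, k, indexing="ij")
    keep = (2 * np.pi) ** 2 * (k1 ** 2 + k2 ** 2) < lam * L ** 2
    u_hat = np.fft.fft2(u, axes=(-2, -1))
    return np.real(np.fft.ifft2(u_hat * keep, axes=(-2, -1)))

# Example: project a divergence-free field built from a single basis function psi_k.
N, L = 64, 2 * np.pi
x = np.linspace(0.0, L, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
kvec = np.array([3.0, 1.0])
kperp = np.array([kvec[1], -kvec[0]]) / np.linalg.norm(kvec)
phase = 2 * np.pi * (kvec[0] * X + kvec[1] * Y) / L
u = np.stack([kperp[0] * np.cos(phase), kperp[1] * np.cos(phase)])

print(np.abs(P_lambda(u, 5.0)).max())        # ~0: this mode is removed by the cutoff
print(np.abs(P_lambda(u, 500.0) - u).max())  # ~0: this mode is retained
```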
We define \\(A=-\\frac{L^{2}}{4\\pi^{2}}P\\Delta\\), the Stokes operator, and, for every \\(s\\in\\mathbb{R}\\), define the Hilbert spaces \\(\\mathcal{H}^{s}\\) to be the domain of \\(A^{s/2}.\\) We note that \\(A\\) is diagonalizedin \\(\\mathcal{H}\\) in the basis comprised of the \\(\\{\\psi_{k}\\}_{k\\in\\mathbb{Z}^{2}\\setminus\\{0\\}}\\) and that, with the normalization employed here, the smallest eigenvalue of \\(A\\) is \\(\\lambda_{1}=1.\\) We use the norm \\(\\|\\cdot\\|_{z}^{2}:=\\langle\\cdot,A^{s}\\cdot\\rangle,\\) the abbreviated notation \\(\\|u\\|\\) for the norm on \\(\\mathcal{V}:=\\mathcal{H}^{1},\\) and \\(|\\cdot|\\) for the norm on \\(\\mathcal{H}:=\\mathcal{H}^{0}.\\) Applying the projection \\(P\\) to the Navier-Stokes equation we may write it as an ODE in \\(\\mathcal{H}\\): \\[\\frac{\\mathrm{d}u}{\\mathrm{d}t}+\\delta Au+\\mathcal{B}(u,u)=f,\\quad u(0)=u_{0}. \\tag{18}\\] Here \\(\\delta=4\\pi^{2}\ u/L^{2}\\) and the term \\(\\mathcal{B}(u,v)\\) is the _symmetric_ bilinear form defined by \\[\\mathcal{B}(u,v)=\\frac{1}{2}P(u\\cdot\ abla v)+\\frac{1}{2}P(v\\cdot\ abla u)\\] for all \\(u,v\\in\\mathcal{V}\\). Finally, with abuse of notation, \\(f\\) is the original forcing, projected into \\(\\mathcal{H}\\). Equation (18) is in the form of equation (1) with \\[\\mathcal{F}(u)=-\\delta Au-\\mathcal{B}(u,u)+f. \\tag{19}\\] See [8] for details of this formulation of the Navier-Stokes equation as an ODE in \\(\\mathcal{H}\\). The following proposition is a classical result which implies the existence of a dissipative semigroup for the ODE (18). See Theorems 9.5 and 12.5 in [29] for a concise overview and [33, 34] for further details. **Proposition 3.1**.: _Assume that \\(u_{0}\\in\\mathcal{H}^{1}\\) and \\(f\\in\\mathcal{H}\\). Then (18) has a unique strong solution on \\(t\\in[0,T]\\) for any \\(T>0:\\)_ \\[u\\in L^{\\infty}\\big{(}(0,T);\\mathcal{H}^{1}\\big{)}\\cap L^{2}\\big{(}(0,T); \\mathcal{H}^{2}\\big{)},\\quad\\frac{du}{dt}\\in L^{2}\\big{(}(0,T);\\mathcal{H} \\big{)}.\\] _Furthermore the equation has a global attractor \\(\\mathcal{A}\\) and there is \\(R\\in(0,\\infty)\\) such that, if \\(u_{0}\\in\\mathcal{A}\\), then the solution from this initial condition exists for all \\(t\\in\\mathbb{R}\\) and \\(\\sup_{t\\in\\mathbb{R}}\\|u(t)\\|^{2}=R.\\)_ We let \\(\\{\\Psi(\\cdot,t):\\mathcal{H}^{1}\\to\\mathcal{H}^{1}\\}_{t\\geq 0}\\) denote the semigroup of solution operators for the equation (18) through \\(t\\) time units. We note that by working with weak solutions, \\(\\Psi(\\cdot,t)\\) can be extended to act on larger spaces \\(\\mathcal{H}^{s}\\), with \\(s\\in[0,1)\\), under the same assumption on \\(f\\); see Theorem 9.4 in [29]. ### 3Dvar We apply the analysis of the previous section to write down the continuous time 3DVAR filter, namely (16) with \\(C(t)=\\widehat{C}\\) constant in time, for the Navier-Stokes equation. We take \\(X=\\mathcal{H}\\) and throughout we assume that the data is found by observing \\(P_{\\lambda}u\\) at discrete times, so that \\(H^{*}=H=P_{\\lambda}\\) and \\(Y=P_{\\lambda}\\mathcal{H}\\). We assume that \\(A\\), \\(\\Gamma_{0}\\) and \\(\\widehat{C}\\) commute and, for simplicity of presentation, suppose that \\[\\widehat{C}=\\omega\\sigma_{0}^{2}A^{-2\\zeta},\\quad\\Gamma_{0}=\\sigma_{0}^{2}A^{ -2\\beta}P_{\\lambda}. \\tag{20}\\] We set \\(\\alpha=\\zeta-\\beta\\). These assumptions correspond to those made in [5] where discrete time filters are studied. 
Note that \\(A^{s/2}\\) is defined on \\(\\mathcal{H}^{s}\\); it is also defined on \\(Y\\) for every \\(s\\in\\mathbb{R}\\), provided that \\(\\lambda\\) is finite. From equations (16), using (19) and the choices for \\(\\widehat{C}\\) and \\(\\Gamma_{0}\\) we obtain \\[\\frac{\\mathrm{d}\\widehat{m}}{\\mathrm{d}t}+\\delta A\\widehat{m}+\\mathcal{B}( \\widehat{m},\\widehat{m})+\\omega A^{-2\\alpha}P_{\\lambda}(\\widehat{m}-u)=f+ \\omega\\sigma_{0}A^{-2\\alpha-\\beta}P_{\\lambda}\\frac{\\mathrm{d}W}{\\mathrm{d}t}, \\quad\\widehat{m}(0)=\\widehat{m}_{0} \\tag{21}\\] where \\(W\\) is a cylindrical Brownian motion in \\(Y\\). In the following we consider the cases of finite \\(\\lambda\\), where the data is in a finite dimensional subspace of \\(\\mathcal{H}\\), and infinite \\(\\lambda\\), where \\(P_{\\lambda}=I\\) and the whole solution is observed. **Lemma 3.2**.: _For \\(\\lambda=\\infty\\) assume that \\(4\\alpha+2\\beta>1.\\) Then the stochastic convolution_ \\[W_{A}(t)=\\int_{0}^{t}e^{\\delta(t-s)A}A^{-2\\alpha-\\beta}P_{\\lambda}dW(s)\\] _has a continuous version in \\(C^{0}([0,T],\\mathcal{V})\\) with all moments \\(\\mathbb{E}\\sup_{[0,T]}\\|W_{A}\\|^{p}\\) finite for all \\(T>0\\) and \\(p>1\\)._ Proof.: If \\(\\lambda<\\infty\\) then the covariance of the driving noise is automatically trace-class as it is finite dimensional; since \\(4\\alpha+2\\beta>1\\) it follows that the covariance of the driving noise is also trace-class when \\(\\lambda=\\infty.\\) The desired result follows from Theorem 5.16 in [12]. For the moments see [12, (5.23)]. It is only in the case of full observations (i.e., \\(\\lambda=\\infty\\)) that we need the additional regularity condition \\(4\\alpha+2\\beta>1\\). This may be rewritten as \\(\\zeta>\\frac{1}{4}+\\frac{1}{2}\\beta\\) and relates the rate of decay, in Fourier space, of the model variance to the observational variance. Although a key driver for our accuracy and stability results (see Remark 4.4 below) will be variance inflation, meaning that the observational variance is smaller than the model variance in the low Fourier modes, this condition on \\(\\zeta\\) allows regimes in which, for high Fourier modes, the situation is reversed. **Proposition 3.3**.: _Assume that \\(u_{0}\\in\\mathcal{A}\\) and let \\(u\\) be the corresponding solution of (18) on the global attractor \\(\\mathcal{A}\\). For \\(\\lambda=\\infty\\) suppose \\(4\\alpha+2\\beta>1\\) and \\(\\alpha>-\\frac{1}{2}\\). Then for any initial condition \\(\\widehat{m}(0)\\in\\mathcal{H}\\) there is a stochastic process \\(\\widehat{m}\\) which is the unique strong solution of (21) in the spaces_ \\[\\widehat{m}\\in L^{2}\\big{(}(0,T),\\mathcal{V}\\big{)}\\cap C^{0}\\big{(}[0,T], \\mathcal{H}\\big{)}\\] _for all \\(T>0\\). Moreover,_ \\[\\mathbb{E}\\|\\widehat{m}\\|_{L^{\\infty}\\big{(}(0,T),\\mathcal{H}\\big{)}}^{2}+ \\mathbb{E}\\|\\widehat{m}\\|_{L^{2}\\big{(}(0,T),\\mathcal{V}\\big{)}}^{2}<\\infty.\\] _To be more precise_ \\[\\widehat{m}\\in L^{2}\\Big{(}\\Omega,C^{0}_{\\mathrm{loc}}\\big{(}[0,\\infty), \\mathcal{H}\\big{)}\\Big{)}\\cap L^{2}\\Big{(}\\Omega,L^{2}_{\\mathrm{loc}}\\big{(}[ 0,\\infty),\\mathcal{V}\\big{)}\\Big{)}\\;.\\] Proof.: The proof of this theorem is well known without the function \\(u\\) and the additional linear term. See for example Theorem 15.3.1 in the book [11] using fixed point arguments based on the mild solution. Another reference is [16, Theorem 3.1] based on spectral Galerkin methods. See also [18] or [30]. 
Nevertheless, our theorem is a straightforward modification of their arguments. For simplicity of presentation we refrain from giving a detailed argument here. The existence and uniqueness is established either by Galerkin methods or fixed-point arguments. The continuity of solutions follows from the standard fixed-point arguments for the mild formulation in the space \(C^{0}\big{(}[0,T],\mathcal{H}\big{)}\). Finally, as we assume the covariance of the Wiener process to be trace-class, the bounds on the moments are a straightforward application of Ito's formula (cf. [12, Theorem 4.17]) to \(|\widehat{m}|^{2}\) and to \(\|\widehat{m}\|^{2}\), in order to derive standard a-priori estimates. This is very similar to the method of proof that we use to study mean square stability in Section 4.1. The additional linear term \(\omega A^{-2\alpha}P_{\lambda}\widehat{m}\) does not change the result in any substantive fashion. If \(\lambda<\infty\) then the proof is essentially identical, as the additional term is a lower order perturbation of the Stokes operator. If \(\lambda=\infty\) then minor modifications of the proof are necessary, but do not change the proof significantly. This is since, for \(\alpha>-\frac{1}{2}\), the additional term \(\omega A^{-2\alpha}\widehat{m}\) is a compact perturbation of the Stokes operator. The additional forcing term, depending on \(u\), is always sufficiently regular for our argument, as we assume \(u\) to be on the attractor (see Proposition 3.1).

**Remark 3.4**.: _For \(\lambda=\infty\) it is possible to extend the preceding result to other ranges of \(\alpha\), but this will change the proof. Hence, for simplicity, for \(\lambda=\infty\) we always assume that \(\alpha>-\frac{1}{2}\)._

_We comment later on the fact that the solutions to (21) generate a stochastic dynamical system. As we need two-sided Wiener processes for this we postpone the discussion to Section 5._

## 4 Forward Accuracy and Stability

We wish to study conditions under which two filters, starting from different points but driven by the same observations, converge (stability); and conditions under which the filter will, asymptotically, track the true signal (accuracy). Establishing such results has been the object of study in control theory for some time, and the paper [32] contains foundational work in both the discrete and continuous time settings. However the infinite dimensional nature of the problem at hand brings significant new challenges to the analysis. The key idea driving the proofs is that, although the Navier-Stokes equations themselves may admit exponentially diverging trajectories, the observations can counteract this instability, provided the observation space is large enough. Roughly speaking, the exponential divergence of the Navier-Stokes equations is dominated by a finite set of low Fourier modes, whilst the rest of the space contracts. If the observations provide information about enough of the low Fourier modes, then this can counteract the instability. This basic idea underlies the accuracy and stability results proved in subsections 4.1 and 4.2.

A key technical estimate in what follows is the following (see [35]):

**Lemma 4.1**.: _For the symmetric bilinear map_

\[\mathcal{B}(u,v)=\tfrac{1}{2}P(u\cdot\nabla v)+\tfrac{1}{2}P(v\cdot\nabla u)\]

_there is a constant \(K^{\prime}\geq 1\) such that for all \(v,w\in\mathcal{V}\)_

\[\langle\mathcal{B}(v,v)-\mathcal{B}(w,w),v-w\rangle\leq K^{\prime}\|w\|\|v-w\|\cdot|v-w|, \tag{22}\]
A key technical estimate in what follows is the following (see [35]): **Lemma 4.1**.: _For the symmetric bilinear map_ \\[\\mathcal{B}(u,v)=\\tfrac{1}{2}P(u\\cdot\ abla v)+\\tfrac{1}{2}P(v\\cdot\ abla u)\\] _there is constant \\(K^{\\prime}\\geq 1\\) such that for all \\(v,w\\in\\mathcal{V}\\)_ \\[\\langle\\mathcal{B}(v,v)-\\mathcal{B}(w,w),v-w\\rangle\\leq K^{\\prime}\\|w\\|\\|v-w \\|\\cdot|v-w|, \\tag{22}\\]\\[|\\langle\\mathcal{B}(w,v),v\\rangle|\\leq K^{\\prime}\\|w\\|\\|v\\|v\\|\\quad\\text{and} \\quad|\\langle\\mathcal{B}(w,v),z\\rangle|\\leq K^{\\prime}\\|v\\|\\|w\\|\\|z\\|\\;. \\tag{23}\\] _Furthermore, for all \\(v\\in\\mathcal{V}\\),_ \\[\\langle\\mathcal{B}(v,v),v\\rangle=0 \\tag{24}\\] Notice that (22) implies that, for \\(K=(K^{\\prime})^{2}/\\delta\\), \\[\\langle\\mathcal{B}(v,v)-\\mathcal{B}(w,w),v-w\\rangle\\leq\\tfrac{1}{2}K\\|w\\|^{2} |v-w|^{2}+\\tfrac{1}{2}\\delta\\|v-w\\|^{2},\\quad\\forall v,w\\in\\mathcal{V}. \\tag{25}\\] This estimate will be used to control the possible exponential divergence of Navier-Stokes trajectories which needs to be compensated for by means of observations. Proof of Lemma 4.1.: We give a brief overview of the main ideas required to prove this well known result. First notice that the assumption \\(K^{\\prime}\\geq 1\\) is without loss of generality. We need this later for simplicity of presentation. Then (22) is a direct consequence of (23) and (24), by using the identity \\[\\mathcal{B}(v,v)-\\mathcal{B}(w,w)=\\mathcal{B}(v+w,v-w)=\\mathcal{B}(v-w,v-w)+2 \\mathcal{B}(w,v-w)\\] For simplicity of presentation, we use the same constant in (22) and (23). For Navier-Stokes it is well-known that \\(\\langle(w\\cdot\ abla)v,v\\rangle=0\\), as the divergence of \\(w\\) is \\(0\\). Thus there is constant \\(c_{1}\\) such that \\[2|\\langle\\mathcal{B}(w,v),v\\rangle|=\\langle(v\\cdot\ abla)w,v\\rangle|\\leq c_{1} \\|w\\|\\|v\\|_{L^{4}}^{2}\\;.\\] Since, in two dimensions \\(H^{1/2}=D(A^{1/4})\\) is embedded into \\(L^{4}\\), there is constant \\(c_{2}\\) such that \\(|\\langle\\mathcal{B}(w,v),z\\rangle|\\leq c_{2}\\|w\\|\\|v\\|_{H^{\\frac{1}{2}}}\\|z\\|_ {H^{\\frac{1}{2}}}\\;\\). The first result in (23) then follows from the interpolation inequality \\(\\|v\\|_{H^{\\frac{1}{2}}}^{2}\\leq c|v\\|\\|v\\|\\) and the second from the embedding \\(\\|v\\|_{H^{\\frac{1}{2}}}\\leq c\\|v\\|\\). Finally, in the following a key role will be played by the constant \\(\\gamma\\) defined as follows: **Assumption 4.2**.: _Let \\(\\gamma\\) be the largest positive constant such that_ \\[\\frac{1}{2}\\gamma|h|^{2}\\leq\\langle\\omega A^{-2\\alpha}P_{\\lambda}h,h\\rangle+ \\frac{1}{2}\\delta\\|h\\|^{2}\\qquad\\text{for all }h\\in\\mathcal{V}. \\tag{26}\\] It is clear that such a \\(\\gamma\\) always exists, and indeed that \\(\\gamma\\geq\\delta\\), as one has \\(\\langle A^{-2\\alpha}P_{\\lambda}h,h\\rangle\\geq 0\\). We will study how \\(\\gamma\\) depends on \\(\\lambda\\) and \\(\\omega\\) in subsequent discussions where we show that, by choosing \\(\\lambda\\) and \\(\\omega\\) large enough, \\(\\gamma\\) can be made arbitrarily large. ### Forward Mean Square Accuracy **Theorem 4.3** (**Accuracy)**.: _Let \\(\\widehat{m}\\) solve (21), and let \\(u\\) solve (18) with initial condition on the global attractor \\(\\mathcal{A}\\). For \\(\\lambda=\\infty\\) assume \\(4\\alpha+2\\beta>1\\) and \\(\\alpha>-\\frac{1}{2}\\). 
Suppose that \\(\\gamma\\), the largest positive number such that (26) holds, satisfies_ \\[\\gamma=KR+\\gamma_{0}\\qquad\\text{for some }\\gamma_{0}>0,\\]_where \\(K\\) is the constant appearing in (25) and \\(R\\), recall, is defined by \\(R=\\sup_{t\\in\\mathbb{R}}\\|u(t)\\|^{2}\\). Then_ \\[\\mathbb{E}|\\widehat{m}(t)-u(t)|^{2}\\leq\\mathrm{e}^{-\\gamma_{0}t}|\\widehat{m}(0)- u(0)|^{2}+\\omega^{2}\\sigma_{0}^{2}\\int_{0}^{t}\\mathrm{e}^{-\\gamma_{0}(t-s)} \\mathrm{trace}_{\\mathcal{H}}\\big{(}A^{-4\\alpha-2\\beta}P_{\\lambda}\\big{)}ds.\\] _As a consequence_ \\[\\limsup_{t\\to\\infty}\\mathbb{E}|\\widehat{m}(t)-u(t)|^{2}\\leq\\tfrac{1}{\\gamma_{ 0}}\\omega^{2}\\sigma_{0}^{2}\\mathrm{trace}_{\\mathcal{H}}\\big{(}A^{-4\\alpha-2 \\beta}P_{\\lambda}\\big{)}.\\] Proof.: Define the error \\(e=\\widehat{m}-u\\) and subtract equation (18) from (21) to obtain \\[de+\\delta Ae=\\Big{(}\\mathcal{B}(u,u)-\\mathcal{B}(\\widehat{m},\\widehat{m})- \\omega A^{-2\\alpha}P_{\\lambda}e\\Big{)}dt+\\omega\\sigma_{0}A^{-2\\alpha-\\beta}P_ {\\lambda}dW\\;.\\] Using the Ito formula from Theorem 4.17 of [12], together with (25), yields \\[\\tfrac{1}{2}d|e|^{2}\\leq \\Big{(}{-\\tfrac{1}{2}\\delta}\\|e\\|^{2}+\\tfrac{1}{2}K\\|u(t)\\|^{2}|e |^{2}-\\langle\\omega A^{-2\\alpha}P_{\\lambda}e,e\\rangle\\Big{)}dt\\] \\[\\qquad+\\langle e,\\omega\\sigma_{0}A^{-2\\alpha-\\beta}P_{\\lambda}dW \\rangle+\\tfrac{1}{2}\\omega^{2}\\sigma_{0}^{2}\\mathrm{trace}_{\\mathcal{H}}\\big{(} A^{-4\\alpha-2\\beta}P_{\\lambda}\\big{)}dt\\;.\\] Here we have used the fact that the projection \\(P_{\\lambda}\\) and \\(A\\) commute. Applying (26) and taking expectations we obtain \\[\\frac{d}{dt}\\mathbb{E}|e(t)|^{2}\\leq-\\big{(}\\gamma-K\\|u(t)\\|^{2}\\big{)}\\cdot \\mathbb{E}|e(t)|^{2}+\\omega^{2}\\sigma_{0}^{2}\\mathrm{trace}_{\\mathcal{H}} \\big{(}A^{-4\\alpha-2\\beta}P_{\\lambda}\\big{)}\\;,\\] But \\(\\sup_{t\\geq 0}\\|u(t)\\|^{2}=R<\\infty\\) by Proposition 3.1 and hence, by assumption on \\(\\gamma\\), \\[\\frac{d}{dt}\\mathbb{E}|e(t)|^{2}\\leq-\\gamma_{0}\\cdot\\mathbb{E}|e(t)|^{2}+ \\omega^{2}\\sigma_{0}^{2}\\mathrm{trace}_{\\mathcal{H}}\\big{(}A^{-4\\alpha-2 \\beta}P_{\\lambda}\\big{)}.\\;,\\] The result follows from a Gronwall argument. **Remark 4.4**.: _We now briefly discuss the choice of parameters to ensure satisfaction of the conditions of Theorem 4.3, and its implications. To this end, notice that \\(K\\) and \\(R\\) are independent of the parameters of the filter, being determined entirely by the Navier-Stokes equation (18) itself. To apply the theorem we need to ensure that \\(\\gamma\\) defined by (26) exceeds \\(KR\\). 
Notice that_

\[\frac{1}{2}\gamma|h|^{2}\leq\langle\omega A^{-2\alpha}P_{\lambda}h,h\rangle+\frac{1}{2}\delta\|h\|^{2}\qquad\text{for all }h\in P_{\lambda}\mathcal{V}\]

_requires that_

\[\frac{1}{2}\gamma\leq\frac{\omega}{|k|^{4\alpha}}+\frac{1}{2}\delta|k|^{2}\qquad\text{for all }|k|^{2}<\lambda.\]

_Since the global minimum of the function \(x\in\mathbb{R}^{+}\mapsto\omega x^{-2\alpha}+\frac{1}{2}\delta x\) is of order \(c(\delta^{2\alpha}\omega)^{1/(2\alpha+1)}\), we see that the maximum value of \(\gamma\) such that (26) holds, \(\gamma_{\max}\), is_

\[\gamma_{\max}=\min\Big{\{}\frac{\delta\lambda L^{2}}{4\pi^{2}},c(\delta^{2\alpha}\omega)^{1/(2\alpha+1)}\Big{\}}.\]

_This demonstrates that, provided \(\lambda\) is large enough, and \(\omega\) is large enough, then the conditions of the theorem are satisfied._

_In summary, these conditions are satisfied provided that enough of the low Fourier modes are observed (\(\lambda\) large enough), and provided that the ratio of the scale of the covariance for the model to that for the observations, \(\omega\), is sufficiently large. Ensuring that the latter is achieved is often termed variance inflation in the applied literature and our theory provides concrete analytical insight into the mechanisms behind it. Furthermore, notice that once \(\lambda\) and \(\omega\) are chosen to ensure this, then the asymptotic mean square error will be small, provided \(\epsilon:=\omega\sigma_{0}\) is small; that is, provided the observational noise is sufficiently small. In this situation the theorem establishes a form of accuracy of the filter since, regardless of the starting point of the filter,_

\[\limsup_{t\to\infty}\mathbb{E}|\widehat{m}(t)-u(t)|^{2}\leq\frac{1}{\gamma_{0}}\epsilon^{2}\mathrm{trace}_{\mathcal{H}}\big{(}A^{-4\alpha-2\beta}P_{\lambda}\big{)}.\]

### Forward Stability in Probability

The aim of this section is to prove that two different solutions of the continuous 3DVAR filter will converge to one another in probability as \(t\to\infty.\) Almost sure and mean square convergence is out of reach in forward time. However, almost sure pullback convergence is possible and we study this in the next section. Throughout this section we define, for \(u\) on the attractor,

\[R^{\prime}=\sup_{t\in\mathbb{R}}|f+\omega A^{-2\alpha}P_{\lambda}u|_{-1}^{2} \tag{29}\]

and we assume that \(R^{\prime}<\infty.\) From this we define

\[R^{\prime\prime}=\frac{K}{\delta^{2}}R^{\prime}+\frac{K}{\delta}\omega^{2}\sigma_{0}^{2}\mathrm{trace}_{\mathcal{H}}\big{(}A^{-4\alpha-2\beta}P_{\lambda}\big{)}. \tag{30}\]

**Theorem 4.5**.: _Let \(\widehat{m}_{i}\) solve (21) with initial condition \(\widehat{m}(0)=\widehat{m}_{i}(0)\) and let \(u\) solve (18) with initial condition on the global attractor \(\mathcal{A}\). For \(\lambda=\infty\) assume that \(4\alpha+2\beta>1\) and \(\alpha>-\frac{1}{2}\).
Let \\(R^{\\prime\\prime}\\) be defined as above, and suppose \\(\\gamma\\), the largest positive number such that (26) holds, satisfies \\(\\gamma=R^{\\prime\\prime}+\\gamma_{0}\\) for some \\(\\gamma_{0}>0\\)._ _Then for all \\(\\eta\\in(0,\\gamma_{0})\\)_ \\[|\\widehat{m}_{1}(t)-\\widehat{m}_{2}(t)|e^{\\eta t}\\to 0\\qquad\\text{in probability as }t\\to\\infty.\\] Proof.: It follows from Lemma 4.6 below that, for any fixed \\(t>0,\\) \\[\\mathbb{P}\\Big{(}|\\widehat{m}_{1}(t)-\\widehat{m}_{2}(t)|^{2}\\leq|\\widehat{m} _{1}(0)-\\widehat{m}_{2}(0)|^{2}e^{-2\\gamma_{0}t}\\Big{)}\\geq\\mathbb{P}\\Big{(} \\frac{1}{t}\\int_{0}^{t}K\\|\\widehat{m}_{2}(s)\\|^{2}ds\\leq\\gamma-\\gamma_{0} \\Big{)}\\;. \\tag{31}\\] Thus to establish the desired convergence in probability, it suffices to establish that the right hand side converges to \\(1\\) as \\(t\\to\\infty.\\)Taking the inner-product of equation (21) with \\(\\widehat{m}\\), applying (24) and using the Ito formula from Theorem 4.17 of [12], we obtain \\[\\tfrac{1}{2}d|\\widehat{m}|^{2}\\leq \\Big{(}-\\delta\\|\\widehat{m}\\|^{2}-\\langle\\omega A^{-2\\alpha}P_{ \\lambda}\\widehat{m},\\widehat{m}\\rangle+\\langle f+\\omega A^{-2\\alpha}P_{ \\lambda}u,\\widehat{m}\\rangle\\Big{)}dt\\] \\[\\qquad\\qquad+\\langle\\widehat{m},\\omega\\sigma_{0}A^{-2\\alpha- \\beta}P_{\\lambda}dW\\rangle+\\tfrac{1}{2}\\omega^{2}\\sigma_{0}^{2}\\mathrm{trace}_ {\\mathcal{H}}\\big{(}A^{-4\\alpha-2\\beta}P_{\\lambda}\\big{)}dt\\] \\[\\leq \\Big{(}-\\frac{1}{2}\\delta\\|\\widehat{m}\\|^{2}+\\frac{1}{2\\delta}|f +\\omega A^{-2\\alpha}P_{\\lambda}u|_{-1}^{2}\\Big{)}dt\\] \\[\\qquad\\qquad+\\langle\\widehat{m},\\omega\\sigma_{0}A^{-2\\alpha- \\beta}P_{\\lambda}dW\\rangle+\\tfrac{1}{2}\\omega^{2}\\sigma_{0}^{2}\\mathrm{trace}_ {\\mathcal{H}}\\big{(}A^{-4\\alpha-2\\beta}P_{\\lambda}\\big{)}dt\\;.\\] Notice that, from the Poincare inequality, we have that \\[d|\\widehat{m}|^{2}\\leq\\Big{(}-\\delta|\\widehat{m}|^{2}+\\frac{1}{\\delta}R^{ \\prime}+\\omega^{2}\\sigma_{0}^{2}\\mathrm{trace}_{\\mathcal{H}}\\big{(}A^{-4 \\alpha-2\\beta}P_{\\lambda}\\big{)}\\Big{)}dt+2\\langle\\widehat{m},\\omega\\sigma_{0 }A^{-2\\alpha-\\beta}P_{\\lambda}dW\\rangle\\;. \\tag{32}\\] From this inequality we can deduce two facts. Taking expectations gives \\[d\\big{(}\\mathbb{E}|\\widehat{m}(t)|^{2}\\big{)}\\leq\\Big{(}-\\delta\\mathbb{E}| \\widehat{m}|^{2}+\\frac{1}{\\delta}R^{\\prime}\\Big{)}dt+\\omega^{2}\\sigma_{0}^{2} \\mathrm{trace}_{\\mathcal{H}}\\big{(}A^{-4\\alpha-2\\beta}P_{\\lambda}\\big{)}dt;,\\] and thus with \\(R^{\\prime\\prime}\\) from (30) \\[\\limsup_{t\\to\\infty}\\mathbb{E}|\\widehat{m}(t)|^{2}\\leq\\frac{1}{\\delta^{2}}R^{ \\prime}+\\frac{1}{\\delta}\\omega^{2}\\sigma_{0}^{2}\\mathrm{trace}_{\\mathcal{H}} \\big{(}A^{-4\\alpha-2\\beta}P_{\\lambda}\\big{)}=\\frac{R^{\\prime\\prime}}{K}. \\tag{33}\\] We also see that \\[\\frac{1}{t}\\int_{0}^{t}|\\widehat{m}(s)|^{2}ds\\leq\\frac{R^{\\prime\\prime}}{K}+ \\frac{1}{t}|\\widehat{m}(0)|^{2}+I(t)\\;, \\tag{34}\\] where we have defined \\[I(t)=\\frac{2}{t}\\int_{0}^{t}\\bigl{\\langle}\\widehat{m}(s),\\omega\\sigma_{0}A^{ -2\\alpha-\\beta}P_{\\lambda}dW(s)\\bigr{\\rangle}\\] Observe that, by the Ito formula, \\[\\mathbb{E}|I(t)|^{2}\\leq\\frac{c}{t^{2}}\\int_{0}^{t}\\mathbb{E}|\\widehat{m}(s)| ^{2}ds\\] for the positive constant \\(c=\\omega^{2}\\sigma_{0}^{2}\\|A^{-2\\alpha-\\beta}P_{\\lambda}\\|_{\\mathcal{L}( \\mathcal{H})}^{2}\\). Using (33) we deduce that \\(I(t)\\to 0\\) in mean square and hence in probability. 
As a consequence we deduce that (34) implies that \\[\\mathbb{P}\\Big{(}\\frac{1}{t}\\int_{0}^{t}K\\|\\widehat{m}(s)\\|^{2}ds\\leq R^{ \\prime\\prime}\\Big{)}\\to 1\\quad\\text{for }t\\to\\infty\\;.\\] This completes the proof. **Lemma 4.6**.: _Let \\(\\widehat{m}_{i}\\) solve (21) with the same \\(u\\) on the attractor but with different initial conditions \\(\\widehat{m}_{i}(0)\\). For \\(\\lambda=\\infty\\) assume that \\(4\\alpha+2\\beta>1\\). Fix \\(t>0\\) and recall that \\(K\\) is the constant appearing in (25). Suppose that \\(\\gamma\\), the largest positive number such that (26) holds, satisfies_ \\[\\frac{1}{t}\\int_{0}^{t}K\\|\\widehat{m}_{2}(s)\\|^{2}ds+\\gamma_{0}\\leq\\gamma\\] _for some \\(\\gamma_{0}>0.\\) Then_ \\[|\\widehat{m}_{1}(t)-\\widehat{m}_{2}(t)|^{2}\\leq e^{-2\\gamma_{0}t}|\\widehat{m }_{1}(0)-\\widehat{m}_{2}(0)|^{2}.\\]Proof.: We define the error \\(e=\\widehat{m}_{1}-\\widehat{m}_{2}\\), subtract equation (21) from itself and take the inner-product with \\(e\\) to obtain, using (25), \\[\\tfrac{1}{2}\\frac{d}{dt}|e|^{2} = \\langle\\mathcal{F}(\\widehat{m}_{1})-\\mathcal{F}(\\widehat{m}_{2}), e\\rangle-\\langle\\omega A^{-2\\alpha}P_{\\lambda}e,e\\rangle\\] \\[\\leq -\\delta\\|e\\|^{2}+K\\|\\widehat{m}_{2}(t)\\|^{2}|e|^{2}-\\langle\\omega A ^{-2\\alpha}P_{\\lambda}e,e\\rangle\\;.\\] Applying (26) we obtain \\[\\tfrac{1}{2}\\frac{d}{dt}|e(t)|^{2}\\leq(K\\|\\widehat{m}_{2}(t)\\|^{2}-\\gamma) \\cdot|e(t)|^{2}\\;.\\] Integrating this inequality yields \\[|e(t)|^{2}\\leq\\exp\\Bigl{(}2\\int_{0}^{t}(K\\|\\widehat{m}_{2}(t)\\|^{2}-\\gamma)ds \\Bigr{)}\\cdot|e(0)|^{2}\\;.\\] This gives the desired result. **Remark 4.7**.: _Satisfying the condition on \\(\\gamma\\) for the stability Theorem 4.5 is harder than for the accuracy Theorem 4.3. This is because \\(R^{\\prime\\prime}\\) can grow with \\(\\omega\\) and so analogous arguments to those used at the end of the previous subsection may fail. However different proofs can be developed, in the case where \\(\\sigma_{0}\\) is sufficiently small, to overcome this effect._ ## 5 Pullback Accuracy and Stability In this section we consider almost sure accuracy and stability results for the 3DVAR algorithm applied to the 2D-Navier-Stokes equation. We use the notion of pullback convergence as pioneered in the theory of stochastic dynamical systems, as we do not expect almost sure results to hold forward in time. Indeed, as shown in the previous section, convergence in probability is typically the result of forward studies of stability. The methodology that we employ derives from the study of semilinear equations driven by additive noise; in particular the Ornstein-Uhlenbeck (OU) process constructed from a modification of the Stokes' equation plays a central role. Properties of this process are described in subsection 5.1, and the necessary properties of the 3DVAR Navier-Stokes filter are discussed in subsection 5.2. In both subsections a key aspect of the analysis concerns the extension of solutions to the whole real line \\(t\\in\\mathbb{R}\\). Subsections 5.3 and 5.4 then concern accuracy and stability for the filter, in the pull-back sense. In the following we define the Wiener process \\[\\mathcal{W}:=\\omega\\sigma_{0}A^{-2\\alpha-\\beta}P_{\\lambda}W,\\] and recall that when \\(\\lambda=\\infty\\) we assume \\(4\\alpha+2\\beta>1\\) and \\(\\alpha>-\\tfrac{1}{2}\\). In this section the driving Brownian motion is considered to be two-sided: \\(\\mathcal{W}\\in C(\\mathbb{R},\\mathcal{H})\\). 
This enables us to study notions of pullback attraction and stability. With this definition, 3DVAR for (18), namely equation (21)), may be written \\[\\frac{d\\widehat{m}}{dt}+\\delta A\\widehat{m}+\\mathcal{B}(\\widehat{m},\\widehat {m})+\\omega A^{-2\\alpha}P_{\\lambda}(\\widehat{m}-u)=f+\\frac{d\\mathcal{W}}{dt},\\quad\\widehat{m}(0)=\\widehat{m}_{0}\\;. \\tag{35}\\] We employ the same notations from the previous sections for the nonlinearity \\(\\mathcal{F}(u)\\), the Stokes operator \\(A\\), the bilinear form \\(\\mathcal{B}\\), and the spaces \\(\\mathcal{H}\\) and \\(\\mathcal{V}\\). ### Stationary Ornstein-Uhlenbeck Processes Let \\(\\phi\\geq 0\\) and define the stationary ergodic OU process \\(Z_{\\phi}\\) as follows, using integration by parts to find the second expression: \\[Z_{\\phi}(t) :=\\int_{-\\infty}^{t}e^{-(t-s)(\\delta A+\\phi)}d\\mathcal{W}(s) \\tag{36a}\\] \\[=\\mathcal{W}(t)-\\int_{-\\infty}^{t}(\\delta A+\\phi)e^{-(t-s)( \\delta A+\\phi)}\\mathcal{W}(s)ds\\;. \\tag{36b}\\] Note that \\(Z_{\\phi}\\) satisfies \\[\\partial_{t}Z_{\\phi}+(\\delta A+\\phi)Z_{\\phi}=\\partial_{t}\\mathcal{W}. \\tag{37}\\] With a slight abuse of notation we rewrite the random variable \\(Z_{\\phi}(0)\\) as \\(Z_{\\phi}(\\mathcal{W})\\), a function of the whole Wiener path \\(t\\mapsto\\mathcal{W}(t)\\). Thus \\(Z_{\\phi}(t)=Z_{\\phi}(\\theta_{t}\\mathcal{W})\\), where \\(\\theta_{t}\\) is the stationary ergodic shift on Wiener space defined by \\[\\theta_{t}\\mathcal{W}(s)=\\mathcal{W}(t+s)-\\mathcal{W}(t)\\qquad\\text{for all }t,s\\in\\mathbb{R}.\\] The noise is always of trace-class, in case either \\(\\lambda<\\infty\\) or \\(4\\alpha+2\\beta>1\\). Recall that by Lemma 3.2, the OU-process \\(Z_{\\phi}\\) has a version with continuous paths in \\(\\mathcal{V}\\). We will always assume this in the following. It is well known that \\(Z_{\\phi}\\) satisfies the Birkhoff ergodic theorem, because it is a stationary ergodic process; we now formulate this fact in the pullback sense. **Theorem 5.1** (**Birkhoff Ergodic Theorem)**.: _For \\(\\lambda=\\infty\\) assume that \\(4\\alpha+2\\beta>1\\). Then_ \\[\\limsup_{s\\to\\infty}\\frac{1}{s}\\int_{-s}^{0}\\|Z_{\\phi}(\\tau)\\|^{2}d\\tau= \\mathbb{E}\\|Z_{\\phi}(0)\\|^{2}\\;.\\] Proof.: Just note that \\(Z_{\\phi}(\\tau)=Z_{\\phi}(\\theta_{\\tau}\\mathcal{W})\\), and thus \\[\\frac{1}{s}\\int_{-s}^{0}\\|Z_{\\phi}(\\tau)\\|^{2}d\\tau=\\frac{1}{s}\\int_{0}^{s}\\|Z _{\\phi}(\\theta_{-\\tau}\\mathcal{W})\\|^{2}d\\tau\\to\\mathbb{E}\\|Z_{\\phi}(\\mathcal{ W})\\|^{2}\\quad\\text{for }s\\to\\infty\\] by the classical version of the Birkhoff ergodic theorem, as \\(\\theta_{-\\tau}\\mathcal{W}\\), \\(\\tau\\geq 0\\) is stationary and ergodic. We can reformulate the implications of the ergodic theorem in several ways. **Corollary 5.2**.: _For \\(\\lambda=\\infty\\) assume that \\(4\\alpha+2\\beta>1\\). 
There exists a random constant \\(C(\\mathcal{W})\\) such that_ \\[\\frac{1}{|s|}\\int_{s}^{0}\\|Z_{\\phi}(\\tau)\\|^{2}d\\tau\\leq C(\\mathcal{W})\\qquad \\text{ for all }s<0.\\] _Furthermore, for any \\(\\epsilon>0\\) there is a random time \\(t_{\\epsilon}(\\mathcal{W})<0\\) such that_ \\[\\frac{1}{|s|}\\int_{s}^{0}\\|Z_{\\phi}(\\tau)\\|^{2}d\\tau\\leq(1+\\epsilon)\\mathbb{E} \\|Z_{\\phi}(0)\\|^{2}\\qquad\\text{ for all }s<t_{\\epsilon}(\\mathcal{W})<0.\\]This result immediately implies \\[\\frac{1}{t-s}\\int_{s}^{t}\\|Z_{\\phi}(\\theta_{\\tau}\\mathcal{W})\\|^{2}d\\tau=\\frac{1 }{t-s}\\int_{s-t}^{0}\\|Z_{\\phi}(\\theta_{\\tau+\\ell}\\mathcal{W})\\|^{2}d\\tau\\leq C( \\theta_{t}\\mathcal{W})\\;.\\] Finally we observe that it is well-known that the Ornstein-Uhlenbeck process \\(Z_{\\phi}\\) is a tempered random variable, which means that \\(Z_{\\phi}(\\theta_{s}\\mathcal{W})\\) grows sub-exponentially for \\(s\\to-\\infty\\), and in fact it grows slower that any polynomial. We now state this precisely. **Lemma 5.3**.: _For \\(\\lambda=\\infty\\) assume that \\(4\\alpha+2\\beta>1\\). Then on a set of measure one_ \\[\\lim_{s\\to-\\infty}\\|Z_{\\phi}(s)\\|\\cdot|s|^{-\\epsilon}=0\\qquad\\text{for all } \\epsilon>0\\;.\\] Proof.: The claim follows from Proposition 4.1.3 of [1] which states that for any positive functional \\(h\\) on Wiener paths such that \\(\\mathbb{E}\\sup_{t\\in[0,1]}h(\\theta_{t}\\mathcal{W})<\\infty\\) one has \\(\\lim_{t\\to\\infty}\\frac{1}{t}h(\\theta_{t}\\mathcal{W})=0\\). Here \\(h(\\mathcal{W})=\\|Z_{\\phi}(\\mathcal{W})\\|^{p}\\), where the moment is finite due to Lemma 3.2. In addition to the preceding almost sure result, the following moment bound on \\(Z_{\\phi}\\) is also useful. It shows that \\(Z_{\\phi}\\) is of order \\(\\sigma_{0}\\) and converges to \\(0\\) for \\(\\phi\\to\\infty\\). In the following it may be useful to play with \\(\\phi\\), and even to use random \\(\\phi\\), as our estimates hold path-wise for all \\(\\phi\\). **Lemma 5.4**.: _For \\(\\lambda=\\infty\\) assume that \\(4\\alpha+2\\beta>1\\). Then, for all \\(p>1\\) there is a constant \\(C_{p}>0\\) such that_ \\[\\Big{(}\\mathbb{E}\\|Z_{\\phi}(t)\\|^{2p}\\Big{)}^{1/p}\\leq C_{p}\\omega^{2}\\sigma_ {0}^{2}\\cdot\\mathrm{trace}\\{(\\delta A+\\phi)^{-1}A^{1-4\\alpha-2\\beta}P_{ \\lambda}\\},\\qquad\\forall\\;t\\in\\mathbb{R}.\\] Proof.: Due to stationarity it is sufficient to consider \\(\\mathbb{E}\\|Z_{\\phi}(0)\\|^{2p}\\). Due to Gaussianity it is enough to consider \\(p=1\\). \\[\\mathbb{E}\\|Z_{\\phi}(0)\\|^{2}=\\mathbb{E}|A^{1/2}Z_{\\phi}(0)|^{2}=\\omega^{2} \\sigma_{0}^{2}\\mathbb{E}\\Big{|}A^{1/2}\\int_{-\\infty}^{0}e^{s(\\delta A+\\phi)} A^{-2\\alpha-\\beta}P_{\\lambda}dW(s)\\Big{|}^{2}\\;.\\] Thus by the Ito-Isometry we obtain (projection \\(P_{\\lambda}\\) commutes with \\(A\\)) \\[\\mathbb{E}\\|Z_{\\phi}(0)\\|^{2} = \\omega^{2}\\sigma_{0}^{2}\\cdot\\mathrm{trace}\\Big{(}\\int_{-\\infty }^{0}e^{2s(\\delta A+\\phi)}A^{1-4\\alpha-2\\beta}P_{\\lambda}ds\\Big{)}\\] \\[= \\frac{1}{2}\\omega^{2}\\sigma_{0}^{2}\\cdot\\mathrm{trace}\\Big{(}( \\delta A+\\phi)^{-1}A^{1-4\\alpha-2\\beta}P_{\\lambda}\\Big{)}\\;.\\] **Remark 5.5**.: _A key conclusion of the preceding lemma is that, if \\(\\epsilon:=\\omega\\sigma_{0}\\) (as defined in Remark 4.4) is small, then all moments of the OU process \\(Z_{\\phi}\\) are small. 
Furthermore, the parameter \\(\\phi\\) can be tuned to make these moments as small as desired._ ### Solutions Continuous Time 2D Navier-Stokes Filter In the following we denote the solution of (35) with initial condition \\(\\widehat{m}(s)=\\widehat{m}_{0}\\) and given Wiener path \\(\\mathcal{W}\\) by \\(S(t,s,\\mathcal{W})\\widehat{m}_{0}\\). This object forms a stochastic dynamical system (SDS); see [10, 9]. We cannot use directly the notion of a random dynamical system, as in [1], because of the non-autonomous forcing \\(u\\) in (35). The fact that the solution of the SPDE (35) can be defined path-wise for every fixed path of \\(\\mathcal{W}\\), can be seen from the well-known method of changing to the variable \\(v:=\\widehat{m}-Z_{\\phi}\\). (see Section 7 of [10] or Chapter 15 of [11], for example). Now, since \\(Z_{\\phi}\\) satisfies (37), subtraction from (35) shows that \\(v\\) solves the random PDE \\[\\frac{d}{dt}v+\\delta Av+\\mathcal{B}(v,v)+2\\mathcal{B}(v,Z_{\\phi})+\\mathcal{B} (Z_{\\phi},Z_{\\phi})+\\omega A^{-2\\alpha}(v+Z_{\\phi}-u)-\\phi Z_{\\phi}=f\\;. \\tag{38}\\] This can be solved for each given path of \\(\\mathcal{W}\\) with methods similar to the ones used for Proposition 3.1 (see also Proposition 3.3). Once, the solution is defined path-wise, the generation of a stochastic dynamical system is straightforward. Let us summarize this in a theorem: **Theorem 5.6** (**Solutions)**.: _For all \\(u_{0}\\) on the attractor \\(\\mathcal{A}\\) the Navier-Stokes equation (18) has a solution \\(u\\in L^{\\infty}(\\mathbb{R},\\mathcal{V}).\\) Now consider the 3DVAR filter written in the form of equation (38). In the case \\(\\lambda=\\infty\\) assume that \\(\\alpha>-\\frac{1}{2}\\) and \\(4\\alpha+2\\beta>1\\). For any \\(s\\in\\mathbb{R}\\), any path of the Wiener process \\(\\mathcal{W}\\), and any initial condition \\(v(s)=\\widehat{m}(s)-Z_{\\phi}(s)\\in\\mathcal{H}\\) equation (38) has a unique solution_ \\[v\\in C^{0}_{\\mathrm{loc}}([s,\\infty),\\mathcal{H})\\cap L^{2}_{\\mathrm{loc}}([s,\\infty),\\mathcal{V})\\;.\\] _This implies the existence of a stochastic dynamical system \\(S\\) for (35)._ Proof.: The first statement follows directly from Proposition 3.1 if we take a solution on the attractor; in that case it follows that, in fact, \\(u\\in L^{\\infty}(\\mathbb{R},\\mathcal{V})\\). Proof of the second statement is discussed prior to the theorem statement. ### Pullback Accuracy Here we show that in the pullback sense solutions \\(\\widehat{m}\\) for large times stay close to \\(u\\), where the error scales with the observational noise strength \\(\\sigma_{0}\\). Recall \\(K\\) and \\(K^{\\prime}\\) defined in Lemma 4.1 and \\(R\\) the uniform bound on \\(u\\) from Proposition 3.1. **Theorem 5.7** (**Pullback Accuracy)**.: _Let \\(\\widehat{m}\\) solve (21), and let \\(u\\) solve (18) with initial condition on the global attractor \\(\\mathcal{A}\\). In the case \\(\\lambda=\\infty\\) assume additionally that \\(4\\alpha+2\\beta>1\\) and \\(\\alpha>-\\frac{1}{2}\\). Suppose that \\(\\gamma\\) from (26) is sufficiently large so that_ \\[K(17\\mathbb{E}\\|Z_{\\phi}\\|^{2}+16R)<\\gamma\\;. 
\\tag{39}\\] _Then there is a random constant \\(r(\\mathcal{W})>0\\) such that for any initial condition \\(\\widehat{m}_{0}\\)_ \\[\\limsup_{s\\to-\\infty}|S(t,s,\\mathcal{W})\\widehat{m}_{0}-u(t)-Z_{\\phi}(\\theta_{t}\\mathcal{W})|^{2}\\leq r(\\theta_{t}\\mathcal{W})\\;,\\] _with a finite constant_ \\[r(\\mathcal{W})=\\frac{4}{\\delta}\\int_{-\\infty}^{0}\\exp\\Bigl{(}\\int_{\\tau}^{0}\\bigl{(}16K(\\|Z_{\\phi}\\|^{2}+R)-\\gamma\\bigr{)}d\\eta\\Bigr{)}\\mathcal{T}^{2}d\\tau\\;,\\] _where \\(\\mathcal{T}:=K^{\\prime}\\|Z_{\\phi}\\|(\\|Z_{\\phi}\\|+2\\|u\\|)+\\phi|Z_{\\phi}|+\\omega|A^{-2\\alpha}P_{\\lambda}Z_{\\phi}|\\), and \\(K^{\\prime}\\) and \\(K:=(K^{\\prime})^{2}/\\delta\\) are as defined in Lemma 4.1._ **Remark 5.8**.: _Regarding Theorem 5.7 we make the following observations:_ * _In (_39_) the contribution_ \\(\\mathbb{E}\\|Z_{\\phi}\\|^{2}\\) _can be made arbitrarily small by choosing_ \\(\\phi\\) _sufficiently large, or is small if_ \\(\\epsilon:=\\omega\\sigma_{0}\\) _is sufficiently small; see Lemma_ 5.4_. Thus_ \\(16RK<\\gamma\\) _is sufficient for accuracy. With a more careful application of Young's inequality, we could also get rid of several factors of_ \\(2\\)_, recovering the condition_ \\(RK<\\gamma\\) _from the forward accuracy result of Theorem_ 4.3_._ * _The assumption that_ \\(u\\) _lies on the attractor could be weakened to a condition on the limsup of_ \\(u\\)_. We state the stronger condition for simplicity of proofs._ * _In the language of random dynamical systems, the theorem states that the stochastic dynamical system_ \\(S(t,s,\\mathcal{W})\\widehat{m}_{0}\\) _has a random pullback absorbing ball centered around_ \\(u(t)\\) _with radius scaling with the size of the stochastic convolution_ \\(Z_{\\phi}\\)_. By Lemma_ 5.4_, this scales as_ \\(\\mathcal{O}(\\epsilon)\\) _for_ \\(\\epsilon=\\omega\\sigma_{0}\\) _sufficiently small; thus we have derived an accuracy result, in the pullback sense._ Proof.: _(Theorem 5.7)._ Consider the difference \\(d=\\widehat{m}-u\\), where \\(\\widehat{m}(t)=S(t,s,\\mathcal{W})\\widehat{m}_{0}\\). This solves \\[\\partial_{t}d+\\delta Ad+\\mathcal{B}(d,d)+2\\mathcal{B}(u,d)+\\omega A^{-2\\alpha}P_{\\lambda}d=\\partial_{t}\\mathcal{W}\\;. \\tag{40}\\] In order to get rid of the noise, define \\(\\psi=d-Z_{\\phi}=\\widehat{m}-u-Z_{\\phi}\\), where \\(Z_{\\phi}\\) is the stationary stochastic convolution. Since \\(Z_{\\phi}\\) solves (37), the process \\(\\psi\\) solves \\[\\partial_{t}\\psi+\\delta A\\psi+\\mathcal{B}(\\psi+Z_{\\phi},\\psi+Z_{\\phi})+2\\mathcal{B}(u,\\psi+Z_{\\phi})+\\omega A^{-2\\alpha}P_{\\lambda}(\\psi+Z_{\\phi})-\\phi Z_{\\phi}=0\\;.
\\tag{41}\\] From this random PDE, we can take the scalar product with \\(\\psi\\) to obtain, using (24), \\[\\tfrac{1}{2}\\partial_{t}|\\psi|^{2}+\\delta\\|\\psi\\|^{2} =-\\langle 2\\mathcal{B}(Z_{\\phi},\\psi)+\\mathcal{B}(Z_{\\phi},Z_{\\phi})+2 \\mathcal{B}(u,\\psi+Z_{\\phi}),\\psi\\rangle\\] \\[\\qquad-\\langle\\omega A^{-2\\alpha}P_{\\lambda}(\\psi+Z_{\\phi})-\\phi Z _{\\phi},\\psi\\rangle\\;.\\] Using (26) and Lemma 4.1 we obtain \\[\\tfrac{1}{2}\\partial_{t}|\\psi|^{2}+\\tfrac{\\delta}{2}\\|\\psi\\|^{2} +\\tfrac{\\gamma}{2}|\\psi|^{2} \\leq-\\langle 2\\mathcal{B}(Z_{\\phi},\\psi)+\\mathcal{B}(Z_{\\phi},Z_{ \\phi})+2\\mathcal{B}(u,\\psi+Z_{\\phi}),\\psi\\rangle\\] \\[\\qquad-\\langle\\omega A^{-2\\alpha}P_{\\lambda}Z_{\\phi}-\\phi Z_{\\phi },\\psi\\rangle\\] \\[\\leq 2K^{\\prime}\\left(\\|Z_{\\phi}\\|+\\|u\\|\\right)\\cdot\\|\\psi\\|\\cdot |\\psi|\\] \\[\\qquad+K^{\\prime}\\|Z_{\\phi}\\|\\cdot(\\|Z_{\\phi}\\|+2\\|u\\|)\\cdot\\|\\psi\\|\\] \\[\\qquad+\\bigl{(}\\phi|Z_{\\phi}|+\\omega|A^{-2\\alpha}P_{\\lambda}Z_{ \\phi}|\\bigr{)}\\cdot|\\psi|\\;.\\]Recall that \\[\\mathcal{T}=K^{\\prime}\\|Z_{\\phi}\\|(\\|Z_{\\phi}\\|+2\\|u\\|)+\\phi|Z_{\\phi}|+\\omega|A^{- 2\\alpha}P_{\\lambda}Z_{\\phi}|\\.\\] Thus we have, using the Young inequality in the form \\(ab\\leq\\frac{1}{\\delta}a^{2}+\\frac{\\delta}{4}b^{2}\\) twice, \\[\\tfrac{1}{2}\\partial_{t}|\\psi|^{2}+\\tfrac{\\delta}{2}\\|\\psi\\|^{2}+ \\tfrac{\\gamma}{2}|\\psi|^{2} \\leq 2K^{\\prime}(\\|Z_{\\phi}\\|+\\|u\\|)\\|\\psi\\|\\psi+\\mathcal{T}\\cdot \\|\\psi\\|\\] \\[\\leq 4K(\\|Z_{\\phi}\\|+\\|u\\|)^{2}|\\psi|^{2}+\\tfrac{1}{\\delta} \\mathcal{T}^{2}+\\tfrac{\\delta}{2}\\|\\psi\\|^{2},\\] since \\(K=(K^{\\prime})^{2}/\\delta.\\) Hence \\[\\partial_{t}|\\psi|^{2}+\\gamma|\\psi|^{2}\\leq 8K(\\|Z_{\\phi}\\|+\\|u\\|)^{2}\\cdot| \\psi|^{2}+\\tfrac{2}{\\delta}\\mathcal{T}^{2}\\.\\] Comparison principle with \\(\\psi(s)=\\widehat{m}_{0}-u(s)-Z_{\\phi}(s)\\) yields (using the bound on \\(u\\) and \\((a+b)^{2}\\leq 2a^{2}+2b^{2}\\)) \\[|\\psi(t)|^{2} \\leq|\\widehat{m}_{0}-u(s)-Z_{\\phi}(s)|^{2}\\exp\\Big{(}\\int_{s}^{ t}[16K(\\|Z_{\\phi}\\|^{2}+R)-\\gamma]dr\\Big{)}\\] \\[+\\frac{2}{\\delta}\\int_{s}^{t}\\exp\\Big{(}\\int_{r}^{t}[16K(\\|Z_{ \\phi}\\|^{2}+R)-\\gamma]d\\tau\\Big{)}\\mathcal{T}^{2}dr\\] Thus, we can now use Birkhoffs theorem and the sub-exponential growth for \\(Z_{\\phi}\\). We obtain for \\(\\gamma\\) sufficiently large (as asserted by the Theorem), that there is a random time \\(t_{0}(\\mathcal{W})<0\\) such that for all \\(s<t_{0}(\\mathcal{W})<0\\) \\[|\\psi(t)|^{2}\\leq\\frac{4}{\\delta}\\int_{s}^{t}\\exp\\Big{(}\\int_{r}^{t}[16K(\\|Z_ {\\phi}\\|^{2}+R)-\\gamma]d\\tau\\Big{)}\\mathcal{T}^{2}dr\\.\\] Recall that \\(\\psi(t)=S(t,s,\\mathcal{W})\\widehat{m}_{0}-Z(t)-u(t)\\). This finishes the proof, as the right hand side is almost surely a finite random constant, due to Birkhoffs ergodic theorem and sub-exponential growth of \\(Z_{\\phi}\\) and hence \\(\\mathcal{T}^{2}\\) (see Lemma 5.3). ### Pullback Stability Now we verify that under suitable conditions all solutions of (35) pullback converge exponentially fast towards each other. We make the assumption that Birkhoff bounds hold for the solution \\(\\widehat{m}\\) (see theorem statement below to make this assumption precise). These bounds do not follow directly from Birkhoffs ergodic theorem, as the equation is non-autonomous due to the presence of \\(u\\). 
Whilst it should be possible to establish such bounds, using the techniques in [17] or [14], doing so is technically involved, as one needs to use random \\(\\phi\\)'s in the definition of \\(Z_{\\phi}\\). In order to keep the presentation at a reasonable level, we refrain from giving details on this point. **Theorem 5.9** (Exponential Stability).: _Assume there is one initial condition \\(\\widehat{m}_{0}^{(1)}\\) such that the corresponding solution \\(S(t,s,\\mathcal{W})\\widehat{m}_{0}^{(1)}\\) satisfies Birkhoff-bounds. To be more precise, we assume that \\(\\gamma\\) from (26) is sufficiently large that, for some \\(\\eta>0\\) and \\(K=(K^{\\prime})^{2}/\\delta\\) from Lemma 4.1,_ \\[\\limsup_{s\\to-\\infty}\\frac{4K}{t-s}\\int_{s}^{t}\\|S(\\tau,s,\\mathcal{W})\\widehat {m}_{0}^{(1)}\\|^{2}d\\tau<\\gamma-2\\eta. \\tag{42}\\] _Let \\(\\widehat{m}_{0}^{(2)}\\) be any other initial condition. Then_ \\[\\lim_{s\\to-\\infty}|S(t,s,\\mathcal{W})\\widehat{m}_{0}^{(1)}-S(t,s,\\mathcal{W}) \\widehat{m}_{0}^{(2)}|\\cdot\\mathrm{e}^{\\eta(t-s)}=0.\\]Recall we verified in Theorem 5.7 that (35) has a random pullback absorbing set in \\(L^{2}\\) centered around \\(u(t)\\). Together with Theorem 5.9 this immediately implies that Equation (35) has a random pullback attractor in \\(L^{2}\\) consisting of a single point that attracts all solutions. Let us remark, that we did not show that the attractor also pullback-attracts tempered bounded sets, but this is a straightforward modification. Proof.: _(Theorem 5.9)_ Define here \\(v=\\widehat{m}_{1}-\\widehat{m}_{2}\\), where \\(\\widehat{m}_{i}(t)=S(t,s,\\mathcal{W})\\widehat{m}_{0}^{(i)}\\) are solutions of (35) with different initial conditions. It is easy to see by the symmetry of \\(\\mathcal{B}\\) that \\[\\partial_{t}v+\\delta Av+\\mathcal{B}(\\widehat{m}_{1}+\\widehat{m}_{2},v)+ \\omega A^{-2\\alpha}P_{\\lambda}v=0\\] or \\[\\partial_{t}v+\\delta Av+2\\mathcal{B}(\\widehat{m}_{1},v)-\\mathcal{B}(v,v)+ \\omega A^{-2\\alpha}P_{\\lambda}v=0\\;.\\] Thus \\[\\frac{1}{2}\\partial_{t}|v|^{2}+\\delta\\|v\\|^{2}+\\omega\\langle A^{-2\\alpha}P_{ \\lambda}v,v\\rangle\\leq 2K^{\\prime}\\|\\widehat{m}_{1}\\|\\|v\\|\\;.\\] By (26) \\[\\partial_{t}|v|^{2}+\\delta\\|v\\|^{2}+\\gamma|v|^{2}\\leq 4K^{\\prime}\\|\\widehat{m }_{1}\\|\\|v\\|\\;.\\] Hence, using Young's inequality (\\(ab\\leq\\frac{1}{4\\delta}a^{2}+\\delta b^{2}\\)) with \\(K=(K^{\\prime})^{2}/\\delta\\) \\[\\partial_{t}|v(t)|^{2}+\\gamma|v|^{2}\\leq 4K\\|\\widehat{m}_{1}\\|^{2}|v|^{2}\\;.\\] Thus, using the comparison principle, \\[|v(t)|^{2}\\leq|v(s)|^{2}\\exp\\bigl{(}\\int_{s}^{t}[4K\\|\\widehat{m}_{1}\\|^{2}- \\gamma]dr\\bigr{)}.\\] This converges to \\(0\\) exponentially fast, provided \\(\\gamma\\) is sufficiently large, as \\(v(s)=\\widehat{m}_{0}^{(1)}-\\widehat{m}_{0}^{(1)}\\). Moreover, \\[|v(t)|^{2}\\mathrm{e}^{2\\eta(t-s)}\\leq|v(s)|^{2}\\exp\\bigl{(}(t-s)r(t,s)\\bigr{)}.\\] with \\(r(t,s)=\\frac{1}{t-s}\\int_{s}^{t}[4K\\|\\widehat{m}_{1}\\|^{2}d\\tau-\\gamma+2\\eta\\) and \\(\\limsup_{s\\to-\\infty}r(t,s)<0\\) by assumption. This implies the claim of the theorem. ## 6 Numerical Results In this section we study the SPDE (21) by means of numerical experiments, illustrating the results of the previous sections. 
We invoke a split-step scheme to solve equation (21), in which we compose numerical integration of the Navier-Stokes equation (18) with numerical solution of the Ornstein-Uhlenbeck process \\[\\frac{d\\widehat{m}}{dt}+\\omega A^{-2\\alpha}(\\widehat{m}-u)=\\omega\\sigma_{0}A^ {-2\\alpha-\\beta}\\frac{dW}{dt},\\quad\\widehat{m}(0)=\\widehat{m}_{0}, \\tag{43}\\] at each step. The Navier-Stokes equation (18) itself is solved by a pseudo-spectral method based on the Fourier basis defined through (17), whilst the Ornstein-Uhlenbeck process is approximated by the Euler-Maruyama scheme [24]. All the examples concern the case \\(\\lambda=\\infty\\) only; however similar results are obtained for finite, but sufficiently large, \\(\\lambda\\). ### Forward Accuracy In this section, we will illustrate the results of Theorem 4.3. We will let \\(\\alpha=1/2\\) throughout; since \\(\\beta\\) is always non-negative the trace-class noise condition \\(4\\alpha+2\\beta>1\\) is always satisfied. Notice that the parameter \\(\\omega\\) sets a time-scale for relaxation towards the true signal, and \\(\\sigma_{0}\\) sets a scale for the size of fluctuations about the true signal. The parameter \\(\\beta\\) rescales the fluctuation size in the observational noise at different wavevectors with respect to the relaxation time. First we consider setting \\(\\beta=0.\\) In Fig. 1 we show numerical experiments with \\(\\omega=100\\) and \\(\\sigma_{0}=0.05.\\) We see that the noise level on top of the signal in the low modes is almost \\(O(1),\\) and that the high modes do not synchronize at all; the total error remains \\(O(1)\\) although trends in the signal are followed. On the other hand, for the smaller value of \\(\\sigma_{0}=0.005,\\) still with \\(\\omega=100,\\) the noise level on the signal in the low modes is moderate, the high modes synchronize sufficiently well, and the total error is small; this is shown in Fig. 2. Now we consider the case \\(\\beta=1\\). Again we take \\(\\omega=100\\) and \\(\\sigma_{0}=0.05\\) and \\(0.005\\) in Figures 3 and 4, respectively. The synchronization is stronger than that observed for \\(\\beta=0\\) in each case. This is because the forcing noise decays more rapidly for large wavevectors when \\(\\beta\\) is increased, as can be observed in the relatively smooth trajectories of the high modes of the estimator. For the case when \\(\\sigma_{0}=0\\) we recover a (non-stochastic) PDE for the estimator \\(\\widehat{m}\\). The values of \\(\\sigma_{0}\\) and \\(\\beta\\) are irrelevant. The value of \\(\\omega\\) is the critical parameter in this case. For values of \\(\\omega\\) of \\(O(100)\\) the convergence is exponentially fast to machine precision. For values of \\(\\omega\\) of \\(O(1)\\) the estimator does not exhibit stable behaviour. For intermediate values, the estimator may approach the signal and remain bounded and still an \\(O(1)\\) distance away (see the case \\(\\omega=10\\) in Fig. 5), or else it may come close to synchronizing (see the case \\(\\omega=30\\) in Fig. 6). ### Forward Stability This section will provide numerical evidence supporting Theorem 4.5. In order to investigate the stability of estimators we reproduce ensembles of solutions of equation (21), for a fixed realization of \\(W(t),\\) and a family of initial conditions. We let \\(\\beta=0\\) throughout this section, and we always choose values of \\(\\alpha\\) which ensure that the trace class condition on the noise, \\(4\\alpha+2\\beta>1,\\) is satisfied. 
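To make the structure of these experiments concrete, the following Python/NumPy sketch reproduces the kind of split-step ensemble computation used here, under strong simplifying assumptions: \\(A\\) is taken diagonal with stand-in eigenvalues, the full pseudo-spectral Navier-Stokes step is replaced by a placeholder function, and all numerical values (number of modes, step size, ensemble size, initial spread) are illustrative choices rather than those used to produce the figures.

```python
import numpy as np

# Minimal split-step sketch for the continuous-time 3DVAR filter (21) on a
# spectral truncation: A is taken diagonal with stand-in eigenvalues |k|^2,
# the Navier-Stokes step is a placeholder, and the Ornstein-Uhlenbeck part
# (43) is advanced with Euler-Maruyama.  One noise path is shared by the
# whole ensemble, mimicking a fixed realization of W(t).

rng = np.random.default_rng(0)

n_modes = 64                           # number of retained Fourier modes (assumption)
lam = np.arange(1.0, n_modes + 1)**2   # stand-in for the eigenvalues |k|^2 of A
alpha, beta = 0.5, 0.0
omega, sigma0 = 100.0, 0.005
dt, n_steps, n_ens = 1e-3, 5000, 5

def navier_stokes_step(v, dt):
    """Placeholder for one pseudo-spectral Navier-Stokes step; here plain
    Stokes decay, only so that the sketch runs end to end."""
    return v * np.exp(-lam * dt)

# true signal and an ensemble of estimators with inflated initial spread,
# loosely mimicking m^(k)(0) ~ N(0, 30^2 C_hat) for a diagonal covariance
u = rng.normal(size=n_modes) / lam
ensemble = [30.0 * rng.normal(size=n_modes) / lam for _ in range(n_ens)]

nudge = omega * lam**(-2.0 * alpha)                 # diagonal of omega A^{-2 alpha}
forcing = omega * sigma0 * lam**(-2.0 * alpha - beta)

for _ in range(n_steps):
    u = navier_stokes_step(u, dt)
    dW = np.sqrt(dt) * rng.normal(size=n_modes)     # same increment for every member
    ensemble = [navier_stokes_step(m, dt) for m in ensemble]
    ensemble = [m - dt * nudge * (m - u) + forcing * dW for m in ensemble]

spread = max(np.linalg.norm(m - ensemble[0]) / np.linalg.norm(ensemble[0])
             for m in ensemble[1:])
err = np.linalg.norm(ensemble[0] - u) / np.linalg.norm(u)
print(f"relative ensemble spread {spread:.2e}, relative error to signal {err:.2e}")
```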
Let \\(m^{(k)}(t)\\) be the solution at time \\(t\\) of (21) where the initial conditions are drawn from a Gaussian whose covariance is proportional to the model covariance: \\(m^{(k)}(0)\\sim\\mathcal{N}(0,30^{2}\\widehat{C})\\). First we consider \\(\\alpha=1/2\\). Figure 7 corresponds to parameters given in Fig. 1 of section 6.1. The top figure simply shows the ensemble of trajectories, while the bottom figure shows the convergence of \\(|m^{(k)}(t)-m^{(1)}(t)|/|m^{(1)}(t)|\\) for \\(k>1\\). Notice the trajectories converge to each other, indicating stability. But the trajectories here do not converge to the truth (or driving signal). This is because the neighbourhood of the signal which bounds the estimators is not small. The next image, Fig. 8, shows results for the smaller value of \\(\\sigma_{0}=0.005\\) corresponding to Fig. 2 of section 6.1. Notice the rate of convergence of the trajectories to each other (bottom) is very similar to the previous case, indicating that there is again stability. However, this time the neighbourhood of the signal which bounds the estimators is small, and so they are indeed accurate. Fig. 9 shows the results for the larger value of \\(\\alpha=1\\) (still with \\(\\beta=0\\)). In this case, there is no stability, i.e. the trajectories do not converge to each other (bottom), and also no convergence to the truth (bottom right of the top panels), although all trajectories do remain in a neighbourhood of the truth and the low wavevector modes converge (top left), so there is accuracy with a large bound. Furthermore, the distance of the trajectories from each other is similar to the distance from the truth, so the attractor in this case may be similar to the attractor of the underlying Navier-Stokes equation.
Figure 1: Trajectories of various modes of the estimator \\(\\widehat{m}\\) and the signal \\(u\\) are depicted above for \\(\\beta=0\\) and \\(\\sigma_{0}=0.05\\), along with the total relative error in the \\(L^{2}\\) norm, \\(|\\widehat{m}-u|/|u|\\).
Figure 2: Trajectories of various modes of the estimator \\(\\widehat{m}\\) and the signal \\(u\\) are depicted above for \\(\\beta=0\\) and \\(\\sigma_{0}=0.005\\), along with the relative error in the \\(L^{2}\\) norm, \\(|\\widehat{m}-u|/|u|\\).
Figure 3: Trajectories of various modes of the estimator \\(\\widehat{m}\\) and the signal \\(u\\) are depicted above for \\(\\beta=1\\) and \\(\\sigma_{0}=0.05\\), along with the relative error in the \\(L^{2}\\) norm, \\(|\\widehat{m}-u|/|u|\\).
Figure 4: Trajectories of various modes of the estimator \\(\\widehat{m}\\) and the signal \\(u\\) are depicted above for \\(\\beta=1\\) and \\(\\sigma_{0}=0.005\\), along with the relative error in the \\(L^{2}\\) norm, \\(|\\widehat{m}-u|/|u|\\).
### Pullback Accuracy and Stability Finally, in this section, we illustrate Theorem 5.7. As the subtle differences between forward and pullback accuracy and stability elude standard numerical simulation, we do not feel it is appropriate to explore this in further detail numerically. So, this section will be brief. We include a single image illustrating the equivalence of the above experiments in Figures 8, 7, and 9 to the traditional notion of pullback attractor in the case that the attractor is a point: Figure 10. ## 7 Conclusions Data assimilation is important in a range of physical applications where it is of interest to use data to improve output from computational models. Analysis of the various algorithms used in practice is in its infancy.
The work herein contains analysis of an algorithm, 3DVAR, which is prototypical of more complex Gaussian approximations that are widely used in applications. In particular we have studied the high frequency in time observation limit of 3DVAR, leading to a stochastic PDE. We have demonstrated mathematically how variance inflation, widely used by practitioners, stabilizes, and makes accurate, this filter, complementing the theory in [5] which concerns low frequency in time observations. It is to be expected that the analytical tools developed here and in [5] can be built upon to study more complex algorithms, such as the extended and ensemble Kalman filters, variants on which are used in operational weather forecasting. This will form a focus of our future work.
Figure 5: Trajectories of various modes of the estimator \\(\\widehat{m}\\) and the signal \\(u\\) are depicted above for \\(\\sigma_{0}=0\\) and \\(\\omega=10\\), along with the relative error in the \\(L^{2}\\) norm, \\(|\\widehat{m}-u|/|u|\\).
Figure 6: Trajectories of various modes of the estimator \\(\\widehat{m}\\) and the signal \\(u\\) are depicted above for \\(\\sigma_{0}=0\\) and \\(\\omega=30\\), along with the relative error in the \\(L^{2}\\) norm, \\(|\\widehat{m}-u|/|u|\\).
Figure 7: The above panels correspond to Fig. 1 from the text, except illustrating stability by an ensemble of estimators. The top set of panels are the same as in Fig. 1, while the bottom panel shows stability by convergence of the estimators to each other.
Figure 8: The above panels correspond to Fig. 2 from the text, except illustrating stability by an ensemble of estimators. The top set of panels are the same as in Fig. 2, while the bottom panel shows stability by convergence of the estimators to each other.
Figure 9: The above panels correspond to the same parameter values as Fig. 8 above, except \\(\\alpha=1\\). Panels are the same. There is no stability in this case.
Figure 10: The same as Figure 8, except the initial ensemble is initiated at 3 separate times: \\(t_{1},t_{2}\\), and \\(t_{3}\\). Clearly the only relevant interval of time is for \\(t>t_{3}\\). All trajectories converge to each other.
## References
* [1] L. Arnold. _Random Dynamical Systems_. Springer, New York, 1998.
* [2] A. Bain and D. Crisan. _Fundamentals of Stochastic Filtering_. Springer Verlag, 2008.
* [3] A. Bennett. _Inverse Modeling of the Ocean and Atmosphere_. Cambridge, 2002.
* [4] A. Beskos, D. Crisan, and A. Jasra. On the stability of sequential Monte Carlo methods in high dimensions. _arXiv preprint arXiv:1103.3965_, 2011.
* [5] C. E. A. Brett, K. F. Lam, K. J. H. Law, D. S. McCormick, M. R. Scott, and A. M. Stuart. Stability of filters for the Navier-Stokes equation. _arXiv preprint arXiv:1110.2527_, 2011.
* [6] A. Carrassi, M. Ghil, A. Trevisan, and F. Uboldi. Data assimilation as a nonlinear dynamical systems problem: Stability and convergence of the prediction-assimilation system. _Chaos: An Interdisciplinary Journal of Nonlinear Science_, 18:023112, 2008.
* [7] A. Chorin, M. Morzfeld, and X. Tu. Implicit particle filters for data assimilation. _Communications in Applied Mathematics and Computational Science_, page 221, 2010.
* [8] P. Constantin and C. Foias. _Navier-Stokes equations_. University of Chicago Press, 1988.
* [9] H. Crauel, A. Debussche, and F. Flandoli. Random attractors. _J. Dynam. Differential Equations_, 9:307-341, 1995.
* [10] H. Crauel and F. Flandoli. Attractors for random dynamical systems. _Prob. Theory and Relat. Fields_, 100:365-393, 1994.
* [11] G. Da Prato and J. Zabczyk. _Ergodicity for infinite dimensional systems_, volume 229. Cambridge University Press, 1996.
* [12] G. Da Prato and J. Zabczyk. _Stochastic equations in infinite dimensions_. Cambridge University Press, 2008.
* [13] A. Doucet, N. De Freitas, and N. Gordon. _Sequential Monte Carlo methods in practice_. Springer Verlag, 2001.
* [14] A. Es-Sarhir and W. Stannat. Improved moment estimates for invariant measures of semilinear diffusions in Hilbert spaces and applications. _J. Funct. Anal._, 259(5):1248-1272, 2010.
* [15] G. Evensen. _Data Assimilation: the Ensemble Kalman Filter_. Springer Verlag, 2009.
* [16] F. Flandoli. Dissipativity and invariant measures for stochastic Navier-Stokes equations. _NoDEA, Nonlinear Differ. Equ. Appl._, 1(4):403-423, 1994.
* [17] F. Flandoli and D. Gatarek. Martingale and stationary solutions for stochastic Navier-Stokes equations. _Probab. Theory Relat. Fields_, 102(3):367-391, 1995.
* [18] F. Flandoli and B. Maslowski. Ergodicity of the 2D Navier-Stokes equation under random perturbations. _Commun. Math. Phys._, 172(1):119-141, 1995.
* [19] M. Hairer and J.C. Mattingly. Ergodicity of the 2D Navier-Stokes equations with degenerate stochastic forcing. _Annals of Mathematics_, 164:993-1032, 2006.
* [20] J. Harlim and A.J. Majda. Filtering nonlinear dynamical systems with linear stochastic models. _Nonlinearity_, 21:1281, 2008.
* [21] A.C. Harvey. _Forecasting, Structural Time Series Models and the Kalman Filter_. Cambridge University Press, 1991.
* [22] K. Hayden, E. Olson, and E.S. Titi. Discrete data assimilation in the Lorenz and 2D Navier-Stokes equations. _Physica D: Nonlinear Phenomena_, 2011.
* [23] E. Kalnay. _Atmospheric Modeling, Data Assimilation, and Predictability_. Cambridge University Press, 2003.
* [24] P.E. Kloeden, E. Platen, and H. Schurz. Stochastic differential equations. _Numerical Solution of SDE Through Computer Experiments_, pages 63-90, 1994.
* [25] A.C. Lorenc. Analysis methods for numerical weather prediction. _Quarterly Journal of the Royal Meteorological Society_, 112(474):1177-1194, 1986.
* [26] A.J. Majda, J. Harlim, and B. Gershgorin. Mathematical strategies for filtering turbulent dynamical systems. _Dynamical Systems_, 27(2):441-486, 2010.
* [27] J.C. Mattingly. Exponential convergence for the stochastically forced Navier-Stokes equations and other partially dissipative dynamics. _Communications in Mathematical Physics_, 230(3):421-462, 2002.
* [28] E. Olson and E.S. Titi. Determining modes for continuous data assimilation in 2D turbulence. _Journal of Statistical Physics_, 113(5):799-840, 2003.
* [29] J. C. Robinson. _Infinite-Dimensional Dynamical Systems_. Cambridge Texts in Applied Mathematics. Cambridge University Press, Cambridge, 2001.
* [30] B. Schmalfuss. Bemerkungen zur zweidimensionalen stochastischen Navier-Stokes-Gleichung. _Math. Nachr._, 131:19-32, 1987.
* [31] T. Snyder, T. Bengtsson, P. Bickel, and J. Anderson. Obstacles to high-dimensional particle filtering. _Monthly Weather Review_, 136:4629-4640, 2008.
* [32] T.J. Tarn and Y. Rasis. Observers for nonlinear stochastic systems. _IEEE Transactions on Automatic Control_, 21(4):441-448, 1976.
* [33] R. Temam. _Navier-Stokes Equations and Nonlinear Functional Analysis_. Number 66. Society for Industrial Mathematics, 1995.
* [34] R. Temam. _Infinite-Dimensional Dynamical Systems in Mechanics and Physics_, volume 68 of _Applied Mathematical Sciences_. Springer-Verlag, New York, second edition, 1997.
* [35] R. Temam. _Navier-Stokes equations_.
AMS Chelsea Publishing, Providence, RI, 2001. * [36] Z. Toth and E. Kalnay. Ensemble forecasting at NCEP and the breeding method. _Monthly Weather Review_, 125:3297, 1997. * [37] A. Trevisan and L. Palatella. Chaos and weather forecasting: the role of the unstable subspace in predictability and state estimation problems. _International Journal of Bifurcation and Chaos_, 21(12):3389-3415, 2011. * [38] P.J. Van Leeuwen. Particle filtering in geophysical systems. _Monthly Weather Review_, 137:4089-4114, 2009. * [39] PJ van Leeuwen. Nonlinear data assimilation in geosciences: an extremely efficient particle filter. _Quarterly Journal of the Royal Meteorological Society_, 136(653):1991-1999, 2010.
The 3DVAR filter is prototypical of methods used to combine observed data with a dynamical system, online, in order to improve estimation of the state of the system. Such methods are used for high dimensional data assimilation problems, such as those arising in weather forecasting. To gain understanding of filters in applications such as these, it is hence of interest to study their behaviour when applied to infinite dimensional dynamical systems. This motivates study of the problem of accuracy and stability of 3DVAR filters for the Navier-Stokes equation. We work in the limit of high frequency observations and derive continuous time filters. This leads to a stochastic partial differential equation (SPDE) for state estimation, in the form of a damped-driven Navier-Stokes equation, with mean-reversion to the signal, and spatially-correlated time-white noise. Both forward and pullback accuracy and stability results are proved for this SPDE, showing in particular that when enough low Fourier modes are observed, and when the model uncertainty is larger than the data uncertainty in these modes (variance inflation), then the filter can lock on to a small neighbourhood of the true signal, recovering from order one initial error, if the error in the observations modes is small. Numerical examples are given to illustrate the theory.
# Spectral Unmixing: A Derivation of the Extended Linear Mixing Model from the Hapke Model Lucas Drumetz, Jocelyn Chanussot, and Christian Jutten, L. Drumetz is with IMT Atlantique, Lab-STICC, UBL, Technopole Brest-Iroise CS 83818, 29238 Brest Cedex 3, France (e-mail: [email protected])J. Chanussot and C. Jutten are Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, 38000 Grenoble, France. (e-mail: {jocelyn.chanussot,christian.jutten}@jipsa-lab.grenoble-inp.fr).This work has been supported by the 2012 ERC Advanced Grant project CHESS (Grant # 320684), as well as the project ANR-DGA APHYPS, under grant ANR-16 ASTR-0027-01. ## I Introduction Hyperspectral imaging provides information in (typically) hundreds of wavelengths in the visible and near infrared domains of the electromagnetic spectrum. The spectral resolution is then much finer than that of color or multispectral images. However, the spatial resolution is conversely coarser. Several distinct materials can then be present in the field of view of a single pixel. The observations captured at the sensor level are then mixtures of the contribution of each material. The inverse problem, called unmixing, aims at identifying the spectra of the pure materials present in the scene (called _endmembers_) and to estimate their relative proportions in each pixel (called _fractional abundances_). Usually, a linear mixing model (LMM) models the relationship between the observed data, the endmembers and their abundances [1] and writes \\(\\mathbf{X}=\\mathbf{SA}+\\mathbf{E}\\). The image is represented as a matrix \\(\\mathbf{X}\\in\\mathbb{R}^{L\\times N}\\), where \\(L\\) is the number of spectral bands, and \\(N\\) is the number of pixels. The endmembers \\(\\mathbf{s}_{p}\\), \\(p=1, ,P\\) are gathered in the columns of a matrix \\(\\mathbf{S}\\in\\mathbb{R}^{L\\times P}\\), where \\(P\\) is the number of materials. The abundance coefficients for each pixel and each material are stored in a matrix \\(\\mathbf{A}\\in\\mathbb{R}^{P\\times N}\\), and \\(\\mathbf{E}\\) is an additive noise. The abundances are proportions, usually constrained to be positive, and to sum to one in each pixel. Geometrically, the LMM constrains the data to lie in a simplex spanned by the endmembers. In many cases, the LMM is a reasonable approximation of the physics of the mixtures. Nevertheless, a key limitation is related to possible nonlinear mixing phenomena, e.g. in urban scenarios or tree canopies, when light bounces on several materials before reaching the sensor. Another situation is intimate mixing phenomena in particulate media [2]. The other important limitation, if not predominant, comes from the underlying assumption in the LMM that each endmember is explained by a unique spectral signature. This is a convenient approximation, but an endmember is actually more accurately described by a collection of signatures, which account for the intra-class variability of that material [3, 4]. Spectral variability approaches of the literature essentially boil down to being able to estimate variants of the materials' spectra in each pixel. Many physical phenomena can induce variations on the spectra of pure materials, be it a change in their physico-chemical composition, or the topography of the scene, which locally changes the incidence angle of the light and the viewing angle of the sensor. This phenomenon is referred to as _endmember variability_[5, 6, 7]. 
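As a concrete point of reference for the variability models discussed below, the following short Python/NumPy sketch generates a toy dataset under the plain LMM \\(\\mathbf{X}=\\mathbf{S}\\mathbf{A}+\\mathbf{E}\\), with abundances that are nonnegative and sum to one in each pixel. The dimensions, the random endmembers and the Dirichlet draw used to produce admissible abundances are illustrative assumptions, not data from this letter.

```python
import numpy as np

rng = np.random.default_rng(0)

L, N, P = 100, 500, 3                 # bands, pixels, endmembers (assumed sizes)

# random nonnegative endmember matrix S (L x P); in practice these would be
# measured or extracted spectra
S = rng.uniform(0.1, 0.9, size=(L, P))

# abundances A (P x N): columns drawn from a Dirichlet distribution, so they
# are nonnegative and sum to one in each pixel
A = rng.dirichlet(np.ones(P), size=N).T

E = 1e-3 * rng.normal(size=(L, N))    # small additive noise
X = S @ A + E                         # observed data matrix under the LMM

# sanity checks on the abundance constraints
assert np.all(A >= 0) and np.allclose(A.sum(axis=0), 1.0)
print("data:", X.shape, "reconstruction error:", np.linalg.norm(X - S @ A))
```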
A physics-inspired model to explain illumination induced variability is the Extended Linear Mixing Model (ELMM) [8]: \\[\\mathbf{x}_{n}=\\sum_{p=1}^{P}a_{pn}\\psi_{pn}\\mathbf{s}_{0p}+\\mathbf{e}_{n} \\tag{1}\\] where \\(a_{pn}\\) is the abundance coefficient for material \\(p\\) and pixel \\(n\\), and \\(\\mathbf{e}_{n}\\) is an additive noise. \\(\\psi_{pn}\\) is a positive scaling factor whose effect is to rescale locally each endmember, and \\(\\mathbf{s}_{0p}\\) is a reference endmember for material \\(p\\). This model can be empirically validated by many experimental measurements of spectra of the same materials, whose shapes remains the same but whose amplitudes vary according to a scaling (see e.g. Fig. 1 of [8]). The ELMM enjoys a simple geometric interpretation: the data lie in a convex cone spanned by the reference endmembers, which define lines on which the local endmembers can lie. Each pixel belongs to a simplex spanned by the local endmembers (Fig. 1). This model has been used in several works since it was introduced, e.g. in [9, 10, 11, 12]. Variations in the spectra of one given material due to changingillumination conditions experimentally appear to be reasonably well explained by a scaling variation. Another complex physical semi-empirical model was introduced by Hapke to model both intimate mixing phenomena and reflectance variations induced by the changing geometry of the scene [13]. The ELMM was experimentally shown to better fit Hapke simulated data than other approaches tackling endmember variability in unmixing. These results are presented in detail in [8]. We reproduce in Fig. 2 a figure showing a linearly mixed dataset with Hapke generated endmembers (red), their approximations by the ELMM (green). The materials of interest are commonly found on small bodies of the solar system. Basalt has a relatively low and flat spectrum, and thus is less affected by the nonlinearities of the Hapke model than tephra and palagonite. In any case, the ELMM provides a very good approximation of the red manifolds generated by the Hapke model. For experiments on the capability of the ELMM to explain variability with real data acquired in various contexts, we refer e.g. to [8, 14, 15]. Interestingly, the ELMM is not strictly speaking a linear mixing model, because of the pixel and endmember dependent scaling factors. Besides, those scaling factors were actually proven to be able to capture nonlinear mixing effects in a satisfactory way in a previous study [16]. In this letter, for the first time, we prove how the ELMM can be derived from the Hapke model by several successive approximations. In addition, the derivation and experiments give insight on the ELMM, by showing when it approximates the Hapke model accurately for small albedos, and certain favorable geometrical configurations, confirming past experimental results [8]. The remainder of this letter is organized as follows: section II briefly introduces the Hapke model and its parameters, section III derives the ELMM from the Hapke model, section IV provides further experimental insight on the approximations, and finally, section V gathers a few concluding remarks. ## II The Hapke Model Here, we briefly describe the Hapke model and how it models reflectance as a function of various physical parameters. The complete analytical expressions of all the terms involved, and let alone a detailed derivation of the Hapke model are far beyond the scope of this letter. We refer to [13] for the original derivation, and e.g. 
to [2] for a gentle introduction to the model. From an unmixing point of view, this model is too complex as is (for example, it is non injective) and its physical parameters are not available in practice. Reflectance, the physical quantity usually used to work with hyperspectral remote sensing images (after atmospheric correction of radiance units), is dependent on the geometry of the acquisition. Depending on the incidence and viewing angles, the measured reflectance can significantly differ. The reflectance of a material is also influenced by its photometry, i.e. the way light interacts with the material. Photometry can be modeled through some optical parameters (surface roughness, scattering behavior ) of the materials. We will briefly describe the photometric parameters involved in the model, but for a more thorough description of their physical and geological interpretations we refer to [17]. The albedo of material, contrary to its reflectance, is truly characteristical of the material and depends neither on the geometry of the scene nor on the photometry of the considered material. The Hapke model is essentially an equation providing the bidirectional reflectance in a given wavelength of a material as a function of its albedo for that wavelength, and of parameters defining the geometry of the acquisition and characterizing the photometry of the observed material. We assume the mixture of the materials occurs at the macroscopic level, and hence we do not consider intimate mixing, which can also be explained by Hapke's model. Therefore, the LMM assumption remains approximately valid in each pixel. The equations below are to be understood to be applied separately to each endmember, each pixel and each wavelength of a hyperspectral image, using its pure albedo spectrum, the photometry and the local geometry. This defines local reflectance endmember variants in each pixel, which are then linearly mixed. The local pixelwise geometry of the scene can be described by several parameters (Fig. 3) [13, 18]. The zenith is locally defined as the direction of the normal vector to the tangent plane to the surface observed. Depending on the topography, this plane is different for each pixel. The angle between the zenith and the sun is called the sun zenith angle, or incidence angle, \\(\\theta_{0}\\). The angle between the zenith and the sensor is called the emergence angle, \\(\\theta\\). The angle between the sun and sensor Fig. 1: Geometric interpretation of the ELMM in the case of three endmembers [8]. In blue are two data points, in red are the reference endmembers and in green are the scaled versions for the two considered pixels. The simplex used in the LMM is shown in dashed lines. Fig. 2: Left: Albedo endmembers of three materials. Right: scatterplot of the first three components of a PCA of a dataset simulated with the LMM (blue), using the endmember variants generated by the Hapke model (red). The endmembers estimated by the ELMM are in green [8]. directions (with the origin on the field of view of the current pixel) is called the phase angle \\(g\\). Finally, the angle between the projections of the sun and the sensor on the tangent plane is called the azimuthal angle \\(\\phi\\). These four angles completely characterize the geometry of a pixel's acquisition. 
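In practice these four angles can be obtained from the local surface normal and the unit vectors pointing from the observed point towards the sun and the sensor. The following Python/NumPy helper is a sketch of that computation; the vector conventions and the example directions are assumptions made for illustration and are not taken from the letter.

```python
import numpy as np

def acquisition_angles(normal, to_sun, to_sensor):
    """Incidence theta0, emergence theta, phase g and azimuth phi (radians)
    from unit vectors pointing from the surface towards the sun and sensor."""
    n = normal / np.linalg.norm(normal)
    s = to_sun / np.linalg.norm(to_sun)
    v = to_sensor / np.linalg.norm(to_sensor)

    theta0 = np.arccos(np.clip(n @ s, -1.0, 1.0))   # sun zenith (incidence) angle
    theta = np.arccos(np.clip(n @ v, -1.0, 1.0))    # emergence angle
    g = np.arccos(np.clip(s @ v, -1.0, 1.0))        # phase angle

    # azimuth: angle between the projections of s and v onto the tangent plane
    s_t = s - (n @ s) * n
    v_t = v - (n @ v) * n
    if np.linalg.norm(s_t) < 1e-12 or np.linalg.norm(v_t) < 1e-12:
        phi = 0.0                                   # degenerate (nadir) case
    else:
        cos_phi = (s_t @ v_t) / (np.linalg.norm(s_t) * np.linalg.norm(v_t))
        phi = np.arccos(np.clip(cos_phi, -1.0, 1.0))
    return theta0, theta, g, phi

# example: sun 45 degrees off zenith, sensor at nadir, flat terrain (assumed values)
angles = acquisition_angles(np.array([0.0, 0.0, 1.0]),
                            np.array([1.0, 0.0, 1.0]),
                            np.array([0.0, 0.0, 1.0]))
print([np.degrees(a) for a in angles])
```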
Hapke's model can be expressed as [2, 13]: \\[\\rho(\\omega,\\mu,\\mu_{0},\\phi,g)=\\frac{\\omega}{4(\\mu_{e}+\\mu_{0e})} S(\\mu,\\mu_{0},\\phi)\\times\\\\ ((1+B(g))P(g)+H(\\omega,\\mu_{e})H(\\omega,\\mu_{0e})-1), \\tag{2}\\] \\(\\rho\\) is the reflectance for a given wavelength range, \\(\\mu=\\text{cos}(\\theta)\\), \\(\\mu_{0}=\\text{cos}(\\theta_{0})\\), \\(\\omega\\) is the single scattering albedo of the material for the same wavelength range, \\(P\\) is the phase function, modeling the angular scattering distribution of the material: \\[P(g)=\\frac{c(1-b)^{2}}{(1-2b\\cos(g)+b^{2})^{3/2}}+\\frac{(1-c)(1-b)^{2}}{(1+2b \\cos(g)+b^{2})^{3/2}}. \\tag{3}\\] \\(B\\) is a function related to the opposition effect (brightening of the observed surface when the illumination comes from behind the sensor, i.e. for small \\(g\\) values): \\[B(g)=\\frac{B_{0}}{1+(1/h)\\tan(g/2)}, \\tag{4}\\] and \\(H\\) is the isotropic multiple scattering function: \\[H(\\omega,\\mu)\\approx\\frac{1+2\\mu}{1+2\\mu(\\sqrt{1-\\omega})}, \\tag{5}\\] \\(\\mu_{0e}\\) and \\(\\mu_{e}\\) are the cosines of the modified incidence and emergence angles, accounting for the macroscopic roughness of the materials. \\(S(\\mu,\\mu_{0},\\phi)\\) is a _shadowing_ function, reducing the total reflectance when surface roughness hides parts of the observed surface from the sensor, or shadows a fraction of it. In the remainder of the paper, we will assume that the surface of the materials is smooth, so that there is no shadowing effect, and the emergence and incidence angles are not modified, leading to \\(S(\\mu_{0},\\mu,\\phi)=1\\) and \\(\\mu_{0e}=\\mu_{0}\\), and \\(\\mu_{e}=\\mu\\). \\(B\\) and \\(P\\) are parametrized by photometric parameters of the considered material. For the phase function \\(P\\), the photometric parameters used are i) the asymmetry parameter of the scattering lobes \\(b\\) (\\(0\\leq b\\leq 1\\)), higher values meaning narrower lobes and higher scattering intensity, ii) the backward scattering fraction \\(c\\) (\\(0\\leq c\\leq 1\\)); \\(c<0.5\\) means that the material mainly backscatters the incoming light towards the incidence direction, and \\(c>0.5\\) means that the material has a predominantly forward scattering behavior. As examples of particular behaviors of the phase function, we can cite specular reflection, characterized by \\(b=1\\) and \\(c=1\\), or Lambertian (isotropic) scattering, characterized by \\(b=0\\) and \\(c=0.5\\). For \\(B\\), the parameters \\(h\\) and \\(B_{0}\\), account for the angular width and the strength of the opposition effect, respectively. ## III Derivation of the ELMM ### _Simplifying the Hapke model_ Here, using simplifying assumptions, we go from the general Hapke model (2) to a special case of the ELMM presented in [8, 19]. As explained in [2], assuming a Lambertian scattering, the phase function reduces to \\(P(g)=1\\). Besides, for Lambertian surfaces, there is no opposition surge (\\(B_{0}=0\\)) since the scattering is isotropic. In any case, even for non Lambertian photometries, for large enough \\(g\\), the opposition effect is negligible and \\(B(g)\\approx 0\\) anyway. Incorporating all these assumptions, and plugging the expression of the scattering function (5) in (2), the (bidirectional) reflectance becomes [2]: \\[\\rho(\\omega,\\mu,\\mu_{0})=\\frac{(1+2\\mu)(1+2\\mu_{0})\\omega}{4(\\mu+\\mu_{0})(1+ 2\\mu\\sqrt{1-\\omega})(1+2\\mu_{0}\\sqrt{1-\\omega})}. 
\\tag{6}\\] Finally, we obtain the relative bidirectional reflectance by dividing by a reference value where \\(\\omega=1\\), in which case \\(\\rho(\\omega=1,\\mu,\\mu_{0})=\\frac{(1+2\\mu)(1+2\\mu_{0})}{4(\\mu+\\mu_{0})}\\). The reflectance \\(\\rho_{0}\\) is then: \\[\\rho_{0}(\\omega,\\mu,\\mu_{0})=\\frac{\\omega}{(1+2\\mu\\sqrt{1-\\omega})(1+2\\mu_{0} \\sqrt{1-\\omega})}. \\tag{7}\\] All the photometric effects are eliminated because of the Lambertian photometry assumption. The model is still material dependent, because the albedo spectrum depends on the material. The only other parameters left are geometry related parameters. However, the albedo spectrum is not available in practice. A workaround for this is to numerically invert the model (the full model for a more precise estimate) if all the parameters but the albedo are known in a pixel. In such a case, the reflectance-albedo relation is bijective. However, there is no simple way to assess the results of this method in practice, especially in real scenarios, and the incertitudes on the results could be very important. The principles of this strategy are applied to controlled lab measurements in [20]. Even then, the model is still complex, highly nonlinear, especially for high albedos, and it is not identifiable when no parameters are known, due to the symmetry of (7) w.r.t. \\(\\mu\\) and \\(\\mu_{0}\\). For small single scattering albedo values (say up to 0.5), (7) is close to linear, while important nonlinearities appear for large albedo values. The validity of the linear approximation actually also depends on the values of the incidence and emergence angles. In Fig. 4, the function defined by (7) is plotted (blue curves) for three values of the acquisition angles: when both the sensor and the sun are at nadir (Fig. 4 (a)), when Fig. 3: Acquisition angles for a given spatial location (red dot). The tangent plane at this point of the surface is in brown. The incidence angle is \\(\\theta_{0}\\), the emergence angle is \\(\\theta\\), and the angle between the projections of the sun and the sensor is the azimuthal angle, denoted as \\(\\phi\\). \\(g\\) is the phase angle. \\(\\theta_{0}\\) and \\(\\theta\\) are defined with respect to the normal to the surface at this point. the sensor and the sum both make an angle of 45 degrees with respect to the normal to the surface (Fig. 4 (b)), and with raking incident light (\\(\\theta_{0}=90^{\\circ}\\)), and the sensor making an angle of 45 degrees with the nadir direction (Fig. 4 (c)). If both angles are equal to 90 degrees (with respect to the normal), the resulting reflectance equals the albedo. Here, because of these considerations, we propose to further approximate the relationship between albedo and reflectance by performing a first order Taylor expansion around \\(\\omega=0\\), with the angles fixed (in practice approximately valid for \"small\" albedos): \\[\\rho_{0}(\\omega,\\mu,\\mu_{0}) =\\rho_{0}(0)+\\frac{\\partial\\rho_{0}}{\\partial\\omega}(0)\\omega+o( \\omega)\\] \\[=\\frac{\\omega}{4\\mu\\mu_{0}+2\\mu+2\\mu_{0}+1}+o(\\omega). \\tag{8}\\] The coefficient of the expansion only depends on the geometry of the acquisition: it affects an albedo spectrum in the same way for any wavelength. Now let us assume that for a given material \\(p\\), we have at our disposal a reference endmember \\(\\mathbf{s}_{0p}\\) (usually extracted from the data), with a geometry defined by the angles \\(\\theta\\) and \\(\\theta_{0}\\). 
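Before turning to the endmember itself, the gap between (7) and its linearization (8) can be made concrete numerically. The following Python/NumPy sketch evaluates both expressions on an albedo grid at a fixed geometry; the grid and the chosen angles are arbitrary illustrative values.

```python
import numpy as np

def rho0(omega, mu, mu0):
    """Relative bidirectional reflectance of the Lambertian-simplified Hapke
    model, Eq. (7)."""
    s = np.sqrt(1.0 - omega)
    return omega / ((1.0 + 2.0 * mu * s) * (1.0 + 2.0 * mu0 * s))

def rho0_lin(omega, mu, mu0):
    """First-order Taylor expansion of (7) around omega = 0, Eq. (8)."""
    return omega / (4.0 * mu * mu0 + 2.0 * mu + 2.0 * mu0 + 1.0)

# illustrative geometry: sun and sensor both 45 degrees from the local normal
theta0, theta = np.deg2rad(45.0), np.deg2rad(45.0)
mu0, mu = np.cos(theta0), np.cos(theta)

omega = np.linspace(0.0, 0.95, 20)       # albedo grid (assumed range)
exact = rho0(omega, mu, mu0)
approx = rho0_lin(omega, mu, mu0)

# the approximation is tight for small albedos and degrades for large ones
for w, e, a in zip(omega[::5], exact[::5], approx[::5]):
    print(f"omega={w:.2f}  rho0={e:.4f}  linear={a:.4f}")
```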
This endmember is a collection of reflectances for various wavelengths. Then with the first order model (8), the ratio between the reflectances in each wavelength is constant, and for the representative \\(\\mathbf{s}_{pn}\\) of endmember \\(p\\) in pixel \\(n\\) and small albedos: \\[\\mathbf{s}_{pn}\\approx\\frac{4\\mu_{n}\\mu_{n0}+2\\mu_{n}+2\\mu_{n0}+1}{4\\mu\\mu_{0} +2\\mu+2\\mu_{0}+1}\\ \\mathbf{s}_{0p}=\\psi_{n}\\mathbf{s}_{0p}. \\tag{9}\\] From this equation, we see that now the link between the local representative of an endmember in a pixel and a reference signature for this material is a positive scaling factor incorporating the information about the geometry in the considered pixel. With this approximation, we make the connection between the semi-empirical model of Hapke and the well known fact in the remote sensing community that illumination effects can be well approximated by scaling variations of the spectra. ### _ELMM description_ The considerations of the previous sections lead to plug this variability model to the usual LMM, so that it becomes: \\[\\mathbf{x}_{n}=\\psi_{n}\\sum_{p=1}^{P}a_{pn}\\mathbf{s}_{0p}+\\mathbf{e}_{n}= \\mathbf{S}_{0}\\psi_{n}\\mathbf{a}_{n}+\\mathbf{e}_{n}. \\tag{10}\\] The LMM is simply scaled in each pixel by a different nonnegative scaling factor. In practice, it can be useful to allow the scaling factor to vary for each material: \\[\\mathbf{x}_{n}=\\sum_{p=1}^{P}a_{pn}\\psi_{pn}\\mathbf{s}_{0p}+\\mathbf{e}_{n}= \\mathbf{S}_{0}\\boldsymbol{\\psi}_{n}\\mathbf{a}_{n}+\\mathbf{e}_{n}, \\tag{11}\\] where the \\(\\psi_{pn}\\) are now pixel and material dependent scaling factors, \\(\\boldsymbol{\\psi}_{n}\\in\\mathbb{R}^{P\\times P}\\) is a diagonal matrix, containing the scaling factors for each material on its diagonal. We thus recover the model of (1). The scaling factors can also be rearranged into a matrix \\(\\boldsymbol{\\Psi}\\in\\mathbb{R}^{P\\times N}\\) (the \\(n^{\\text{th}}\\) column contains the diagonal of \\(\\boldsymbol{\\psi}_{n}\\)). This allows model (11) to be rewritten globally for the whole image as \\(\\mathbf{X}=\\mathbf{S}_{0}(\\boldsymbol{\\Psi}\\odot\\mathbf{A})+\\mathbf{E}\\), with \\(\\odot\\) the Hadamard product. The main reason behind the introduction of a scaling factor for each pixel and material is that it will make the model more flexible, allowing to model material dependent variabilities, e.g. related to material dependent photometric phenomena or more pragmatically to the intrinsic variability of each material. Another important reason is that this version of the ELMM was also proven theorically and experimentally in [16] to locally approximate nonlinear mixtures by absorbing potential nonlinearities in the scaling factors, making it a very versatile model to choose in hyperspectral image unmixing when nonlinearities and endmember variability are significant. We refer the interested reader to [8] for detailed descriptions of algorithms which are able to estimate the parameters of both versions of the ELMM (abundances and scaling factors). ## IV Experimental validation In this section, we provide a qualitative and quantitative analysis of the quality of the approximations of the full Hapke model necessary to reach the model of (9). The goal is not to evaluate the relevance and performance of the ELMM in unmixing applications, which has already been extensively carried out in [19, 16, 8]. For the three endmembers of Fig. 
2, for which we obtained estimates of the albedo spectra and the photometric parameters [18, 20], we compare the reflectance endmembers generated in several configurations. In Fig. 5, we show how reflectance is calculated from the full Hapke Model (2), the Lambertian approximation (7) and the linear approximation (8), for basalt, palagonite and tephra, whose photometric parameters are known [18]. We consider two geometries corresponding to those of Fig. 4 (a) (\\(\\theta_{0}=\\theta=0^{\\circ}\\)) and (c) (\\(\\theta_{0}=90^{\\circ}\\) and \\(\\theta=45^{\\circ}\\)).

Fig. 4: Reflectance plotted as a function of the albedo according to (7) (blue), and the Taylor expansion (8) in \\(\\omega=0\\) (red), for three geometries.

Fig. 5: Reflectances computed from the albedos, the photometric and geometric parameters using the full Hapke Model (2), the Lambertian approximation (7) and the linear approximation (8). The columns correspond to the materials; from left to right: basalt (blue), palagonite (green) and tephra (red). The rows correspond to two different angular configurations (top: \\(\\theta_{0}=0^{\\circ}\\) and \\(\\theta=0^{\\circ}\\), \\(\\phi=0^{\\circ}\\), bottom: \\(\\theta_{0}=90^{\\circ}\\) and \\(\\theta=45^{\\circ}\\), \\(\\phi=0^{\\circ}\\)).

In the first configuration, we see that the Lambertian approximation and the linear approximation are quite coarse, but acceptable for small reflectance values, while important discrepancies appear for larger reflectance values. The second geometrical configuration leads to a better approximation and a better agreement between the Lambertian model and the linear approximation since the relationship between albedo and reflectance becomes more and more linear as the angles get closer to \\(90^{\\circ}\\). For a more thorough analysis of the accuracy of the linear approximation depending on the geometry, we show in Fig. 6 plots of the spectral angle and the root mean squared errors (RMSE) between the reflectance spectra generated by the Lambertian model and the linear approximation for the three materials of interest for various angular configurations. There is a perfect agreement between both models when \\(\\theta_{0}=90^{\\circ}\\) and \\(\\theta=90^{\\circ}\\), and the quality of the approximation decreases as the angles become smaller. Interestingly, the curvature of the spectral angle surface seems to be directly related to the average albedo: the approximation is always satisfactory for basalt, while it is coarser for palagonite and even coarser for tephra. RMSE is overall less affected by the geometry, but the RMSE level is again directly related to the average albedo.

## V Conclusion

In the hyperspectral image processing community, it has long been known empirically that scaling factors can efficiently and conveniently model brightness variations due to changing illumination conditions. In this letter, we have theoretically connected the Extended Linear Mixing Model, a tractable model that explicitly takes this phenomenon into account for spectral unmixing, and the semi-empirical Hapke model by making simplifying assumptions. We prove and experimentally verify that these assumptions are the most reasonable when the albedo is not too large, or for favorable geometric configurations. Combined with the capability of the ELMM to locally approximate nonlinear mixtures [16], this result further motivates the use of the ELMM to unmix images in which nonlinearities and/or variability effects are non-negligible.

## References

* [1] J. Bioucas-Dias, A.
Plaza, N. Dobigeon, M. Parente, Q. Du, P. Gader, and J. Chanussot, "Hyperspectral unmixing overview: Geometrical, statistical, and sparse regression-based approaches," _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, vol. 5, pp. 354-379, April 2012.
* [2] R. Heylen, M. Parente, and P. Gader, "A review of nonlinear hyperspectral unmixing methods," _IEEE Journal of Sel. Topics in Applied Earth Obs. and Rem. Sens._, vol. 7, pp. 1844-1868, June 2014.
* [3] A. Zare and K. Ho, "Endmember variability in hyperspectral analysis: Addressing spectral variability during spectral unmixing," _IEEE Signal Processing Magazine_, vol. 31, pp. 95-104, Jan 2014.
* [4] L. Drumetz, J. Chanussot, and C. Jutten, "Endmember variability in spectral unmixing: recent advances," in _Proc. IEEE Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS)_, pp. 1-4, 2016.
* [5] P.-A. Thouvenin, N. Dobigeon, and J.-Y. Tourneret, "Hyperspectral unmixing with spectral variability using a perturbed linear mixing model," _IEEE Transactions on Signal Processing_, vol. 64, no. 2, pp. 525-538, 2016.
* [6] A. Halimi, N. Dobigeon, and J. Y. Tourneret, "Unsupervised unmixing of hyperspectral images accounting for endmember variability," _IEEE Transactions on Image Processing_, vol. 24, pp. 4904-4917, Dec 2015.
* [7] S. Henrot, J. Chanussot, and C. Jutten, "Dynamical spectral unmixing of multitemporal hyperspectral images," _IEEE Transactions on Image Processing_, vol. 25, pp. 3219-3232, July 2016.
* [8] L. Drumetz, M. A. Veganzones, S. Henrot, R. Phlypo, J. Chanussot, and C. Jutten, "Blind hyperspectral unmixing using an extended linear mixing model to address spectral variability," _IEEE Transactions on Image Processing_, vol. 25, pp. 3890-3905, Aug 2016.
* [9] A. Halimi, J. M. Bioucas-Dias, N. Dobigeon, G. S. Buller, and S. McLaughlin, "Fast hyperspectral unmixing in presence of nonlinearity or mismodeling effects," _IEEE Transactions on Computational Imaging_, vol. 3, no. 2, pp. 146-159, 2017.
* [10] D. Hong and X. Zhu, "SULoRA: Subspace unmixing with low-rank attribute embedding for hyperspectral data analysis," _IEEE Journal of Selected Topics in Signal Processing_, 2018.
* [11] R. A. Borsoi, T. Imbiriba, and J. C. M. Bermudez, "Super-resolution for hyperspectral and multispectral images fusion accounting for seasonal spectral variability," _arXiv preprint arXiv:1808.10072_, 2018.
* [12] T. Uezato, M. Fauvel, and N. Dobigeon, "Hyperspectral unmixing with spectral variability using adaptive bundles and double sparsity," _IEEE Transactions on Geoscience and Remote Sensing_, vol. 57, no. 6, pp. 3980-3992, 2019.
* [13] B. Hapke, _Theory of reflectance and emittance spectroscopy_. Cambridge University Press, 2012.
* [14] L. Drumetz, J. Chanussot, and A. Iwasaki, "Endmembers as directional data for robust material variability retrieval in hyperspectral image unmixing," in _42nd IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_, pp. 1-5, 2018.
* [15] E. Ibarrola-Ulzurrun, L. Drumetz, J. Marcello, C. Gonzalo-Martin, and J. Chanussot, "Hyperspectral classification through unmixing abundance maps addressing spectral variability," _IEEE Transactions on Geoscience and Remote Sensing_, 2019.
* [16] L. Drumetz, B. Ehsandoust, J. Chanussot, B. Rivet, M. Babaie-Zadeh, and C.
Jutten, "Relationships between nonlinear and space-variant linear models in hyperspectral image unmixing," _IEEE Signal Processing Letters_, vol. 24, no. 10, pp. 1567-1571, 2017.
* [17] J. Fernando, F. Schmidt, C. Pilorget, P. Pinet, X. Ceamanos, S. Doute, Y. Daydou, and F. Costard, "Characterization and mapping of surface physical properties of Mars from CRISM multi-angular data: Application to Gusev crater and Meridiani Planum," _Icarus_, vol. 253, pp. 271-295, 2015.
* [18] A. M. Cord, P. C. Pinet, Y. Daydou, and S. D. Chevrel, "Experimental determination of the surface photometric contribution in the spectral reflectance deconvolution processes for a simulated martian crater-like regolithic target," _Icarus_, vol. 175, no. 1, pp. 78-91, 2005.
* [19] M. A. Veganzones, L. Drumetz, R. Marrero, G. Tochon, M. Dalla Mura, A. Plaza, J. Bioucas-Dias, and J. Chanussot, "A new extended linear mixing model to address spectral variability," in _Proc. IEEE Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS)_, pp. 1-4, 2014.
* [20] R. Marrero, S. Doute, A. Plaza, and J. Chanussot, "Validation of spectral unmixing methods using photometry and topography information," in _Proc. IEEE Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS)_, pp. 1-4, 2013.

Fig. 6: Spectral angle (top row) and Root Mean Squared Errors (bottom) between the reflectance spectra generated by the Lambertian model (7) and the linear approximation (8) plotted against the incidence angle \\(\\theta_{0}\\) and the emergence angle \\(\\theta\\) (in degrees), for each material (from left to right: basalt, palagonite and tephra).
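For completeness, the two discrepancy measures reported in Fig. 6 can be computed with the standard definitions below; this is a generic sketch, not the code used to produce the figure.

```python
import numpy as np

def spectral_angle(x, y):
    """Spectral angle (in radians) between two spectra."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    cos_sim = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return np.arccos(np.clip(cos_sim, -1.0, 1.0))

def rmse(x, y):
    """Root mean squared error between two spectra."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return np.sqrt(np.mean((x - y) ** 2))
```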
In hyperspectral imaging, spectral unmixing aims at decomposing the image into a set of reference spectral signatures corresponding to the materials present in the observed scene and their relative proportions in every pixel. While a linear mixing model was used for a long time, the complex nature of the physical mixing processes led the community to shift its attention towards nonlinear models or algorithms accounting for the variability of the endmembers. Such intra-class variations are due to local changes in the physico-chemical composition of the materials, and to illumination changes. In the physical remote sensing community, a popular model accounting for illumination variability is the radiative transfer model proposed by Hapke. It is however too complex to be directly used in hyperspectral unmixing in a tractable way. Instead, the Extended Linear Mixing Model (ELMM) makes it possible to easily unmix hyperspectral data while accounting for changing illumination conditions, and to address nonlinear effects to some extent. In this letter, we show that the ELMM can be obtained from the Hapke model by successive simplifying physical assumptions, whose validity we experimentally examine, thus demonstrating its relevance for handling illumination-induced variability in the unmixing problem. Hyperspectral image unmixing, spectral variability, Hapke model, extended linear mixing model.
# Direct LiDAR-Inertial Odometry and Mapping: Perceptive and Connective SLAM

Kenny Chen\\({}^{1}\\), Ryan Nemiroff\\({}^{1}\\), and Brett T. Lopez\\({}^{2}\\)

\\({}^{1}\\)Kenny Chen and Ryan Nemiroff are with the Department of Electrical and Computer Engineering, University of California Los Angeles, Los Angeles, CA, USA. {kennyjchen, rgyupy}@ucla.edu. \\({}^{2}\\)Brett T. Lopez is with the Department of Mechanical and Aerospace Engineering, University of California Los Angeles, Los Angeles, CA, USA. [email protected]. The authors are with the Verifiable and Control-Theoretic Robotics Laboratory, University of California Los Angeles, Los Angeles, CA, USA.

## I Introduction

Accurate real-time state estimation and mapping are fundamental capabilities that are the backbone for autonomous robots to perceive, plan, and navigate through unknown environments. Long-term operational reliability of such capabilities requires algorithmic resiliency against off-nominal conditions, such as the presence of particulates (e.g., dust or fog), low-lighting, difficult or unstructured landscape, and other external factors. While visual SLAM approaches may work in well-lit environments, they quickly break down in-the-wild from their strong environmental assumptions, brittle architecture, or high computational complexity. LiDAR-based methods, on the other hand, have recently become a viable option for many mobile platforms due to lighter and cheaper sensors. As a result, researchers have recently developed several new LiDAR odometry (LO) and LiDAR-inertial odometry (LIO) systems which often outperform vision-based localization due to the sensor's superior range and depth measurement accuracy. However, there are still several fundamental challenges in developing robust, long-term LiDAR-centric SLAM solutions, especially for autonomous robots that explore unknown environments, execute agile maneuvers, or traverse uneven terrain [1]. Algorithmic resiliency by means of building proactive safeguards against common failure points in SLAM can provide perceptive and failure-tolerant localization across a wide range of operating environments for long-term reliability. While existing algorithms may work well in structured environments that pose well-constrained problems for the back-end optimizer, their performance can quickly degrade under irregular conditions--yielding slow, brittle perception systems unsuitable for real-world use. Of the few recent approaches that do have the ability to adapt to different conditions on-the-fly, they either rely on switching to other sensing modalities in degraded environments [2], require complex parameter tuning procedures based on manually specified heuristics [3, 4], or focus solely on the scan-matching process in an effort to better anchor weakly-constrained registration problems [5]. To this end, this paper presents the Direct LiDAR-Inertial Odometry and Mapping (DLIOM) algorithm, a robust, real-time SLAM system with several key innovations that provide increased resiliency and accuracy for both localization and mapping (Fig. 1).

Fig. 1: **Dense Connective Mapping with Resilient Localization.** Our novel DLIOM algorithm contains several proactive safeguards against common failure points in LiDAR odometry to create a resilient SLAM framework that adapts to its operating environment. (A) A top-down view of UCLA's Sculpture Garden mapped by DLIOM, showcasing the algorithm's derived pose graph with interkeyframe constraints for local accuracy and global resiliency. (B & C) An example of DLIOM's slip-resistant keyframing which helps anchor scan-to-map registration, in which abrupt scenery changes (e.g., traversal through a door) that normally cause slippage (B) are mitigated by scene change detection (C). (D) A map of an eight-story staircase generated by DLIOM, showcasing the difficult environments our algorithm can track in.
The main contributions of this work are five-fold, each targeting a specific module in a typical LiDAR SLAM architecture (bolded) to comprehensively increase algorithmic speed, accuracy, and robustness:

* **Keyframing**: A method for slip-resistant keyframing by detecting the onset of scan-matching slippage during abrupt scene changes via a global and sensor-agnostic degeneracy metric.
* **Submapping**: A method which generates explicitly-relevant local submaps with maximum coverage by computing the relative 3D Jaccard index for each keyframe for scan-to-map registration.
* **Mapping**: A method to increase local mapping accuracy and global loop closure resiliency via connectivity factors and keyframe-based loop closures.
* **Scan-Matching**: An adaptive scan-matching method via a novel point cloud sparsity metric for consistent registration in both large and small environments.
* **Motion Correction**: A new coarse-to-fine technique for fast and parallelizable point-wise motion correction, in which a set of analytical equations with a constant jerk and angular acceleration motion model is derived for constructing continuous-time trajectories.

This paper substantially extends algorithmically and experimentally our previous work, Direct LiDAR-Inertial Odometry (DLIO) [6]. In particular, aside from the continuous-time motion correction, each listed contribution above is an original key idea in order to increase algorithmic accuracy, generalization, and failure resiliency. Moreover, we provide new algorithmic insights from our ground-up approach, in addition to new experimental results for algorithmic resiliency, computational efficiency, and overall map and trajectory accuracy as compared to the state-of-the-art.

## II Related Work

Simultaneous localization and mapping (SLAM) algorithms for 3D time-of-flight sensors (e.g., LiDAR) rely on aligning point clouds by solving a nonlinear least-squares problem that minimizes the error across corresponding points and/or planes. To find point/plane correspondences, popular methods such as the iterative closest point (ICP) algorithm [7, 8], Generalized-ICP (GICP) [9], or the normal distribution transform [10] recursively match potential corresponding entities until alignment converges to a local minimum. Slow convergence time is often observed when determining correspondences for a large set of points, so _feature_-based methods [11, 12, 13, 14, 15, 16, 17, 18] attempt to extract only the most salient data points, e.g., corners and edges, in a scan to decrease computation time. However, the efficacy of feature extraction is highly dependent on specific implementation. Moreover, useful points are often discarded resulting in inaccurate estimates and maps. Conversely, _dense_ methods [2, 19, 20, 21, 22] directly align acquired scans but often rely heavily on aggressive voxelization--a process that can alter important data correspondences--to achieve real-time performance.
LiDAR odometry approaches can also be broadly classified according to their method of incorporating other sensing modalities into the estimation pipeline. _Loosely_-coupled methods [2, 11, 12, 19, 20] process data sequentially. For example, IMU measurements are used to augment LiDAR scan registration by providing an optimization prior. These methods are often quite robust due to the precision of LiDAR measurements, but localization results can be less accurate as only a subset of all available data is used for estimation. _Tightly_-coupled methods [13, 17, 23, 21, 18], on the other hand, can offer improved accuracy by jointly considering measurements from all sensing modalities. These methods commonly employ either graph-based optimization [13, 17, 18, 24] or a stochastic filtering framework, e.g., Kalman filter [16, 21]. However, compared to geometric observers [25, 26], these approaches possess minimal convergence guarantees even in the most ideal settings, which can result in significant localization error from inconsistent sensor fusion and map deformation from incorrect scan placement. Incorporating additional sensors can also aid in correcting motion-induced point cloud distortion. For example, LOAM [11] compensates for spin distortion by iteratively estimating sensor pose via scan-matching and a loosely-coupled IMU using a constant velocity assumption. Similarly, LIO-SAM [13] formulates LiDAR-inertial odometry atop a factor graph to jointly optimize for body velocity, and in their implementation, points were subsequently deskewed by linearly interpolating rotational motion. FAST-LIO [16] and FAST-LIO2 [21] instead employ a back-propagation step on point timestamps after a forward-propagation of IMU measurements to produce relative transformations to the scan-end time. However, these methods (and others [27, 28]) all operate in _discrete_-time which may induce a loss in precision, leading to a high interest in _continuous_-time methods. Elastic LiDAR Fusion [29], for example, handles scan deformation by optimizing for a continuous linear trajectory, whereas Wildcat [30] and [31] instead iteratively fit a cubic B-spline to remove distortion from surfel maps. More recently, CT-ICP [32] and ElasticLiDAR++ [33] use a LiDAR-only approach to define a continuous-time trajectory parameterized by two poses per scan, which allows for elastic registration of the scan during optimization. However, these methods can still be too simplistic in modeling the trajectory under highly dynamical movements or may be too computationally costly to work reliably in real-time. Another crucial component in LIO systems is submapping, which involves extracting a smaller point cloud from the global map for efficient processing and to increase pose estimation consistency. Rather than processing the entire map on every iteration which is often computationally intractable, a _submap_ instead contains only a subset of all available data points to be considered. However, the efficacy of a submapping strategy depends on its ability to extract only the most relevant map points for scan-matching to avoid any wasted computation when constructing corresponding data structures (i.e., kdtrees, normals). One common approach is to use a sliding window such that the submap consists of a set of recent scans [18, 34]. However, this method assumes a strong temporal correspondence between points which may not always be the case (i.e., revisiting a location) and may not perform well under significant drift.
To mitigate this, radius-based approaches [13, 19] extract points near the current position by directly working with point clouds and continually adding points to an internal octree data structure. This, however, results in unbounded growth in the map [35] and therefore an explosion in computational expense through the large number of nearest neighbor calculations required, which is infeasible for real-time usage. Keyframe-based methods [13, 20], on the other hand, link keyed locations in space to their corresponding scans, and therefore reduce the search space required to extract a comprehensive submap. However, previous methods [6, 20] have still implicitly assumed that nearby keyframes contain the most relevant data points for a submap and do not explicitly compute a metric of relevancy per keyframe, risking the extraction of keyframes which may not be used at all. More recently, researchers have been interested in building new methods of algorithmic resiliency into odometry pipelines to ensure reliable localization across a diverse set of environments. While the field of adaptive localization is still in its infancy, early works have pioneered the idea of adaptivity in unstructured and/or extreme environments. For instance, to provide resiliency against LiDAR slippage in geometrically-degenerate environments, LION [2], DARE-SLAM [36], and DAMS-LIO [37] proposed using the condition number of the scan-matching Hessian as an observability score in order to switch to a different state-estimation algorithm (e.g., visual-inertial). However, these works assume the availability of other odometry paradigms on-board which may not always be available. On the other hand, works such as [3] and [4] attempt to automatically tune system hyperparameters based on trajectory error modeling, but these require an expensive offline training procedure to retrieve optimal parameter values and do not generalize to other environments outside of the training set. Similarly, KISS-ICP [35] adaptively tunes maximum correspondence distance for ICP, but their method scales the metric according to robot acceleration which does not generalize across differently-sized environments. AdaLIO [38], on the other hand, automatically tunes voxelization, search distance, and plane residual parameters in detected degenerate cases. Methods of global loop closure detection can also aid in reducing drift in the map through place recognition and relocalization, using descriptors and detectors such as ScanContext [39, 40] and Segregator [41]. More recently, [24] and X-ICP [5] propose innovative online methods to mitigate degeneracy by analyzing the geometric structure of the optimization constraints in scan-to-scan and scan-to-map, respectively, but these works only target a specific module in an entire, complex SLAM pipeline. To this end, DLIOM proposes several new techniques for LiDAR-based SLAM systems which address several deficiencies in both the front-end and back-end. Our ideas target different scales in the data processing pipeline to progressively increase localization resiliency and mapping accuracy. First, a fast, coarse-to-fine approach constructs continuous-time trajectories between each LiDAR sweep for accurate, parallelizable point-wise motion correction.
These motion-corrected clouds are then incrementally registered against an extracted submap via an adaptive scan-matching technique, which tunes the maximum correspondence distance based on the current cloud's sparsity for consistent registration across different environments. Each extracted submap is explicitly generated by computing each environmental keyframe's relevancy towards the current scan-matching problem via a relative 3D Jaccard index; this is done to maximize submap coverage and therefore data association between the scan and submap. To prevent slippage, scan-matching health is continually monitored through a novel sensor-agnostic degeneracy metric, which inserts a new keyframe when optimization is too weakly-constrained during rapid scene changes. Finally, to increase local mapping accuracy and global loop closure resiliency, we compute interkeyframe overlap to provide additional factors to our keyframe-based factor graph mapper.

Fig. 2: **System Architecture.** DLIOM's two-pronged architecture contains several key innovations to provide a comprehensive SLAM pipeline with real-world operational reliability. Point-wise continuous-time integration in \\(\\mathcal{W}\\) ensures maximum fidelity of the corrected cloud, which is registered onto the robot's map by a custom GICP-based scan-matcher. An analysis of the environmental structure and health of scan-matching provides several system metrics for adaptively tuning maximum correspondence distance, in addition to slip-resistant keyframing. Additionally, a 3D Jaccard index for each keyframe is computed against the current scan to maximize submap coverage and therefore scan-matching correspondences. The system's state is subsequently updated by a nonlinear geometric observer with strong convergence properties, and these estimates of pose, velocity, and bias then initialize the next iteration. This system state is also subsequently sent to a background mapping thread, which places pose graph nodes at keyframe locations and builds a connective graph via interkeyframe constraints for local accuracy and global resiliency.

## III System Overview & Data Processing

DLIOM is a robust SLAM algorithm with a specific focus on localization resiliency, mapping accuracy, and real-world operational reliability (Fig. 2). The architecture contains two parallel threads which process odometry estimation and global mapping in real-time. In the first, LiDAR scans are consecutively motion-corrected and then registered against a keyframe-based submap to provide an accurate update signal for integrated IMU measurements. This accuracy is ensured by an adaptive maximum correspondence distance for consistent scan-matching, in addition to how we explicitly derive the local submap, whereby maximum submap coverage of the current scan is enforced by computing a 3D Jaccard index for each environmental keyframe. This helps increase robustness against errors in data association during GICP optimization. A novel method for computing environmental degeneracy provides a global notion of scan-matching slippage and continually monitors optimization health status, placing a new keyframe right before the onset of slippage, and a nonlinear geometric observer fuses LiDAR scan-matching and IMU preintegration to provide high-rate state estimation with certifiable convergence guarantees.
In the second thread, keyframes continually build upon an internal factor graph, whereby each keyframe is represented as a node in the graph and various constraints between pairs of keyframes are factors between nodes. "Sequential" factors between adjacent keyframes provide a strong backbone to the pose graph and are feasible due to our system's accurate local odometry; "connective" factors between overlapping keyframes (via their 3D Jaccard index) provide local map accuracy and global map resiliency against catastrophically incorrect loop closures. Frame offsets after such loop closures are carefully managed in order to prevent discontinuities in estimated velocity for safe robot control. Our algorithm is completely built from the ground-up to decrease computational overhead and increase algorithmic failure-tolerance and real-world reliability.

```
Input:  X̂_{k-1}^W (previous state), M̂_k^W (keyframe map), P_k^L (LiDAR scan), a_i^B, ω_i^B (IMU)
Output: X̂_i^W (estimated robot state)

// LiDAR Callback Thread
while P_k^L ≠ ∅ do
    // initialize points and transform to R
    P̂_k^R ← initializePointCloud(P_k^L)                              // Sec. III-B
    // construct coarse discrete-time trajectory
    for each â_i^R, ω̂_i^R between t_{k-1} and t_k do
        p̂_i^W, q̂_i ← discreteInt(X̂_{k-1}^W, â_{i-1}^R, ω̂_{i-1}^R)       // Eq. (5)
        T̂_i^W ← [ R̂(q̂_i) | p̂_i ]
    end
    // continuous-time motion correction
    for each point p_k^n ∈ P̂_k^R do
        T̂_n^{W*} ← continuousInt(T̂_i^W, t_n)                          // Eq. (6)
        p̂_k^n ← T̂_n^{W*} ⊗ p_k^n ;  P̂_k^W.append(p̂_k^n)
    end
    // environmental analysis: compute spaciousness and cloud sparsity
    m_k, z_k ← computeAdaptiveParams(P̂_k^W)                           // [20], Eq. (11)
    // construct submap via 3D Jaccard index
    for each keyframe K_j^W ∈ M̂_k^W do
        J(P̂_k^W, K_j^W) ← |P̂_k^W ∩ K_j^W| / |P̂_k^W ∪ K_j^W|            // Eq. (9)
        if J(P̂_k^W, K_j^W) ≥ thresh_jaccard then Ŝ_k^W.append(K_j^W)
    end
    // adaptive scan-to-map registration
    T̂_k^W ← GICP(P̂_k^W, Ŝ_k^W, z_k)                                   // Eq. (10)
    // slip-resistant keyframing
    if P̂_k^W is a keyframe via Eq. (7) or [20] then
        K_k^W ← P̂_k^W
        // compute new keyframe overlap and append to connectivity matrix
        for each K_j^W ∈ M̂_k^W do Ĉ_kj ← J(K_k^W, K_j^W) end            // Sec. V-A
        // send new keyframe and updated connectivity matrix to mapping thread
        odom2map(K_k^W, C)
    end
    // geometric observer: state update
    X̂_k^W ← updateState(T̂_k^W, Δt_k^+)                                // Sec. IV-D
    return X̂_k^W
end

// IMU Callback Thread
while a_i^B ≠ ∅ and ω_i^B ≠ ∅ do
    // apply biases and transform to R
    â_i^R, ω̂_i^R ← initializeImu(a_i^B, ω_i^B)                         // Sec. III-B
    // geometric observer: state propagation
    X̂_i^W ← propagateState(X̂_k^W, â_i^R, ω̂_i^R, Δt_i)                 // Sec. IV-D
    return X̂_i^W
end
```
**Algorithm 1** DLIOM: Odometry Thread

### _Mathematical Notation_

Let the point cloud for a single LiDAR sweep initiated at time \\(t_{k}\\) be denoted as \\(\\mathcal{P}_{k}\\) and indexed by \\(k\\). The point cloud \\(\\mathcal{P}_{k}\\) is composed of points \\(p_{k}^{n}\\in\\mathbb{R}^{3}\\) that are measured at a time \\(\\Delta t_{k}^{n}\\) relative to the start of the scan and indexed by \\(n=1,\\ldots,N\\) where \\(N\\) is the total number of points in the scan. The world frame is denoted as \\(\\mathcal{W}\\) and the robot frame as \\(\\mathcal{R}\\) located at its center of gravity, with the convention that \\(x\\) points forward, \\(y\\) left, and \\(z\\) up. The IMU's coordinate system is denoted as \\(\\mathcal{B}\\) and the LiDAR's as \\(\\mathcal{L}\\), and the robot's state vector \\(\\mathbf{X}_{k}\\) at index \\(k\\) is defined as the tuple \\[\\mathbf{X}_{k}=\\left[\\begin{array}{ccccc}\\mathbf{p}_{k}^{\\mathcal{W}}&\\mathbf{q}_{k}^{\\mathcal{W}}&\\mathbf{v}_{k}^{\\mathcal{W}}&\\mathbf{b}_{k}^{\\mathbf{a}}&\\mathbf{b}_{k}^{\\boldsymbol{\\omega}}\\end{array}\\right]^{\\top}\\;, \\tag{1}\\] where \\(\\mathbf{p}^{\\mathcal{W}}\\in\\mathbb{R}^{3}\\) is the robot's position, \\(\\mathbf{q}^{\\mathcal{W}}\\) is the orientation encoded by a four vector quaternion on \\(\\mathbb{S}^{3}\\) under Hamilton notation, \\(\\mathbf{v}^{\\mathcal{W}}\\in\\mathbb{R}^{3}\\) is the robot's velocity, \\(\\mathbf{b}^{\\mathbf{a}}\\in\\mathbb{R}^{3}\\) is the accelerometer's bias, and \\(\\mathbf{b}^{\\boldsymbol{\\omega}}\\in\\mathbb{R}^{3}\\) is the gyroscope's bias.
Measurements \\(\\hat{\\mathbf{a}}\\) and \\(\\hat{\\mathbf{\\omega}}\\) from an IMU are modeled as \\[\\hat{\\mathbf{a}}_{i} =(\\mathbf{a}_{i}-\\mathbf{g})+\\mathbf{b}^{\\mathbf{a}}_{i}+\\mathbf{n}^{\\mathbf{a}}_{i}\\;, \\tag{2}\\] \\[\\hat{\\mathbf{\\omega}}_{i} =\\mathbf{\\omega}_{i}+\\mathbf{b}^{\\boldsymbol{\\omega}}_{i}+\\mathbf{n}^{\\boldsymbol{\\omega}}_{i}\\;, \\tag{3}\\] and indexed by \\(i=1,\\ldots,M\\) for \\(M\\) measurements between clock times \\(t_{k-1}\\) and \\(t_{k}\\). With some abuse of notation, indices \\(k\\) and \\(i\\) occur at LiDAR and IMU rate, respectively, and will be written this way for simplicity unless otherwise stated. The measured values \\(\\hat{\\mathbf{a}}_{i}\\) and \\(\\hat{\\mathbf{\\omega}}_{i}\\) contain bias \\(\\mathbf{b}_{i}\\) and white noise \\(\\mathbf{n}_{i}\\), and \\(\\mathbf{g}\\) is the rotated gravity vector. In this work, we address the following problem: given a distorted point cloud \\(\\mathcal{P}_{k}\\) from a LiDAR and \\(\\mathbf{a}_{i}\\) and \\(\\mathbf{\\omega}_{i}\\) from an IMU, estimate the robot's state \\(\\hat{\\mathbf{X}}^{\\mathcal{W}}_{i}\\) and the geometric map \\(\\hat{\\mathcal{M}}^{\\mathcal{W}}_{k}\\).

### _Sensor Data Preprocessing_

The inputs to DLIOM are a dense 3D point cloud collected by a modern 360\\({}^{\\circ}\\) mechanical LiDAR, such as an Ouster or a Velodyne (10-20Hz), in addition to time-synchronized linear acceleration and angular velocity measurements from a 6-axis IMU at a much higher rate (100-500Hz). To minimize information loss, we do not preprocess the point cloud except for a box filter of size 1m\\({}^{3}\\) around the origin which removes points that may be from the robot itself, and a light voxel filter for higher resolution clouds. This distinguishes our work from others that either attempt to detect features (e.g., corners, edges, and/or surfels) or aggressively downsample the input cloud. On average, the point clouds used in this work on our custom platform contained \\(\\sim\\)16,000 points per scan. In addition, prior to downstream tasks, all sensor data is transformed to be in \\(\\mathcal{R}\\) located at the robot's center of gravity via extrinsic calibration. For LiDAR, each acquired scan is rotated and shifted via \\({}^{\\mathcal{R}}_{\\mathcal{L}}\\mathbf{T}\\in\\mathbb{SE}(3)\\) such that \\([\\,p^{\\mathcal{R}}\\;1\\,]^{\\top}={}^{\\mathcal{R}}_{\\mathcal{L}}\\mathbf{T}[\\,p^{\\mathcal{L}}\\;1\\,]^{\\top}\\) for each point in the scan. For IMU, effects of displacing linear acceleration measurements on a rigid body must be considered if the sensor is not located exactly at the center of gravity.
This is compensated for by considering all contributions to the linear acceleration at \\(\\mathcal{R}\\) that arise from the displacement between the IMU and the center of gravity, such that for raw linear acceleration \\(\\mathbf{a}^{\\mathcal{B}}_{i}\\) measured in the IMU's frame, the corresponding linear acceleration in frame \\(\\mathcal{R}\\) is, assuming a constant displacement, \\[\\hat{\\mathbf{a}}^{\\mathcal{R}}_{i}=\\hat{\\mathbf{a}}^{\\mathcal{B}}_{i}+\\left[\\dot{\\hat{\\mathbf{\\omega}}}^{\\mathcal{R}}_{i}\\times\\,{}^{\\mathcal{R}}_{\\mathcal{B}}\\mathbf{t}\\right]+\\left[\\hat{\\mathbf{\\omega}}^{\\mathcal{R}}_{i}\\times\\left(\\hat{\\mathbf{\\omega}}^{\\mathcal{R}}_{i}\\times{}^{\\mathcal{R}}_{\\mathcal{B}}\\mathbf{t}\\right)\\right] \\tag{4}\\] where \\({}^{\\mathcal{R}}_{\\mathcal{B}}\\mathbf{t}\\) is the translational offset from \\(\\mathcal{B}\\) to \\(\\mathcal{R}\\), and \\(\\hat{\\mathbf{\\omega}}^{\\mathcal{R}}_{i}\\) is \\(\\hat{\\mathbf{\\omega}}^{\\mathcal{B}}_{i}\\) rotated only into the axis convention of \\(\\mathcal{R}\\), since angular velocity is equivalent at all points on a rigid body.

### _Continuous-Time Motion Correction with Integrated Prior_

Point clouds from spinning LiDAR sensors suffer from motion distortion during movement due to the rotating laser array collecting points at different instants during a sweep. Rather than assuming simple motion (i.e., constant velocity) during a sweep, which may not accurately capture fine movement, we instead use a more accurate constant jerk and angular acceleration model to compute a unique transform for each point via a two-step coarse-to-fine propagation scheme. This strategy aims to minimize the errors that arise due to the sampling rate of the IMU and the time offset between IMU and LiDAR point measurements.

Fig. 3: **Coarse-to-Fine Point Cloud Deskewing.** A distorted point \\(p^{\\mathcal{C}_{0}}\\) (A) is deskewed through a two-step process which first integrates IMU measurements between scans, then solves for a unique transform in continuous-time (C) for the original point which deskews \\(p^{\\mathcal{C}_{0}}\\) to \\(p^{\\star}\\) (B).

The trajectory throughout a sweep is first coarsely constructed through numerical IMU integration, which is subsequently refined by solving a set of analytical continuous-time equations in \\(\\mathcal{W}\\) (Fig. 3). Let \\(t_{k}\\) be the clock time of the received point cloud \\(\\mathcal{P}_{k}^{\\mathcal{R}}\\) with \\(N\\) number of points, and let \\(t_{k}+\\Delta t_{k}^{n}\\) be the timestamp of a point \\(p_{k}^{n}\\) in the cloud.
To approximate each point's location in \\(\\mathcal{W}\\), we first integrate IMU measurements between \\(t_{k-1}\\) and \\(t_{k}+\\Delta t_{k}^{N}\\) via \\[\\begin{split}\\hat{\\mathbf{p}}_{i}&=\\hat{\\mathbf{p}}_{i-1}+\\hat{\\mathbf{v}}_{i-1}\\Delta t_{i}+\\frac{1}{2}\\hat{\\mathbf{R}}(\\hat{\\mathbf{q}}_{i-1})\\hat{\\boldsymbol{a}}_{i-1}\\Delta t_{i}^{2}+\\frac{1}{6}\\hat{\\boldsymbol{j}}_{i}\\Delta t_{i}^{3}\\,,\\\\ \\hat{\\mathbf{v}}_{i}&=\\hat{\\mathbf{v}}_{i-1}+\\hat{\\mathbf{R}}(\\hat{\\mathbf{q}}_{i-1})\\hat{\\boldsymbol{a}}_{i-1}\\Delta t_{i}\\,,\\\\ \\hat{\\mathbf{q}}_{i}&=\\hat{\\mathbf{q}}_{i-1}+\\frac{1}{2}(\\hat{\\mathbf{q}}_{i-1}\\otimes\\hat{\\boldsymbol{\\omega}}_{i-1})\\Delta t_{i}+\\frac{1}{4}(\\hat{\\mathbf{q}}_{i-1}\\otimes\\hat{\\boldsymbol{\\alpha}}_{i})\\Delta t_{i}^{2}\\,,\\end{split} \\tag{5}\\] for \\(i=1,\\ldots,M\\) for \\(M\\) number of IMU measurements between two scans, where \\(\\hat{\\boldsymbol{j}}_{i}=\\frac{1}{\\Delta t_{i}}(\\hat{\\mathbf{R}}(\\hat{\\mathbf{q}}_{i})\\hat{\\boldsymbol{a}}_{i}-\\hat{\\mathbf{R}}(\\hat{\\mathbf{q}}_{i-1})\\hat{\\boldsymbol{a}}_{i-1})\\) and \\(\\hat{\\boldsymbol{\\alpha}}_{i}=\\frac{1}{\\Delta t_{i}}(\\hat{\\boldsymbol{\\omega}}_{i}-\\hat{\\boldsymbol{\\omega}}_{i-1})\\) are the estimated linear jerk and angular acceleration, respectively. The set of homogeneous transformations \\(\\hat{\\mathbf{T}}_{i}^{\\mathcal{W}}\\in\\mathbb{SE}(3)\\) that correspond to \\(\\hat{\\mathbf{p}}_{i}\\) and \\(\\hat{\\mathbf{q}}_{i}\\) then define the coarse, _discrete_-time trajectory during a sweep. Then, an analytical, _continuous_-time solution from the nearest preceding transformation to each point \\(p_{k}^{n}\\) recovers the point-specific deskewing from \\(\\hat{\\mathbf{T}}_{n}^{\\mathcal{W}*}\\), such that \\[\\begin{split}\\hat{\\mathbf{p}}^{*}(t)&=\\hat{\\mathbf{p}}_{i-1}+\\hat{\\mathbf{v}}_{i-1}t+\\frac{1}{2}\\hat{\\mathbf{R}}(\\hat{\\mathbf{q}}_{i-1})\\hat{\\boldsymbol{a}}_{i-1}t^{2}+\\frac{1}{6}\\hat{\\boldsymbol{j}}_{i}t^{3}\\,,\\\\ \\hat{\\mathbf{q}}^{*}(t)&=\\hat{\\mathbf{q}}_{i-1}+\\frac{1}{2}(\\hat{\\mathbf{q}}_{i-1}\\otimes\\hat{\\boldsymbol{\\omega}}_{i-1})t+\\frac{1}{4}(\\hat{\\mathbf{q}}_{i-1}\\otimes\\hat{\\boldsymbol{\\alpha}}_{i-1})t^{2}\\,,\\end{split} \\tag{6}\\] where \\(i-1\\) and \\(i\\) correspond to the closest preceding and successive IMU measurements, respectively, \\(t\\) is the time elapsed between point \\(p_{k}^{n}\\) and the closest preceding IMU measurement, and \\(\\hat{\\mathbf{T}}_{n}^{\\mathcal{W}*}\\) is the transformation corresponding to \\(\\hat{\\mathbf{p}}^{*}\\) and \\(\\hat{\\mathbf{q}}^{*}\\) for \\(p_{k}^{n}\\) (Fig. 4). Note that (6) is parameterized only by \\(t\\) and therefore a transform can be queried for any desired time to construct a continuous-time trajectory. The result of this two-step procedure is a motion-corrected point cloud that is also approximately aligned with the map in \\(\\mathcal{W}\\), which therefore inherently incorporates the optimization prior used for GICP (Sec. IV-C).
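To make the per-point query of (6) concrete, the following minimal Python sketch evaluates the continuous-time pose at a point's timestamp and applies it to the raw point; the helper and variable names are illustrative and not taken from the released implementation.

```python
import numpy as np

def quat_mult(q, r):
    """Hamilton product of quaternions stored as [w, x, y, z]."""
    w0, x0, y0, z0 = q
    w1, x1, y1, z1 = r
    return np.array([
        w0*w1 - x0*x1 - y0*y1 - z0*z1,
        w0*x1 + x0*w1 + y0*z1 - z0*y1,
        w0*y1 - x0*z1 + y0*w1 + z0*x1,
        w0*z1 + x0*y1 - y0*x1 + z0*w1,
    ])

def quat_to_rot(q):
    """Rotation matrix from a unit quaternion [w, x, y, z]."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def deskew_point(p_body, t, p0, v0, q0, a0_w, jerk_w, w0, alpha0):
    """Continuous-time query of Eq. (6) relative to the nearest preceding IMU state.

    p0, v0, q0 : position, velocity, orientation at that IMU state (world frame)
    a0_w       : world-frame acceleration at that state (gravity removed)
    jerk_w     : estimated linear jerk over the bracketing IMU interval
    w0, alpha0 : angular velocity / acceleration as pure quaternions [0, x, y, z]
    t          : time of the point relative to the preceding IMU state
    """
    # Position and orientation at the point's timestamp (Eq. 6).
    p_t = p0 + v0 * t + 0.5 * a0_w * t**2 + (1.0 / 6.0) * jerk_w * t**3
    q_t = q0 + 0.5 * quat_mult(q0, w0) * t + 0.25 * quat_mult(q0, alpha0) * t**2
    q_t = q_t / np.linalg.norm(q_t)  # re-normalize for numerical safety
    # Transform the raw point into the world frame with its own unique pose.
    return quat_to_rot(q_t) @ p_body + p_t
```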
Importantly, (5) and (6) depend on the accuracy of \\(\\hat{\\mathbf{v}}_{0}^{\\mathcal{W}}\\), the initial estimate of velocity, \\(\\mathbf{b}_{k}^{\\mathbf{a}}\\) and \\(\\mathbf{b}_{k}^{\\boldsymbol{\\omega}}\\), the estimated IMU biases, in addition to an accurate initial body orientation \\(\\hat{\\mathbf{q}}_{0}\\) (to properly compensate for the gravity vector) at the time of motion correction. We therefore emphasize that a key to the reliability of our approach is the _guaranteed global convergence_ of these terms by leveraging a nonlinear geometric observer [42], provided that scan-matching returns an accurate solution.

## IV Robust & Perceptive Localization

### _Slip-Resistant Keyframing via Sensor-Agnostic Degeneracy_

Convergence of (10) into a sub-optimal local minimum can occur when correspondences for GICP plane-to-plane registration are sparse or insufficient. Such weak data correspondences can subsequently lead to poor or diverging localization due to the estimate "escaping" a shallow gradient around the local minimum. This phenomenon, often referred to as _LiDAR slippage_, often occurs when the surrounding environment is featureless or otherwise geometrically degenerate (e.g., long tunnels or large fields) and is a result of incorrect data association between the source and target point clouds. Ill-constrained optimization problems can also arise in _keyframe-based_ LIO when the extracted submap insufficiently represents the surrounding environment, which results in a low number of data correspondences for scan-to-map matching. This can happen when there is an abrupt change in the environment (e.g., walking through a door or up a stairwell) but there are no nearby keyframes which describe the new environment. While previous works have used the condition number of the Hessian [36, 37, 2] to identify environmental degeneracy, such that \\(\\kappa(\\mathbf{H_{tt}})=\\left|\\lambda_{\\text{max}}(\\mathbf{H_{tt}})\\right|/\\left|\\lambda_{\\text{min}}(\\mathbf{H_{tt}})\\right|\\), this metric informs a system only of _relative_ slippage with respect to the most constrained and least constrained directions of the problem. The condition number is affected by the size of the environment and the density of points, and therefore a more robust approach is to compute a more consistent metric of degeneracy across differently-sized environments and sensor configurations. This enables a global, sensor-agnostic metric which we use to detect when a new keyframe should be inserted into the environment.

Fig. 4: **Continuous-Time Motion Correction.** For each point in a cloud, a unique transform is computed by solving a set of closed-form motion equations parameterized solely by its timestamp to provide accurate, point-wise continuous-time motion correction.

Fig. 5: **Sensor-Agnostic Degeneracy.** Uncertainty ellipsoids (purple) for each keyframe computed using our generalized degeneracy metric in (A & B) outdoor environments, (C) a narrow hallway, and (D) through a doorway. Our metric is global in that the ellipsoids are consistent in size in both indoor and outdoor environments; our metric is also sensor-agnostic in that it accounts for the density of the cloud (which can vary across different LiDAR sensors and voxelization leaf sizes). Note that these ellipsoids are usually on the millimeter scale but have been enlarged for visualization clarity.
Let \\(\\mathcal{C}\\) be the set of all corresponding points between \\(\\mathcal{\\hat{P}}_{k}^{\\mathcal{W}}\\) and \\(\\mathcal{\\hat{S}}_{k}^{\\mathcal{W}}\\), and therefore \\(|\\mathcal{C}|\\) is the total number of corresponding points, and let \\(\\mathcal{E}\\) be the total error between all correspondences after convergence of a nonlinear least squares solver (such as Levenberg-Marquardt) as described previously in (10). Also, let \\(\\mathbf{H}\\in\\mathbb{R}^{6\\times 6}\\) be the Hessian of GICP, and let \\(\\mathbf{H_{tt}}\\in\\mathbb{R}^{3\\times 3}\\) be the submatrix corresponding to the translational portion of \\(\\mathbf{H}\\), with eigenvalues \\(\\lambda_{\\text{max}}(\\mathbf{H_{tt}})\\geq\\cdots\\geq\\lambda_{\\text{min}}(\\mathbf{H_{tt}})\\) which provide information regarding the local gradient of the nonlinear optimization after convergence. Note that \\(\\mathbf{H}\\approx\\mathbf{J}^{\\top}\\mathbf{J}\\) for computational efficiency and \\(\\mathbf{J}\\) is the Jacobian. Then, the _global degeneracy_ \\(d_{k}\\) of the system is the maximum value after scaling each of the eigenvalues \\(\\lambda(\\mathbf{H_{tt}})\\), such that \\[d_{k}=\\text{max}\\left[\\,\\frac{m_{k}^{2}}{\\lambda(\\mathbf{H_{tt}})\\,\\,\\sqrt{z_{k}}}\\,\\right]\\,, \\tag{7}\\] where \\(m_{k}\\) is the computed _spaciousness_ [20], defined as \\(m_{k}=\\alpha m_{k-1}+\\beta M_{k}\\), where \\(M_{k}\\) is the median Euclidean point distance from the origin to each point in the preprocessed point cloud (with constants \\(\\alpha\\) = 0.95, \\(\\beta\\) = 0.05), and \\(z_{k}\\) is the cloud _sparsity_ as defined in (11). This degeneracy is computed for each incoming scan and saved in-memory for each new keyframe. If the difference between the current degeneracy and the degeneracy at the location of the previous keyframe is sufficiently large, a new keyframe is inserted to provide the scan-to-map module with new information. The intuition behind (7) lies in how each scaling factor (i.e., \\(m_{k}\\) and \\(z_{k}\\)) affects \\(\\lambda(\\mathbf{H_{tt}})\\). In particular, while computing the condition number \\(\\kappa(\\mathbf{H_{tt}})\\) can provide an idea of how ellipsoidal the local gradient is (and therefore how long it may take to converge to a local minimum), an elongated gradient does not necessarily indicate the onset of slippage from being poorly constrained. In other words, \\(\\kappa(\\mathbf{H_{tt}})\\) is a _relative_ metric of how well the optimization problem is constrained, since it only computes the relative ratio between the steepest and shallowest directions. To get a more accurate idea of when slippage may occur, (7) directly looks at (the inverse of) each individual eigenvalue. By rewarding sensors which provide less information about the environment via \\(z_{k}\\), and by penalizing larger environments since measurements are less accurate with increasing distance, these various scaling factors allow \\(d_{k}\\) to be more consistent across different sensors and differently sized environments, which enables a more reliable metric of slip-detection (Fig. 5).
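A minimal sketch of evaluating (7) from a GICP Hessian is shown below; it assumes the translational block occupies the top-left \\(3\\times 3\\) of the Hessian approximation, which depends on the solver's state ordering and is not specified here.

```python
import numpy as np

def global_degeneracy(H, spaciousness, sparsity):
    """Sensor-agnostic degeneracy d_k of Eq. (7).

    H            : 6x6 Hessian approximation (J^T J) from scan-to-map registration
    spaciousness : filtered median point range m_k
    sparsity     : filtered average inter-point distance z_k
    """
    H_tt = H[:3, :3]                    # assumed translational sub-block
    eigvals = np.linalg.eigvalsh(H_tt)  # symmetric matrix, so eigvalsh is appropriate
    return float(np.max(spaciousness**2 / (eigvals * np.sqrt(sparsity))))
```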
### _Submap Generation via 3D Jaccard Index_

A key innovation of the DLIOM algorithm is how it explicitly derives its keyframe-based submap for scan-to-map registration. Ideally, the full history of all observed points would be matched against to ensure that there is no absence of important environmental information during scan-matching. Unfortunately, this is far too computationally intractable due to the sheer number of nearest-neighbor operations required for aligning against such a large map. Whereas previous approaches either naively assume that the closest points in map-space are those which are most relevant, or they implicitly compute keyframe relevancy via nearest neighbor and convex hull extraction in keyframe-space [20], we propose a new method for deriving the local submap that explicitly maximizes coverage between the current scan and the submap by computing the _Jaccard index_ [44] between the current scan and each keyframe. Let the intersection between two point clouds \\(\\mathcal{P}_{1}\\cap\\mathcal{P}_{2}\\) be a set \\(\\mathcal{C}_{1,2}\\) which contains all corresponding points between the two clouds in a common reference frame (within some correspondence distance), and therefore let \\(|\\mathcal{C}_{1,2}|\\) be the total number of corresponding points. In addition, let the union between two point clouds \\(\\mathcal{P}_{1}\\cup\\mathcal{P}_{2}\\) be defined as the set \\(\\mathcal{U}_{1,2}\\) which contains all non-intersecting points between the two point clouds, in addition to the mean of each pair of corresponding points in \\(\\mathcal{C}_{1,2}\\), such that the total number of points in \\(\\mathcal{U}_{1,2}\\) equates to \\[|\\mathcal{U}_{1,2}|=|\\mathcal{P}_{1}|+|\\mathcal{P}_{2}|-|\\mathcal{C}_{1,2}|\\,. \\tag{8}\\] Then, for each newly acquired LiDAR scan at time \\(k\\), we compute the 3D Jaccard index between the scan \\(\\mathcal{\\hat{P}}_{k}^{\\mathcal{W}}\\) and each \\(j^{\\text{th}}\\) keyframe \\(\\mathcal{K}_{j}^{\\mathcal{W}}\\), defined as \\[J(\\mathcal{\\hat{P}}_{k}^{\\mathcal{W}},\\mathcal{K}_{j}^{\\mathcal{W}})=\\frac{|\\mathcal{\\hat{P}}_{k}^{\\mathcal{W}}\\cap\\mathcal{K}_{j}^{\\mathcal{W}}|}{|\\mathcal{\\hat{P}}_{k}^{\\mathcal{W}}\\cup\\mathcal{K}_{j}^{\\mathcal{W}}|}\\,, \\tag{9}\\] or, in other words, the equivalent of the "intersection over union" similarity measurement in the 3D domain. If \\(J(\\mathcal{\\hat{P}}_{k}^{\\mathcal{W}},\\mathcal{K}_{j}^{\\mathcal{W}})\\) surpasses a set threshold for the \\(j^{\\text{th}}\\) keyframe (i.e., a keyframe is sufficiently similar), then that keyframe is included within the submap to be used for scan-to-map registration.

Fig. 6: **Submapping via Jaccard Index.** Submap generation for the scan-to-map stage using the Newer College Dataset Extension - Cloister in Collection 2 [43]. For each newly acquired scan, we compute its Jaccard index against each environmental keyframe (axes) and extract only those which have a significant overlap with the current scan (green circles & white lines). The point clouds associated to the overlapping keyframes are then concatenated, alongside their in-memory covariances, for accurate scan-to-map registration. A threshold of at least \\(20\\%\\) overlap was used in this example.

In contrast to previous methods which derive the submap through a series of heuristics (such as directly retrieving local points within a certain radius of the current position or assuming that nearby keyframes contain relevant points), our method explicitly computes each keyframe's relevancy to the current environment to ensure the scan-to-map optimization is well-constrained with maximum coverage between the scan and the submap.
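For intuition, a small sketch of (8)-(9) on raw point arrays is given below; it uses one-sided nearest-neighbour matching as a simple approximation of the correspondence set \\(\\mathcal{C}_{1,2}\\), and the correspondence distance is a free parameter not specified in the text.

```python
import numpy as np
from scipy.spatial import cKDTree

def jaccard_3d(scan, keyframe, corr_dist):
    """Approximate 3D Jaccard index of Eq. (9) between two (N x 3) point clouds."""
    tree = cKDTree(keyframe)
    d, _ = tree.query(scan, k=1)                     # nearest keyframe point for each scan point
    n_corr = int(np.count_nonzero(d < corr_dist))    # |C_{1,2}|, approximated one-sided
    n_union = len(scan) + len(keyframe) - n_corr     # |U_{1,2}| per Eq. (8)
    return n_corr / max(n_union, 1)
```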
In addition, by using only keyframe scans that contain significant overlap with the current scan, this guarantees that there are no wasted operations when building normals or the kdtree data structure for the submap (Fig. 6). ### _Adaptive Scan-Matching via Cloud Sparsity_ By simultaneously correcting for motion distortion and incorporating the GICP optimization prior into the point cloud, DLIOM can directly perform scan-to-map registration and bypass scan-to-scan required in previous methods. This registration is cast as a nonlinear optimization problem which minimizes the distance of corresponding points/planes between the current scan and an extracted local submap. Let \\(\\hat{\\mathcal{P}}_{k}^{\\mathcal{W}}\\) be the corrected cloud in \\(\\mathcal{W}\\) and \\(\\hat{\\mathcal{S}}_{k}^{\\mathcal{W}}\\) be the extracted submap. Then, the objective of scan-to-map optimization is to find a transformation \\(\\Delta\\hat{\\mathbf{T}}_{k}\\) which better aligns the point cloud, where \\[\\Delta\\hat{\\mathbf{T}}_{k}=\\operatorname*{arg\\,min}_{\\Delta\\mathbf{T}_{k}}\\, \\mathcal{E}\\left(\\Delta\\mathbf{T}_{k}\\hat{\\mathcal{P}}_{k}^{\\mathcal{W}},\\, \\hat{\\mathcal{S}}_{k}^{\\mathcal{W}}\\right)\\,, \\tag{10}\\] such that the GICP residual error \\(\\mathcal{E}\\) is defined as \\[\\mathcal{E}\\left(\\Delta\\mathbf{T}_{k}\\hat{\\mathcal{P}}_{k}^{\\mathcal{W}},\\hat {\\mathcal{S}}_{k}^{\\mathcal{W}}\\right)=\\sum_{c\\in\\mathcal{C}}d_{c}^{\\top}\\left( C_{k,c}^{\\mathcal{S}}+\\Delta\\mathbf{T}_{k}C_{k,c}^{\\mathcal{P}}\\Delta \\mathbf{T}_{k}^{\\top}\\right)^{-1}d_{c}\\,,\\] for a set \\(\\mathcal{C}\\) of corresponding points between \\(\\hat{\\mathcal{P}}_{k}^{\\mathcal{W}}\\) and \\(\\hat{\\mathcal{S}}_{k}^{\\mathcal{W}}\\) at timestep \\(k\\), \\(d_{c}=\\hat{s}_{k}^{c}-\\Delta\\mathbf{T}_{k}\\hat{p}_{k}^{c}\\), \\(\\hat{p}_{k}^{c}\\in\\hat{\\mathcal{P}}_{k}^{\\mathcal{W}}\\), \\(\\hat{s}_{k}^{c}\\in\\hat{\\mathcal{S}}_{k}^{\\mathcal{W}}\\), \\(\\forall c\\in\\mathcal{C}\\), and \\(C_{k,c}^{\\mathcal{P}}\\) and \\(C_{k,c}^{\\mathcal{S}}\\) are the estimated covariance matrices for point cloud \\(\\hat{\\mathcal{P}}_{k}^{\\mathcal{W}}\\) and submap \\(\\hat{\\mathcal{S}}_{k}^{\\mathcal{W}}\\), respectively. Then, following [9], this point-to-plane formulation is converted into a plane-to-plane optimization by regularizing covariance matrices \\(C_{k,c}^{\\mathcal{P}}\\) and \\(C_{k,c}^{\\mathcal{S}}\\) with \\((1,1,\\epsilon)\\) eigenvalues, where \\(\\epsilon\\) represents the low uncertainty in the surface normal direction. The resulting \\(\\Delta\\hat{\\mathbf{T}}_{k}\\) represents an optimal correction transform which better globally aligns the prior-transformed scan \\(\\hat{\\mathcal{P}}_{k}^{\\mathcal{W}}\\) to the submap \\(\\hat{\\mathcal{S}}_{k}^{\\mathcal{W}}\\), so that \\(\\hat{\\mathbf{T}}_{k}^{\\mathcal{W}}=\\Delta\\hat{\\mathbf{T}}_{k}\\hat{\\mathbf{T} }_{M}^{\\mathcal{W}}\\) (where \\(\\hat{\\mathbf{T}}_{M}^{\\mathcal{W}}\\) is the last point's IMU integration) is the globally-refined robot pose which is used for map construction and as the update signal for the nonlinear geometric observer. An important parameter that is often overlooked is the maximum distance at which corresponding points or planes should be considered in the optimization. This parameter is often hand-tuned by the user but should scale with the environmental structure for consistency and computational efficiency. 
For example, in small-scale environments (e.g., a lab room), points in the LiDAR scan are much closer together so a small movement has a small effect on the displacement of a given point. In contrast, the same point in a more open environment (e.g., a point on a distant tree outside) will be displaced farther with a small rotational movement due to a larger distance and therefore needs a greater search radius for correct correspondence matching (Fig. 7). Thus, we set the GICP solver's maximum correspondence search distance between two point clouds according to the "sparsity" of the current scan, defined as \\(z_{k}=\\alpha z_{k-1}+\\beta D_{k}\\), where \\[D_{k}=\\frac{1}{|\\mathcal{P}|N}\\sum_{n=1}^{N}D_{k}^{n} \\tag{11}\\] is the normalized per-point sparsity, \\(D_{k}^{n}\\) is the average Euclidean distance to \\(K\\) nearest neighbors for point \\(n\\), and \\(\\alpha=0.95\\) and \\(\\beta=0.05\\) are smoothing constants to produce \\(z_{k}\\), the filtered signal set as the max correspondence distance. Intuitively, this is the average inter-point distance in the current scan; the larger the environment, the higher the number of sparse points (i.e., points further away), driving this number up. By adapting the corresponding distance according to the sparsity of points, the efficacy of scan-matching can be more consistent across differently-sized environments.

Fig. 7: **Adaptive Scan-Matching via Cloud Sparsity.** For each motion-corrected point cloud, we compute its _sparsity_, defined as the average per-point Euclidean distance across \\(K\\) nearest neighbors (11) (\\(K\\)=\\(5\\) in this example). This metric is used to scale the scan-to-map module's maximum correspondence distance for adaptive registration. A scan within a small-scale environment will contain points much closer together (left), so a small movement will have a small effect on point displacement. On the other hand, a large environment will have points much more spread out (right) and will require a larger search distance during GICP for correct data association.
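A minimal sketch of the sparsity computation and its smoothing, following the normalization exactly as printed in (11), is given below; it is illustrative only and not the system's implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_sparsity(points, k=5):
    """Per-scan sparsity D_k of Eq. (11) for an (N x 3) point array."""
    tree = cKDTree(points)
    d, _ = tree.query(points, k=k + 1)     # first column is the point itself (distance 0)
    per_point = d[:, 1:].mean(axis=1)      # D_k^n: mean distance to the K nearest neighbors
    return per_point.mean() / len(points)  # 1/(|P| N) * sum_n D_k^n, as written in (11)

def filtered_sparsity(z_prev, D_k, alpha=0.95, beta=0.05):
    """Smoothed signal z_k = alpha * z_{k-1} + beta * D_k used as the GICP
    maximum correspondence distance."""
    return alpha * z_prev + beta * D_k
```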
On the otherhand, a large environment will have points much more spread out (right) and will require a larger search distance during GICP for correct data association. measured poses are \\(\\textbf{q}_{e}\\coloneqq(q_{e}^{o},\\ \\tilde{q}_{e})=\\hat{\\textbf{q}}_{i}^{*} \\otimes\\hat{\\textbf{q}}_{i}\\) and \\(\\textbf{p}_{e}=\\hat{\\textbf{p}}_{k}-\\hat{\\textbf{p}}_{i}\\), then the attitude update takes the form of \\[\\begin{split}\\hat{\\textbf{q}}_{i}\\ \\leftarrow\\hat{\\textbf{q}}_{i}\\ +\\Delta t_{k}^{+}\\,\\gamma_{1}\\,\\hat{\\textbf{q}}_{i}\\otimes\\left[\\begin{array} []{c}1-|q_{e}^{o}|\\\\ \\mathrm{sgn}(q_{e}^{o})\\,\\tilde{q}_{e}\\end{array}\\right]\\,,\\\\ \\hat{\\textbf{b}}_{i}^{\\,\\omega}\\ \\leftarrow\\hat{\\textbf{b}}_{i}^{\\,\\omega}\\ -\\Delta t_{k}^{+}\\,\\gamma_{2}\\,q_{e}^{o}\\tilde{q}_{e}\\,,\\end{split} \\tag{12}\\] and the translational update as \\[\\begin{split}\\hat{\\textbf{p}}_{i}\\ \\leftarrow\\hat{\\textbf{p}}_{i}\\ +\\Delta t_{k}^{+}\\,\\gamma_{3}\\,\\textbf{p}_{e}\\,,\\\\ \\hat{\\textbf{v}}_{i}\\ \\leftarrow\\hat{\\textbf{v}}_{i}\\ +\\Delta t_{k}^{+}\\, \\gamma_{4}\\,\\textbf{p}_{e}\\,,\\\\ \\hat{\\textbf{b}}_{i}^{\\,a}\\ \\leftarrow\\hat{\\textbf{b}}_{i}^{\\,a}\\ -\\Delta t_{k}^{+}\\, \\gamma_{5}\\,\\hat{\\textbf{R}}(\\hat{\\textbf{q}}_{i})^{\\top}\\textbf{p}_{e}\\,.\\end{split} \\tag{13}\\] Note that state correction is hierarchical as the attitude update (12) is completely decoupled from the translation update (13). Also, this is a fully nonlinear update which allows one to guarantee the state estimates are accurate enough to directly perform scan-to-map registration solely with an IMU prior without the need for scan-to-scan. ## V Connective Mapping Factor graphs are widely used in SLAM as they are a powerful tool to estimate a system's full state by combining pose estimates from various modalities via _pose graph optimization_[13]. Such works model relative pose constraints as a maximum a posteriori (MAP) estimation problem with a Gaussian noise assumption, and they typically view mapping as an afterthought as a result of refining the trajectory. However, such a unimodal noise model is far too simplistic for the complex uncertainty distribution that can arise from LiDAR scan-matching and IMU pre-integration. Moreover, graph-based optimization for odometry possesses minimal convergence guarantees and can often result in significant localization error and map deformation from inconsistent sensor fusion. To this end, we instead employ a factor graph not for odometry (which is instead handled by our geometric observer), but rather to explicitly represent the environment. A new node is added to the graph for every incoming keyframe (as determined by DLIOM's odometry thread), and various factors between nodes contribute to the global consistency of the map (Fig. 8). In addition to the factors detailed in this section, we additionally add a gravity factor to locally constrain the direction of each keyframe as described in [45]. ### _Connective Keyframe Factors_ Relative constraints between nodes in a factor graph are typically added sequentially (i.e., factors are added between adjacent nodes [13]), but the relationship between non-adjacent keyframes can provide additional information to the graph which helps to create a more accurate and globally consistent map. These additional constraints are what we call _connective_ factors, which are determined by pairs of keyframes having sufficient overlap in \\(\\mathcal{W}\\) as computed in Sec. IV-B. 
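As a rough illustration of how lightweight this correction is, the sketch below applies one step of (12)-(13). It assumes a unit quaternion in \\((w,x,y,z)\\) order and an illustrative state layout; the gain values and the explicit quaternion renormalization are practical choices of this sketch, not values taken from the paper.

```python
import numpy as np

def quat_mult(a, b):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([aw*bw - ax*bx - ay*by - az*bz,
                     aw*bx + ax*bw + ay*bz - az*by,
                     aw*by - ax*bz + ay*bw + az*bx,
                     aw*bz + ax*by - ay*bx + az*bw])

def quat_to_rot(q):
    """Rotation matrix of a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return np.array([[1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
                     [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
                     [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def observer_step(state, q_meas, p_meas, dt, gains=(1.0, 1.0, 1.0, 1.0, 1.0)):
    """One hierarchical correction step in the spirit of (12)-(13).
    state: dict with q (unit quaternion), p, v, bw (gyro bias), ba (accel bias),
    all holding IMU-propagated estimates; q_meas/p_meas come from scan-to-map."""
    g1, g2, g3, g4, g5 = gains
    q, p, v, bw, ba = state["q"], state["p"], state["v"], state["bw"], state["ba"]

    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    qe = quat_mult(q_conj, q_meas)            # error quaternion q_e
    qe_w, qe_vec = qe[0], qe[1:]
    pe = p_meas - p                           # position error p_e
    R = quat_to_rot(q)                        # attitude estimate R(q_i)

    # Attitude update (12), decoupled from translation:
    corr = np.concatenate(([1.0 - abs(qe_w)], np.sign(qe_w) * qe_vec))
    q_new = q + dt * g1 * quat_mult(q, corr)
    q_new /= np.linalg.norm(q_new)            # keep the quaternion unit-norm
    bw_new = bw - dt * g2 * qe_w * qe_vec

    # Translation update (13):
    p_new = p + dt * g3 * pe
    v_new = v + dt * g4 * pe
    ba_new = ba - dt * g5 * (R.T @ pe)

    return {"q": q_new, "p": p_new, "v": v_new, "bw": bw_new, "ba": ba_new}
```

Each correction is a handful of vector operations, which is why the observer adds negligible cost on top of scan-matching.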
## V Connective Mapping

Factor graphs are widely used in SLAM as they are a powerful tool to estimate a system's full state by combining pose estimates from various modalities via _pose graph optimization_ [13]. Such works model relative pose constraints as a maximum a posteriori (MAP) estimation problem with a Gaussian noise assumption, and they typically view mapping as an afterthought that results from refining the trajectory. However, such a unimodal noise model is far too simplistic for the complex uncertainty distribution that can arise from LiDAR scan-matching and IMU pre-integration. Moreover, graph-based optimization for odometry possesses minimal convergence guarantees and can often result in significant localization error and map deformation from inconsistent sensor fusion. To this end, we instead employ a factor graph not for odometry (which is instead handled by our geometric observer), but rather to explicitly represent the environment. A new node is added to the graph for every incoming keyframe (as determined by DLIOM's odometry thread), and various factors between nodes contribute to the global consistency of the map (Fig. 8). In addition to the factors detailed in this section, we also add a gravity factor to locally constrain the direction of each keyframe as described in [45].

### _Connective Keyframe Factors_

Relative constraints between nodes in a factor graph are typically added sequentially (i.e., factors are added between adjacent nodes [13]), but the relationship between non-adjacent keyframes can provide additional information to the graph which helps to create a more accurate and globally consistent map. These additional constraints are what we call _connective_ factors, which are determined by pairs of keyframes having sufficient overlap in \\(\\mathcal{W}\\) as computed in Sec. IV-B. That is, for \\(K\\) total keyframes, the connectivity between \\(K_{i}^{\\mathcal{W}}\\) and \\(K_{j}^{\\mathcal{W}}\\) is defined as the 3D Jaccard index (9) and encoded in a symmetric matrix \\(\\textbf{C}\\in\\mathbb{R}^{K\\times K}\\), such that \\[\\textbf{C}_{ij}=\\begin{cases}\\dfrac{|\\hat{\\mathcal{K}}_{i}^{\\mathcal{W}}\\cap\\hat{\\mathcal{K}}_{j}^{\\mathcal{W}}|}{|\\hat{\\mathcal{K}}_{i}^{\\mathcal{W}}\\cup\\hat{\\mathcal{K}}_{j}^{\\mathcal{W}}|}&\\text{for }i>j\\\\ \\dfrac{|\\hat{\\mathcal{K}}_{j}^{\\mathcal{W}}\\cap\\hat{\\mathcal{K}}_{i}^{\\mathcal{W}}|}{|\\hat{\\mathcal{K}}_{j}^{\\mathcal{W}}\\cup\\hat{\\mathcal{K}}_{i}^{\\mathcal{W}}|}&\\text{for }i<j\\\\ 1&\\text{for }i=j\\end{cases} \\tag{14}\\] where the diagonal contains all 1's by definition of (9). A new factor is added between two keyframes if \\(\\textbf{C}_{ij}\\) is above a set threshold, and the noise for this factor is computed as \\(\\zeta(1-\\textbf{C}_{ij})\\), where \\(\\zeta\\) is a tunable scaling parameter that controls the strength of the environmental graph (Fig. 9).
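A simple way to approximate the keyframe-to-keyframe overlap in (14) is to compare the sets of voxels occupied by each keyframe cloud, as in the sketch below, which also proposes connective factors above a hypothetical threshold with noise scaled by \\(\\zeta(1-\\textbf{C}_{ij})\\). The voxel size, threshold, and \\(\\zeta\\) values are illustrative and are not the paper's settings.

```python
import numpy as np

def voxel_set(points, voxel=0.25):
    """Approximate a point cloud by the set of voxels it occupies; set overlap
    then approximates the intersection-over-union (3D Jaccard index) of (9)/(14)."""
    return set(map(tuple, np.floor(points / voxel).astype(np.int64)))

def jaccard(cloud_a, cloud_b, voxel=0.25):
    A, B = voxel_set(cloud_a, voxel), voxel_set(cloud_b, voxel)
    union = len(A | B)
    return len(A & B) / union if union else 0.0

def connectivity_matrix(keyframes, voxel=0.25):
    """Symmetric K x K matrix C of pairwise keyframe overlap, as in (14)."""
    K = len(keyframes)
    C = np.eye(K)
    for i in range(K):
        for j in range(i):
            C[i, j] = C[j, i] = jaccard(keyframes[i], keyframes[j], voxel)
    return C

def connective_factors(C, threshold=0.25, zeta=1.0):
    """Propose factors between keyframe pairs whose overlap exceeds the threshold;
    each factor's noise scales as zeta * (1 - C_ij)."""
    return [(i, j, zeta * (1.0 - C[i, j]))
            for i in range(C.shape[0]) for j in range(i)
            if C[i, j] > threshold]
```

The same overlap measure can also serve the candidate selection described in the loop-closure procedure below, since both rely on how much two clouds share in \\(\\mathcal{W}\\).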
### _Keyframe-Based Loop Closures_

Ideally, place recognition modules would search across all seen data for loop closures and add corresponding graph factors accordingly. However, storing all historical scans is computationally infeasible, so scans are stored in-memory incrementally as _keyframes_. In this lens, such keyframes can be understood as the subset of all historical point clouds which maximize information about the environment and contain data about the most significant locations. However, individual scans can be quite sparse (depending on the selected sensor) and may not contain enough points for data association for accurate detection via scan-matching. Therefore, rather than iterating through all keyframes individually or reusing the submap constructed from the frontend, which is only optimal for odometry [23], we instead build an additional submap optimized for the backend mapper which consists of all candidate loop closure keyframes.

Fig. 8: **Keyframe-based Factor Graph Mapping.** Our mapper adds a node to its factor graph for each new keyframe and adds relative constraints through either sequential factors (yellow), connectivity factors (blue), gravity factors (green), or loop closure factors (purple). Sequential factors provide a strong "skeleton" for the graph with low uncertainty between adjacent keyframes, while connectivity factors scale depending on the overlap between pairs of keyframes. Loop closure factors enable global consistency after long-term drift from pure odometry.

After adding the new keyframe to the factor graph and the associated connectivity constraints as described above, prior to optimization we search for and perform loop closure detection through a three-step process. First, we extract keyframes that are within some radius of the current position, in addition to those that contain some overlap with the current keyframe. The corresponding point clouds are then concatenated into a _loop cloud_ \\(\\mathcal{L}_{k}^{\\mathcal{W}}\\) and transformed back into \\(\\mathcal{R}\\), and GICP scan matching is performed between this and the new keyframe. If the fitness score between \\(\\hat{\\mathcal{K}}_{k}^{\\mathcal{R}}\\) and \\(\\mathcal{L}_{k}^{\\mathcal{R}}\\) is sufficiently low (i.e., the average Euclidean error across all corresponding points is small), a new loop-closure factor is added between the current keyframe and the closest keyframe in \\(\\mathcal{W}\\) used to build the loop cloud. Crucially, the registration is primed with a prior equal to the distance between these two keyframes in \\(\\mathcal{W}\\) with a sufficiently large plane-to-plane search distance. This process is fast since we do not rebuild the covariances required for GICP, as individual keyframe normals are concatenated instead [20]. This idea of reconstructing a submap for loop closure detection can easily be extended to using other place recognition modules (e.g., [39, 40]) for further robustness.

## VI Algorithmic Implementation

This section highlights three important implementation details of our system for small lightweight platforms: sensor time synchronization, resource management for consistent computational load, and velocity-consistent loop closures.

### _Sensor Synchronization_

Time synchronization is a critical element in odometry algorithms which utilize sensors that have their own internal clock. This is necessary as it permits time-based data association to temporally align IMU measurements and LiDAR scans. There are three clock sources in DLIOM: one each for the LiDAR, IMU, and processing computer. Hardware-based time synchronization--where the acquisition of a LiDAR scan is triggered from an external source--is not compatible with existing spinning LiDARs, since starting and stopping the rotation assembly can lead to inconsistent data acquisition and timing. As a result, we developed a software-based approach that compensates for the offset between the LiDAR (IMU) clock and the processing computer clock. When the first LiDAR (IMU) packet is received, the processing computer records its current clock time \\({}^{c}t_{0}\\) and the time the measurement was taken on the sensor \\({}^{s}t_{0}\\). Then, each subsequent \\(k^{\\text{th}}\\) measurement has a time \\({}^{c}t_{k}\\) with respect to the processing computer clock given by \\({}^{c}t_{k}={}^{c}t_{0}+({}^{s}t_{k}-{}^{s}t_{0})\\), where \\({}^{s}t_{k}\\) is the time the measurement was taken on the sensor. This approach was found to work well in practice despite its inability to observe the transportation delay of sending the first measurement over the wire. The satisfactory performance was attributed to using the elapsed _sensor_ time to compute the compensated measurement time, since a sensor's clock is generally more accurate than that of the processing computer.

### _Submap Multithreading_

Fast and consistent computation time is essential for ensuring that incoming LiDAR scans are not dropped, especially on resource-constrained platforms. To this end, DLIOM offloads work not immediately relevant to the current scan to a separate thread which minimally interferes with its parent thread as it handles further incoming scans. Thus, the main point cloud processing thread has lower, more consistent computation times. The secondary thread builds the local submap kdtree used for scan-matching and builds data structures corresponding to each keyframe which are needed by the submap.
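A minimal sketch of this background arrangement is shown below; `build_structures` stands in for the kdtree and covariance construction of a real implementation, and the threading layout is only meant to illustrate keeping the main scan thread unblocked while per-keyframe work is cached and reused.

```python
import threading
import numpy as np

class SubmapBuilder:
    """Sketch of the background submap thread: per-keyframe structures are cached
    so that assembling a new submap mostly reuses prior work."""

    def __init__(self):
        self.cache = {}            # keyframe id -> cached per-keyframe structures
        self.submap = None         # most recently assembled submap
        self._lock = threading.Lock()

    def build_structures(self, points):
        # Placeholder for kdtree + normal/covariance construction.
        return {"points": np.asarray(points)}

    def request(self, keyframes):
        """keyframes: dict {id: (N,3) array} of keyframes selected for the submap."""
        worker = threading.Thread(target=self._assemble, args=(dict(keyframes),),
                                  daemon=True)
        worker.start()             # main thread keeps accepting new LiDAR scans
        return worker

    def _assemble(self, keyframes):
        parts = []
        for kf_id, pts in keyframes.items():
            if kf_id not in self.cache:          # reuse cached per-keyframe work
                self.cache[kf_id] = self.build_structures(pts)
            parts.append(self.cache[kf_id]["points"])
        merged = np.vstack(parts) if parts else np.empty((0, 3))
        with self._lock:
            self.submap = merged                 # swapped in for the next registration
```

Because the submap changes far more slowly than the sensor rate, a submap that arrives a few scans late (as in this pattern) has little effect on registration quality.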
Speed of the submap building process is additionally increased by saving in-memory the computed kdtrees for each keyframe in order to quickly compute the Jaccard index of each keyframe, making that process negligibly different from an implicit nearest-neighbor keyframe search. This thread can finish at any time without affecting DLIOM's ability to accept new LiDAR scans. Additionally, it periodically checks if the main thread is running so it can pause itself and free up resources that may be needed by the highly-parallelized scan-matching algorithm. Crucially, the submap changes at a much slower rate than the LiDAR sensor rate, and there is no strict deadline for when the new submap must be built. Therefore, the effect of this thread--which is to occasionally delay a new submap by a few scan iterations--has negligible impact on performance.

### _Velocity-Consistent Loop Closures_

Although our odometry module constructs a submap of relevant keyframes, these keyframes are pulled directly from the globally optimized map. Therefore, we must be careful with how the state and keyframes get updated upon loop closure detection. In particular, the estimated position within the map will jump instantaneously, but a _continuous_ trajectory would be beneficial for control and require fewer updates in the odometry module. We therefore allow the mapping module to perform this instantaneous update and maintain the robot pose in the _map_ frame, but establish an offset from the _odometry_ frame so that the robot pose and latest keyframe pose never jump. After an update, keyframes which have shifted must have their point clouds transformed in the odometry module; this is executed in a background thread, with submap keyframes being prioritized.

Fig. 9: **Environmental Connectivity.** Example of increasing graph strength (left to right) by reducing the threshold for connective factors. A weak graph (left) is less locally accurate but allows for more compliance when adding loop closures to the graph, while a strong graph (right) is more locally accurate from its higher number of inter-keyframe factors, which are computed according to keyframe-to-keyframe overlap.

## VII Results

In this section, we first provide an analysis of each proposed contribution to convince the reader that our core innovations are reasonable for improving the accuracy and resiliency of localization and mapping. Then, to validate our methods and system as a whole, DLIOM's accuracy and efficiency were compared against several current state-of-the-art and open-source systems. These include three LO algorithms, namely DLO [20], CT-ICP [46], and KISS-ICP [35], and three LIO algorithms, namely LIO-SAM [13], FAST-LIO2 [21], and DLIO [6]. We use the entirety of two public benchmark datasets, in addition to a self-collected dataset around a university campus, to compare the algorithms. These benchmark datasets include the Newer College dataset [47] and its extension, the Newer College Extension dataset [43]. Note that some well-known algorithms (e.g., Wildcat [30] and X-ICP [5]) could not be thoroughly compared against due to closed-source implementations and/or custom unreleased datasets; however, Wildcat [30] was briefly evaluated against using the MulRan DCC03 dataset [48], as the authors provide numerical results on this dataset in their manuscript. Finally, we demonstrate the usage of DLIOM in a fully closed-loop flight through several aggressive autonomous maneuvers in a lab setting using our custom aerial platform.
### _Analysis of Components_

#### Vii-A1 Slip-Resistant Keyframing

To showcase the resiliency of our slip-resistant keyframing strategy, which continually monitors scan-matching optimality and places an environmental keyframe during the onset of slippage, we use the Newer College Extension - Stairs dataset [43] and compare against LIO-SAM [13] and FAST-LIO2 [21]. Staircases are notoriously difficult for SLAM algorithms--especially those which are LiDAR-centric--due to the sensors' limited field-of-view in the Z direction. Because of this, tracking can be challenging as there are fewer data points to associate with during ascension. This can be observed in Fig. 10. For LIO-SAM, the algorithm slipped right at the entrance of the stairwell, most likely due to a lack of sufficient features for the feature extraction that the algorithm relies on. For FAST-LIO2, tracking was much better (as the algorithm also performs no feature extraction), but localization was jittery near the apex (e.g., blurry map at the top), and the algorithm completely slipped during descension. However, by detecting the onset of slippage, DLIOM can actively place new keyframes to continually anchor itself through space and therefore allow for tracking in challenging scenarios. This can be further seen in Fig. 1(D), in which our method was able to track through eight flights of stairs, while all other algorithms failed.

Fig. 10: **Slip-Resistant Localization.** Comparison of maps and trajectories generated by (A) LIO-SAM [13], (B) FAST-LIO2 [21], and (C) our method, using the Newer College Extension - Stairs dataset [43]. For (A), we observed slippage right after entering the stairwell, while for (B), tracking was shaky during ascension (e.g., blurry map), with it slipping after descension at the bottom. For (C), our keyframe placement (white nodes) allowed our algorithm to track sufficiently both during ascension and descension, constructing a clear map and accurate trajectory.

#### Vii-A2 Jaccard Submapping

The efficacy of our submapping strategy, which directly computes each environmental keyframe's relevancy (i.e., overlap) through a computed 3D Jaccard index and subsequently extracts those which are most useful for scan-matching, is compared against a naive, implicit method. More specifically, the naive approach extracts keyframes which are spatially nearby and those which construct the convex hull of keyframes. First proposed in [20], this strategy implicitly assumes that these keyframes are the best for globally aligning the current scan. However, as seen in Table I, which compares trajectory error between the naive method ("NN + Convex") and our Jaccard method ("Jaccard Index") using the Newer College Extension - Cloister dataset (Fig. 6), this may not extract the most relevant submap for scan-to-map alignment and can be detrimental to accuracy and computational complexity. Heuristically extracting such keyframes risks using point clouds which are not used for scan-to-map, and therefore adds unnecessary operations during kdtree or covariance structure building. In contrast, an explicit extraction of the most useful keyframes provides scan-to-map a more practical set of keyframes to align with, ultimately helping with data association for GICP and reducing computational waste on keyframes which may not be used at all for registration. This applies for all scenarios, such as ascension of a staircase, where nearby keyframes may have zero overlap but would be extracted using the nearest-neighbor method.

#### Vii-A3 Connective Mapping

Finally, we verify the effectiveness of adding connective factors between keyframes in our factor graph mapper. These factors provide additional constraints between overlapping nodes to better locally constrain each keyframe relative to one another, which can help with mapping accuracy after loop closure. We compared overall cloud-to-cloud distance to ground truth between a map generated with connectivity factors between keyframes, and a map generated only with sequential ("odometry"-like) factors. The Newer College Extension - Maths (H) dataset was used in this experiment, and ground truth was provided by a high-grade Leica BLK360 laser scanner. Cloud-to-cloud error was computed using the CloudCompare application [49] after manual registration. All clouds were voxelized with a leaf size of 0.1m to provide a fair comparison, and a maximum threshold of 1m was set for computing the average error to filter non-overlapping regions. Without connective factors, DLIOM's output map had a mean cloud-to-cloud distance to ground truth of 0.3285 \\(\\pm\\) 0.2289; however, with connective factors, this cloud-to-cloud distance reduced down to 0.2982 \\(\\pm\\) 0.2214. These connective factors between overlapping nodes can create a more accurate map after graph optimization by providing additional constraints for loop closures.

#### Vii-A4 Adaptive Scan-Matching

Next, we compared our adaptive scan-matching technique, which scales the GICP maximum correspondence distance according to scan sparsity, against two statically-set thresholds. Specifically, we compared against a static correspondence distance of \\(0.25\\), which is typically optimal for smaller environments (since points are closer together), and to the trajectory from a static correspondence distance of \\(1.0\\), which is more reasonable for larger, outdoor environments. We used the Newer College - Short Experiment dataset for this comparison, as it features three different sections of varying sizes ("Quad", "Mid-Section", and "Parkland"). The results are shown in Fig. 11, in which we observed our adaptive thresholding scheme to perform the best, followed by a static threshold of \\(1.0\\), and finally a threshold of \\(0.25\\), which performed the worst amongst the three. This is reasonable because the majority of the Short Experiment dataset is in medium and large ("Quad" and "Parkland") scenes, with about 20% of the trajectory in the smaller "Mid-Section." Because of this, a larger threshold would, on average, perform better than a smaller threshold (RMSE of 0.3810 \\(\\pm\\) 0.1063 versus 0.4140 \\(\\pm\\) 0.1167), but not as well as one that adapts to provide the best of both worlds (0.3571 \\(\\pm\\) 0.0971).

#### Vii-A5 Comparison of Motion Correction

To investigate the impact of our proposed motion correction scheme, we first conducted an ablation study with varying degrees of deskewing using the Newer College dataset [47]. Each of the tested algorithms employs a different degree and method of motion compensation, therefore creating an exhaustive comparison to the current state-of-the-art. To isolate our new additions since our previous work, we used DLIO [6] in this study, ranging from no motion correction (None), to correction using only nearest IMU integration via (5) (Discrete), and finally to full continuous-time motion correction via both (5) and (6) (Continuous) (Table II).
Particularly of note is the Dynamic dataset, which contained highly aggressive motions with rotational speeds up to 3.5 rad/s. With no correction, error was the highest among all algorithms at \\(0.1959\\) RMSE. With partial correction, error significantly reduced due to scan-matching with more accurate and representative point clouds; however, using the full proposed scheme, we observed an error of only \\(0.0612\\) RMSE--the lowest among all tested algorithms. With similar trends for all other datasets, the superior tracking accuracy granted by better motion correction is clear: constructing a unique transform in continuous-time creates a more authentic point cloud than previous methods, which ultimately affects scan-matching and therefore trajectory accuracy. Fig. 12 showcases this empirically: our coarse-to-fine technique constructs point clouds that more accurately represent the environment compared to methods with simple or no motion correction.

Fig. 11: **Adaptive Scan-Matching.** A comparison of absolute pose error on the Newer College - Short Experiment dataset using adaptive and static scan-matching correspondence thresholds. We observed, on average, a lower trajectory error using our adaptive scaling technique as compared to static search thresholds which other methods typically use; this allows for more consistent localization in both small and large environments.

Fig. 12: **Motion Correction.** Distortion caused by rapid movement can severely skew LiDAR scans which affects scan-matching (and therefore localization) accuracy. (A) A point cloud from the Newer College - Dynamic dataset with severe distortion, which causes the bottom walls to misalign. (B) A motion-corrected cloud using our coarse-to-fine scheme which accurately reconstructs the environment. This phenomenon is also observed in our method's output map, in which maps generated from aggressive maneuvers are blurrier without motion correction (C) than those generated with (D).

### _Benchmark Results_

We compare the accuracy and efficiency of DLIOM against six state-of-the-art algorithms using public and self-collected datasets. Aside from extrinsics, default parameters at the time of writing for each algorithm were used in all experiments unless otherwise noted. Specifically, loop-closures were kept enabled for LIO-SAM and online extrinsics estimation disabled for FAST-LIO2 to provide the best results of each algorithm. For FAST-LIO2, we reduced the default crop, otherwise it would fail in smaller environments. Deskewing was enabled for KISS-ICP, and for CT-ICP, voxelization was increased and data playback speed was slowed down to 25%, otherwise the algorithm would fail due to significant frame drops. Loop closures were disabled in DLIOM to provide a fairer assessment. Trajectories were compared against the ground truth using evo [51] in TUM [52] format and aligned with the Umeyama algorithm [53] for all public benchmark datasets. Algorithms which did not produce meaningful results are indicated accordingly in the tables, and trajectory lengths for each dataset are indicated in italics to give the reader a sense of duration. All tests were conducted on a 16-core Intel i7-11800H CPU.

#### Vii-B1 Newer College Dataset

Trajectory accuracy and average per-scan time of all algorithms were also compared using the original Newer College benchmark dataset [47]. For these tests, we used data from the Ouster OS1-64 (10Hz) in addition to its internal IMU (100Hz) to ensure accurate time synchronization between sensors.
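For reference, the alignment-plus-RMSE evaluation just described can be sketched as follows: a simplified Umeyama alignment without scale, followed by the translational RMSE. This mirrors the spirit of the evo-based evaluation rather than reproducing its exact implementation, and variable names are illustrative.

```python
import numpy as np

def umeyama_align(est, gt):
    """Rigid (rotation + translation) alignment of an estimated trajectory to
    ground truth, following Umeyama [53] without scale. est, gt: (N, 3) arrays."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    H = (gt - mu_g).T @ (est - mu_e) / est.shape[0]      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # keep a proper rotation
    R = U @ S @ Vt
    t = mu_g - R @ mu_e
    return R, t

def ate_rmse(est, gt):
    """Absolute trajectory error (RMSE of translational residuals) after alignment."""
    R, t = umeyama_align(est, gt)
    err = gt - (est @ R.T + t)
    return np.sqrt((np.linalg.norm(err, axis=1) ** 2).mean())
```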
For certain Newer College datasets, the first 100 poses were excluded from computing FAST-LIO2's RMSE due to slippage at the start, in order to provide a fair comparison. Short, Long, and Parkland experiments were routes recorded at a standard walking pace around several different sections, while Quad and Dynamic featured rapid linear and angular movements. The results are shown in Table II, in which we observed our method to produce the lowest trajectory RMSE as compared to all other algorithms. Fig. 13 illustrates DLIOM's low trajectory error compared to ground truth for the Newer College - Long Experiment dataset even after over three kilometers of travel.

Fig. 13: **Trajectory of Long Experiment.** The generated trajectory for the Newer College - Long Experiment. Color indicates absolute pose error.

#### Vii-B2 Newer College Extension Dataset

Additionally, we compared all algorithms using the latest extension to the Newer College benchmark dataset [43], which features three collections of data. Collections 1 and 3 contain three datasets each with progressively increasing difficulty, from "Easy" (E), which had slow-paced movement, to "Medium" (M), which featured slightly more aggressive turn-rates and motions, and finally to "Hard" (H), which contained highly aggressive motions, rotations, and locations in both small and large environments. Collection 2 contains three datasets, each of which is highly different from the others to create a diverse set of environments. This includes traversing up and down a staircase, walking around a cloister with limited visibility, and a large-scale park with multiple loops. In this benchmark dataset, we used data from the Ouster OS0-128 (10Hz) in addition to the Alphasense Core IMU (200Hz), since this particular extension had well-synchronized data. The results are shown in Table III for all tested algorithms. Particularly of note are the two Hard (H) datasets, in addition to the Stairs dataset. For Quad (H) and Maths (H), both of which had highly aggressive and unpredictable movements, CT-ICP failed (even when reducing playback speed down to 10%), while both DLO and KISS-ICP had significantly higher trajectory errors. This demonstrates the strength of and need for fusing inertial measurement units for point cloud motion correction. For Stairs, most algorithms failed to produce meaningful results due to the difficult ascension and the limited vertical field-of-view of the LiDAR sensor. Of those which could track sufficiently, both DLO and DLIO had significantly high errors; however, by detecting and placing a new keyframe right at the onset of slippage, DLIOM is able to achieve a low RMSE of just \\(0.0686\\)m. This is further illustrated in Fig. 10, in which keyframes (white nodes) are placed at locations with high scene change (e.g., through the door, in-between stairs).

#### Vii-B3 MulRan Dataset

To compare against Wildcat [30], we use the MulRan DCC03 [48] dataset. This is shown in Table IV (results for LIO-SAM, FAST-LIO2, and Wildcat retrieved from [30]). While the details of specifically how the relative trajectory error (RPE) was computed are unclear, we assume that the translational metric was the average RPE with respect to the point distance error ratio using evo [51], and the rotational metric used evo's "rot_part" option. A fair comparison of absolute trajectory error could not be conducted, as numerical values were not provided by the authors, but DLIOM's ATE was on average 2.36m and 2.4\\({}^{\\circ}\\).
A top-down map is shown in Fig. 14.

Fig. 14: **MulRan DCC03.** Top-down view of the map of the MulRan DCC03 dataset, generated by DLIOM. This specific dataset featured approximately 5421.82 meters of travel from driving around three different loops in Korea.

#### Vii-B4 UCLA Campus Dataset

We additionally showcase our method's accuracy using four large-scale datasets at UCLA for additional comparison (Fig. 15). These datasets were gathered by hand-carrying our aerial platform (Fig. 1) over 2261.37m of total trajectory. Our sensor suite included an Ouster OS1 (10Hz, 32 channels recorded with a 512 horizontal resolution) and a 6-axis InvenSense MPU-6050 IMU located approximately 0.1m below it. We note here that this IMU can be purchased for approximately $10, demonstrating that LIO algorithms need not require the high-grade IMU sensors that previous works have used. Note that a comparison of absolute trajectory error was not possible due to the absence of ground truth, so, as is common practice, we compute end-to-end translational error as a proxy metric (Table V). In these experiments, our method outperformed all others across the board in end-to-end translational error. However, similar to the trends found in the Newer College datasets, our average per-scan computational time has slightly increased due to the new algorithmic additions since DLIO. Regardless, our resulting maps can capture fine detail in the environment, which ultimately provides more intricate information cues for autonomous mobile robots, such as terrain traversability.

## VIII Discussion

This letter presents Direct LiDAR-Inertial Odometry and Mapping (DLIOM), a robust SLAM algorithm with an extreme focus on operational reliability and accuracy to yield real-time state estimates and environmental maps across a diverse set of domains. DLIOM mitigates several common failure points in typical LiDAR-based SLAM solutions through an architectural restructuring and several algorithmic innovations. Rather than using a single sensor fusion framework (e.g., probabilistic filter or graph optimization) to produce both localization and map, as is typical in other algorithms, we separate these two processes into separate threads and tackle them independently. Leveraging a nonlinear geometric observer guarantees the convergence of IMU propagation towards LiDAR scan matching and reliably initializes velocity and sensor biases, which is required for our fast coarse-to-fine motion correction technique. On the other hand, a factor graph, with nodes at keyframe locations determined by our odometry thread, continually optimizes for a best-fit map using connective factors between overlapping keyframes, which provide extra relative constraints to the optimization problem. Fast and robust localization is achieved hierarchically in the front-end's scan-matching, keyframing and submapping processes. An adaptive scan-matching method automatically tunes the maximum distance between corresponding planes for GICP by computing a novel point cloud sparsity metric, resulting in more consistent registration in differently sized environments. Slip-resistant keyframing ensures a sufficient number of data correspondences between the scan and the submap by detecting abrupt scene changes using a new sensor-agnostic degeneracy metric. Finally, our submap is explicitly generated by computing the 3D Jaccard index between the current scan and each environmental keyframe to ensure maximal overlap in the submap for data correspondence searching.
These ideas collectively enable a highly reliable LiDAR SLAM system that is not only agnostic to the operating environment, but is also fast and online for real-time usage on computationally-constrained platforms.

## References

* [1] C. Cadena, L. Carlone, H. Carrillo, Y. Latif, D. Scaramuzza, J. Neira, I. Reid, and J. J. Leonard, "Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age," _IEEE Transactions on Robotics_, vol. 32, no. 6, pp. 1309-1332, 2016. * [2] A. Segal, D. Haehnel, and S. Thrun, "Generalized-ICP," in _Robotics: Science and Systems_, vol. 2, 2009, p. 435. * [10] P. Biber and W. Strasser, "The normal distributions transform: a new approach to laser scan matching," in _Proceedings 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003) (Cat. No.03CH3745353)_, vol. 3, 2003, pp. 2743-2748 vol.3. * [11] J. Zhang and S. Singh, "Loam: Lidar odometry and mapping in real-time," in _Robotics: Science and Systems_, vol. 2, no. 9, 2014, pp. 1-9. * [12] T. Shan and B. Englot, "Lego-loam: Lightweight and ground-optimized lidar odometry and mapping on variable terrain," in _IEEE/RSJ International Conference on Intelligent Robots and Systems_, 2018, pp. 4758-4765. * [13] T. Shan, B. Englot, D. Meyers, W. Wang, C. Ratti, and D. Rus, "Lio-sam: Tightly-coupled lidar inertial odometry via smoothing and mapping," in _IEEE/RSJ International Conference on Intelligent Robots and Systems_, 2020, pp. 5135-5142. * [14] T. Shan, B. Englot, C. Ratti, and D. Rus, "Lvi-sam: Tightly-coupled lidar-visual-inertial odometry via smoothing and mapping," in _IEEE International Conference on Robotics and Automation_, 2021, pp. 5692-5698. * [15] Y. Pan, P. Xiao, Y. He, Z. Shao, and Z. Li, "Mulls: Versatile lidar slam via multi-metric linear least square," in _IEEE International Conference on Robotics and Automation_, 2021, pp. 11 633-11 640. * [16] W. Xu and F. Zhang, "Fast-lio: A fast, robust lidar-inertial odometry package by tightly-coupled iterated kalman filter," _IEEE Robotics and Automation Letters_, vol. 6, no. 2, pp. 3317-3324, 2021. * [17] T.-M. Nguyen, S. Yuan, M.
Cao, L. Yang, H. Nguyen, and L. Xie, \"Milion: Tightly coupled multi-input lidar-inertiaem odometry and mapping,\" _IEEE Robotics and Automation Letters_, vol. 6, no. 3, pp. 5573-5580, 2021. * [18] H. Ye, Y. Chen, and M. Liu, \"Tightly coupled 3d lidar inertial odometry and mapping,\" in _International Conference on Robotics and Automation_, 2019, pp. 3144-3150. * [19] M. Palieri, B. Morrell, A. Thakur, E. Badi, J. Nash, A. Chatterjee, C. Kanellakis, L. Carlone, C. Guarangella, and A.-a. Agha-Mohammadi, \"Locus: A multi-sensor lidar-centric solution for high-precision odometry and 3d mapping in real-time,\" _IEEE Robotics and Automation Letters_, vol. 6, no. 2, 2020. * [20] K. Chen, B. T. Lopez, A.-a. Agha-mohammadi, and A. Mehta, \"Direct lidar odometry: Fast localization with dense point clouds,\" _IEEE Robotics and Automation Letters_, vol. 7, no. 2, pp. 2000-2007, 2022. * [21] W. Xu, Y. Cai, D. He, J. Lin, and F. Zhang, \"Fast-lio2: Fast direct lidar-inertial odometry,\" _IEEE Transactions on Robotics_, 2022. * [22] A. Reinke, M. Palieri, B. Morrell, Y. Chang, K. Ebadi, L. Carlone, and A.-A. Agha-Mohammadi, \"Locus 2.0: Robust and computationally efficient lidar odometry for real-time 3d mapping,\" _IEEE Robotics and Automation Letters_, pp. 1-8, 2022. * [23] Z. Wang, L. Zhang, Y. Shen, and Y. Zhou, \"D-liom: Tightly-coupled direct lidar-inertial odometry and mapping,\" _IEEE Transactions on Multimedia_, pp. 1-1, 2022. * [24] J. Zhang, M. Kaess, and S. Singh, \"On degeneracy of optimization-based state estimation problems,\" in _2016 IEEE International Conference on Robotics and Automation (ICRA)_, 2016, pp. 809-816. * [25] G. Baldwin, R. Mahony, J. Trumpf, T. Hamel, and T. Cheviron, \"Complementary filter design on the special euclidean group se (3),\" in _2007 European Control Conference (ECC)_. IEEE, 2007. * [26] J. F. Vasconcelos, C. Silvestre, and P. Oliveira, \"A nonlinear gps/imu based observer for rigid body attitude and position estimation,\" in _2008 47th IEEE Conference on Decision and Control_. IEEE, 2008, pp. 1255-1260. * [27] T. Renzler, M. Stolz, M. Schratter, and D. Watzenig, \"Increased accuracy for fast moving ldars: Correction of distorted point clouds,\" in _IEEE International Instrumentation and Measurement Technology Conference_, 2020. * [28] S.-P. Deschenes, D. Baril, V. Kubelka, P. Giguere, and F. Pomerleau, \"Lidar scan registration robust to extreme motions,\" in _Conference on Robots and Vision_, 2021. * [29] C. Park, P. Moghadam, S. Kim, A. Elfes, C. Fookes, and S. Sridharan, \"Elastic lidar fusion: Dense map-centric continuous-time slam,\" in _2018 IEEE International Conference on Robotics and Automation (ICRA)_, 2018, pp. 1206-1213. * [30] M. Ramezani, K. Khosoussi, G. Cat, P. Moghadam, J. Williams, P. Borges, F. Pauling, and N. Kottoge, \"Wildcat: Online continuous-time 3d lidar-inertial slam,\" _arXiv preprint arXiv:2205.12595_, 2022. * [31] D. Droeschel and S. Behnke, \"Efficient continuous-time slam for 3d lidar-based online mapping,\" in _2018 IEEE International Conference on Robotics and Automation (ICRA)_, 2018, pp. 5000-5007. * [32] P. Dellenbach, J.-E. Deschaud, B. Jacquet, and F. Goulette, \"Cl-icp: Real-time elastic lidar odometry with loop closure,\" in _2022 International Conference on Robotics and Automation (ICRA)_. IEEE, 2022, pp. 5580-5586. * [33] C. Park, P. Moghadam, J. L. Williams, S. Kim, S. Sridharan, and C. Fookes, \"Elasticity meets continuous-time: Map-centric dense 3d lidar slam,\" _IEEE Transactions on Robotics_, vol. 38, no. 
2, pp. 798-997, 2022. * [34] Z. Liu and F. Zhang, \"Balm: Bundle adjustment for lidar mapping,\" _IEEE Robotics and Automation Letters_, vol. 6, no. 2, pp. 3184-3191, 2021. * Simple, Accurate, and Robust Registration If Done the Right Way,\" _IEEE Robotics and Automation Letters (RA-L)_, vol. 8, no. 2, pp. 1029-1036, 2023. * [36] K. Ebadi, M. Palieri, S. Wood, C. Padgett, and A.-a. Agha-mohammadi, \"Dare-slam: Degeneracy-aware and resilient loop closing in perceptually-degraded environments,\" _Journal of Intelligent & Robotic Systems_, vol. 102, pp. 1-25, 2021. * [37] Furzhang Han, Han Zheng, Wenjun Huang, Rong Xiong, Yue Wang, and Yannet Jiao, \"Dams-lio: A degeneration-aware and modular sensor-fusion lidar-inertial odometry,\" 2023. [Online]. Available: [https://arxiv.org/abs/2302.01703](https://arxiv.org/abs/2302.01703) * [38] H. Lim, D. Kim, B. Kim, and H. Myung, \"Adalio: Robust adaptive lidar-inertial odometry in degenerate indoor environments,\" 2023. * [39] G. Kim and A. Kim, \"Scan context: Egocentric spatial descriptor for place recognition within 3D point cloud map,\" in _Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems_, Madrid, Oct. 2018. * [40] G. Kim, S. Choi, and A. Kim, \"Scan context++: Structural place recognition robust to rotation and lateral variations in urban environments,\" _IEEE Transactions on Robotics_, 2021, accepted. To appear. * [41] P. Yin, S. Yuan, H. Cao, X. Ji, S. Zhang, and L. Xie, \"Segregator: Global point cloud registration with semantic and geometric cues,\" _arXiv preprint arXiv:2301.07425_, 2023. * [42] B. T. Lopez, \"A contracting hierarchical observer for pose-inertial fusion,\" _arXiv:2303.02777_, 2023. * [43] L. Zhang, M. Camurri, and M. Fallon, \"Multi-camera lidar inertial extension to the newer college dataset,\" 2021. * [44] P. Jaccard, \"The distribution of the flora in the alpine zone,\" _New Phytologist_, vol. 11, no. 2, pp. 37-50, 1912. [Online]. Available: [https://nph.onlinelibrary.wiley.com/doi/abs/10.1111/j.1469-8137.1912.tb05611.x](https://nph.onlinelibrary.wiley.com/doi/abs/10.1111/j.1469-8137.1912.tb05611.x) * [45] R. Nemiroff, K. Chen, and B. T. Lopez, \"Joint on-manifold gravity and accelerometer intrinsics estimation,\" 2023. [Online]. Available: [https://arxiv.org/abs/2303.03505](https://arxiv.org/abs/2303.03505) * [46] P. Dellenbach, J.-E. Deschaud, B. Jacquet, and F. Goulette, \"Ct-icp: Real-time elastic lidar odometry with loop closure,\" 2021. * [47] M. Ramezani, Y. Wang, M. Camurri, D. Wisht, M. Matamala, and M. Fallon, \"The newer college dataset: Handheld lidar, inertial and vision with ground truth,\" in _2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_, 2020, pp. 4353-4360. * [48] G. Kim, Y. S. Park, Y. Cho, J. Jeong, and A. Kim, \"Multim: Multimodal range dataset for urban place recognition,\" in _Proceedings of the IEEE International Conference on Robotics and Automation (ICRA)_, Paris, May 2020. * [49] D. Girardeau-Montaut, M. Roux, R. Marc, and G. Thibault, \"Change detection on point cloud data acquired with a ground laser scanner,\" _ISPRS_, 2005. * Simple, Accurate, and Robust Registration If Done the Right Way,\" _IEEE Robotics and Automation Letters (RA-L)_, vol. 8, no. 2, pp. 1-8, 2023. * [51] M. Grupp, \"evo: Python package for the evaluation of odometry and slam.\" [https://github.com/Michael/Grupp/evo](https://github.com/Michael/Grupp/evo), 2017. * [52] J. Sturm, N. Engelhard, F. Endres, W. Burgard, and D. 
Cremers, "A benchmark for the evaluation of rgb-d slam systems," in _Proc. of the International Conference on Intelligent Robot Systems (IROS)_, Oct. 2012. * [53] S. Umeyama, "Least-squares estimation of transformation parameters between two point patterns," _IEEE Transactions on Pattern Analysis and Machine Intelligence_, vol. 13, no. 4, pp. 376-380, 1991.
This paper presents Direct LiDAR-Inertial Odometry and Mapping (DLIOM), a robust SLAM algorithm with an explicit focus on computational efficiency, operational reliability, and real-world efficacy. DLIOM contains several key algorithmic innovations in both the front-end and back-end subsystems to design a resilient LiDAR-inertial architecture that is perceptive to the environment and produces accurate localization and high-fidelity 3D mapping for autonomous robotic platforms. Our ideas spawned after a deep investigation into modern LiDAR SLAM systems and their inabilities to generalize across different operating environments, in which we address several common algorithmic failure points by means of proactive safe-guards to provide long-term operational reliability in the unstructured real world. We detail several important innovations to localization accuracy and mapping resiliency distributed throughout a typical LiDAR SLAM pipeline to comprehensively increase algorithmic speed, accuracy, and robustness. In addition, we discuss insights gained from our ground-up approach while implementing such a complex system for real-time state estimation on resource-constrained systems, and we experimentally show the increased performance of our method as compared to the current state-of-the-art on both public benchmark and self-collected datasets. Localization, Mapping, Odometry, State Estimation, SLAM, LiDAR, IMU, Sensor Fusion, Field Robotics
# Multilevel intuitive attention neural network for airborne LiDAR point cloud semantic segmentation

Ziyang Wang, Hui Chen, Jing Liu, Jiarui Qin, Yehua Sheng, Lin Yang

Corresponding authors at: School of Geography and Ocean Science, Nanjing University, Nanjing 210023, China (H. Chen), School of Geography, Nanjing Normal University, Nanjing, China (J. Liu).

## 1 Introduction

The continuous evolution of sensor technology has led to an increasingly diverse array of methods for collecting geographic scene data, such as photogrammetry (Huo et al., 2023; Kim et al., 2021; Pepe et al., 2018), satellite images (Guo et al., 2023; Li et al., 2023; Yang et al., 2023), and three-dimensional (3D) laser scanning (Dersch et al., 2023; Verma et al., 2023; Wang et al., 2023). Compared with other methods, airborne laser scanning stands out for its ability to achieve high-precision data acquisition while maintaining efficiency. Owing to this advantage, it has been applied in many areas of scientific research, for example, vegetation index estimation (Liu et al., 2018; Solberg, 2010), building extraction (Widyaningrum et al., 2020; Xia et al., 2021), and digital heritage protection (Li et al., 2023). Most of these applications require semantic labels as a prerequisite step. However, the disordered and unstructured nature of point clouds makes achieving highly accurate semantic segmentation a challenging task in current research.

In the initial stages of point cloud classification development, most methods relied on manually constructing local and global feature descriptors, which were then combined with classifiers to achieve segmentation. Common classification methods include Random Forests (Guo et al., 2011; Hackel et al., 2016; Sreevalsan-Nair and Mohapatra, 2020; Wang et al., 2021), the tensor voting algorithm (Sreevalsan-Nair et al., 2018), Support Vector Machines (Ekhtaf et al., 2018; Hui et al., 2016; Lodha et al., 2006), Expectation Maximization (Lodha et al., 2007), Conditional Random Fields (Niemeyer et al., 2016, 2012, 2011), and AdaBoost (Lodha et al., 2007). While feature-driven classification is relatively straightforward, it lacks a high degree of automation and demands significant manual computation. Moreover, limited by the representativeness of the feature definitions, it may lead to less accurate classification results.
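As an illustration of this classical pipeline, the sketch below computes simple eigenvalue-based descriptors (linearity, planarity, scattering) from local neighborhood covariances and couples them with an off-the-shelf Random Forest; the feature set, the neighborhood definition, and all parameter values are illustrative rather than those of any specific cited work.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier  # one of the classifiers mentioned above

def covariance_features(points, neighbors):
    """Per-point eigenvalue features from a local neighborhood covariance.
    points: (N, 3) array; neighbors: dict mapping point index -> neighbor indices
    (each neighborhood is assumed to contain at least a few points)."""
    feats = np.zeros((len(points), 3))
    for i, idx in neighbors.items():
        nbrs = points[idx]
        evals = np.linalg.eigvalsh(np.cov(nbrs.T))   # ascending eigenvalues
        l3, l2, l1 = np.maximum(evals, 1e-12)
        feats[i] = [(l1 - l2) / l1,                  # linearity
                    (l2 - l3) / l1,                  # planarity
                    l3 / l1]                         # scattering
    return feats

# Point-wise classification on hand-crafted features (training labels assumed given):
# clf = RandomForestClassifier(n_estimators=100).fit(covariance_features(pts, nn), labels)
# predicted = clf.predict(covariance_features(new_pts, new_nn))
```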
With increasing computational power and the surge of deep learning applications in image data processing (Girshick, 2015; He et al., 2017; Ronneberger et al., 2015), deep learning-based techniques for segmenting point clouds have progressively gained prominence. These methods can be categorized into image-based segmentation approaches (Su et al., 2015; Yang et al., 2018; Zhao et al., 2018), voxel-based methods (Liu et al., 2021; Liu et al., 2019; Tang et al., 2020), and point-based methods (Wen et al., 2020). The first two categories were proposed in the early stages of point cloud deep learning and therefore have some inherent limitations. The results of image-based point cloud segmentation often exhibit systematic errors due to resolution disparities between image and point cloud data. Voxel-based approaches divide the point cloud into three-dimensional voxels, which breaks the relationships between points and reduces computational efficiency. Point-based deep learning methods differ from image-based and voxel-based methods in that they utilize disordered point clouds as inputs to directly derive point labels, substantially reducing the accuracy loss caused by data conversion. PointNet (Qi et al., 2017), a milestone work on point-based deep learning, employs a multilayer perceptron (MLP) to extract features and utilizes max pooling to accomplish point cloud segmentation, pioneering a new research trend in point cloud classification with deep learning as its foundation. Although point-based deep learning methods have many advantages, they do not take into account the connections between points. In recent studies, researchers have explored graph-based methods as a technique for organizing raw disordered point clouds, representing point cloud data as a collection of nodes and edges. These approaches include abstracting disordered points into superpoint graphs and establishing global relationships (Landrieu and Simonovsky, 2018), and utilizing graph convolution-based methods (Lei et al., 2020; Simonovsky and Komodakis, 2017; Zhang and Rabbat, 2018). Inspired by the self-attention mechanism in text data processing, many studies have attempted to apply it to 3D point cloud processing, for example by combining multilayer perceptrons with self-attention structures to enhance feature robustness (Guo et al., 2021; Zhao et al., 2021), or by combining point convolution with attention mechanisms to dynamically adjust convolutional kernel weights and improve model generalization (Wu et al., 2023). Compared to graph-based structures, attention-based methods can dynamically focus the model on different parts of the input, and their construction is simpler. Although constructing an attention structure enhances the organization of the point cloud, it also introduces a significant increase in the computational complexity of the network. This increase in complexity may hamper the operational efficiency of the algorithm, especially when dealing with large volumes of data. While existing attention-based point cloud classification algorithms have demonstrated satisfactory results, they suffer from inherent network design flaws. First, the establishment of point-based attention structures often requires computing relationships between all points, resulting in a steep increase in algorithmic complexity. Second, there is currently little research on fusing feature context information based on point-offset mechanisms.
To address these issues, this study proposes a Multilevel Intuitive Attention Mechanism Network (MIA-Net) specifically designed for airborne point cloud data: 1. A 3D point-offset mechanism is proposed for inter-point feature fusion within the feature map. It not only eliminates the constraint imposed by point neighborhood size, but it also reduces the redundancy of graph-based feature relationship calculation between points. 2. An MIA mechanism module was developed to efficiently integrate spatial contextual information, making feature fusion not limited to a single scale. This approach significantly reduces computational complexity when compared to traditional attention mechanisms. 3. The network architecture was build based on the encoder-decoder paradigm to accurately segment airborne point clouds within expansive scenes. ## 2 Related work ### Image-based deep learning classification method for point clouds The success observed in 2D convolutional neural networks for image processing has prompted researchers to extend these techniques to 3D point clouds. This involves projecting 3D point clouds onto 2D images and subsequently back-projecting image labels onto the original 3D point clouds to achieve semantic segmentation in a pseudo-image space. For example, (Rizaldy et al., 2018) employed a method where a point cloud was projected onto a 2D plane. They then designed a fully convolutional neural network with an expansion kernel to obtain image pixel labels, which were further back-projected to the 3D point cloud, facilitating segmentation in a large-scale scene. To address the potential loss of feature information during dimensionality conversion, a multi-scale structure was introduced into the network. To enhance computational efficiency, Yang et al.(2018) proposed a hybrid approach, combining a multi-regular region-growing method with multi-scale convolution for airborne point cloud segmentation. Notably, this method utilizes a point feature map as the object for projection segmentation. Increasing the number of projected images was suggested to generate more representative images, thereby improving label sustainability. In a different approach, Gerdzhev et al.(2021) projected point clouds onto a multi-view image, incorporating an encoder-decoder structure and employing multiple loss functions. Their method demonstrated improved results on benchmark data. Lions et al. (2020) introduced a multi-view fusion network, optimizing point labels from different view prediction scores, resulting in a significant enhancement in prediction accuracy. Despite the increased robustness of these methods, it is essential to acknowledge that accuracy loss may occur during the calculation process due to inherent limitations in the methodologies themselves. ### Voxel-based deep learning classification method for point clouds To mitigate the loss of feature information caused by dimensional transformation, researchers have drawn inspiration from the 2D convolution architecture and developed a 3D convolutional structure directly applicable to voxelized point clouds. For example, VoxNet (Maturana and Scherer, 2015), employing a 3D convolutional architecture, demonstrated outstanding classification results in both LiDAR data and RGBD point clouds. Huang and You (2016) built a convolutional network structure inspired by LeNet (LeCun et al., 1998), comprising 3D convolutional and pooling layers. 
This approach, requiring no prior segmentation knowledge, exhibits high computational efficiency, completing voxelization and classification processes within 10 min. However, the computational efficiency of 3D convolution is significantly constrained by the voxel search process. To address this limitation, Wang et al.(2017) utilized an octree for point-cloud voxel searching, effectively reducing the computational time of 3D convolution. Another approach by Ben-Shabat et al.(2018) involves integrating a 3D mesh with continuous generalized Fisher vectors, creating a novel network architecture that preserves detailed features of point clouds compared to the general voxelized approach. The computational efficiency of voxel-based deep learning methods is often closely linked to voxel resolution. The voxel division of the raw point cloud further diminishing feature relationships between points. Consequently, there is an urgent need for a point cloud classification network directly applicable to 3D points. ### Unordered point-based deep learning classification method for point clouds Point-based methods can be divided into the following categories: multilayer perceptrons based methods, graph structure based methods and attention based methods. Following the introduction of PointNet, in order to better extract point cloud local detail features, Qi et al.(2017) added a hierarchical framework which augmentation facilitated the acquisition of point features at various scales. And compared to PointNet, this network has a stronger ability to capture detailed features. But it is sensitive to noise, so its application on datasets with uneven density still needs to be explored. Jiang et al.(2018) sought to enhance the feature robustness by introducing a branch-encoding structure, enabling the encoding of feature information in different directions. The utilization of feature stacking contributed to obtaining multi-scale information, which improved the feature representation. However, the method proposed by PointSIFT requires stacking multiple encoding modules to achieve scale perception, resulting in high computational costs. Inspired by the emerging graph neural network, researchers endeavored to construct graph structures with point clouds to enhance feature interaction. Zhiheng and Ning (2019) innovatively designed a graph embedding module, incorporated a pyramid network to learn point features from a graph-structured point cloud, thereby enhancing feature representativeness. But this network has a high computational complexity and requires parameter adjustments to achieve optimal results. Wang et al.(2019) proposed the graph convolution, dynamically capturing point cloud structure information. This approach enabled the segmentation of point clouds by distributing different weights to adjacent points, while it required a significant amount of computation, and this method requires high quality of raw point cloud. Additionally, Wang et al.(2019) presented an edge convolution module, interacting with local point features through different edge weights, which can effectively capture local geometric features. But its spatial search is relatively complex and when the point cloud density changes significantly, it can lead to a decrease in model performance. The application of attention mechanisms, as highlighted by Vaswani et al.(2017) enhanced interconnections between feature points. 
Zhao et al. (2021) used the self-attention mechanism to construct graph feature relationships and achieved relatively robust results on multiple tasks, but this method needs to calculate a large number of attention weights, which results in low efficiency on large-scale datasets. Guo et al. (2021) fused features using a multi-layer attention module; this approach outperformed others in classification tasks, but it does not consider feature interaction between different levels. Zhang et al. (2022) introduced an elevation attention mechanism, enhancing overall recognition accuracy by adaptively aggregating local features using attention weights; however, it requires additional feature input. Because of their capacity to establish more robust point-to-point relationships, attention mechanisms currently stand out as the most suitable framework for point cloud segmentation. However, when constructing a global attention map, each point typically has to compute its relationship with all other points, leading to substantial computational redundancy, and the intricate interrelations between points escalate the computational complexity of the model. Some researchers have worked to reduce this redundancy: Lai et al. (2022) proposed an efficient attention module that computes the dot products between all key points and query points in advance before performing the attention calculation, and Sheng et al. (2021) built a channel weighting mechanism that effectively reduces the number of features to be fused, thereby improving computational efficiency. However, these methods did not optimize the way spatial relationships between points are reconstructed or the attention weights themselves, which are the main sources of the high computational complexity of attention mechanisms. Therefore, improving how spatial relationships between points are constructed and reducing the cost of attention weight calculation have become the primary challenges for better utilizing attention mechanisms in point cloud classification tasks.

## 3 Method

In this section, the detailed structure of MIA-Net is described. First, a trigonometric encoding module is introduced to construct structured features for disordered point clouds. Second, a multilayer feature pyramid is constructed using downsampling and MLPs. Subsequently, an intuitive attention mechanism is introduced for interpoint and interlevel feature interactions to enhance feature representativeness. Finally, the fused features are upsampled to complete the point-cloud classification task.

### Trigonometric encoding

3D point clouds are highly discrete and unstructured data. To strengthen the connection between points, we follow Zhang et al. (2023) and introduce a non-parametric trigonometric encoding module. For an initial 3D point \(p_{i}=(x_{i},y_{i},z_{i})\in\mathbb{R}^{3}\), the 3D coordinates are encapsulated into C-dimensional feature vectors using sine and cosine functions:

\[LocF(p_{i})=Concat(f^{x}_{i},f^{y}_{i},f^{z}_{i})\in\mathbb{R}^{C} \tag{1}\]

where \(f^{x}_{i}\), \(f^{y}_{i}\), and \(f^{z}_{i}\) denote the trigonometric encodings of the x, y, and z coordinate values, and the dimension of each is C/3.
Considering the x-axis as an example, the coordinate values are encoded as follows:

\[\left\{\begin{aligned} & f^{x}_{i}[2m]=\sin\left(\alpha x_{i}/\beta^{6m/C}\right)\\ & f^{x}_{i}[2m+1]=\cos\left(\alpha x_{i}/\beta^{6m/C}\right)\end{aligned}\right. \tag{2}\]

where \(\alpha\) and \(\beta\) represent the amplitude and wavelength of the trigonometric functions and \(m\in[0,C/6)\). The relative positional relationship between points can then be obtained by a product operation that captures the local semantic features.

### Intuitive attention

The attention mechanism is capable of establishing connections among points. The traditional self-attention approach obtains a weight matrix by relating the key and query points, and the self-attention feature is obtained by multiplying this matrix with the feature vectors:

\[Attention(q,k,v)=Softmax\left(\frac{qk^{T}}{\sqrt{d_{k}}}\right)v \tag{3}\]

where \(q\), \(k\), and \(v\) represent the query points, key points, and feature vectors, and \(d_{k}\) is a scale factor. If \(N\) is the number of feature points, the search complexity of Eq. (3) is \(N^{2}\). Hence, the computational cost grows quadratically with the number of feature points. To mitigate this cost, an MIA mechanism is proposed in this study. The specific structure of the intuitive attention mechanism is shown in Fig. 1. The dimension of the point feature is (N, 512), the linear layer feeding the softmax has dimensions (512, 128), and the linear layer for offsetting points is sized (512, 96). Each point has 32 offset points across the four feature layers, and each offset point carries a 512-dimensional feature vector. The linear layer for output features has dimension (512, 512), resulting in output features of size (N, 512), where N represents the number of points.

#### 3.2.1 Offset point cloud

Constructing feature relationships between points has been shown to effectively improve feature representativeness, and thereby the final accuracy, in point cloud classification tasks. However, if the relationship structure between points is constructed without careful consideration, it can lead to increased computational complexity and unnecessary feature redundancy. Therefore, we constructed a point cloud offset mechanism at the feature level. This method first uses linear layers to calculate offset points based on the initial features of the point cloud. However, this process may introduce a misalignment between the offset points and the original feature points. To address this, inverse distance weights are employed to compute the offset point features: the features of the four closest points around each offset point are weighted and summed to obtain the offset point feature. Subsequently, a multi-head intuitive attention mechanism is applied to integrate the offset point features and refine the point features. This method utilizes multi-layer perceptrons to construct a dynamically changing attention structure based on inter-point relationships, greatly reducing computational redundancy compared to traditional graph-structured networks. The structure of the point offset is shown in Fig. 2.
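To make the encoding of Eqs. (1)-(2) concrete, the following is a minimal NumPy sketch of the per-axis sine/cosine coordinate encoding. The channel count C, the values of alpha and beta, and the exponent form 6m/C (chosen so that each axis occupies C/3 channels) are illustrative assumptions rather than the exact MIA-Net settings.

```python
import numpy as np

def trigonometric_encoding(points, C=144, alpha=100.0, beta=1000.0):
    """Sketch of the per-axis sine/cosine coordinate encoding of Eqs. (1)-(2).

    points : (N, 3) array of x, y, z coordinates.
    C      : output channels; must be divisible by 6 so each axis gets C/3 channels.
    alpha, beta : amplitude and wavelength of the trigonometric functions
                  (the concrete values here are assumptions, not the paper's).
    """
    assert C % 6 == 0, "C must be divisible by 6"
    n = points.shape[0]
    m = np.arange(C // 6)                        # channel-pair index per axis
    freq = alpha / beta ** (6.0 * m / C)         # assumed exponent form
    feats = []
    for axis in range(3):                        # encode x, y, z in turn
        phase = points[:, axis:axis + 1] * freq  # (N, C/6)
        f_axis = np.empty((n, C // 3))
        f_axis[:, 0::2] = np.sin(phase)          # even channels: sine, Eq. (2)
        f_axis[:, 1::2] = np.cos(phase)          # odd channels: cosine, Eq. (2)
        feats.append(f_axis)
    return np.concatenate(feats, axis=1)         # LocF(p_i) in R^C, Eq. (1)

# Example: encode 1024 random points into 144-dimensional local features.
pts = np.random.rand(1024, 3)
print(trigonometric_encoding(pts).shape)         # (1024, 144)
```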
#### 3.2.2 Feature aggregation

After the offset distances are obtained, they are added to the original coordinates to obtain the offset point coordinates; the nearest neighbors of each offset point are then searched in the feature space, and inverse distance weighting is used to capture the features of the five closest points around it. Different from the traditional attention mechanism, MIA-Net combines the point cloud features with a linear layer to directly generate the attention weight matrix, eliminating the calculation of the query and key matrices. The features of the different offset points are then combined using multi-head attention weights. After the multi-head attention features are obtained, a linear layer unifies the output dimensions, and a skip connection between the output and input features gives the final feature output. The intuitive attention mechanism described above can be expressed as follows:

\[\textit{Intuitive-Attention}=\sum_{m=1}^{M}W_{m}\left[\sum_{k=1}^{K}A_{mqk}\cdot W^{\prime}_{m}V_{P+\Delta P}\right] \tag{4}\]

where \(m\) indexes the \(M\) heads of the multi-head attention mechanism, \(K\) denotes the number of search (offset) points, and the value of \(K\) is much smaller than the number of feature points. \(A_{mqk}\) represents the attention weight between the query point and the \(k\)-th search point under the \(m\)-th attention head, with \(\sum_{k=1}^{K}A_{mqk}=1\). \(P\) represents the feature point coordinates, \(\Delta P\) denotes the computed offset values (\(P\) and \(\Delta P\) have the same dimension), and \(P+\Delta P\) and \(V\) represent the 3D coordinates and feature vectors of the offset points. \(W_{m}\) and \(W^{\prime}_{m}\) denote two sets of weight parameters, respectively.

Figure 1: Structure of the intuitive attention mechanism.

Figure 2: Structure of the point offset.

When point features are fused between different layers, Eq. (4) is transformed as follows:

\[\textit{Multilevel-IntuitiveAttention}=\sum_{m=1}^{M}W_{m}\left[\sum_{l=1}^{L}\sum_{k=1}^{K}A_{mlqk}\cdot W^{\prime}_{m}V_{P+\Delta P}\right] \tag{5}\]

where \(L\) denotes the number of feature layers and \(\sum_{l=1}^{L}\sum_{k=1}^{K}A_{mlqk}=1\). \(N\) denotes the number of key points. Based on these two equations, our method reduces the computational complexity from \(O(N^{2}d)\) to \(O(Nkd)\) compared with the traditional attention mechanism.

### Architecture of the MIA-Net module

To obtain multilayer features from airborne point clouds, MLPs, farthest point sampling, inverse distance weighting, and the intuitive attention mechanism are used to build the feature extraction stage of the network. First, the original coordinates are encoded with trigonometric functions, and the initial features of the point cloud are extracted using farthest point sampling and MLPs. To ensure that the extracted features have the same dimension when input into the intuitive attention module, three fully connected (FC) layers are used to unify the dimensions; the input and output dimensions of the FC layers are shown in the annotations in Fig. 3. Subsequently, the intuitive attention mechanism is used for inter-layer and inter-point feature interaction. Finally, feature mapping is completed using inverse distance weight interpolation linked with a fully connected layer to output the category labels. The network architecture is shown in Fig. 3.
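As a rough, illustrative sketch of the single-level offset-point attention in Eq. (4) above: a linear layer predicts K offset vectors per point, offset-point features are gathered by inverse-distance weighting over the nearest points, and attention weights generated directly from the point features mix the results. The weight shapes, the neighbor count, the single attention head, and the brute-force neighbor search are assumptions made only for illustration; the multilevel fusion of Eq. (5) is omitted.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def idw_gather(query_xyz, xyz, feats, n_neighbors=4, eps=1e-8):
    """Inverse-distance-weighted features of the points nearest to each query.
    Brute-force distances for clarity only; a spatial index (e.g. a KD-tree)
    would be needed to realize the claimed O(Nkd) cost."""
    d = np.linalg.norm(query_xyz[:, None, :] - xyz[None, :, :], axis=-1)  # (Q, N)
    idx = np.argsort(d, axis=1)[:, :n_neighbors]                          # nearest points
    w = 1.0 / (np.take_along_axis(d, idx, axis=1) + eps)
    w /= w.sum(axis=1, keepdims=True)
    return np.einsum('qk,qkc->qc', w, feats[idx])                         # (Q, C)

def intuitive_attention(xyz, feats, W_off, W_attn, W_v, W_out, K=32):
    """Single-head sketch of Eq. (4).
    xyz    : (N, 3) point coordinates, feats: (N, C) point features.
    W_off  : (C, 3K) linear layer predicting K offset vectors per point.
    W_attn : (C, K) linear layer producing the attention weights A directly.
    W_v, W_out : (C, C) value and output projections (W'_m and W_m in Eq. (4))."""
    N, C = feats.shape
    offsets = (feats @ W_off).reshape(N, K, 3)            # offset vectors (Delta P)
    A = softmax(feats @ W_attn)                           # (N, K), rows sum to 1
    agg = np.zeros((N, C))
    for k in range(K):                                    # features at P + Delta P_k
        V_k = idw_gather(xyz + offsets[:, k, :], xyz, feats)
        agg += A[:, [k]] * (V_k @ W_v)
    return feats + agg @ W_out                            # skip connection to the input
```

Producing the attention weights directly from the point features, instead of from query-key dot products, is the design choice that avoids the N-by-N interaction term and yields the O(Nkd) scaling discussed above, assuming an efficient spatial query is used for the neighbor search.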
In our method, the numbers of sampled points were set to 1024, 256, 64, and 16. For the computation of local features, a neighborhood searching strategy was used, with the neighborhood radius set to 2, 4, 8, and 10 m. The multilayer features designed in our method contain four layers. Feature splicing was performed using a skip connection to preserve low-level semantic features, and the concatenation operation joins point cloud features along the feature dimension. Subsequently, the softmax function was applied to transform the features into the predicted labels output by the network.

Figure 3: Architecture of MIA-Net.

## 4 Experiments

In order to demonstrate the efficacy of our approach, experiments were conducted using three public datasets: the ISPRS Vaihingen dataset, LASDU (Ye et al., 2020), and the GML dataset (Shapovalov et al., 2010). In addition, ablation experiments were conducted on the Vaihingen dataset to demonstrate the feasibility of the intuitive attention mechanism. The classification accuracy on these datasets was compared with existing mainstream deep learning methods to demonstrate the performance of our method.

### Evaluation with the ISPRS Vaihingen dataset

The ISPRS Vaihingen dataset was collected in an urban area in Germany and contains nine categories (power lines, low vegetation, impervious surfaces, cars, fences, roofs, facades, shrubs, and trees). Each point contains information on the coordinate position, intensity, number of returns, and return number. An official hyperspectral orthophoto image with 0.02 m resolution corresponding to the study area was also provided. To obtain the texture information of each point, the band values of the pixel associated with each point were extracted separately. The data obtained after color assignment are shown in Fig. 4. Point cloud texture information was used in both the training and testing stages. In the original data, the training set includes 753,876 points while the test set includes 411,722 points. The number of points in each category is summarized in Table 1. The training dataset's extent is \(\sim\) 383 m \(\times\) 406 m and that of the test dataset is \(\sim\) 374 m \(\times\) 403 m. Before training, the dataset was divided into blocks of 30 m \(\times\) 30 m (Zhang et al., 2022). Subsequently, blocks with few points were merged. After chunking, the data comprised 99 training and 59 test blocks. The visualization is illustrated in Fig. 4. In the training phase, 8192 points were randomly sampled in each block using the farthest point sampling method as input. To improve training performance, our method converts the original coordinates into local coordinates and decenters the original coordinates. Thus, the input data contain local coordinates, texture information, and decentered coordinates. To prevent overfitting during training, random dropout was used in MIA-Net. In the final evaluation stage, our method adopts four commonly used evaluation metrics: overall accuracy (OA), precision, recall, and \(F1\).

\[OA=\frac{TP+TN}{TP+TN+FP+FN} \tag{6}\]

_TP_, _TN_, _FP_, and _FN_ represent true positives, true negatives, false positives, and false negatives. In the calculation of the point cloud classification results, \(TP+TN\) denotes the number of points with correct predictions and \(TP+TN+FP+FN\) denotes the aggregate point count. In order to calculate the Avg_F1, we focus on the per-category _F1_ calculation.
The _F1_ score is typically used for binary classification tasks, when calculating multi-classification tasks, we need to calculate the accuracy and recall of each category to calculate the F1 score for a specific category. The calculation method of _precision_ and _recall_ are shown in the following equation: \\[\\left\\{\\begin{aligned} & precision=\\frac{\\text{TP}}{\\text{TP}+\\text{FP}} \\\\ & recall=\\frac{\\text{TP}}{\\text{TP}+\\text{FN}}\\end{aligned}\\right. \\tag{7}\\] After obtaining the accuracy and recall values, we can compute the _F1_ score for a specific category: \\[F1=2\\bullet\\frac{precision\\times recall}{precision+recall} \\tag{8}\\] Finally, when we get the _F1_ score of all categories, the calculation method of _Avg_F1 are shown in the following equation: \\[Avg_{\\text{F1}}=\\frac{1}{C}\\sum_{i=1}^{c}\\text{F1} \\tag{9}\\] where c is the number of categories in the multi-classification tasks. MIA-Net model construction relies on the PyTorch 1.12.0 framework of Python 3.8 and the use of a TITAN RTX GPU for training. While the learning rate was established at 0.03, Adam was used to adjust this parameter. Decay rate, batch size and maximum epoch was established at 0.5, 16, and 500. Table 2 is the confusion matrix of classification result on ISPRS Vaihingen dataset. The confusion matrix of the classification results indicates that there are six out of nine species with F1 scores greater than 65 % including power lines, low vegetation, impervious surfaces, cars, roofs, and trees. Our method is not only robust in the object with a large number of point clouds but also achieves better results for cars and power lines. However, for objects with similar external structures, such as shrubs, trees, \\begin{table} \\begin{tabular}{l l l} \\hline Categories & Training & Test \\\\ \\hline Powerline & 546 & 600 \\\\ Low vegetation & 180,850 & 98,690 \\\\ Impervious surfaces & 193,723 & 101,986 \\\\ Car & 4614 & 3708 \\\\ Fence/Hedge & 12,070 & 7422 \\\\ Roof & 152,045 & 109,048 \\\\ Facade & 27,250 & 11,224 \\\\ Shrub & 47,605 & 24,818 \\\\ Tree & 135,173 & 54,226 \\\\ Sum & 753,876 & 411,722 \\\\ \\hline \\end{tabular} \\end{table} Table 1: Number of points of ISPRS Vaihingen dataset. \\begin{table} \\begin{tabular}{l l l l l l l l l l} \\hline Categories & Powerline & Low vegetation & Impervious surfaces & Car & Fence/ & Roof & Facade & Shrub & Tree \\\\ & & & & & & & Hedge & & \\\\ \\hline Powerline & 363 & 1 & 1 & 0 & 0 & 155 & 3 & 3 & 74 \\\\ Low & 1 & 72939 & 12524 & 234 & 167 & 2054 & 354 & 6745 & 3672 \\\\ vegetation & & & & & & & & & \\\\ Impervious surfaces & 0 & 5080 & 94549 & 61 & 5 & 1638 & 78 & 500 & 75 \\\\ Car & 0 & 224 & 382 & 2409 & 81 & 170 & 37 & 365 & 40 \\\\ Fence/ & 0 & 858 & 364 & 59 & 1187 & 249 & 117 & 3243 & 1345 \\\\ Hedge & & & & & & & & & \\\\ Roof & 120 & 942 & 353 & 67 & 1 & 104153 & 1665 & 892 & 855 \\\\ Facade & 8 & 648 & 166 & 68 & 3 & 1932 & 6668 & 993 & 738 \\\\ Shrub & 1 & 3409 & 374 & 141 & 123 & 1316 & 377 & 13402 & 5675 \\\\ Tree & 10 & 686 & 47 & 32 & 57 & 976 & 468 & 4827 & 47123 \\\\ Precision & 72.2 & 86.0 & 86.9 & 78.4 & 73.1 & 92.5 & 68.3 & 43.3 & 79.1 \\\\ Recall & 60.5 & 73.9 & 92.7 & 65.0 & 16.0 & 95.5 & 59.4 & 54.0 & 86.9 \\\\ F1 score & 65.8 & 79.5 & 89.7 & 71.1 & 26.2 & 94.0 & 63.5 & 48.1 & 82.8 \\\\ \\hline \\end{tabular} \\end{table} Table 2: Confusion matrix for the MIA-Net on ISPRS Vaihingen dataset (The units in lines 2–10 are the number of point clouds, and the units in the last three lines are percentages). 
Figure 4: Vaihingen dataset displayed in false color. (a) Training set. (b) Test set. Black rectangles indicate divided blocks. facades, and roofs, confusion remains in the classification results. Fig. 6 shows that most of the misidentified regions appear at the boundaries of the ground objects, which is mainly due to unstable boundary features. To further illustrate the effectiveness of the intuitive attention mechanism and trigonometric encoding structure, ablation experiments on these two types of modules were conducted based on the original network. We used \"IA\" to represent whether remove the intuitive attention mechanism from MIA-Net and \"TE\" to represent the trigonometric encoding (TE) structure. The results are shown in Table 3. In order to demonstrate the effectiveness of the triangle encoding method selected in this article, experimental comparisons of constant encoding (CE) were also included in the ablation experiment. Table 3 shows that intuitive attention has a larger effect on network performance improvement than trigonometric encoding and removing the intuitive attention mechanism from MIA-Net reduces the average F1 by 2.3 %. TE also improves the network performance, and the average F1 increase of 1.0 % compared with the network without TE structure. Based on the classification results, MIA-Net outperforms the baseline network in seven categories of recognition results. And from the Table 3, it can be seen that compared with the triangular encoding method, constant encoding not only does not improve the model accuracy, but also leads to a decrease in overall classification result. To further prove the efficacy of MIA-Net, we contrasted the MIA-Net classification results with those of UM (Horvat et al., 2016), WuY, WuYuY2, HM 1, and BIJ_W (Vaihingen 3D Semantic Labeling (isprs.org)). As shown in Table 4. In the above-mentioned comparison experiments, MIA-Net obtained better results for more categories (cars, roofs, facades, shrub and trees). Both OA and _F1_ of the classification results are higher than those of the other algorithms. Although HM_1 and WhuY2 outperform the proposed method in some categories, the average score shows that their results include significant deviations and these methods only perform better in individual categories. To further demonstrate the effectiveness of MIA-Net, the results were compared with some classical and new deep-learning methods, as listed in Table 5, These include PointNet\\(++\\)(Li et al., 2020), PointSIFT(Li et al., 2020), PointConv (Wu et al., 2019), DGCNN (Wang et al., 2019), GAC-Net (Wang et al., 2019),RFFS-Net(Mao et al., 2022), VD\\(-\\)LabLi et al., 2022), LGEANet(Dai et al., 2023) and Full-PointNet\\(+\\)(Nong et al., 2023). The classification accuracy was measured with F1. Our method exhibits the best performance for roof, facades and tree, with an Figure 5: The visual comparison of the ground truth, the classification results achieved by the RFFS-Net and our MIA-Net on ISPRS Vaihingen dataset. Figure 6: Error map of MIA-Net. The green represent the correct results and the red represent the wrong results. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.) average F1 improvement of 0.1 %, 2.0 % and 2.3 %, respectively, compared with other methods. The OA index of MIA-Net is second highest among these methods. While classification results details are shown in Fig. 5, the error map is shown in Fig. 
6 and the comparison of error maps are shown in Fig. 7. MIA-Net yields better results mainly because it uses an intuitive attention module for the interaction between different feature levels. It obtains features containing both low-level and high-level semantic information, which increases the robustness of the features and improves the results. Traditional neighborhood-based methods cannot perform feature interactions, which reduces the overall classification accuracy, whereas our method considers the linkages between features using the point-offset mechanism. From the visual comparisons results and the error map, it can be seen that our MIA-Net is more accurate in details. especially at the intersection of multiple surface features, MIA-Net has more obvious boundary effects compare with other methods. ### Evaluation with the LASDU dataset LASDU point clouds was acquired using the Leica ALS70 airborne scanning device in an urban area in Northwest China, and the terrain has no significant undulations. The main undulation originates from the elevation difference between the buildings and the ground. The overall altitude was 1550 m, the flight altitude of the aircraft was \\(\\sim\\) 1200 m. Vertical accuracy varied from \\(\\sim\\) 5-30 cm. LASDU provides airborne \\begin{table} \\begin{tabular}{l c c c c c c c c c c} \\hline \\hline Class & encoding & attention & encoding & attention & encoding & attention & encoding & attention & encoding & attention \\\\ & \\(\\times\\) & IA & CE & IA & TE & \\(\\times\\) & TE & IA \\\\ \\hline Powerline & & 65.1 & & 60.1 & & 64.9 & & **65.8** & \\\\ Low vegetation & & 78.9 & & 77.5 & & 78.2 & & **79.5** & \\\\ Impervious Surfaces & & & & & & & & & \\\\ Car & 87.9 & & 87.0 & & 87.9 & & **89.7** & \\\\ & 68.2 & & 59.8 & & 70.4 & & **71.1** & \\\\ & **31.9** & & 26.7 & & 22.8 & & 26.2 & \\\\ Roof & & 93.9 & & 93.6 & & **94.2** & & 94.0 & \\\\ Facade & & & 57.5 & & 55.7 & & 59.4 & & **63.5** & \\\\ Shrub & & 46.9 & & 45.9 & & 41.3 & & **48.1** & \\\\ Tree & & 82.2 & & 82.5 & & 81.0 & & **82.8** & \\\\ Avg\\_F1 & & 68.0 & & 65.4 & & 66.7 & & **69.0** & \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 3: Results of ablation experiment (F1%). (The bold values mean the highest value of the current indicator and the underline is the second best results). \\begin{table} \\begin{tabular}{l c c c c c c c c c c} \\hline \\hline Categories & Powerline & Low vegetation & Impervious surfaces & Car & Fence/ & Roof & Facade & Shrub & Tree & OA & Avg\\_F1 \\\\ & & & & & & & & & & & \\\\ UM & 46.1 & 79.0 & 89.1 & 47.7 & 5.2 & 92.0 & 52.7 & 40.9 & 77.9 & 80.8 & 58.9 \\\\ HM\\_1 & **69.8** & 73.8 & **91.5** & 58.2 & 29.9 & 91.6 & 54.7 & 47.8 & 80.2 & 80.5 & 56.3 \\\\ WinvT & 24.5 & 59.2 & 62.7 & 41.5 & 28.0 & 81.9 & 53.1 & 32.0 & 68.7 & 64.4 & 50.1 \\\\ WhntY & 31.9 & **80.0** & 88.9 & 40.8 & 24.5 & 93.1 & 49.4 & 41.1 & 77.3 & 81.0 & 58.5 \\\\ RJJ\\_W & 13.8 & 78.5 & 90.5 & 56.4 & **36.3** & 92.2 & 53.2 & 43.3 & 78.4 & 81.5 & 60.2 \\\\ Ours & 65.8 & 79.5 & 89.7 & **71.1** & 26.2 & **94.0** & **63.5** & **48.1** & **52.8** & **83.3** & **69.0** \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 4: Quantitative results of MIA-Net and five other methods on the ISPRS Vaihingen dataset (F1%), we report the F1 score of each category, meanwhile the overall accuracy (OA) and average F1 score (Avg_F1) are given in the last two columns. (The bold values mean the highest value of the current indicator and the underline is the second results). 
\\begin{table} \\begin{tabular}{l c c c c c c c c c c c} \\hline \\hline Methods & Powerline & Low vegetation & Impervious surfaces & Car & Fence/ & Roof & Facade & Shrub & Tree & OA & Avg\\_F1 \\\\ & & & & & & & & & & & \\\\ PointNet++ & 57.9 & 79.6 & 90.6 & 66.1 & 31.5 & 91.6 & 54.3 & 41.6 & 77.0 & 81.2 & 65.6 \\\\ PointSPIT & 55.7 & 80.7 & 90.9 & 77.8 & 30.5 & 92.5 & 56.9 & 44.4 & 79.6 & 82.2 & 67.7 \\\\ PointConv & 65.5 & 79.9 & 88.5 & 72.1 & 25.0 & 90.5 & 5point cloud data with labels of 1 km\\({}^{2}\\) for point clouds classification, which contains five surface features. In this experiment, parts 2 and 3 were used for training, and parts 1 and 4 were used for testing. The numbers of point-cloud distributions are listed in Table 6. Table 7 summarizes the final classification results. To further demonstrate the efficacy of MIA-Net, we evaluated the F1 scores, average F1 and OA of the final results and compared with those of current mainstream deep learning methods including PointNet++ (Ye et al., 2020), PointCNN(Li et al., 2022), KPConv (Li et al., 2020), DGCNN (Wang et al., 2019), GAC-Net (Wang et al., 2019), IPCONV (Zhang et al., 2023a) and RFFS-Net (Mao et al., 2022), the results can be viewed in Table 8. Table 8 shows that our method exhibits a competitive classification performance with respect to the LASDU dataset compared with other methods. It outperforms other deep learning algorithms with respect to low vegetation classification by 0.6 % and OA performs the best compared to the other seven algorithms. In the five label classes, the artifacts result is not high. This is because the artifacts in the LASDU dataset were composed of two types of features: artifacts and vehicles. Differences in the two types of features lead to confusion in the classification process. Fig. 8 shows the final classification results and the method comparison for LASDU data. ### Evaluation with the GML dataset GML datasets was collected by the airborne LiDAR system of Optech ALTM 2050 in 2010, which contains five types of surface features: ground, buildings, cars, trees and low vegetation. GML provides labeled training and testing point cloud data, which contains of 1,073,989 and 1,002,426 point clouds respectively. The numbers of point cloud distributions are listed in Table 9. Table 10 summarizes the final classification results. To further underscore the efficacy of MIA-Net, we evaluated the F1 scores, average F1 and OA of the final results and compared with those of current mainstream deep learning methods including PointNet++, PointCNN, DensePoint (Liu et al., 2019), RS-CNN(Li et al., 2022), DGCNN (Wang et al., 2019), PointConv and VD-LAB, the results can be viewed in Table 11. From the Table 11, it can be seen that our method performs well in the classification of ground, buildings and trees, which is 0.6 %, 11.4 % and 0.4 % higher than other methods. At the same time, it shows the state-of-the-art performance in terms of OA and average F1-score. Poor performance in car and low vegetation due to the small proportion of these types in the total and susceptibility to fluctuations. Fig. 9 shows the final classification results for the GML data. ### Performance comparison In order to demonstrate the efficiency and simplicity of our proposed MIA-Net, we compared the parameters, FLOPs and the model complexity with existing transformer-based networks. From Table 12, it can be seen that our model has a small number of parameters and FLOPs and can achieve excellent results. 
\\(N\\), \\(k\\), and \\(d\\) represent point cloud numbers, offset point, and the dimension of point cloud features. Compared with PointCloudTransformer, our model parameters and FLOPs are larger because we used more feature extraction layers. But we can reduce the model complexity from exponential level to log level, and achieved minimal model complexity. ## 5 Conclusion In this study, an airborne point cloud classification network, MIA-Net, is proposed, which first encodes the 3D point cloud coordinate relation as a continuous trigonometric function to obtain fine-grained features. To better capture the relationship between different feature layers and points, an intuitive attention module is constructed in our method, which uses the point-offset attention mechanism to fuse the \\begin{table} \\begin{tabular}{l c c} \\hline Categories & Training data & Test data \\\\ \\hline Ground & 704,425 & 637,257 \\\\ Buildings & 508,479 & 395,109 \\\\ Trees & 204,775 & 108,466 \\\\ Low vegetation & 210,495 & 192,051 \\\\ Artifacts & 66,738 & 53,061 \\\\ Sum & 1,694,912 & 1,385,944 \\\\ \\hline \\end{tabular} \\end{table} Table 6: Number of points of the LASDU. \\begin{table} \\begin{tabular}{l c c c c c} \\hline Categories & Ground & Building & Tree & Low vegetation & Artifacts \\\\ \\hline Ground & 601,895 & 1124 & 479 & 24,825 & 8934 \\\\ Buildings & 11,040 & 371,605 & 2887 & 3336 & 5877 \\\\ Tree & 3917 & 1909 & 93,090 & 7409 & 2141 \\\\ Low vegetation & 46,205 & 1504 & 9886 & 129,542 & 4914 \\\\ Artifacts & 22,803 & 5506 & 1230 & 4665 & 19,057 \\\\ precision & 87.7 & 97.4 & 86.5 & 76.4 & 46.6 \\\\ Recall & 94.5 & 94.1 & 85.8 & 67.5 & 35.9 \\\\ F1 score & 91.0 & 95.7 & 86.2 & 71.6 & 40.6 \\\\ \\hline \\end{tabular} \\end{table} Table 7: Confusion matrix for the MIA-Net on LASDU dataset(%). Figure 7: Comparison of error maps for MIA-Net, HM_methods, PointConv and UM method. features and directly calculates the attention relationship matrix with the point feature vectors, which greatly reduces the computational complexity of the traditional attention mechanism. In addition, we performed a series of experiments to validate the efficacy of MIA-Net, which achieved a better performance than the current mainstream deep learning methods with respect to the Vahilangen, LASDU and GML datasets. The outcomes of the ablation experiment indicate the intuitive attention mechanism module effectively improves the model's performance and the transferability of this module is superior. Our study provides guidance for further research on the semantic classification of large scenes. However, our method is fully supervised, which leads to time-consuming annotation in practical applications. Therefore, we plan to introduce a weakly supervised method in our MIA-Net to reduce the manual intervention. **CRediT authorship contribution statement** **Ziyang Wang:** Software, Resources, Methodology, Data curation, Conceptualization. **Hui Chen:** Conceptualization, Data curation, Software. **Jing Liu:** Conceptualization. 
**Jiarui Qin:** Methodology, Data \\begin{table} \\begin{tabular}{l c c c c c} \\hline Categories & Ground & Building & Car & Tree & Low vegetation \\\\ \\hline Ground & 415,205 & 10,805 & 747 & 8916 & 4316 \\\\ Buildings & 434 & 17,379 & 0 & 1192 & 587 \\\\ Cars & 1305 & 113 & 796 & 246 & 775 \\\\ Trees & 3166 & 144 & 49 & 527,823 & 670 \\\\ Low vegetation & 1719 & 56 & 298 & 2658 & 3027 \\\\ precision & 98.4 & 61.0 & 42.1 & 97.6 & 32.2 \\\\ Recall & 94.4 & 88.7 & 24.6 & 99.2 & 39.0 \\\\ F1 score & 96.4 & 72.3 & 31.1 & 98.4 & 35.3 \\\\ \\hline \\end{tabular} \\end{table} Table 10: Confusion matrix for the MIA-Net on GML dataset(%). \\begin{table} \\begin{tabular}{l c c c c c c c} \\hline Methods & Ground & Building & Tree & Low vegetation & Artifacts & OA & Avg.F1 \\\\ \\hline Pointnet++ & 87.7 & 90.6 & 82.0 & 63.2 & 31.3 & 82.8 & 71.0 \\\\ PointCNN & 89.3 & 92.8 & 84.1 & 62.8 & 31.7 & 85.0 & 72.1 \\\\ PointConv & 89.6 & 93.4 & 84.6 & 67.5 & 36.4 & 85.9 & 74.5 \\\\ DGCNN & 90.5 & 93.2 & 81.6 & 63.3 & 37.1 & 85.5 & 73.1 \\\\ GAC-Net & **91.4** & 94.4 & 85.1 & 70.5 & 43.7 & 87.2 & 72.0 \\\\ IPCONV & 90.5 & **96.3** & 85.8 & 59.6 & **46.3** & 86.7 & 75.7 \\\\ RIFS-Net & 90.9 & 95.4 & **86.8** & **71.0** & **44.4** & 87.1 & **77.7** \\\\ Ours & 91.0 & 95.7 & 86.2 & **71.6** & 40.6 & 87.7 & 77.0 \\\\ \\hline \\end{tabular} \\end{table} Table 8: Quantitative results of MIA-Net and five additional deep learning methods on the LASDU dataset (F1%). we report the F1 score of each category, meanwhile the overall accuracy (OA) and average F1 score (Avg.F1) are given in the last two columns. (The bold values mean the highest value of the current indicator and the underline is the second best results). Figure 8: The visual comparison of the ground truth, the classification results achieved by the GAC-Net and our MIA-Net on LASDU dataset. \\begin{table} \\begin{tabular}{l c c} \\hline Categories & Training data & Test data \\\\ \\hline Ground & 557,142 & 439,989 \\\\ Buildings & 98,244 & 19,592 \\\\ Cars & 1833 & 3235 \\\\ Trees & 381,677 & 531,852 \\\\ Low vegetation & 35,093 & 7758 \\\\ Sum & 1,073,989 & 1,002,426 \\\\ \\hline \\end{tabular} \\end{table} Table 9: Number of points of the GML. curation. **Yehua Sheng**: Conceptualization. **Lin Yang**: Methodology. ## Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. ## Data availability Data will be made available on request. ## Acknowledgments This work was supported by the National Natural Science Foundation of China (No: 42001284), the Jiangsu Natural Science Foundation (No: BK20200722), the Postgraduate Research & Practice Innovation Program of Jiangsu Province(No. KYC23_1707), and the Jiangsu Natural Science Foundation of China (Grant No. BK20231283). ## References * Ben-Shabur et al. (2018) Ben-Shabur, Y., Lindenbaum, M., Fischer, A., 2018. 3dmfr: Three-dimensional point cloud classification in real-time using convolutional neural networks. IEEE Robot Acment Lett 3, 3145-3152. * Dai et al. (2023) Dai, H., Hu, X., Shu, Z., Qin, N., Zhang, J., 2023. Deep ground filtering of large-scale AIS point clouds via iterative sequential ground prediction. Remote Sens (Neac) 15, 961. * Dersch et al. (2023) Dersch, S., Schott, A., Krysbak, P., Heuricht, M., 2023. 
Towards complete tree crown delineation by instance segmentation with Mask R-CNN and DETR using UAV-based \\begin{table} \\begin{tabular}{l c c c c c c} \\hline Methods & OA & Parameters & FLOPs & Complexity \\\\ & (\\%) & (MHz) & (GB) & & \\\\ \\hline PointTransformer & 92.8 & 13.9 & 9.4 & OO/P(d) \\\\ PointCloudTransformer & 93.2 & **2.8** & **2.0** & \\\\ PointTransformer & 93.7 & 9.1 & 17.1 & \\\\ CloudTransformer & 93.1 & 22.9 & 12.7 & \\\\ GB-Net & **93.8** & 8.4 & 9.0 & \\\\ Mi/Net & 93.0 & 3.2 & 8.9 & **O(M/d)** \\\\ \\hline \\end{tabular} \\end{table} Table 12: Comparing model parameters on the modelnet40 dataset. (The bold values mean the highest value of the current indicator and the underline is the second best results). \\begin{table} \\begin{tabular}{l c c c c c c c} \\hline Methods & Ground & Building & Car & Tree & Low vegetation & OA & Avg.F1 \\\\ \\hline Pointnet++ & 83.8 & 27.4 & 39.2 & 94.0 & 15.6 & 85.3 & 52.0 \\\\ PointCNN & 92.3 & 42.1 & 22.9 & 94.6 & 22.4 & 92.0 & 55.1 \\\\ DensePoint & 94.5 & 51.3 & 32.7 & 96.7 & 8.4 & 94.1 & 56.7 \\\\ RS-CNN & 92.8 & 35.0 & 35.2 & 96.2 & 15.1 & 93.3 & 54.9 \\\\ DGCNN & 95.3 & 46.9 & 27.0 & 97.5 & 11.4 & 94.7 & 55.7 \\\\ PointConv & 95.8 & 53.1 & **46.0** & 97.4 & 28.9 & 95.1 & 64.2 \\\\ VDLAB & 93.4 & 60.9 & 41.8 & 98.0 & **35.6** & 95.5 & 66.3 \\\\ Ours & **96.4** & **72.3** & 31.1 & **98.4** & 35.3 & **96.2** & **66.7** \\\\ \\hline \\end{tabular} \\end{table} Table 11: Quantitative results of MiA-Net and seven additional deep learning methods on the GMI dataset (F1%). we report the F1 score of each category, meanwhile the overall accuracy (OA) and average F1 score (Avg.F1) are given in the last two columns. (The bold values mean the highest value of the current indicator and the underline is the second best results). Figure 9: The visual comparison of the ground truth, the classification results achieved by the PointConv and our MIA-Net on GMI dataset. multispectral imagery and lidar data. ISPRS Open J. Photogram. Remote Sens. **8**, 100037. * [2] Eldarst, N., Glennie, C., Fernandez-Diaz, J.C., 2018. Classification of airborne multispectral lidar point clouds for land cover mapping. IEEE J. Sel. Top Appl. Earth Obs. Remote Sens. **11**, 1608-2078. * [3] Gerthew, M., Razani, R., Taghavi, E., 2021. **Tornado-net: multiview total variation semantic segmentation with diamond inception module.** in 2021 IEEE International Conference on Robotics and Automation (OCRA). IEEE, pp. 9543-9549. * [4] Girshick, R., 2015. Fast r-cnn, in: Proceedings of the IEEE international Conference on Computer Vision. pp. 1440-1448. * [5] Guo, M., Li, S., Liu, Z., Xu, J., Yu, T., Martin, R.R., Hu, S.M., 2021. PCI: Point cloud transformer. Comput. vis Media (Deielling) **7**, 187-199. * [6] Guo, L., Chehats, N., Mallet, C., Budair, S., 2011. Relevance of airborne lidar and multispectral image data for urban scene classification using Random Forests. ISPRS J. Photogram. Remote Sens. **6**, 56-66. * [7] Guo, J., Xu, Q., Zeng, Y., Liu, Z., Zhu, X., 2023. Nationwide urban tree canopy mapping and oversea assessment in Brazil from high-resolution remote sensing images using deep learning. ISPRS J. Photogram. Remote Sens. **10**, 1-15. * [8] Hacel, T., Wegger, J.D., Schindler, K., 2016. Fast semantic segmentation of 3D point clouds with strongly varying density. ISPRS Ann. Photogram. Remote Sens. Spatial Inf. Sci. **3**, 17-184. * [9] He, K., Gloxoxari, G., Dollar, P., Girshick, R., 2017. 
Mask r-cnn, in: Proceedings of the IEEE International Conference on Computer Vision. pp. 2961-2969. * [10] Horvat, D., Zalik, B., Mongos, D., 2016. Context-dependent orientation of non-linearly distributed points for vegetation classification in airborne LIDAR. ISPRS J. Photogram. Remote Sens. **11**, 1-14. * [11] Huang, J., You, S., 2016. Point cloud labeling using 3d convolutional neural network, in: 2016 23rd International Conference on Pattern Recognition (ICPR). IEEE, pp. 2670-2675. * [12] Hui, Z., Hu, Y., Jin, S., Yeeveng, V.Z., 2016. Road centerline extraction from airborne LIDAR point cloud based on hierarchical fusion and optimization. ISPRS J. Photogram. Remote Sens. **11**, 22-36. * [13] Hu, L., Lindberg, E., Bohlin, J., Persson, H.J., 2023. Assessing the detectability of European sparse lidar kinetic green attack in multispectral image images with high spatial-and temporal resolution. Remote Sens Environ **287**, 113484. * [14] Jiang, M., Wu, Y., Zhao, Z., Zha, Z., 2015. **Pointcast: A still-the-network module for 3d point cloud semantic segmentation.** arXiv preprint arXiv:1807.00652. * [15] Kim, J., Iiyun, C.,-U., Han, H., Kim, H.-C., 2021. Digital surface model generation for drifting Arctic sea ice with low-neutral surfaces based on drone images. ISPRS J. Photogram. Remote Sens. **17**, 147-159. * [16] Lai, X., Liu, J., Jiang, L., Wang, L., Zhao, H., Liu, S., Qi, X., Jia, J., 2022. Stratified transformer 3d point cloud segmentation, in: Proceedings of the IEEE/CVI Conference on Computer Vision and Pattern Recognition. pp. 8500-8509. * [17] Landstein, L., Simonovsky, M., 2018. Large-scale point cloud semantic segmentation with supergraph. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4558-4567. * [18] Com, Y., Butota, L., Bengio, V., Izhirter, P., 1998. Gradient-based learning applied to document recognition. Proc. IEEE 86**, 2278-2324. * [19] Li, H., Althar, N., Mina, A., 2020. Spatiotemporal kernel for efficient graph convolution on 3d point clouds, IEEE Trans. Pattern Anal. Mach Intell. **4**, 3664-3680. * [20] M., Long, J., Stein, A., Wang, X., 2023. Using a semantic edge-aware multi-task neural network to delineate agricultural parcels from remote sensing images. ISPRS J. Photogram. Remote Sens. **20**, 24-40. * [21] X., Wang, L., Wang, M., Wen, C., Fang, Y., 2020. DANC-NET: Disparity-aware convolution networks with context encoding for airborne LIDAR point cloud classification. ISPRS J. Photogram. Remote Sens. **16**, 128-139. * [22] J., Weinnan, M., Sun, D., Wu, F., Jin, S., Fu, K., 2022. VU-LAB: A five-d decoupled network with local-segmentation bridge for airborne laser scanning point cloud classification. ISPRS J. Photogram. Remote Sens. **186**, 19-33. * [23] Yu, Zho, L., Chen, Y., Zhang, N., Fan, H., Zhang, Z., 2023. DJA: End-and multi-technology collaboration for presentation of built heritage in China. a review. Int. J. Appl. Earth Obs. Getif. **116**, 103156. * [24] Long, V.E., Nguyen, T.N., Widija, S., Sharma, D., Chong, Z.J., 2020. Anvnet: Ascern-based multi-view fusion network for lidar semantic segmentation. arXiv preprint arXiv:2012.04934. * [25] Liu, Y., Fan, B., Meng, G., Lu, J., Xiang, S., Pan, C., 2019. Densepoint: Learning densely contextual representation for efficient point cloud processing. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 5293-5248. * [26] Liu, Z., Tang, H., Liu, Y., Hu, S., 2019. Point-voxel cm for efficient 3d deep learning. Adv. Neural Inf. Process Syst. 
**32**. * [27] Liu, J., Skidarc, A.K., Jones, S., Wang, T., Heirich, M., Zhu, X., Shi, Y., 2018. Large off-nxid scan image of airborne LIDAR can severely affect the estimates of forest structure metrics. ISPRS J. Photogram. Remote Sens. **136**, 13-25. * [28] Liu, Z., Tang, H., Zhao, S., Shao, K., Han, S., 2021. Pruss: 3d neural architecture search with point-voxel convolution. IEEE Trans. Pattern Anal. Mach. Intell. **4**, 8552-8568. * [29] Lodha, S.K., Fitzpatrick, D.M., Helmbold, D.P., 2007a. Aerial lidar data classification using adaboost. In: Sixth International Conference on 3D Digital Imaging and Modeling (SIDM 2007). IEEE, pp. 435-442. * [30] Lodha, S.K., Kreg, J.E., Helmbold, D.P., Fitzpatrick, D., 2006. Aerial LiDAR data classification using support vector machines (SVM). In: Thrl International Symposium on 3D Data Processing, Visualization, and Transmission (SDPVT'06). IEEE, pp. 507-574. * [31] Lodha, S.K., Fitzpatrick, D.M., Helmbold, D.P., 2007a. Aerial lidar data classification using expectation-maximization. Vision Geometry XV. SPIE **177**-187. * [32] Mao, Y., Chen, K., Dao, W., Sun, X., Lin, X., Fu, K., Weinmann, M., 2022. Beyond single receptive field: A receptive field fusion and stratification network for airborne laser scanning point cloud classification. ISPRS J. Photogram. Remote Sens. **18**, 45-61. * [33] Matuma, D., Scherrer, S., 2015. Voxnet: A 3d convolutional neural network for real-time object recognition. In: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, pp. 922-928. * [34] Niemeyer, J., Mallet, C., Rottensteiner, P., Siegel, U., 2011. Conditional random fields for the classification of LIDAR point clouds. The International Archives of the Photogramogram. Remote Sensing and Spatial Information Sciences. * [35] Niemeyer, J., Rottensteiner, P., Siegel, U., 2012. Conditional random fields for lidar point cloud classification in complex urban areas. ISPRS Ann. Photogram. Remote Sens. Spatial Inf. Sci. **1**, 263-268. * [36] Niemeyer, J., Rottensteiner, P., Siegel, U., Heigle, C., 2016. Hierarchical higher order rcf for the classification of airborne lidar point clouds in urban areas. Int. Arch. Photogram. Remote Sens. Stat. Inf. Sci. **4**, 655-662. * [37] Nong, X., Bai, W., Liu, Q., 2023. Airborne LIDAR point cloud classification using PointNet + network with full neighborhood features. PLoS One **18**, 6020346. * [38] Pepe, M., Greenstein, L., Scintill, M., 2018. Planning airborne photogrammetry and remote-sensing missions with modern platforms and sensors. Eur. J. Remote Sens. **51**, 412-426. * [39] Qi, C., Zhang, N., Gu, L.J., 2017. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. Adv Neural Inf Process Syst **30**. * [40] Qi, C., Su, H., Mo, G., Guhos, L.J., 2017. Pointnet: Deep learning on point sets for 3d classification and segmentation. In: Im Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 652-660. * [41] Riziday, A., Persello, C., Grevert, C., Oude Eberriens, S., Vossmann, G., 2018. Ground and multi-class classification of airborne laser scanner point clouds using fully convolutional networks. Remote Sens. **1**, 1023. * [42] Romenberg, O., Fischer, P., 1987. U-net: Convolutional networks for biomedical image segmentation. In: Medical Imaging Computing and Computer-Assisted Intervention-MCRAI 2015. 18th International Conference, Munich, Germany, October 59, 2015, Proceedings, Part III 18. Springer, pp. 234-241. 
* [43] Shapovakov, R., Velthuev, A., Barinova, O., 2010. Non-associative markov networks for point cloud classification. In: Proceedings of the ISPRS Technical Communication III Symposium on Photogrammetry Computer Vision and Image Analysis, Paris, France. pp. 1-3. * [44] Sheng, H., Cai, S., Liu, Y., Deng, B., Huang, J., Hua, X.-S., Zhao, M.J., 2021. Improving 3d object detection with channel-aware transformer. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 2743-2752. * [45] Simonovsky, M., Komodakis, N., 2017. Dynamic edge-conditioned filters in convolutional neural networks on graphs. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3693-3702. * [46] Solberg, S., 2010. Mapping gap fraction, LAI and defiliation using various ALS penetration variables. Int. J. Remote Sens. **3**, 1227-1247. * [47] Sreevansai-Nair, J., Jindal, A., Kumar, B., 2018. Contour extraction in buildings in airborne lidar point clouds using multiscale local geometric descriptors and visual analytics. IEEE J. Sel. Top. Appl. Earth Obs. Remote* Wu et al. (2023) Wu, W., Paulin, L., Shan, Q., 2023. Pointcomformer: Revenge of the point-based convolution. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 21802-21813. * Si et al. (2021) Si, S., Xu, S., Wang, R., Li, J., Wang, G., 2021. Building instance mapping from ALS point clouds aided by polygonal maps. IEEE Trans. Geosci. Remote Sens. 60, 1-13. * Wang et al. (2023) Wang, W., Hu, B., Li, S., Luo, Z., Deng, T., He, W., He, H., Chen, Z., 2023. Monitoring multi-water quality of intermittationality important karst wetland through deep learning, multi-sensor and multi-platform remote sensing images: A case study of Guilin, China, Ecol Indie 154, 110755. * Yang et al. (2018) Yang, Z., Tan, B., Pei, H., Jiang, W., 2018. Segmentation and multi-scale convolutional neural network-based classification of airborne laser scanner data. Sensors 18, 3347. * Ye et al. (2020) Ye, Z., Xu, Y., Huang, R., Tong, X., Li, X., Liu, X., Luan, K., Hoegner, L., Stills, U., 2020. Inside 4: ImageNet aerial lidar dataset for semantic labeling in dense urban areas. ISPIS Int J Geosci 9, 450. * Zhang and Rabbat (2018) Zhang, Y., Rabbat, M., 2018. A graph-cnn for 3d point cloud classification, inc 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, pp. 6279-6283. * Zhang et al. (2023) Zhang, R., Chen, S., Wang, X., Zhang, Y., 2023. IPCONV: convolution with multiple different kernels for point cloud semantic segmentation. Remote Sens (based) 15, 5136. * Renni et al. (2023) Renni, Wang, L., Wang, Y., Gao, P., Li, H., Shi, J., 2023. Parameter is not all you need. Starting from non-parametric networks for 3d point cloud analysis. arXiv arXiv:2303.08134. * Zhang et al. (2022) Zhang, K., Ye, L., Xia, W., Sheng, Y., Zhang, S., Tao, X., Zhou, Y., 2022. A dual attention neural network for airborne LiDAR point cloud semantic segmentation. IEEE Trans. Geosci. Remote Sens. 60, 1-17. * Zhao et al. (2021) Zhao, H., Jiang, L., J., Torr, P.H.S., Koltun, V., 2021. Point transformer, inc: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 16259-16268. * Zhao et al. (2015) Zhao, R., Pang, M., Wang, J., 2015. Classifying airborne LiDAR point clouds via deep features learned by a multi-scale convolutional neural network. Int. J. Geogr. Inf. Sci. 32, 960-97. * Zhibera and Ning (2019) Zhibera, K., Ning, L., 2019. 
PyramNet: Point cloud pyramid attention network and graph embedding module for classification and segmentation. arXiv preprint arXiv:1906.03299.
Three-dimensional laser scanning technology is widely employed in various fields due to its advantage in the rapid acquisition of geographic scene structures. Achieving high-precision and automated semantic segmentation of three-dimensional point cloud data remains a vital challenge in point cloud recognition. This study introduces a Multilevel Intuitive Attention Network (MIA-Net) designed for point cloud segmentation. MIA-Net consists of three key components: local trigonometric function encoding, feature sampling, and intuitive attention interaction. Initially, trigonometric encoding captures fine-grained local semantics within disordered point clouds. Subsequently, a multilayer perceptron handles point-cloud feature pyramid construction, and feature sampling is performed using the point offset mechanism at the different levels. Finally, the multilevel intuitive attention (MIA) mechanism facilitates feature interactions across different layers, enabling the capture of both local attention features and global structure. The point-offset attention scheme introduced in this study significantly reduces computational complexity compared to traditional attention mechanisms, enhancing computational efficiency while preserving the advantages of attention mechanisms. To evaluate MIA-Net, the ISPRS Vaihingen benchmark, LASDU, and GML airborne datasets were tested. Experiments show that our network achieves state-of-the-art performance in terms of Overall Accuracy (OA) and average F1-score (e.g., reaching 96.2% and 66.7% for the GML dataset, respectively).
A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community John E. Ball [email protected] Mississippi State University, Department of Electrical and Computer Engineering, 406 Hardy Rd., Mississippi State, MS, USA, 39762 University of Malaya, Faculty of Computer Science and Information Technology, 50603 Lembah Pantai, Kuala Lumpur, Malaysia Derek T. Anderson Mississippi State University, Department of Electrical and Computer Engineering, 406 Hardy Rd., Mississippi State, MS, USA, 39762 University of Malaya, Faculty of Computer Science and Information Technology, 50603 Lembah Pantai, Kuala Lumpur, Malaysia Chee Seng Chan University of Malaya, Faculty of Computer Science and Information Technology, 50603 Lembah Pantai, Kuala Lumpur, Malaysia ## 1 Introduction In recent years, deep learning (DL) has led to leaps, versus incremental gain, in fields like computer vision (CV), speech recognition, and natural language processing, to name a few. The irony is that DL, a surrogate for neural networks (NNs), is an age old branch of artificial intelligence that has been resurrected due to factors like algorithmic advancements, high performance computing, and Big Data. The idea of DL is simple; the machine is learning the features and decision making (classification), versus a human manually designing the system. The reason this article exists is remote sensing (RS). The reality is, RS draws from core theories such as physics, statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. The aim of this article is to provide resources with respect to theory, tools and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as it relates to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing the DL. Herein, RS is a technological challenge where objects or scenes are analyzed by remote means. This includes the traditional remote sensing areas, such as satellite-based and aerial imaging. This definition also includes non-traditional areas, such as unmanned aerial vehicles (UAVs), crowd-sourcing (phone imagery, tweets, etc.), advanced driver assistance systems (ADAS), etc. Thesetypes of remote sensing offer different types of data and have different processing needs, and thus also come with new challenges to algorithms that analyze the data. The contributions of this paper are as follows: 1. _Thorough list of challenges and open problems in DL RS._ We focus on unsolved challenges and opportunities as it relates to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing the DL. These observations are based on surveying RS DL and feature learning literature, as well as numerous RS survey papers. 
This topic is the majority of our paper and is discussed in section 4. 2. _Thorough literature survey._ Herein, we review 207 RS application papers, and 57 survey papers in remote sensing and DL. In addition, many relevant DL papers are cited. Our work extends the previous DL survey papers [1, 2, 3] to be more comprehensive. We also cluster DL approaches into different application areas and provide detailed discussions of some relevant example papers in these areas in section 3. 3. _Detailed discussions of modifying DL architectures to tackle RS problems._ We highlight approaches in DL in RS, including new architectures, tools and DL components, that current RS researchers have implemented in DL. This is discussed in section 4.5. 4. _Overview of DL._ For RS researchers not familiar with DL, section 2 provides a high-level overview of DL and lists many good references for interested readers to pursue. 5. _Deep learning tool list._ Tools are a major enabler of DL, and we review the more popular DL tools. We also list pros and cons of several of the most popular toolsets and provide a table summarizing the tools, with references and links (refer to Table 2). For more details, see section 2.4.4. 6. _Online summaries of RS datasets and DL RS papers reviewed_. First, an extensive online table with details about each DL RS paper we reviewed: sensor modalities, a compilation of the datasets used, a summary of the main contribution, and references. Second, a dataset summary for all the DL RS papers analyzed in this paper is provided online. It contains the dataset name, a description, a URL (if one is available) and a list of references. Since the literature review for this paper was so extensive, these tables are too large to put in the main article, but are provided online for the readers' benefit. These tables are located at [http://www.cs-chan.com/source/FADL/OnlineDataset_Summary_Table.pdf](http://www.cs-chan.com/source/FADL/OnlineDataset_Summary_Table.pdf) and [http://www.cs-chan.com/source/FADL/Online_Paper_Summary_Table.pdf](http://www.cs-chan.com/source/FADL/Online_Paper_Summary_Table.pdf). As an aid to the reader, Table 1 lists acronyms used in this paper. 
\\begin{table} \\begin{tabular}{|l|l|l|l|l|} \\hline **Acronym** & \\multicolumn{2}{|c|}{**Meaning**} & **Acronym** & \\multicolumn{1}{l|}{**Meaning**} \\\\ \\hline ADAS & Advanced Driver Assistance & AE & AutoEncoder \\\\ & System & & & \\\\ \\hline ANN & Artificial Neural Network & ATR & Automated Target Recognition \\\\ \\hline AVHRR & Advanced Very High Resolution & AVIRIS & Airborne Visible / Infrared \\\\ & Radiometer & & Imaging Spectrometer \\\\ \\hline BP & Backpropagation & CAD & Computer Aided Design \\\\ \\hline CFAR & Constant False Alarm Rate & CG & Conjugate Gradient \\\\ \\hline ChI & Choquet Integral & CV & Computer Vision \\\\ \\hline CNN & Convolutional Neural Network & DAE & Denoising AE \\\\ \\hline DAG & Directed Acyclic Graph & DBM & Deep Boltzmann Machine \\\\ \\hline DBN & Deep Belief Network & DeconvNet & DeConvolutional Neural Network \\\\ \\hline DEM & Digital Elevation Model & DIDO & Decision In Decision Out \\\\ \\hline DL & Deep Learning & DNN & Deep Neural Network \\\\ \\hline DSN & Deep Stacking Network & DWT & Discrete Wavelet Transform \\\\ \\hline FC & Fully Connected & FCN & Fully Convolutional Network \\\\ \\hline FC- & Fully Convolutional CNN & FC- & Fully Connected LSTM \\\\ CNN & & LSTM & \\\\ \\hline FIFO & Feature In Feature Out & FL & Feature Learning \\\\ \\hline GBRCN & Gradient-Boosting Random & GIS & Geographic Information System \\\\ & Convolutional Network & & \\\\ \\hline GPU & Graphical Processing Unit & HOG & Histogram of Ordered Gradients \\\\ \\hline HR & High Resolution & HSI & HyperSpectral Imagery \\\\ \\hline ILSVRC & ImageNet Large Scale Visual & L-BGFS & Limited Memory BGFS \\\\ & Recognition Challenge & & \\\\ \\hline LBP & Local Binary Patterns & LiDAR & Light Detection and Ranging \\\\ \\hline LR & Low Resolution & LSTM & Long Short-Term Memory \\\\ \\hline LWIR & Long-Wave InfraRed & MKL & Multi-Kernel Learning \\\\ \\hline MLP & Multi-Layer Perceptron & MSDAE & Modified Sparse Denoising Autoencoder \\\\ \\hline MSI & MultiSpectral Imagery & MWIR & Mid-wave InfraRed \\\\ \\hline NASA & National Aeronautics and Space & NN & Neural Network \\\\ & Administration & & \\\\ \\hline NOAA & National Oceanic and Atmospheric Administration & PCA & Principal Component Analysis \\\\ \\hline PGM & Probabilistic Graphical Model & PReLU & Parametric Rectified Linear Unit \\\\ \\hline RANSAC & RANdom SAmple Concesus & RBM & Restricted Boltzmann Machine \\\\ \\hline ReLU & Rectified Linear Unit & RGB & Red, Green and Blue image \\\\ \\hline RGBD & RGB + Depth image & RF & Receptive Field \\\\ \\hline \\multicolumn{4}{|c|}{Continued on next page} \\\\ \\hline \\end{tabular} \\end{table} Table 1: Acronym list. This paper is organized as follows. Section 2 discusses related work in CV. This section contrasts deep and \"shallow\" learning, and discusses DL architectures. The main reasons for success of DL are also discussed in this section. Section 3 provides an overview of DL in RS, highlighting DL approaches in many disparate areas of RS. Section 4 discusses the unique challenges and open issues in applying DL to RS. Conclusions and recommendations are listed in section 5. ## 2 Related work in CV CV is a field of study trying to achieve visual understanding through computer analysis of imagery. 
In the past, typical approaches utilized a processing chain which usually started with image denoising or enhancement, followed by feature extraction (with human-coded features), a feature optimization stage, and then processing on the extracted features. These architectures were mostly "shallow", in the sense that they usually had only one to two processing layers between the features and the output. Shallow learners (Support Vector Machines (SVMs), Gaussian Mixture Models, Hidden Markov Models, Conditional Random Fields, etc.) have been the backbone of traditional research efforts for many years[2]. In contrast, DL usually has many layers (the exact demarcation between "shallow" and "deep" learning is not a set number), which allows a rich variety of highly complex, nonlinear and hierarchical features to be learned from the data. The following sections contrast deep and shallow learning, discuss DL approaches and DL enablers, and finally discuss DL success in domains other than RS.

### Deep vs. shallow learning

_Shallow learning_ is a term used to describe learning networks that usually have at most one to two layers. Examples of shallow learners include the popular SVM, Gaussian mixture models, hidden Markov models, conditional random fields, logistic regression models, and the extreme learning machine[2]. Shallow learning models usually have one or two layers that compute a linear or non-linear function of the input data (often hand-designed features). DL, on the other hand, usually means a deeper network, with many layers of (usually) non-linear transformations.

\\begin{table} \\begin{tabular}{|p{56.9pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \\hline **Acronym** & **Meaning** & **Acronym** & **Meaning** \\\\ \\hline RICNN & Rotation Invariant CNN & RNN & Recurrent NN \\\\ \\hline RS & Remote Sensing & R-VCANet & Rolling guidance filter Vertex Component Analysis NETwork \\\\ \\hline S-MSDAE & Stacked MSDAE & SAE & Stacked AE \\\\ \\hline SAR & Synthetic Aperture Radar & SDAE & Stacked DAE \\\\ \\hline SIDO & Signal In Decision Out & SIFT & Scale Invariant Feature Transform \\\\ \\hline SISO & Signal In Signal Out & SGD & Stochastic Gradient Descent \\\\ \\hline SPI & Standardized Precipitation Index & SSAE & Stacked Sparse Autoencoder \\\\ \\hline SVM & Support Vector Machine & UAV & Unmanned Aerial Vehicle \\\\ \\hline VGG & Visual Geometry Group & VHR & Very High Resolution \\\\ \\hline \\end{tabular} \\end{table} Table 1: continued from previous page

Although there is no universally accepted definition of how many layers constitute a "deep" learner, typical networks have at least four or five layers. Three main features of DL systems are that DL systems (1) can learn features directly from the data itself, versus human-designed features, (2) can learn hierarchical features which increase in complexity through the deep network, and (3) can be more generalizable and more efficiently encode the model compared to a shallower NN approach; that is, a shallow system will require exponentially more neurons (and thus more free parameters) and more training data [4, 5]. An interesting study on deep and shallow nets is given by Ba and Caruana [6], where they perform model compression by training a Deep NN (DNN). The unlabeled data is then evaluated by the DNN and the scores produced by that model are used to train a compressed (shallower) model.
If the compressed model learns to mimic the large model perfectly it makes exactly the same predictions and mistakes as the complex model. The key is the compressed model has to have enough complexity to regenerate the more complex model output. DL systems are often designed to loosely mimic human or animal processing, in which there are many layers of interconnected components, e.g. human vision. So there is a natural motivation to use deep architectures in CV-related problems. For the interested reader, we provide some useful survey paper references. Arel et al. provide a survey paper on DL [7]. Deng et al. [2] provide two important reasons for DL success: (1) Graphical Processing Unit (GPU) units and (2) recent advances in DL research. They discuss generative, discriminative, and hybrid deep architectures and show there is vast room to improve the current optimization techniques in DL. Liu et al. [8] give an overview of the autoencoder, the CNN, and DL applications. Wang et al. provide a history of DL [4]. Yu et al. [9] provide a review of DL in signal and image processing. Comparisons are made to shallow learning, and DL advantages are given. Two good overviews of DL are the survey paper of Schmidhuber et al. [10] and the book by Goodfellow et al. [11] Zhang et al. [3] give a general framework for DL in remote sensing, which covers four RS perspectives: (1) image processing, (2) pixel-based classification, (3) target recognition, and (4) scene understanding. In addition, they review many DL applications in remote sensing. Cheng et al. discuss both shallow and DL methods for feature extraction [1]. Some good DL papers are the introductory DL papers of Arnold et al. [12] and Wang et al. [4], the DL book by Goodfellow et al. [11], and the DL survey papers [2, 4, 5, 8, 10, 13, 14, 15, 16]. ### Traditional Feature Learning methods Traditional methods of feature extraction involve hand-coded features to extract information based on spatial, spectral, textural, morphological content, etc. These traditional methods are discussed in detail in the following references, and we will not give extensive algorithmic details herein. All of these hand-derived features are designed for a specific task, e.g. characterizing image texture. In contrast, DL systems derive complicated, (usually) non-linear and hierarchical features from the data itself. Cheng et al. [1] discuss traditional handcrafted features such as the Histogram of Ordered Gradients (HOG), the Scale-Invariant Feature Transform (SIFT) and SIFT variants, color histograms, etc. They also discuss unsupervised FL methods, such as principal components analysis, \\(k\\)-means clustering, sparse coding, etc. Other good survey papers discuss hyperspectral image (HSI) data analysis [17], kernel-based methods [18], statistical learning methods in HSI [19], spectral distance functions [20], pedestrian detection [21], multi-classifier systems [22], spectral-spatial classification [23], change detection [24, 25], machine learning in RS [26], manifold learning [27], transfer learning [28], endmember extraction [29], and spectral unmixing [30, 31, 32, 33, 34]. ### DL Approaches To date, the auto-encoder (AE), the CNN, Deep Belief Networks (DBNs), and the Recurrent NN (RNN), have been the four mainstream DL architectures. The deconvolutional NN (DeconvNet) is a relative newcomer to the DL community. The following sections discuss each of these architectures at a high level. Many good references are provided for the interested reader. 
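As a concrete contrast with the learned features covered in the following subsections, one of the hand-crafted features from Section 2.2, a color histogram, can be computed in a few lines. The sketch below is framework-agnostic NumPy; the 8-bins-per-channel quantization and the joint (rather than per-channel) histogram are illustrative choices, not taken from any cited work.

```python
# Hand-crafted feature example: a joint RGB color histogram used as a
# fixed-length feature vector (contrast with the learned features of Sec. 2.3).
import numpy as np

def color_histogram(image, bins_per_channel=8):
    """image: (H, W, 3) uint8 RGB array -> normalized joint-histogram feature."""
    quantized = (image.astype(np.int64) * bins_per_channel) // 256  # coarse bins
    joint = (quantized[..., 0] * bins_per_channel + quantized[..., 1]) \
            * bins_per_channel + quantized[..., 2]                  # joint bin index
    hist = np.bincount(joint.ravel(), minlength=bins_per_channel ** 3)
    return hist / hist.sum()        # normalize so image size does not matter

rgb = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)   # stand-in patch
print(color_histogram(rgb).shape)   # (512,) fixed-length feature vector
```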
#### 2.3.1 Autoencoder (AE) An AE is a network designed to learn useful features from unsupervised data. One of the first applications of AEs was dimensionality reduction, which is required in many RS applications. By reducing the size of the adjacent layers, the AE is forced to learn a compact representation of the data. The AE maps the input through an encoder function \\(f\\) to generate an internal (latent) representation, or code, \\(h\\), that is, \\(h=f(\\mathbf{x})\\). The autoencoder also has a decoder function, \\(g\\) that maps \\(h\\) to the output \\(\\hat{\\mathbf{x}}\\). In general, the AE is constrained, either through its architecture, or through a sparsity constraint (or both), to learn a useful mapping (but not the trivial identity mapping). A loss function \\(L\\) measures how close the AE can reconstruct the output: \\(L\\) is a function of \\(\\mathbf{x}\\) and \\(\\hat{\\mathbf{x}}=g(f(\\mathbf{x}))\\). A regularization function \\(\\Omega(h)\\) can also be added to the loss function to force a more sparse solution. The regularization function can involve penalty terms for model complexity, model prior information, penalizing based on derivatives, or penalties based on some other criteria such as supervised classification results, etc. (reference SS14.2 of [11] ). A Denoising AE (DAE) is an AE designed to remove noise from a signal or an image. Chen et al. developed an efficient DAE, which marginalizes the noise and has a computationally efficient closed form solution [35]. To provide robustness, the system is trained using additive Gaussian noise or binary masking noise (force some percentage of inputs to zero). Many RS applications utilize an AE for denoising. Figure 1(a) shows an example of a AE. The diabolo shape results in dimensionality reduction. #### 2.3.2 Convolutional Neural Network (CNN) A CNN is a network that is loosely inspired by the human visual cortex. A typical CNN is composed of multiple dual-layers of convolutional masks followed by pooling, and these layers are then usually followed by either fully-connected or partially-connected layers, which perform classification or class probability estimation. Some CNNs also utilize data normalization layers. The convolution masks have coefficients that are learned by the CNN. A CNN that analyzes grayscale imagery will employ 2D convolution masks, while a CNN using Red-Green-Blue (RGB) imagery will use 3D masks. Through training, these masks learn to extract features directly from the data, in stark contrast to traditional machine learning approaches, which utilize \"hand-crafted\" features. The pooling layers are non-linear operators (usually maximum operators), which allows the CNN to learn non-linear features, which greatly increases its learning capabilities. Figure 1(b) shows an example CNN, where the input is a HSI, and there are two convolution and pooling layers, followed by two fully connected (FC) layers. The number of convolution masks, the size of the masks, and the pooling functions are all parameters of the CNN. The masks at the first layers of the CNN typically learn basic features, and as one traverses the depths of the network, the features become more complex and are built-up hierarchically. Normalization layers provide regularization and can aid in training. The fully-connected layers (or partially-connected layers) are usually near the end of the CNN network, andallow complex non-linear functions to be learned from the hierarchical outputs of the previous layers. 
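Before continuing with the CNN, the AE formulation of Section 2.3.1 (encoder \(f\), decoder \(g\), loss \(L\) and regularizer \(\Omega(h)\)) can be made concrete with a minimal, framework-agnostic NumPy sketch. The single sigmoid hidden layer, squared-error loss, L1 sparsity penalty, layer sizes and learning rate below are illustrative assumptions, not details of any system cited in this survey.

```python
# Minimal AE sketch: encoder h = f(x), decoder x_hat = g(h), squared-error
# reconstruction loss plus an L1 sparsity penalty Omega(h), trained by
# plain gradient descent with manually derived gradients.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 256, 64, 16                       # samples, input dim, code (bottleneck) dim
X = rng.random((n, d))

W1, b1 = 0.1 * rng.standard_normal((d, k)), np.zeros(k)   # encoder f
W2, b2 = 0.1 * rng.standard_normal((k, d)), np.zeros(d)   # decoder g
lam, lr = 1e-3, 0.5                         # sparsity weight and step size

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(200):
    H = sigmoid(X @ W1 + b1)                # latent code h = f(x)
    X_hat = H @ W2 + b2                     # reconstruction g(f(x))
    loss = 0.5 * np.mean(np.sum((X_hat - X) ** 2, axis=1)) \
           + lam * np.mean(np.sum(np.abs(H), axis=1))
    # Back-propagate the loss through the decoder and the encoder.
    G = (X_hat - X) / n
    dW2, db2 = H.T @ G, G.sum(axis=0)
    dH = G @ W2.T + lam * np.sign(H) / n
    dpre = dH * H * (1.0 - H)               # derivative of the sigmoid
    dW1, db1 = X.T @ dpre, dpre.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final reconstruction-plus-sparsity loss:", round(float(loss), 4))
```

In a CNN, by contrast, the feature-extraction role is played by the learned convolutional and pooling stages, whose outputs feed the fully connected layers near the end of the network.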
These final layers typically output class labels or estimates of the probabilities of the class label. CNNs have dominated in many perceptual tasks. Following Ujjwalkarn[36], the image recognition community has shown keen interest in CNNs. Starting in the 1990s, LeNet was developed by LeCun et al.[37], and was designed for reading zip codes. It generated great interest in the image processing community. In 2012, Krizhevsky et al.[38] introduced AlexNet, a deep CNN. It won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) in 2012 by a significant margin. In 2013, Zeiler and Fergus[39] created ZFNet, which was AlexNet with tweaked parameters, and won ILSVRC. Szegedy et al.[40] won ILSVRC with GoogLeNet in 2014, which used a much smaller number of parameters (4 million) than AlexNet (60 million). In 2015, ResNets were developed by He et al.[41], which allowed CNNs to have very deep networks. In 2016, Huang et al.[42] published DenseNet, where each layer is directly connected to every other layer in a feedforward fashion. This architecture also alleviates the vanishing-gradient problem, allowing very deep networks. These are only a few examples of the many CNNs in the literature.

Figure 1: Block diagrams of DL architectures. (a) AE. (b) CNN. (c) DBN. (d) RNN.

#### 2.3.3 Deep Belief Network (DBN)

A DBN is a (generative) type of Probabilistic Graphical Model (PGM), the marriage of probability and graph theory. Specifically, a DBN is a "deep" (large) Directed Acyclic Graph (DAG). A number of well-known algorithms exist for exact and approximate inference (inferring the states of unobserved (hidden) variables) and learning (learning the interactions between variables) in PGMs. A DBN can also be thought of as a type of deep NN. In[43], Hinton showed that a DBN can be viewed and trained (in a greedy manner) as a stack of simple unsupervised networks, namely Restricted Boltzmann Machines (RBMs), or generative AEs. To date, CNNs have demonstrated better performance on various benchmark CV data sets. However, in theory DBNs are arguably superior. CNNs generally possess a lot more "constraints". The DBN versus CNN topic is likely subject to change as better algorithms are proposed for DBN learning. Figure 1(c) depicts a DBN, which is made up of RBM layers and a visible layer.

#### 2.3.4 Recurrent Neural Network (RNN)

The RNN is a network where connections form directed cycles. The RNN is primarily used for analyzing non-stationary processes such as speech and time series. The RNN has memory, so it has persistence, which the AE and CNN do not possess. An RNN can be unrolled and analyzed as a series of interconnected networks that process time-series data. A major breakthrough for RNNs was the seminal work of Hochreiter and Schmidhuber[44], the long short-term memory (LSTM) unit, which allows information to be written to a cell, output from the cell, and stored in the cell. The LSTM allows information to flow and helps counteract the vanishing/exploding gradient problems in very deep networks. Figure 1(d) shows an RNN and its unfolded version.

#### 2.3.5 Deconvolutional Neural Network (DeconvNet)

CNNs are often used for classification only. However, a wealth of questions exist beyond classification, e.g., what are our filters really learning, how transferable are these filters, which filters are the most active in a given image, where is a filter most active in a given image (or images), or, more holistically, where in the image is our object(s) of interest (soft or hard segmentation).
To this end, researchers have recently explored deconvolutional NNs (DeconvNets)[45, 46, 47, 39]. Whereas CNNs use pooling, which helps us filter noisy activations and address affine transformations, a DeconvNet uses unpooling, the "inverse" of pooling. Unpooling makes use of "switch variables", which help us place an activation in layer \(l\) back to its original pooled location in layer \(l-1\). Unpooling results in an enlarged, albeit sparse, activation map that is fed to deconvolution filters (which are either learned or derived from the CNN filters). In[47], the 16-layer VGG CNN developed by the Visual Geometry Group (VGG), with its last classification layer removed (and thus containing no deconvolution itself), was used for computer vision on non-remotely sensed data. The resultant DeconvNet is twice as large as the VGG CNN: the first part of the network is the VGG CNN and the second part is an architecturally reversed copy of the VGG CNN with pooling replaced by unpooling. The entire network was trained and used for semantic image segmentation. In a different line of work, Zeiler et al. showed that a DeconvNet can be used to visualize a single CNN filter at any layer, or a combination of CNN filters, for one or more images[45, 39]. The point is, relevant DeconvNet research exists in the CV literature. Two high-level comments are worth noting. First, DeconvNets have been used by some to help rationalize their architecture and operation selections by visually exploring the impact of those choices on the filters relative to one another, e.g., the existence of a single dominant feature versus a diverse set of features. In many cases it is not a rationalization of the final network performance per se; instead, a DeconvNet is a helpful tool that aids in exploring the vast sea of choices in designing the network. Second, whereas DeconvNets can be used in many cases for segmentation, they do not always produce the segmentation that we might desire. That is, if the CNN learned parts, not the full object, then activation of those parts, or a subset thereof, may not equate to the whole, and those parts might also be spatially separated in the image. The latter makes it challenging to construct a high-quality full-object segmentation, or segmentations if there is more than one instance of that object in an image. DeconvNets are quite recent and have not (yet) been widely adopted by the RS community.

### DL Meets the Real World

It is important to understand the different "factors" related to the rise and success of DL. This section discusses these factors: GPUs, DL NN expressiveness, Big Data, and tools.

#### 2.4.1 GPUs

GPUs are hardware devices that are optimized for fast parallel processing. GPUs enable DL by offloading computations from the computer's main processor (which is basically optimized for serial tasks) and efficiently performing the matrix-based computations at the heart of many DL algorithms. The DL community can leverage the personal computer gaming industry, which demands relatively inexpensive and powerful GPUs. A major driver of the research interest in CNNs is the ImageNet contest, which has over one million training images and 1,000 classes[48]. DNNs are inherently parallel, utilize matrix operations, and use a large number of floating point operations per second. GPUs are a match because they have the same characteristics[49]. GPU speedups have been measured at 8.5 to 9 times[49], and even higher depending on the GPU and the code being optimized.
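Much of this speedup comes from casting convolution as dense matrix algebra. Below is a framework-agnostic NumPy sketch of the common "im2col" lowering, in which every image patch becomes a row of a matrix and all convolution masks are applied with a single matrix product; the 3x3 kernels, single-band input and valid (no-padding) convolution are illustrative assumptions rather than details of the cited works.

```python
# "im2col" lowering: convolution expressed as one dense matrix multiplication,
# which is exactly the kind of highly parallel operation GPUs execute well.
import numpy as np

def im2col(image, k):
    """Gather every k-by-k patch of a 2D image into the rows of a matrix."""
    H, W = image.shape
    rows = []
    for i in range(H - k + 1):
        for j in range(W - k + 1):
            rows.append(image[i:i + k, j:j + k].ravel())
    return np.array(rows)                  # ((H-k+1)*(W-k+1), k*k)

rng = np.random.default_rng(0)
image = rng.random((32, 32))               # single-band stand-in image
kernels = rng.random((8, 3, 3))            # 8 learned 3x3 convolution masks

patches = im2col(image, 3)                 # (900, 9)
filters = kernels.reshape(8, -1).T         # (9, 8)
feature_maps = (patches @ filters).reshape(30, 30, 8)   # one matrix product
print(feature_maps.shape)                  # (30, 30, 8)
```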
The CNN convolution, pooling and activation operations are readily portable to GPUs.

#### 2.4.2 DL NN Expressiveness

Cybenko[50] proved that MLPs are universal function approximators. Specifically, Cybenko showed that a feed-forward network with a single hidden layer containing a finite number of neurons can approximate continuous functions on compact subsets of \(\Re^{n}\), under relatively minimalistic assumptions regarding the activation function. However, Cybenko's proof is an existence theorem, meaning it tells us a solution exists, but it does not tell us how to design or learn such a network. The point is, NNs have an intriguing mathematical foundation that makes them attractive with respect to machine learning. Furthermore, in a theoretical work, Telgarsky[51] has shown that for NNs with Rectified Linear Units (ReLUs), (1) functions with few oscillations poorly approximate functions with many oscillations, and (2) functions computed by NNs with few (many) layers have few (many) oscillations. Basically, a deep network allows decision functions with high oscillations. This gives evidence as to why DL performs well in classification tasks, and why shallower networks have limitations with highly oscillatory functions. Sharir et al.[52] showed that having overlapping local receptive fields, and more broadly denser connectivity, gives an exponential increase in the expressive capacity of the NN. Liang et al.[53] showed that shallow networks require exponentially more neurons than a deep network to achieve the same level of accuracy for function approximation.

#### 2.4.3 Big Data

Every day, approximately \(350\) million images are uploaded to Facebook[49], Wal-Mart collects approximately \(2.5\) petabytes of data per day[49], and the National Aeronautics and Space Administration (NASA) alone is actively streaming \(1.73\) gigabytes of spacecraft-borne observation data for active missions[54]. IBM reports that \(2.5\) quintillion bytes of data are now generated every day, which means that "\(90\%\) of the data in the world today has been created in the last two years alone"[55]. The point is, an unprecedented amount of (varying quality) data exists due to technologies like remote sensing, smart phones, inexpensive data storage, etc. In times past, researchers used tens to hundreds, maybe thousands, of training samples, but nothing on the order of magnitude of today. In areas like CV, high data volume and variety have been at the heart of advancements in performance. Meaning, reported results are a reflection of advances in both data and machine learning. To date, a number of approaches have been explored relative to large-scale deep networks (e.g., hundreds of layers) and Big Data (e.g., high volume of data). For example, in [56] Raina et al. put forth CPU and GPU ideas to accelerate DBNs and sparse coding. They reported a 5- to 15-fold speed up for networks with 100 million plus parameters, versus previous works that used only a few million parameters at best. On the other hand, CNNs typically use backpropagation, and they can be implemented either by pulling or pushing [57]. Furthermore, ideas like circular buffers [58] and multi-GPU CNN architectures, e.g., Krizhevsky [38], have been put forth. Outside of hardware speedups, operators like ReLUs have been shown to run several times faster than other common nonlinear functions. In [59], Deng et al.
put forth a Deep Stacking Network (DSN) that consists of specialized NNs (called modules), each of which have a single hidden layer. Hutchinson et al. put forth Tensor-DSN is an efficient and parallel extension of DSNs for CPU clusters [60]. Furthermore, DistBelief is a library for distributed training and learning of deep networks with large models (billions of parameters) and massive sized data sets [61]. DistBelief makes use of machine clusters to manage the data and parallelism via methods like multi-threading, message passing, synchronization and machine-to-machine communication. DistBelief uses different optimization methods, namely SGD and Sandblaster [62]. Last, but not least, there are network architectures such as highway networks, residual networks and dense nets [63, 64, 65, 66, 67]. For example, highway networks are based on LSTM recurrent networks and they allow for the efficient training of deep networks with hundreds of layers based on gradient descent [64, 65, 66]. #### 2.4.4 Tools Tools are also a large factor in DL research and development. Wan et al. observe that DL is at the intersection of NNs, graphical modeling, optimization, pattern recognition and signal processing [5], which means there is a fairly high background level required for this area. Good DL tools allow researchers and students to try some basic architectures and create new ones more efficiently. Table 2 lists some popular DL toolkits and links to the code. Herein, we review some of the DL tools, and the tool analysis below are based on our experiences with these tools. We thank our graduate students for providing detailed feedback on these tools. AlexNet [38] was a revolutionary paper that re-introduced the world to the results that DL can offer. AlexNet utilizes ReLU because it is several times faster to evaluate than the hyperbolic tangent. AlexNet revealed the importance of pre-processing by incorporating some data augmentation techniques and was able to combat overfitting by using max pooling and dropout layers. Caffe [68] was the first widely used deep learning toolkit. Caffe is C++ based and can be compiled on various devices, and offers command line, Python, and Matlab interfaces. There are many useful examples provided. The cons of Caffe are that is is relatively hard to install, due to lack of documentation and not being developed by an organized company. For those interested in something other than image processing, (e.g. image classification, image segmentation), it is not really suitable for other areas, such as audio signal processing. TensorFlow[69] is arguably the most popular DL tool available. It's pros are that TensorFlow (1) is relatively easy to install both with CPU and GPU version on Ubuntu (The GPU version needs CUDA and cuDNN to be installed ahead of time, which is a little complicated); (2) has most of the state-of-the-art models implemented, and while some original implementation are not implemented in TensorFlow, but it is relatively easy to find a re-implementation in TensorFlow; (3) has very good documentation and regular updates; (4) supports both Python and C++ interfaces; and (5) is relatively easy to expand to other areas besides image processing, as long as you understand the tensor processing. One con of TensorFlow is that it is really restricted to Linux applications, as the windows version is barely usable. MatConvNet[70] is a convenient tool, with unique abstract implementations for those very comfortable with using Matlab. 
It offers many popular trained CNNs, and the data sets used to train them. It is fairly easy to install. Once the GPU setup is ready (installation of drivers + CUDA support), training with the GPU is very simple. It also offers Windows support. The cons are (1) there is a substantially smaller online community compared to TensorFlow and Caffe, (2) code documentation is not very detailed and in general does not have good online tutorials besides the manual. Lack of getting started help besides a very simple example, and (3) GPU setup can be quite tedious. For Windows, Visual Studio is required, due to restrictions on Matlab and its mex setup, as well as Nvidia drivers and CUDA support. On Linux, one has much more freedom, but must be willing to adapt to manual installations of Nvidia drivers, CUDA-support, and more. ### DL in other domains DL has been utilized in other areas than RS, namely human behavior analysis[76, 77, 78, 79], speech recognition[80, 81, 82], stereo vision[83], robotics[84], signal-to-text[85, 86, 87, 88, 89], physics[90, 91], cancer detection[92, 93, 94], time-series analysis[95, 96, 97], image synthesis[98, 99, 100, 101, 102, 103, 104], stock market analysis[105], and security applications[106]. These diverse set of applications show the power of DL. ## 3 DL approaches in RS There are many RS tasks that utilize RS data, including automated target detection, pansharpening, land cover and land use classification, time series analysis, change detection, etc. Many of these tasks utilize shape analysis, object recognition, dimensionality reduction, image enhancement, and other techniques, which are all amenable to DL approaches. Table 3 groups DL papers reviewed in this paper into these basic categories. From the table, it can be seen that there is a large diversity of applications, indicating that RS researchers have seen value in using DL methods in many different areas. Several representative papers are reviewed and discussed. Due to the large number of recent RS papers, we can't review all of the papers utilizing DL or FL in RS applications. Instead, herein we focus on several papers in different areas of interest that offer creative solutions to problems encountered in DL and FL and should also have a wide interest to the readers. We do provide a summary of all of the DL in RS papers we reviewed online at [http://www.cs-chan.com/source/FADL/Online_Paper_Summary_Table.pdf](http://www.cs-chan.com/source/FADL/Online_Paper_Summary_Table.pdf). ### Classification Classification is the task of labeling pixels (or regions in an image) into one of several classes. The DL methods outlined below utilize many forms of DL to learn features from the data itself and perform classification at state-of-the-art levels. The following discusses classification in HSI, 3D, satellite imagery, traffic sign detection and Synthetic Aperture Radar (SAR). **HSI:** HSI data classification is of major importance to RS applications, so many of the DL results we reviewed were on HSI classification. HSI processing has many challenges, including high data dimensionality and usually low numbers of training samples. Chen et al.[314] propose an DBN-based HSI classification framework. The input data is converted to a 1D vector and processed via a DBN with three RBM layers, and the class labels are output from a two-layer logistic regression NN. 
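The two input layouts commonly used by such HSI classifiers, a per-pixel spectral vector and a flattened spatial box centred on a pixel, can be sketched as follows. The cube dimensions, the 7x7 window and the use of plain NumPy are illustrative assumptions; border handling is omitted.

```python
# Reshaping a hyperspectral cube into classifier inputs: (1) per-pixel spectra
# for spectral-only models, and (2) a flattened spatial box for spectral-spatial models.
import numpy as np

rng = np.random.default_rng(0)
H, W, B = 100, 100, 103                    # hypothetical HSI cube: rows, cols, bands
cube = rng.random((H, W, B))

# (1) Spectral-only input: every pixel becomes a 1D vector of B band values.
spectra = cube.reshape(-1, B)              # (H*W, B)

# (2) Spectral-spatial input: flatten a small 3D box centred on a pixel.
def spatial_box(cube, row, col, win=7):
    r = win // 2
    box = cube[row - r:row + r + 1, col - r:col + r + 1, :]
    return box.ravel()                     # (win * win * B,)

sample = spatial_box(cube, 50, 50)
print(spectra.shape, sample.shape)         # (10000, 103) (5047,)
```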
A second, spatial classifier applies Principal Component Analysis (PCA) on the spectral dimension, followed by 1D flattening of a 3D box, a three-level DBN and a two-level logistic regression classifier. A third architecture uses a combination of the 1D spectrum and the spatial classifier architecture. He et al.[151] developed a DBN for HSI classification that does not require stochastic gradient descent (SGD) training. Nonlinear layers in the DBN allow for the nonlinear nature of HSI data, and a logistic regression classifier is used to classify the outputs of the DBN layers. A parametric depth study showed that a depth of nine layers produced the best results over depths of 1 to 15, and after a depth of nine, no improvement resulted from adding more layers. Some of the HSI DL approaches utilize both spectral and spatial information. Ma et al.[169] created an HSI spatial updated deep AE which integrates spatial information. Small training sets are mitigated by a collaborative, representation-based classifier, and salt-and-pepper noise is mitigated by a graph-cut-based spatial regularization. Their method is more efficient than comparable kernel-based methods, and the collaborative representation-based classification makes their system relatively robust to small training sets. Yang et al.[181] use a two-channel CNN to jointly learn spectral and spatial features. Transfer learning is used when the number of training samples is limited, where low-level and mid-level features are transferred from other scenes. The network has a spectral CNN and a spatial CNN, and the results are combined in three FC layers. A softmax classifier produces the final class labels. Pan et al.[175] proposed the so-called rolling guidance filter and vertex component analysis network (R-VCANet), which also attempts to solve the common problem of lack of HSI training data. The network combines spectral and spatial information. The rolling guidance filter is an edge-preserving filter used to remove noise and small details from imagery. The VCANet is a combination of vertex component analysis[315], which is used to extract pure endmembers, and PCANet[316]. A parameter analysis of the number of training samples, rolling times, and the number and size of the convolution kernels is also presented. The system performs well even when the training ratio is only 4%. Lee et al.[158] designed a contextual deep fully convolutional DL network with fourteen layers that jointly exploits spatial and HSI spectral features. Variable-size convolutional features are utilized to create a spectral-spatial feature map. A novel feature of the architecture is that the initial layers use both \([3\times 3\times B]\) convolutional masks to learn spatial features and \([1\times 1\times B]\) masks for spectral features, where \(B\) is the number of spectral bands. The system is trained with a very small number of training samples (200/class). **3D:** In 3D analysis, there are several interesting DL approaches. Chen et al.[317] used a 3D CNN-based feature extraction model with regularization to extract effective spectral-spatial features from HSI. \(L_{2}\) regularization and dropout are used to help prevent overfitting. In addition, a virtual enhancement method imputes additional training samples.
Three different CNN architectures are examined: (1) a 1D using only spectral information, consisting of convolution, pooling, convolution, pooling, stacking and logistic regression; (2) a 2D CNN with spatial features, with 2D convolution, pooling, 2D convolution, pooling, stacking, and logistic regression; (3) 3D convolution (2D for spatial and third dimension is spectral); the organization is same as 2D case except with 3D convolution. The 3D CNN achieves near-perfect classification on the data sets. Chen et al. [191] propose a novel 3D CNN to extract the spectral-spatial features of HSI data, a deep 2D CNN to extract the elevation features of Light Detection and Ranging (LiDAR) data, and then a FC DNN to fuse the 2D and 3D CNN outputs. The HSI data are processed via two layers of 3D convolution followed by pooling. The LiDAR elevation data are processed via two layers of 2D convolution followed by pooling. The results are stacked and processed by a FC layer followed by a logistic regression layer. Cheng et al. [226] developed a rotation-invariant CNN (RICNN), which is trained by optimizing a objective function with a regularization constraint that explicitly enforces the training feature representations before and after rotating to be mapped close to each other. New training samples are imputed by rotating the original samples by \\(k\\) rotation angles. The system is based on AlexNet [38], which has five convolutional layers followed by three FC layers. The AlexNet architecture is modified by adding a rotation-invariant layer that used the output of AlexNet's FC7 layer, and replacing the 1000-way softmax classification layer with a \\((C+1)\\)-layer softmax classifier layer. AlexNet is pretrained, then fine tuned using the small number of HSI training samples. Haque et al. [109] developed a attention-based human body detector that leverages 4D spatio-temporal signatures and detects humans in the dark (depth images with no RGB content). Their DL system extracts voxels then encodes data using a CNN, followed by a LSTM. An action network gives the class label and a location network selects the next glimpse location. The process repeats at the next time step. **Traffic Sign Recognition:** In the area of traffic sign recognition, a nice result came from Ciresan et al. [119], who created a biologically plausible DNN is based on the feline visual cortex. The network is composed of multiple columns of DNNs, coded for parallel GPU speedup. The output of the columns is averaged. It outperforms humans by a factor of two in traffic sign recognition. **Satellite Imagery:** In the area of satellite imagery analysis, Zhang et al. [186] propose a gradient-boosting random convolutional network (GBRCN) to classify very high resolution (VHR) satellite imagery. In GBRCN, a sum of functions (called boosts) are optimized. A modified multi-class softmax function is used for optimization, making the optimization task easier. SGD is used for optimization. Proposed future work was to utilize a variant of this method on HSI. Zhong et al. [190] use efficient small CNN kernels and a deep architecture to learn hierarchical spatial relationships in satellite imagery. A softmax classifier output class labels based on the CNN DL outputs. The CPU handles preprocessing (data splitting and normalization), while the GPU performs convolution, ReLU and pooling operations, and the the CPU handles dropout and softmax classification. 
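The CPU-side preprocessing mentioned for this pipeline (data splitting and normalization) can be sketched as follows; the patch size, the 80/20 training-to-test ratio and the zero-mean/unit-variance normalization per band are illustrative assumptions rather than the exact choices of the cited work.

```python
# Per-band normalization of image patches followed by a random train/test split.
import numpy as np

rng = np.random.default_rng(0)
patches = rng.random((5000, 32, 32, 4))            # N patches, 4 spectral bands
labels = rng.integers(0, 6, size=5000)             # 6 hypothetical classes

# Normalize each band to zero mean and unit variance.
mean = patches.mean(axis=(0, 1, 2), keepdims=True)
std = patches.std(axis=(0, 1, 2), keepdims=True)
patches = (patches - mean) / std

# Random split into training and test sets (training-to-test ratio 80:20).
idx = rng.permutation(len(patches))
split = int(0.8 * len(patches))
train_x, test_x = patches[idx[:split]], patches[idx[split:]]
train_y, test_y = labels[idx[:split]], labels[idx[split:]]
print(train_x.shape, test_x.shape)                 # (4000, 32, 32, 4) (1000, 32, 32, 4)
```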
Networks with one to three convolution layers are analyzed, with receptive fields of \\(10\\times 10\\) to \\(1000\\times 1000\\). SGD is used for optimization. A hyper-parameter analysis of the learning rate, momentum, training-to-test ratio, and number of kernels in the first convolutional layer were also performed. **SAR:** In the area of SAR processing, De et al. [288] use DL to classify urban areas, even when rotated. Rotated urban target exhibit different scattering mechanisms, and the network learns the \\(\\alpha\\) and \\(\\gamma\\) parameters from the HH, VV and HV bands (H=Horizontal, V-Vertical polarization). Bentes et al. [124] use a constant false alarm rate (CFAR) processor on SAR data followed by \\(N\\) AEs. The final layer associates the learned features with class labels. Geng et al. [149] used a eight-layer network with a convolutional layer to extract texture features from SAR imagery, a scale transformation layer to aggregate neighbor features, four Stacked AE (SAE) layers for feature optimization and classification, and a two-layer post processor. Gray level co-occurrence matrix and Gabor features are also extracted, and average pooling is used in layer two to mitigate noise. ### Transfer Learning Transfer learning utilizes training in one image (or domain) to enable better results in another image (or domain). If the learning crosses domains, then it may be possible to utilize lower to mid-level features learned from on domain in the other domain. Marmanis et al. [259] attacked the common problem in RS of limited training data by utilizing transfer learning across domains. They utilized a CNN pretrained on the ImageNet dataset, and extracted an initial set of representations from orthoimagery. These representations are then transferred to a CNN classifier. This paper developed a novel cross-domain feature fusion system. Their system has seven convolution layers followed by two long MLP layers, three convolution layers, two more large MLP layers, and finally a softmax classifier. They extract feature from the last layer, since the work of Donahue et al. [318] showed that most of the discriminative information is contained in the deeper layers. In addition, they take features from the large (\\(1\\times 1\\times 4096\\)) MLP, which is a very long vector output, and transform it into a 2D array followed by a large convolution (\\(91\\times\\)91) mask layer. This is done because the large feature vector is a computational bottleneck, while the 2D data can very effectively be processed via a second CNN. This approach will work if the second CNN can learn (disentangle) the information in the 2D representation through its layers. This approach is very unique and it raises some interesting questions about alternate DL architectures. This approach was also successful because the features learned by the original CNN were effective in the new image domain. Penatti et al. [219] asked if deep features generalize from everyday objects to remote sensing and aerial scene domains? A CNN was trained for recognizing everyday objects using ImageNet. The CNNs analyzed performed well, in areas well outside of their training. In a similar vein, Salberg [121] use CNNs pretrained on ImageNet to detect seal pups in aerial RS imagery. A linear SVM was used for classification. The system was able to detect seals with high accuracy. ### 3D Processing and Depth Estimation Cadena et al. [107] utilized multi-modal AEs for RGB imagery, depth images, and semantic labels. 
Through the AE, the system learns a shared representation of the distinct inputs. The AEs first denoise the given inputs. Depth information is processed as inverse depth (so sky can be handled). Three different architectures are investigated. Their system was able to make a sparse depth map denser by fusing RGB data. Feng et al. [108] developed a content-based 3D shape retrieval system. The system uses a low-cost 3D sensor (e.g., Kinect or RealSense) and a database of 3D objects. An ensemble of AEs learns compressed representations of the 3D objects, and the AEs act as probabilistic models which output a likelihood score. A domain adaptation layer uses weakly supervised learning to learn cross-domain representations (noisy imagery and 3D computer-aided design (CAD) models). The system uses the AE-encoded objects to reconstruct the objects, and then additional layers rank the outputs based on similarity scores. Segaghat et al. [114] use a 3D voxel net that predicts the object pose as well as its class label, since 3D objects can appear very different depending on their poses. The results were tested on LiDAR data, CAD models, and RGB plus depth (RGBD) imagery. Finally, Zelener et al. [319] label missing 3D LiDAR points to enable the CNN to have higher accuracy. A major contribution of this method is creating normalized patches of low-level features from the 3D LiDAR point cloud. The LiDAR data is divided into multiple scan lines, and into positive and negative samples. Patches are randomly selected for training. A sliding block scheme is used to classify the entire image.

### Segmentation

Segmentation means to process imagery and divide it into regions (segments) based on the content. Basaeed et al. [269] use a committee of CNNs that perform multi-scale analysis on each band to estimate region boundary confidence maps, which are then inter-fused to produce an overall confidence map. A morphological scheme integrates these maps into a hierarchical segmentation map for the satellite imagery. Couprie et al. [254] utilized a multi-scale CNN to learn features directly from RGBD imagery. The image RGB channels and the depth image are transformed through a Laplacian pyramid approach, where each scale is fed to a 3-stage convolutional network that creates feature maps. The feature maps of all scales are concatenated (the coarser-scale feature maps are upsampled to match the size of the finest-scale map). A parallel segmentation of the image into superpixels is computed to exploit the natural contours of the image. The final labeling is obtained by aggregating the classifier predictions within the superpixels. In his Master's thesis, Kaiser [257] (1) generated new ground truth datasets for three different cities, consisting of VHR aerial images with ground sampling distance on the order of centimeters and corresponding pixel-wise object labels, (2) used FC networks (FCNs) to perform pixel-dense semantic segmentation, (3) found two modifications of the FCN architecture that gave performance improvements, and (4) demonstrated transfer learning by training an FCN model on the huge and diverse ground truth data of the three cities, which achieved good semantic segmentations of areas not used for training. Langkvist et al. [157] applied a CNN to orthorectified multispectral imagery (MSI) and a digital surface model of a small city for a full, fast and accurate per-pixel classification. The predicted low-level pixel classes are then used to improve the high-level segmentation.
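Both of the last two approaches refine dense per-pixel predictions using image segments (superpixels in one case, a high-level segmentation in the other). A minimal sketch of that aggregation step, a majority vote of the pixel labels inside each segment, is given below; the random label and segment maps are placeholders for real CNN outputs and a real oversegmentation.

```python
# Aggregate per-pixel class predictions over segments by majority vote.
import numpy as np

rng = np.random.default_rng(0)
pixel_classes = rng.integers(0, 5, size=(64, 64))   # per-pixel CNN labels (placeholder)
segments = rng.integers(0, 40, size=(64, 64))       # 40 hypothetical segments (placeholder)

refined = np.empty_like(pixel_classes)
for seg_id in np.unique(segments):
    mask = segments == seg_id
    # Assign every pixel of the segment the most frequent class inside it.
    refined[mask] = np.bincount(pixel_classes[mask]).argmax()

print(refined.shape)   # (64, 64): one smoothed class label per pixel
```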
Various design choices of the CNN architecture are evaluated and analyzed. ### Object Detection and tracking Object detection and tracking is important in many RS applications. It requires understanding at a higher level than just at the pixel-level. Tracking then takes the process one step further and estimates the location of the object over time. Diao et al. [228] propose a pixel-wise DBN for object recognition. A sparse RBM is trained in an unsupervised manner. Several layers of RBM are stacked to generate a DBN. For fine-tuning, a supervised layer is attached to the top of the DBN and the network is trained using BP with a sparse penalty constraint. Ondruska et al. [234] used RNN to track multiple objects from 2D laser data. This system uses no hand-coded plant or sensor models (these are required in Kalman filters). Their system uses an end-to-end RNN approach that maps raw sensor data to a hidden sensor space. The system then predicts the unoccluded state from the sensor space data. The system learns directly from the data and does not require a plant or sensor model. Schwegmann et al. [273] use a very deep Highway Network for ship discrimination in SAR imagery, and a three-class SAR dataset is also provided. Deep networks of 2, 20, 50 and 100 layerswere tested, and the 20 layer network had the best performance. Tang et al.[274] utilized a hybrid approach in both feature extraction and machine learning. For feature extraction, the Discrete Wavelet Transform (DWT) LL, LH, HL and HH (L=Low Frequency, H = High Frequency) features from the JPEG2000 CDF9/7 encoder were utilized. The LL features were inputs to a Stacked DAE (SDAE). The high frequency DWT subbands LH, HL and HH are inputs to a second SDAE. Thus the hand-coded wavelets provide features, while the two SDAEs learn features from the wavelet data. After initial segmentation, the segmentation area, major-to-minor axis ratio and compactness, which are classical machine learning features, are also used to reduce false positives. The training data are normalized to zero mean and unity variance, and the wavelet features are normalized to the \\([0,1]\\) range. The training batches have different class mixtures, and 20% of inputs are dropped to the SDAEs and there is a 50% dropout in the hidden units. The extreme learning machine is used to fuse the low-frequency and high-frequency subbands. An online-sequential extreme learning machine, which is a feedforward shallow NN, is used for classification. Two of the most interesting results were developed to handle incomplete training data, and how object detectors emerge from CNN scene classifiers. Mnih et al.[246] developed two robust loss functions to deal with incomplete training labeling and misregistration (location of object in map) is inaccurate. A NN is used to model pixel distributions (assuming they are independent). Optimization is performed using expectation maximization. Zhou et al.[233] show that object detectors emerge from CNNs trained to perform scene classification. They demonstrated that the same CNN can perform both scene recognition and object localization in a single forward pass, without having to explicitly learn the notion of objects. Images had their edges removed such that each edge removal produces the smallest change to the classification discriminant function. This process is repeated until the image is misclassified. The final product of that analysis is a set of simplified images which still have high classification accuracies. 
For instance, in bedroom scenes, 87% of these contained a bed. To estimate the empirical receptive field (RF), the images were replicated and random \\(11\\times 11\\) occluded patches were overlaid. Each occluded image is input to the trained DL network and the activation function changes are observed; a large discrepancy indicates the patch was important to the classification task. From this analysis, a discrepancy map is built for each image. As the layers get deeper in the network, the RF size gradually increases and the activation regions are semantically meaningful. Finally, the objects that emerging in one specific layer indicated that the network was learning object categories (dogs, humans, etc.) This work indicates there is still extensive research to be performed in this area. ### Super-resolution Super-resolution analysis attempts to infer sub-pixel information from the data. Dong et al.[277] utilized a DL network that learns a mapping between the low and high-resolution images. The CNN takes the low-resolution (LR) image as input and outputs the high-resolution (HR) image. In this method, all layers of the DL system are jointly optimized. In a typical super-resolution pipeline with sparse dictionary learning, image patches are densely sampled from the image and encoded in a sparse dictionary. The DL system does not explicitly learn the sparse dictionaries or manifolds for modeling the image patches. The proposed system provides better results than traditional methods and has a fast on-line implementation. The results improve when more data is available or when deeper networks are utilized. ### Weather Forecasting Weather forecasting attempts to use physical laws combined with atmospheric measurements to predict weather patterns, precipitation, etc. The weather effects virtually every person on the planet, so it is natural that there are several RS papers utilizing DL to improve weather forecasting. DL ability to learn from data and understand highly-nonlinear behavior shows much promise in this area of RS. Chen et al.[195] utilize DBNs for drought prediction. A three-step process (1) computes the Standardized Precipitation Index (SPI), which is effectively a probability of precipitation, (2) normalizes the SPI, and (3) determines the optimal network architecture (number of hidden layers) experimentally. Firth[311] introduced a Differential Integration Time Step network composed of a traditional NN and a weighted summation layer to produce weather predictions. The NN computes the derivatives of the inputs. These elemental building blocks are used to model the various equations that govern weather. Using time series data, forecast convolutions feed time derivative networks which perform time integration. The output images are then fed back to the inputs at the next time step. The recurrent deep network can be unrolled. The network is trained using backpropagation. A pipelined, parallel version is also developed for efficient computation. The model outperformed standard models. The model is efficient and works on a regional level, versus previous models which are constrained to local levels. Kovordanyi et al.[312] utilized NNs in cyclone track forecasting. The system uses a multi-layer NN designed to mimic portions of the human visual system to analyze National Oceanic and Atmospheric Administration's Advanced Very High Resolution Radiometer (NOAA AVHRR) imagery. At the first network level, shape recognition focuses on narrow spatial regions, e.g. detecting small cloud segments. 
Regions in the image can be processed in parallel using a matrix feature detector architecture. Rotational variations, which are paramount in cyclone analysis, are incorporated into the architecture. Later stages combine previous activations to learn more complex and larger structures from the imagery. The output at the end of the processing system is a directional estimator of cyclone motion. The simulation tool Leabra++ ([http://ccnbook.colorado.edu/](http://ccnbook.colorado.edu/)) was used. This tool is designed for simulating brain-like artificial NNs (ANNs). There are a total of five layers in the system: an input layer, three processing layers, and an output layer. During training, images were divided into smaller blocks and rotated, shifted, and enlarged. The network was first given inputs and allowed to settle to a steady state. Weak activations were suppressed, with at most \\(k\\) nodes allowed to stay active. Then the inputs and correct outputs were presented to the network and the weights were all zeroed. The learned weights are a combination of the two schemes. Conditional PCA and contrastive Hebbian learning were used to train the network. The system was very effective if the cyclone's center varied by about 6% or less of the original image size, and less effective if there was more variation. Shi et al.[198] extended the fully connected LSTM (FC-LSTM) network into what they call ConvLSTM, which has convolutional structures in the input-to-state and state-to-state transitions. The application is precipitation nowcasting, which takes weather data and predicts immediate future precipitation. ConvLSTM uses 3D tensors whose last two dimensions are spatial to encode spatial data into the system. An encoding LSTM compresses the input sequence into a latent tensor, while the forecasting LSTM provides the predictions.

### Automated object and target detection and identification

Automated object and target detection and identification is an important RS task for military applications, border security, intrusion detection, advanced driver assistance systems, etc. Both automated target detection and identification are hard tasks, because there are usually very few training samples of the target (while almost all available training samples are non-target), and often there are large variations in aspect angles, lighting, etc. Ghazi et al.[238] used DL to identify plants in photographs using transfer parameter optimization. The main contributions of this work are (1) a state-of-the-art plant detection transfer learning system, and (2) an extensive study of fine-tuning, iteration size, batch size and data augmentation (rotation, translation, reflection, and scaling). It was found that transfer learning (and fine-tuning) provided better results than training from scratch. Also, if training from scratch, smaller networks performed better, probably due to the small amount of training data; the authors suggest using smaller networks in these cases. Performance was also directly related to the network depth. By varying the iteration sizes, it was seen that the validation accuracies rise quickly initially, then grow slowly. The networks studied are all resilient to overfitting. The batch sizes were varied, and larger batch sizes resulted in higher performance at the expense of longer training times. Data augmentation also had a significant effect on performance.
The number of iterations had the most effect on the output, followed by the number of patches, while the batch size had the least significant effect. There were significant differences in the training times of the systems. Li et al.[122] used DL for anomaly detection. In this work, a reference image with pixel pairs (a pair of samples from the same class, and a pair from different classes) is required. By using transfer learning, the system is utilized on another image from the same sensor. Using vicinal pixels, the algorithm recognizes central pixels as anomalies. A 16-level network contains layers of convolution followed by ReLUs. A fully-connected layer then provides output labels.

### Image Enhancement

Image enhancement includes many areas such as pansharpening, denoising, image registration, etc. Image enhancement is often performed prior to feature extraction or other image processing steps. Huang et al.[236] utilize a modified sparse denoising AE, denoted MSDA, which represents the relationship between the HR image patches, treated as clean data, and the lower spatial resolution, high spectral resolution MSI image, treated as corrupted data. The reconstruction error drives the cost function and layer-by-layer training is utilized. Quan et al.[206] use DL for SAR image registration, which is in general a harder problem than RGB image registration due to high speckle noise. The RBM learns features useful for image registration, and the random sample consensus (RANSAC) algorithm is run multiple times to reduce outlier points. Wei et al.[204] applied a five-layer DL network to perform image quality improvement. In their approach, degraded images are modeled as downsampled images that are further degraded by a blurring function and additive noise. Instead of trying to estimate the inverse function, a DL network performs feature extraction at layer 1, then the second layer learns a matrix of kernels and biases to perform non-linear operations on the layer 1 outputs. Layers 3 and 4 repeat the operations of layers 1 and 2. Finally, an output layer reconstructs the enhanced imagery. They demonstrated results with non-uniform haze removal and random amounts of Gaussian noise. Zhang et al.[205] applied DL to enhance thermal imagery, based on first compensating for the camera transfer function (small-scale and large-scale nonlinearities), and then performing super-resolution target signature enhancement via DL. Patches are extracted from low-resolution imagery, and the DL learns feature maps from this imagery. A nonlinear mapping of these feature maps to a HR image is then learned. SGD is utilized to train the network.

### Change Detection

Change detection is the process of utilizing two registered RS images taken at different times and detecting the changes, which can be due to natural phenomena such as drought or flooding, or due to man-made phenomena, such as adding a new road or tearing down an old building. We note that there is a paucity of DL research into change detection. Pacifici et al. [137] used DL for change detection in VHR satellite imagery. The DL system exploits the multispectral and multitemporal nature of the imagery. Saturation is avoided by normalizing the data to the \\([-1,1]\\) range. To mitigate illumination changes, band ratios such as blue/green are utilized. These images are classified according to (1) man-made surfaces, (2) green vegetation, (3) bare soil and dry vegetation, and (4) water. Each image undergoes a classification and a multitemporal operator creates a change mask.
The two classification maps and the change mask are fused using an AND operator.

### Semantic Labeling

Semantic labeling attempts to label scenes or objects semantically, such as \"there is a truck next to the tree\". Sherrah et al. [263] utilized the recent development of fully convolutional NNs (FCNs), which were developed by Long et al. [320] The FCN is applied to remotely sensed VHR imagery. In their network, there is no downsampling. The system labels images semantically pixel-by-pixel. Xie et al. [293] used transfer learning to avoid training issues due to scarce training data. A FC CNN is trained on daytime imagery and predicts nighttime lights. The system can also infer poverty data from the night lights, as well as delineate man-made structures such as roads, buildings and farmlands. The CNN was trained on ImageNet and uses the NOAA nighttime remote sensing satellite imagery. Poverty data was derived from a living standards measurement survey in Uganda. Mini-batch gradient descent with momentum, random mirroring for data augmentation, and 50% dropout were used to help avoid overfitting. The transfer learning approach gave higher performance in accuracy, F1 scores, precision and area under the curve.

### Dimensionality reduction

HSI data are inherently high-dimensional and often contain highly correlated bands. Dimensionality reduction can significantly improve results in HSI processing. Ran et al. [192] split the spectrum into groups based on correlation, then apply \\(m\\) CNNs in parallel, one for each band group. The CNN outputs are concatenated and then classified via a two-layer FC-CNN. Zabalza et al. [193] used segmented SAEs for dimensionality reduction. The spectral data are segmented into \\(k\\) regions, each of which has an SAE to reduce dimensionality. The features are then concatenated into a reduced profile vector. The segmented regions are determined using the correlation matrix of the spectrum. In Ball et al. [321], it was shown that band selection is task- and data-dependent, and that better results can often be found by fusing similarity measures rather than using correlation alone, so both of these methods could be improved using similar approaches. Dimensionality reduction is an important processing step in many classification algorithms [322, 323] and in pixel unmixing [324, 325, 326, 327, 328, 329, 330, 331, 332].

## 4 Unsolved challenges and opportunities for DL in RS

DL applied to RS has many challenges and open issues. Table 4 gives some representative DL and FL survey papers and discusses their main content. Based on these reviews, and the reviews of many survey papers in RS, we have identified the following major open issues in DL in RS. Herein, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets (4.1), (ii) human-understandable solutions for modelling physical phenomena (4.2), (iii) Big Data (4.3), (iv) non-traditional heterogeneous data sources (4.4), (v) DL architectures and learning algorithms for spectral, spatial and temporal data (4.5), (vi) transfer learning (4.6), (vii) an improved theoretical understanding of DL systems (4.7), (viii) high barriers to entry (4.8), and (ix) training and optimizing the DL (4.9).

### Inadequate data sets

**Open Question #1a: How can DL systems work well with limited datasets?** There are two main issues with most current RS data sets. Table 5 provides a summary of the more common open-source datasets for the DL papers utilizing HSI data.
Many of these papers utilized custom datasets, and these are not reported. Table 5 shows that the most commonly used datasets were Indian Pines, Pavia University, Pavia City Center, and Salinas. A very detailed online table (too large to put in this paper) is provided which lists each paper cited in Table 3. For each paper, a summary of the contributions is given, the datasets utilized are listed, and the papers are categorized into areas (e.g. HSI/MSI, SAR, 3D, etc.). The interested reader can find this at [http://www.cs-chan.com/source/FADL/Online_Dataset_Summary_Table.pdf](http://www.cs-chan.com/source/FADL/Online_Dataset_Summary_Table.pdf). While these are all good datasets, the accuracies from many of the DL papers are nearly saturated. This is shown clearly in Table 6, which reports results for the HSI DL papers on the commonly-used datasets Indian Pines, Kennedy Space Center, Pavia City Center, Pavia University, Salinas, and the Washington DC Mall. First, OA results must be taken with a grain of salt, since (1) the number of training samples per class can differ for each paper, (2) the number of testing samples can also differ, and (3) classes with relatively few training samples can even have 0% accuracy, yet if there are a large number of test samples from the other classes, the final overall accuracy can still be high. Nevertheless, it is clear from examination of the table that the Indian Pines, Pavia City Center, Pavia University and Salinas datasets are basically saturated. While it is good to compare new methods on commonly-used datasets, new and more challenging datasets are required. Cheng et al.[346, 1] point out that many existing RS datasets lack image variation and diversity, have a small number of classes, and are saturating in accuracy. They created a large-scale benchmark dataset, \"NWPU-RESISC45\", which attempts to address all of these issues, and made it available to the RS community. The RS community can also benefit from a common practice in the CV community: publishing both datasets and algorithms online, allowing for more comparisons. A typical RS paper may only test its algorithm on two or three images and against only a few other methods. In the CV community, papers usually compare against a large number of other methods and on many datasets, which may provide more insight about the proposed solution and how it compares to previous work. As noted above, an extensive table of all the datasets utilized in the papers reviewed for this survey is made available online at [http://www.cs-chan.com/source/FADL/Online_Dataset_Summary_Table.pdf](http://www.cs-chan.com/source/FADL/Online_Dataset_Summary_Table.pdf), because it is too large to include in this paper. The table lists each dataset's name, briefly describes the dataset, provides a URL (if one is available), and gives a reference. Hopefully, this table will assist researchers starting work and looking for publicly available datasets.

**Open Question #1b: How can DL systems work well with limited training data?** The second issue is that most RS data has a very small amount of training data available. Ironically, in the CV community, DL has an insatiable hunger for larger and larger data sets (millions or tens of millions of training images), while in the RS field there is also a large amount of imagery; however, there is usually only a small amount of labeled training data.
RS training data is error-prone and usually requires expert interpretation, which is typically expensive (in terms of time, effort, and money) and often requires large amounts of field work and many hours or days of post-processing. Many DL systems, especially those with large numbers of parameters, require large amounts of training data, or else they can easily overtrain and not generalize well. This problem has also plagued shallower systems, such as SVMs. Approaches used to mitigate small training samples are (1) transfer learning, where one trains on other imagery to obtain low-level to mid-level features which can still be used, or on other images from the same sensor (transfer learning is discussed in Section 4.6 below); (2) data augmentation, including affine transformations, rotations, small patch removal, etc.; (3) using ancillary data, such as data from other sensor modalities (e.g. LiDAR, digital elevation models (DEMs), etc.); and (4) unsupervised training, where training labels are not required, e.g. AEs and SAEs. SAEs that have a diabolo shape will force the AE network to learn a lower-dimensional representation. Ma et al.[169] utilized a DAE and employed collaborative representation-based classification, where each test sample can be linearly represented by the training samples in the same class with the minimum residual. In classification, the features of each sample are approximated with a linear combination of the features of all training samples within each class, and the label can be derived according to the class which best approximates the test features. The interested reader is referred to references 46-48 in [169] for more information on collaborative representation. Tao et al.[349] utilized a Stacked Sparse AE (SSAE) that was shown to be very generalizable and performed well in cases where there were limited training samples. Ghamisi et al.[207] use Darwinian particle swarm optimization in conjunction with CNNs to select an optimal band set for classifying HSI data. By reducing the input dimensionality, fewer training samples are required. Yang et al.[181] utilized dual CNNs and transfer learning to improve performance. In this method, the lower and middle layers can be trained on other scenes, while the top layers are trained on the limited training samples. Ma et al.[170] imposed a relative distance prior on the SAE DL network to deal with training instabilities. This approach extends the SAE by adding the new distance prior term and the corresponding SGD optimization. LeCun reviews a number of unsupervised learning algorithms using AEs, which can possibly aid when training data is minimal[350]. Pal[351] reviews kernel methods in RS and argues that SVMs are a good choice when there are a small number of training samples. Petersson et al.[335] suggest using SAEs to handle small training samples in HSI processing.

### Human-understandable solutions for modelling physical phenomena

**Open Question #2a: How can DL improve model-based RS?** Many RS applications depend on models (e.g. a model of crop output given rain, fertilizer and soil nitrogen content, and time of year), many of which are very complicated and often highly nonlinear. Model outputs can be very inaccurate if the models don't adequately capture the true input data and properly handle the intricate inter-relationships between input variables. Abdel-Rahman et al.
[352] pointed out that more accurate estimation of nitrogen content and water availability can aid biophysical parameter estimation for improving plant yield models. Ali et al. [353] examine biomass estimation, which is a nonlinear and highly complex problem. The retrieval problem is ill-posed, and the electromagnetic response is the complex result of many contributions. The data pixels are usually mixed, making this a hard problem. ANNs and support vector regression have shown good results, and they anticipate that DL models can also provide good results. Both Adam et al. [354] and Ozesmi et al. [355] agree that there is a need for improvement in wetland vegetation mapping. Wetland species are hard to detect and identify compared to terrestrial plants. Hyperspectral sensing with narrow frequency bands can aid. Pixel unmixing is important since canopy spectra are similar and combine with the underlying hydrologic regime and atmospheric vapor. Vegetation spectra are highly correlated among species, making separation difficult. Dorigo et al. [356] analyzed inversion-based models for plant analysis, which is inherently an ill-posed and hard task. They found that ANN inversion techniques have shown good results, and DL may be able to improve results further. Canopy reflections are governed by a large number of interacting canopy elements and by external factors. Since DL networks can learn very complex non-linear systems, there seems to be much room for improvement in applying DL models. DBNs or other DL systems seem like a natural fit for these types of problems. Kuenzer et al. [357] and Wang et al. [358] assess biodiversity modeling. Biodiversity occurs at all levels, from the molecular level to individual animals, to ecosystems, to the globe. This requires a large variety of sensors and analysis at multiple scales. However, a main challenge is low temporal resolution. There needs to be a focus beyond just pixel-level processing, toward utilizing spatial patterns and objects. DL systems have been shown to learn hierarchical features, with smaller-scale features learned at the beginning of the network and more complex and abstract features learned in the deeper portions.

**Open Question #2b: What tools and techniques are required to \"understand\" how the DL works?** It is also worth mentioning that many of these applications involve biological and scientific end-users, who will definitely want to understand how the DL systems work. For instance, a linear model that models some biological process is easily understood: both the mathematical model and the statistics resulting from estimating the model parameters are well understood by scientists and biologists. However, a DL system can be so large and complex as to defy analysis. We note that this is not specific to RS, but a general problem in the broader DL community. The DL system is seen by many researchers, especially scientists and RS end-users, as a black box, making it hard to understand what is happening \"under the hood\". Egmont-Peterson et al. [331] and Fassnacht et al. [359] both state that a disadvantage of NNs is the difficulty of understanding what they are actually doing. In many RS applications, just making a decision is not enough; people need to understand how reliable the decision is and how the system arrived at that decision. Ali et al. [360] also echo this view in their review paper on improving biomass estimation.
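One concrete probe that can help open the black box is the occlusion-based discrepancy analysis described earlier for empirical receptive field estimation: slide an occluding patch over the input and record how much the score of the predicted class drops. The sketch below is a minimal, framework-agnostic illustration of this idea; the `predict` callable is a hypothetical stand-in for any trained network's forward pass, and the patch size and stride are arbitrary choices.

```python
import numpy as np

def occlusion_sensitivity(image, predict, target_class, patch=11, stride=11):
    """Build a discrepancy (sensitivity) map by sliding an occluding patch over
    the image; large drops in the target-class score mark regions the network
    relies on. `predict` is any callable returning a vector of class scores."""
    h, w = image.shape[:2]
    baseline = predict(image)[target_class]
    heatmap = np.zeros((h, w))
    for top in range(0, h - patch + 1, stride):
        for left in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[top:top + patch, left:left + patch, ...] = 0.0  # zero out the patch
            drop = baseline - predict(occluded)[target_class]
            heatmap[top:top + patch, left:left + patch] = drop
    return heatmap
```

Maps of this kind are easy to overlay on the input imagery and can be shown to end-users alongside the classification result itself.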
Visualization tools that show the convolutional filters and learning rates, and tools with deconvolution capabilities to localize the convolutional firings, are all helpful [361, 362, 363, 364]. Visualization of what the DL is actually learning is an open area of research. Tools and techniques capable of visualizing what the network is learning, along with measures of how robust the network is (estimating how well it may generalize), would be of great benefit to the RS community (and the general DL community).

### Big Data

**Open Question #3: What happens when DL meets Big Data?** As already discussed in Section 2.4.3, a number of mathematical, algorithmic and hardware advances have been put forth to date for large-scale DL networks and DL in Big Data. However, this challenge is not close to being solved. Most approaches to date have focused on Big Data challenges in RGB or RGBD data for tasks like face and object detection or speech. With respect to remote sensing, we have many of the same problems as CV, but there are unique challenges related to different sensors and data. First, we can break Big Data into its different so-called \"parts\", e.g., volume, variety and velocity. With respect to DBNs, CNNs, AEs, etc., we are primarily concerned with creating new robust and distributed mathematics, algorithms and hardware that can ingest massive streams of large, missing, noisy data from different sources, such as sensors, humans and machines. This means being able to combine image stills, video, audio, text, etc., with symbolic and semantic variation. Furthermore, we require real-time evaluation and possibly online learning. As Big Data in DL is a large topic, we restrict our focus herein to factors that are unique to remote sensing. The first factor that we focus on is high spatial, and more so spectral, dimensionality. Traditional DLs operate on relatively small grayscale or RGB imagery. However, SAR imagery has challenges due to noise, and MSI and HSI can have from four to hundreds to possibly thousands of channels. As Arel et al.[7] pointed out, a very difficult question is how well DL architectures scale with dimensionality. To date, preliminary research has tried to combat dimensionality by applying dimensionality reduction or feature selection prior to DL, e.g., Benediktsson et al.[365] reference different band selection, grouping, feature extraction and subspace identification approaches in HSI remote sensing. Ironically, most RS areas suffer from a lack of training data. Whereas they may have massive amounts of temporal and spatial data, there may not be the seasonal variations, times of day, object variation (e.g., plants, crops, etc.), and other factors that ultimately lead to the variety needed to train a DL model. For example, most online hyperspectral data sets have little-to-no variety, and it is questionable what a DL system can really learn from them. In stark contrast, most DL systems in CV use very large training sets, e.g., millions or billions of faces in different illuminations, poses, intra-class variations, etc. Unless the remote sensing DL applies a method like transfer learning, DL systems in RS often have very limited training data. For example, in [170] Ma et al. tried to address this challenge by developing a new prior to deal with the instability of parameter estimation for HSI classification with small training samples.
The SAE is modified by adding the relative distance prior in the fine-tuning process to cluster the samples with the same label and separate the ones with different labels. Instead of minimizing classification error, this network enforces intra-class compactness and attempts to increase inter-class discrepancy.

### Non-traditional heterogeneous data sources

**Open Question #4a: How can DL work with non-traditional data sources?** Non-traditional data sources, such as Twitter, YouTube, etc., offer data that can be useful to RS. These sources will probably never replace RS, but they usually offer benefits to augment RS data, or provide quality real-time data before RS methods, which usually take longer, can provide RS-based data. Fohringer et al. [366] utilized information extracted from social media photos to enhance RS data for flood assessments. They found that one major challenge was filtering posts down to a manageable number of relevant ones to further assess. The data from Twitter and Flickr proved useful for flood depth estimation prior to RS-based methods, which typically take 24-28 hours. Frias-Martinez et al. [367] take advantage of large amounts of geolocated content in social media by analyzing Twitter tweets as a complementary source of data for urban land-use planning. Data from Manhattan (New York, USA), London (UK), and Madrid (Spain) was analyzed using a self-organizing map [368] followed by a Voronoi tessellation. Middleton et al. [369] matched geolocated tweets and created real-time crisis maps via statistical analysis, which were compared to the US National Geospatial Agency post-event impact assessments. A major issue is that only about \\(1\\%\\) of tweets contain geolocation data. The tweets usually follow a pattern of a small number of first-hand reports and many retweets and comments. High-precision results were obtained. Singh et al. [370] aggregate users' social interest about any particular theme from any particular location into so-called \"social pixels\", which are amenable to media processing techniques (e.g., segmentation, convolution) that allow semantic information to be derived. They also developed a declarative operator set to allow queries to visualize, characterize, and analyze social media data. Their approach would be a promising front-end to any social media analysis system. In the survey paper of Sui and Goodchild [371], the convergence of Geographic Information Systems (GIS) and social media is examined. They observed that GIS has moved from software helping a user at a desk to a means of communicating earth surface data to the masses (e.g. OpenStreetMap, Google Maps, etc.). In all of the above-mentioned methods, DL can play a significant role in parsing data, analyzing data and estimating results from the data. It seems that social media is not going away, and data from social media can often be used to augment RS data in many applications. Thus the question is: what novel work awaits the researcher in the area of using DL to combine non-traditional data sources with RS?

**Open Question #4b: How does DL ingest heterogeneous data?** Fusion can take place at numerous so-called \"levels\", including signal, feature, algorithm and decision. For example, Signal In Signal Out (SISO) is where multiple signals are used to produce a signal out.
For \\(\\Re\\)-valued signal data, a common example is the trivial concatenation of their underlying vectorial data, i.e., \\(X=\\{\\hat{x}_{1},\\ldots,\\hat{x}_{N}\\}\\) becomes \\([\\hat{x}_{1}\\,\\hat{x}_{2}\\,\\cdots\\,\\hat{x}_{N}]\\) of length \\(|\\hat{x}_{1}|+\\cdots+|\\hat{x}_{N}|\\). Feature In Feature Out (FIFO), which is often related to if not the same as SISO, is where multiple features are combined, e.g., a HOG and a Local Binary Pattern (LBP), and the result is a new feature. One example is Multiple Kernel Learning (MKL), e.g., the \\(\\ell_{p}\\)-norm genetic algorithm MKL (GAMKLp) [372]. Typically the input is \\(N\\) \\(\\Re\\)-valued Cartesian spaces and the result is a new Cartesian space. Most often, one engages in MKL to search for a space in which the patterns obey some property, e.g., they are nicely linearly separable, so that a machine learning tool like an SVM can be employed. On the other hand, Decision In Decision Out (DIDO), e.g., the Choquet integral (ChI), is often used for the fusion of input from _decision makers_, e.g., human experts, algorithms, classifiers, etc. [373]. Technically speaking, a CNN is typically a Signal In Decision Out (SIDO) or Feature In Decision Out (FIDO) system. Internally, the _feature learning_ part of the CNN is a Signal In Feature Out (SIFO) or FIFO system and the classifier is a FIDO system. To date, most DL approaches have \"fused\" via (1) concatenation of \\(\\Re\\)-valued input data (SISO or FIFO) fed to a single DL, (2) each source having its own DL, minus classification, that is later combined into a single DL, or (3) multiple DLs, one for each source, whose results are once again concatenated and subjected to a classifier (either an MLP, SVM or other classifier). Herein, we highlight the challenges of syntactic and semantic fusion. Most DL approaches to date have syntactically addressed how \\(N\\) things, which are typically homogeneous mathematically, can be ingested by a DL. However, the more difficult challenge is semantic: how should these sources be combined, what is a proper architecture, what is learned (and can we understand it), and why should we trust the solution? This is of particular importance to numerous challenges in remote sensing that require a physically meaningful/grounded solution, e.g., model-based approaches. The most typical example of fusion in remote sensing is the combining of data from two (or more) sensors. Whereas there may be semantic variation but little-to-no syntactic variation, e.g., both are possibly \\(\\Re\\)-valued vector data, the reality is most sensors record objective evidence about our universe. However, if human information (e.g., linguistic or textual data) or algorithmic outputs (e.g., binary decisions, labels/symbols, probabilities, etc.) are involved, fusion becomes increasingly more difficult both syntactically and semantically. Many theoretical (mathematical and philosophical) investigations, which are beyond the scope of this work, have concerned themselves with how to meaningfully combine objective vs. subjective data/information, qualitative vs. quantitative data/information, or evidence with beliefs, among numerous other flavors of information. It is a naive and dangerous belief that one can simply just \"cram\" data/information into a DL and get a meaningful and useful result. _How is fusion occurring? Where is it occurring?_ Fusion is further compounded if one is using uncertain information, e.g., probabilistic, possibilistic, or other interval- or distribution-based inputs.
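The purely syntactic side of fusion strategies (1) and (3) above is easy to sketch. The example below is illustrative only: the feature shapes are arbitrary and the classifiers are hypothetical stand-ins for trained per-source DL heads, but it makes the distinction between concatenation-style (early) fusion and decision-level (DIDO-style) fusion concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-source feature vectors, e.g. outputs of two DL branches
# (names and sizes are illustrative, not tied to any particular system).
x_hsi = rng.random(64)     # spectral features from an HSI branch
x_lidar = rng.random(32)   # structural features from a LiDAR branch

# (1) SISO/FIFO-style fusion: concatenate the R-valued vectors and pass the
#     result to a single classifier.
x_early = np.concatenate([x_hsi, x_lidar])   # length |x_hsi| + |x_lidar|

def softmax_head(x, n_classes=3, seed=1):
    """Stand-in for a trained per-source classifier head."""
    w = np.random.default_rng(seed).normal(size=(n_classes, x.size))
    scores = w @ x
    e = np.exp(scores - scores.max())
    return e / e.sum()                        # class posteriors

# (3) Decision-level (DIDO-style) fusion: each source keeps its own classifier
#     and only the decisions are combined, here by a simple average.
p_late = 0.5 * softmax_head(x_hsi, seed=1) + 0.5 * softmax_head(x_lidar, seed=2)
label = int(np.argmax(p_late))
```

The hard questions raised in this subsection are exactly the ones such a sketch glosses over: whether concatenation or averaging is semantically meaningful for the sources at hand.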
The point is, heterogeneity, be it in mathematical representation, associated uncertainty, etc., is a real and serious challenge, and if the DL community wishes to fuse multiple inputs or sources (humans, sensors and algorithms) then DL must theoretically rise to the occasion to ensure that the architectures, and what is subsequently being learned, are useful and meaningful. Preliminary examples of associated DL work to date include fusing hyperspectral with LiDAR[374] (two sensors yielding objective data) and text with imagery or video[375] (thus fusing high-level human information with sensor data), to name a few. The point is, the question remains: how can/does DL fuse data/information arising from one or more sources?

### DL architectures and learning algorithms for spectral, spatial and temporal data

**DL Open Question #5: What architectural extensions will DL systems require in order to tackle complicated RS problems?** Current DL architectures, components (e.g. convolution), and optimization techniques may not be adequate to solve complex RS problems. In many cases, researchers have developed novel network architectures, new layer structures with their associated SGD or BP equations for training, or new combinations of multiple DL networks. This problem is also an open issue in the broader CV community. This question is at the heart of DL research. Other questions related to the open issues are:

* _What architecture should be used?_
* _How deep should a DL system be, and what architectural elements will allow it to work at that depth?_
* _What architectural extensions (new components) are required to solve this problem?_
* _What training methods are required to solve this problem?_

We examine several general areas where DL systems have evolved to handle RS data: (i) multi-sensor processing, (ii) utilizing multiple DL systems, (iii) rotation- and displacement-invariant DL systems, (iv) new DL architectures, (v) SAR, (vi) ocean and atmospheric processing, (vii) 3D processing, (viii) spectral-spatial processing, and (ix) multi-temporal analysis. Furthermore, we examine some specific RS applications noted in several RS survey papers as areas that DL can benefit: (a) oil spill detection, (b) pedestrian detection, (c) urban structure detection, (d) pixel unmixing, and (e) road extraction. This is by no means an exhaustive list, but is meant to highlight some of the important areas.

**Multi-Sensor Processing:** Chen et al. [191] utilize two deep networks, one analyzing HSI pixel neighbors (spatial data), and the other LiDAR data. The outputs are stacked, and a FC layer and logistic regression provide the outputs. Huang et al. [236] use a modified sparse DAE (MSDAE) to train the relationship between HR and LR image patches. The stacked MSDAE (S-MSDAE) is used to pretrain a DNN. The HR MSI image is then reconstructed from the observed LR MSI image using the trained DNN.

**Multi-DL system:** In certain problems, multiple DL systems can provide significant benefit. Chen et al. [298] utilize parallel DNNs with no cross-connections to both speed up processing and provide good results in vehicle detection from satellite imagery. Ciresan et al. [119] utilize multiple parallel DNNs whose outputs are averaged for image classification. Firth et al. [311] use 186 RNNs to perform accurate weather prediction. Hou et al. [152] use RBMs to train from polarimetric SAR data, and a three-layer DBN is used for classification. Kira et al.
[376] used stereo imaging for robotic human detection, utilizing a CNN trained on appearance and stereo disparity-based features, and a second CNN for long-range detection. Marmanis et al. [260] utilized an ensemble of CNNs to segment VHR aerial imagery, using a FCN to perform pixel-based classification. They trained multiple networks with different initializations and averaged the ensemble results. The authors also found errors in the Vaihingen dataset [377].

**Rotation- and displacement-invariant systems:** Some RS problems require systems that are rotation- and displacement-invariant. CNNs have some robustness to translation, but not in general to rotations. Cheng et al. [226] incorporated a rotation-invariant layer into a DL CNN architecture to detect objects in satellite imagery. Du et al. [127] developed a displacement- and rotation-insensitive deep CNN for SAR Automated Target Recognition (ATR) processing that is trained with an augmented dataset and a specialized training procedure.

**Novel DL architectures:** Some problems in RS require novel DL architectures. Dong et al. [277] use a CNN that takes the LR image and outputs the HR image. He et al. [151] proposed a deep stacking network for HSI classification that utilizes nonlinear activations in the hidden layers and does not require SGD for training. Kontschieder et al. [156] developed deep neural decision forests, which use a stochastic and differentiable decision tree model that steers the representation learning usually conducted in the initial layers of a deep CNN. Lee et al. [158] analyze HSI by applying multiple local 3D convolutional filters of different sizes that jointly exploit spatial and spectral features, followed by fully convolutional layers to predict pixel classes. Zhang et al. [186] propose GBRCN to classify VHR satellite imagery. Ouyang et al. [202] developed a probabilistic parts-detector-based model to robustly handle human detection with occlusions and large deformations, utilizing a discriminative RBM to learn the visibility relationships among overlapping parts. The RBM has three layers that handle parts of different sizes. Their results could possibly be improved by adding additional rotation invariance.

**Novel DL SAR architectures:** SAR imagery has unique challenges due to noise and the grainy nature of the images. Geng et al.[149] developed a deep convolutional AE, which is a combination of a CNN, AE, classification, and post-processing layers to classify high-resolution SAR images. Hou et al.[152] developed a polarimetric SAR DBN. Filters are extracted from the RBMs and a final three-layer DBN performs classification. Liu et al.[210] utilize a Deep Sparse Filtering Network to classify terrain using polarimetric SAR data. The proposed network is based on sparse filtering[378] and minimizes the \\(L_{1}\\) norm of the output to enforce sparsity. Qin et al.[178] performed object-oriented classification of polarimetric SAR data using an RBM and built an adaptive boosting framework (AdaBoost[379]) instead of a stacked DBN in order to handle small training data; they also put forth the RBM-AdaBoost algorithm. Schwegmann et al.[273] utilized a very deep Highway Network configuration as a ship discrimination stage for SAR ship detection. They also presented a three-class SAR dataset that allows for more meaningful analysis of ship discrimination performance. Zhou et al.[380] proposed a three-class change detection approach for multitemporal SAR images using an RBM.
Changed areas in these images exhibit either increases or decreases in the backscattering values, so the proposed approach classifies the changed areas into positive and negative change classes, or no change if none is detected.

**Oceanic and atmospheric studies:** Oceanic and atmospheric studies present unique challenges to DL systems that require novel developments. Ducournau et al.[278] developed a CNN architecture which analyzes sea surface temperature fields and provides a significant gain in terms of peak signal-to-noise ratio compared to classical downscaling techniques. Shi et al.[198] extended the FC-LSTM network into ConvLSTM, which has convolutional structures in the input-to-state and state-to-state transitions, for precipitation nowcasting.

**3D Processing:** Guan et al.[239] use voxel-based filtering to remove ground points from LiDAR data; a DL architecture then generates high-level features from the trees' 3D geometric structure. Haque et al.[109] utilize both a CNN and an RNN to process 4D spatio-temporal signatures to identify humans in the dark.

**Spectral-spatial HSI processing:** HSI processing can be improved by fusing spectral and spatial information. Ma et al.[169] propose a spatially updated deep AE which adds a similarity regularization term to the energy function to enforce spectral similarity. The regularization term is a cosine similarity (basically the spectral angle mapper) over the edges of a graph whose nodes are samples, which preserves the sample correlations. Ran et al.[192] classify HSI data by learning multiple CNN-based submodels, one for each correlated set of bands, while in parallel a conventional CNN learns spatial-spectral characteristics. The models are combined at the end. Li et al.[162] incorporated vicinal pixel information by combining the center pixel and vicinal pixels, and utilizing a voting strategy to classify the pixels.

**Multi-temporal analysis:** Multi-temporal analysis is a subset of RS analysis that has its own challenges. Revisit rates are often long, ground-truth data is even more expensive since multiple image sets have to be analyzed, and images must be co-registered for most applications. Jianya et al.[25] review multi-temporal analysis and observe that it is hard, the changes are often non-linear, and changes occur on different timescales (seasons, weeks, years, etc.). The process from ground objects to images is not reversible, and going from image change to earth-surface change is a very difficult task. Hybrid methods involving classification, object analysis, physical modeling, and time series analysis can all potentially benefit from DL approaches. Arel et al.[7] ask whether DL frameworks can understand trends over short, medium and long time scales; this is an open question for RNNs. Change detection is an important subset of multi-temporal analysis. Hussain et al.[24] state that change detection can benefit from texture analysis, accurate classifications, and the ability to detect anomalies. DL has huge potential to address these issues, but it is recognized that DL algorithms are not common in the image processing software used in this field, and large training sets and long training times may also be required. In cases of non-normal distributions, ANNs have shown superior results to other statistical methods. They also recognize that DL-based change detection can go beyond traditional pixel-based change detection methods.
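For reference, the pixel-based baseline that such DL approaches aim to move beyond amounts to differencing two co-registered images and thresholding the result. A minimal sketch (assuming two co-registered, radiometrically normalized single-band images) is:

```python
import numpy as np

def pixel_change_map(img_t1, img_t2, k=2.0):
    """Naive pixel-level change detection: absolute difference followed by a
    simple global threshold. Illustrative baseline only."""
    diff = np.abs(img_t2.astype(float) - img_t1.astype(float))
    threshold = diff.mean() + k * diff.std()
    return diff > threshold          # boolean change mask
```

Everything beyond this baseline, such as object-level reasoning, hierarchical features, and handling non-linear and multi-scale change, is where DL is expected to contribute.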
Tewkesbury et al.[381] observe that change detection can occur at the pixel, kernel (group of pixels), image-object, multi-temporal image-object (created by segmenting over the time series), vector-polygon, and hybrid levels. While the pixel level is suitable for many applications, hybrid approaches can yield better results in many cases. Approaches to change detection can utilize DL to (1) co-register images and (2) detect changes at hierarchical levels (i.e., beyond just the pixel level).

**Some selected specific applications that can benefit from DL analysis:** This section discusses some selected applications where DL can benefit the results. This is by no means an exhaustive list, and many other areas can potentially benefit from DL approaches. In oil spill detection, Brekke et al.[382] point out that training data is scarce. Oil spills are very rare, which usually means oil spill detection approaches are anomaly detectors. Physical proximity, slick shape, and texture play important roles. SAR imagery is very useful, but there are look-alike phenomena that cause false positives. Fusing algal information from optical sensors and probability models can aid detection. Current algorithms are not reliable, and DL has great promise in this area. In the area of pedestrian detection, Dollar et al.[21] discuss that in many images pedestrians occupy only a small number of pixels. Robust detectors must handle occlusions. Motion features can achieve very high performance, but few have utilized them. Context (ground plane) approaches are needed, especially at lower resolutions. More datasets are needed, especially with occlusions. Again, DL can provide significant results in this area. For urban structure analysis, Mayer et al.[383] report that scale-space analysis is required due to the different scales of urban structures. Local contexts can be utilized in the analysis. Analyzing parts (dormers, windows, etc.) can improve results. Sensor fusion can aid results. Object variability is not treated sufficiently (e.g. highly non-planar roofs). The DL system's ability to learn hierarchical components and learn parts makes it a good candidate for improving results in this area. In pixel unmixing, the review papers of Shi et al.[34] and Somers et al.[384] both point out that whether an unmixing system uses a spectral library or extracts endmember spectra from the imagery, the accuracy depends highly on the selection of appropriate endmembers. Adding information from a spatial neighborhood can enhance the unmixing results. DL methods such as CNNs or other tailored systems can potentially combine spectral and spatial information inherently. DL systems utilizing denoising, iterative unmixing, feature selection, spectral weighting, and spectral transformations can benefit unmixing. Finally, in the area of road extraction, Wang et al.[337] point out that roads can have large variability, are often curved, and can change size. In bad weather, roads can be very hard to identify. Object shadows, occlusions, etc. can cause the road segmentation to miss sections. Multiple models and multiple features can improve results. The natural ability of DL to learn complicated hierarchical features from data makes it a good candidate for this application area as well.

### Transfer Learning

**Open Question #6: How can DL in RS successfully utilize transfer learning?** We note that transfer learning is an open question in DL in general, not just in DL related to remote sensing.
Section 4.9 discusses transfer learning in the broader context of the entire field of DL, while this section discusses transfer learning in an RS context. According to Tuia et al.[385] and Pan et al.[28], transfer learning seeks to learn from one area to another in one of four ways: instance transfer, feature-representation transfer, parameter transfer, and relational-knowledge transfer. Typically in remote sensing, when changing sensors or changing to a different part of a large image or to other imagery collected at different times, the transfer fails. Remote sensing systems need to be robust, but they do not necessarily require near-perfect knowledge. Transfer between HSI images where the number and types of endmembers are different has been the subject of very few studies. Ghazi et al.[238] suggest that two options for transfer learning are to (1) utilize a pre-trained network and learn new features in the imagery to be analyzed, or (2) fine-tune the weights of the pre-trained network using the imagery to be analyzed. The choice depends on the size and similarity of the training and testing datasets. There are many open questions about transfer learning in HSI RS:

* _How does HSI transfer work when the number and type of endmembers are different?_
* _How can DL systems transfer low-level to mid-level features from other domains into RS?_
* _How can DL transfer learning be made robust to imagery collected at different times and under different atmospheric conditions?_

Although in general these open questions remain, we do note that the following papers have successfully utilized transfer learning in RS applications: Yang et al.[181] trained on other remote sensing imagery and transferred low-level to mid-level features to other imagery. Othman et al.[218] utilized transfer learning by training on the ILSVRC-12 challenge data set, which has 1.2 million \\(224\\times 224\\) RGB images belonging to 1,000 classes. The trained system was applied to the UC Merced Land Use[386] and Banja-Luka[387] datasets. Iftene et al.[154] applied pretrained CaffeNet and GoogLeNet models, trained on the ImageNet dataset, to the VHR imagery of the WHU-RS dataset.[388, 389] Xie et al.[293] trained a CNN on night-time imagery and used it for poverty mapping. Ghazi et al.[238] and Lee et al.[390] used the pre-trained networks AlexNet, GoogLeNet and VGGNet on the LifeCLEF 2015 plant task dataset[391] and the MalayaKew dataset[392] for plant identification. Alexandre[223] used four independent CNNs, one for each channel of RGBD, instead of using a single CNN receiving the four input channels. The four independent CNNs are then trained in a sequence, using the weights of a trained CNN as the starting point to train the other CNNs that process the remaining channels. Ding et al.[393] utilized transfer learning for automatic target recognition from mid-wave infrared (MWIR) to longwave IR (LWIR). Li et al.[122] used transfer learning by utilizing pixel pairs based on reference data with labeled samples, using Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) hyperspectral data.

### An improved theoretical understanding of DL systems

**DL Open Question #7: What new developments will allow researchers to better understand DL systems theoretically?** The CV and NN image processing communities understand BP and SGD, but until recently, researchers struggled to train deep networks. One issue has been identified as vanishing or exploding gradients[394, 395].
Using normalized initialization and normalization layers can help alleviate this problem. Special architectures, such as deep residual learning[41] or highway networks[396], feed data into the deeper layers, thus allowing very deep networks to be trained. Fully convolutional networks[320] have achieved success in pixel-based semantic segmentation tasks and are another alternative for going deep. Sokolic et al.[397] determined that the spectral norm of the NN's Jacobian matrix in the neighbourhood of the training samples must be bounded in order for the network to generalize well. All of these methods deal with a central problem in training very deep NNs: the gradients must not vanish, explode, or become too uncorrelated, or else learning is severely hindered. The DL field needs practical (and theoretical) methods to go deep, and ways to train efficiently with good generalization capabilities. Many DL RS systems will probably require new components, and these networks with the new components need to be analyzed to see if the methods above (or new methods not yet invented) will enable efficient and robust network training. Egmont-Peterson et al.[331] point out that DL training is sensitive to the initial training samples, and it is a well-known problem that SGD and BP can reach a local minimum that is not the global minimum. In the past, seminal papers such as Hinton's[398], which allowed efficient training of the network, let researchers break past a previously difficult barrier. What new algorithmic and theoretical developments will spur the next large surge in DL?

### High barriers to entry

**DL Open Question #8: How to best handle high entry barriers to DL?** Most DL papers assume that the reader is familiar with DL concepts, backpropagation, etc. This is in reality a steep learning curve that takes a long time to master. Good tutorials and online training can aid students and practitioners who are willing to learn. Implementing BP or SGD on a large DL system is a difficult task, and simple errors can be hard to track down. Furthermore, BP can fail in large networks, so alternate architectures such as highway nets are usually required. Many DL systems have a large number of parameters to learn, and often require large amounts of training data. Computers with GPUs and GPU-capable DL programs can greatly benefit by offloading computations onto the GPUs. However, multi-GPU systems are expensive, and students often use laptops that cannot be equipped with a GPU. Some DL systems run under Microsoft Windows, while others run under variants of Linux (e.g. Ubuntu or Red Hat). Furthermore, DL systems are programmed in a variety of languages, including Matlab, C, C++, Lua, Python, etc. Thus practitioners and researchers face a potentially steep learning curve to create custom DL solutions. Finally, there is a large variety of data types in remote sensing, including RGB imagery, RGBD imagery, MSI, HSI, SAR, LiDAR, stereo imagery, tweets, GPS data, etc., all of which may require different DL architectures. Often, many of the tasks in the RS community require components that are not part of a standard DL library. A good understanding of DL systems and programming is required to integrate these components into off-the-shelf DL systems.

### Training

**Open Question #9: How to train and optimize the DL system?** Training a DL system can be difficult. Large systems can have millions of parameters. There are many methods that DL researchers use to effectively train systems.
These methods are discussed below.

**Data imputation:** Data imputation [398] is important in RS, since there are often only a small number of training samples. In imagery, image patches can be extracted and stretched with affine transformations, rotated, and made lighter or darker (scaling). Also, patches can be zeroed (removed) from the training data to help the DL be more robust to occlusions. Data can also be augmented by simulations. Another method that can be useful in some circumstances is domain transfer, discussed below (transfer learning).

**Pre-training:** Erhan et al. [399] performed a detailed study trying to answer the questions \"_How does unsupervised pre-training work?_\" and \"_Why does unsupervised pre-training help DL?_\". Their empirical analysis shows that unsupervised pre-training guides the learning towards attraction basins of minima that support better generalization, and that pre-training also acts as a regularizer. Furthermore, early training examples have a large impact on the overall DL performance. Of course, these are experimental results, and experiments on other datasets or using other DL methods can yield different conclusions. Many DL systems utilize pre-training followed by fine-tuning.

**Transfer Learning:** Transfer learning is also discussed in Section 4.6. Transfer learning attempts to transfer learned features (which can also be thought of as DL layer activations or outputs) from one image to another, from one part of an image to another part, or from one sensor to another. This is a particularly thorny issue in RS, due to variations in atmosphere, lighting conditions, etc. Pan et al. [28] point out that typically in remote sensing, when changing sensors or changing to a different part of a large image or to other imagery collected at different times, the transfer fails. Remote sensing systems need to be robust, but they don't necessarily require near-perfect knowledge. Also, transfer between images where the number and types of endmembers are different has been the subject of very few studies. Zhang et al. [3] also cite transfer learning as an open issue in DL in general, and not just in RS.

**Regularization:** Regularization is defined by Goodfellow et al. [11] as \"any modification we make to a learning algorithm that is intended to reduce its generalization error but not its training error.\" There are many forms of regularizer: parameter-size penalty terms (such as the \\(L_{2}\\) or \\(L_{1}\\) norm) and other regularizers that enforce sparse solutions; diagonal loading of a matrix so that the matrix inverse (which is required for some algorithms) is better conditioned; dropout and early stopping (both described below); adding noise to weights or inputs; semi-supervised learning, which usually means that the NN learns a function that gives very similar representations to examples from the same class; bagging (combining multiple models); and adversarial training, where a weighted sum of the sample and an adversarial sample is used to boost performance. The interested reader is referred to chapter 7 of [11] for further information. An interesting DL example in RS is Mei et al. [172], who utilized a Parametric Rectified Linear Unit (PReLU) [400], which can help improve model fitting without adding computational cost and with little overfitting risk.

**Early stopping:** Early stopping is a method where the validation error is monitored during training and previous coefficient values are recorded. Once training reaches a stopping criterion, the recorded coefficients are used.
Early stopping helps to mitigate overtraining. It also acts as a regularizer, constraining the parameter space to be close to the initial configuration [399].

**Dropout:** Dropout randomly selects some number of links (or uses a probability that a link will be dropped) [401]. As the network is trained, these links are zeroed, basically stopping data from flowing from the shallower to the deeper layers in the DL system. Dropout basically allows a bagging-like effect, but instead of the individual networks being independent, the subnetworks share parameters and the mixing occurs at the dropout layer [11].

**Batch Normalization:** Batch normalization was developed by Ioffe et al. [402]. It breaks the data into small batches and then normalizes the data to be zero mean and unit variance; batch normalization can also be added internally as layers in the network. It reduces the so-called _internal covariate shift_ problem for each training mini-batch. Applying batch normalization had the following benefits: (1) it allowed a higher learning rate; (2) the DL network was not as sensitive to initialization; (3) dropout was not required to mitigate overfitting; and (4) the \\(L_{2}\\) weight regularization could be reduced, increasing accuracy. Batch normalization adds two extra parameters per activation.

**Optimization:** Optimization of DL networks is a major area of study in DL. It is nontrivial to train a DL network, much less squeeze out high performance on both the training and testing datasets. SGD is a training method that uses small batches of training data to generate an estimate of the gradients. Li et al. [403] argue that SGD is not inherently parallel, and often requires training many models and choosing the one that performs best on the validation set. They also show that no one method works best in all cases; they found that optimization performance varies per problem. A nice review paper covering many gradient descent algorithms is provided by Ruder [404]. According to Ruder, complications for gradient descent algorithms include:

* _How to choose a proper learning rate?_
* _How to properly adjust learning-rate schedules for optimal performance?_
* _How to adjust learning rates independently for each parameter?_
* _How to avoid getting trapped in local minima and saddle points when one dimension slopes up and one down (the gradients can get very small and training halts)?_

Various gradient descent methods such as Adagrad [405], which adapts the learning rate to the parameters, AdaDelta [406], which uses a fixed-size window of past data, and Adam [407], which also has both mean and variance terms for the gradient descent, can be utilized for training. Another recent approach, described in Schaul et al. [408], seeks to optimize the learning rate from the data. Finally, Sokolic et al. [397] concluded experimentally that for a DNN to generalize well, the spectral norm of the NN's Jacobian matrix in the neighbourhood of the training samples must be bounded. They furthermore show that the generalization error can be bounded independently of the DL network's depth or width, provided that the Jacobian spectral norm is bounded. They also analyze residual networks, weight-normalized networks, CNNs with batch normalization and Jacobian regularization, and residual networks with Jacobian regularization. The interested reader is referred to chapter 8 of [11] for further information.
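To make the per-parameter adaptive schemes above concrete, the Adam update [407] maintains exponential moving averages of the gradient \\(g_{t}\\) and its elementwise square, corrects them for initialization bias, and scales the step for each parameter:

\\[
m_{t}=\\beta_{1}m_{t-1}+(1-\\beta_{1})g_{t},\\qquad v_{t}=\\beta_{2}v_{t-1}+(1-\\beta_{2})g_{t}^{2},
\\]
\\[
\\hat{m}_{t}=\\frac{m_{t}}{1-\\beta_{1}^{t}},\\qquad\\hat{v}_{t}=\\frac{v_{t}}{1-\\beta_{2}^{t}},\\qquad\\theta_{t}=\\theta_{t-1}-\\frac{\\eta\\,\\hat{m}_{t}}{\\sqrt{\\hat{v}_{t}}+\\epsilon},
\\]

where \\(\\eta\\) is the learning rate, \\(\\beta_{1},\\beta_{2}\\in[0,1)\\) control the moving averages, and \\(\\epsilon\\) is a small constant for numerical stability.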
**Data Propagation:** Both highway networks [409] and residual networks [410] are methods that take data from one layer and incorporate it, either directly (highway networks) or as a difference (residual networks), into deeper layers. Both methods allow very deep networks to be trained, at the expense of some additional components. Balduzzi et al. [411] examined such networks and identified a so-called "shattered gradient" problem in DNNs, in which the gradient correlation decays exponentially with depth so that the gradients resemble white noise. They develop a "looks linear" initialization that prevents gradient shattering and appears not to require skip connections (highway networks, residual networks).

## 5 Conclusions

In this paper, we have performed a thorough review and analyzed 207 RS papers that utilize FL and DL, as well as 57 survey papers in DL and RS. We provide researchers with a clustered set of 12 areas in which DL RS papers have been applied. We examine why DL is popular and what is enabling DL. We examined many DL tools and provided opinions about the tools' pros and cons. We critically looked at the DL RS field, identified nine general areas with unsolved challenges and opportunities, and enumerated 11 difficult and thought-provoking open questions in this area. We reviewed current DL research in CV and discussed recent methods that could be utilized in DL in RS. We provide a table of DL survey papers covering DL in RS and feature learning in RS.

### Disclosures

The authors declare no conflict of interest.

### Acknowledgments

The authors wish to thank graduate students Vivi Wei, Julie White and Charlie Veal for their valuable inputs related to DL tools.

## References

* [1] G. Cheng, J. Han, and X. Lu, "Remote Sensing Image Scene Classification: Benchmark and State of the Art," _arXiv:1703.00121 [cs.CV]_ (2017). DOI:10.1109/JPROC.2017.2675998. * [2] L. Deng, "A tutorial survey of architectures, algorithms, and applications for deep learning," _APSIPA Transactions on Signal and Information Processing_**3 (e2)**(January), 1-29 (2014). * [3] L. Zhang, L. Zhang, and V. Kumar, "Deep learning for Remote Sensing Data," _IEEE Geoscience and Remote Sensing Magazine_**4**(June), 22-40 (2016). DOI:10.1155/2016/7954154. * [4] H. Wang and B. Raj, "A survey: Time travel in deep learning space: An introduction to deep learning models and how deep learning models evolved from the initial ideas," _arXiv preprint arXiv:1510.04781_**abs/1510.04781** (2015). * [5] J. Wan, D. Wang, S. C. H. Hoi, _et al._, "Deep learning for content-based image retrieval: A comprehensive study," in _Proceedings of the 22nd ACM international conference on Multimedia_, 157-166 (2014). * [6] J. Ba and R. Caruana, "Do deep nets really need to be deep?," in _Advances in neural information processing systems_, 2654-2662 (2014). * [7] I. Arel, D. C. Rose, and T. P. Karnowski, "Deep Machine Learning - A New Frontier in Artificial Intelligence Research," _IEEE Computational Intelligence Magazine_**5**(4), 13-18 (2010). DOI:10.1109/MCI.2010.938364. * [8] W. Liu, Z. Wang, X. Liu, _et al._, "A survey of deep neural network architectures and their applications," _Neurocomputing_**234**, 11-26 (2016). DOI:10.1016/j.neucom.2016.12.038. * [9] D. Yu and L. Deng, "Deep Learning and Its Applications to Signal and Information Processing [Exploratory DSP]," _Signal Processing Magazine, IEEE_**28**(1), 145-154 (2011). DOI:10.1109/MSP.2010.939038. * [10] J.
Schmidhuber, \"Deep Learning in neural networks: An overview,\" _Neural Networks_**61**, 85-117 (2015). DOI:10.1016/j.neunet.2014.09.003. * [11] I. Goodfellow, Y. Bengio, and A. Courville, _Deep learning_, MIT Press (2016). * [12] L. Arnold, S. Rebecchi, S. Chevallier, _et al._, \"An introduction to deep learning,\" in _2012 11th International Conference on Information Science, Signal Processing and their Applications, ISSPA 2012_, (April), 1438-1439 (2012). * [13] Y. Bengio, A. C. Courville, and P. Vincent, \"Unsupervised feature learning and deep learning: A review and new perspectives,\" _CoRR, abs/1206.5538_**1** (2012). * [14] X.-w. Chen and X. Len, \"Big Data Deep Learning: Challenges and Perspectives,\" _IEEE Access_**2**, 514-525 (2014). DOI:10.1109/ACCESS.2014.2325029. * [15] L. Deng and D. Yu, \"Deep Learning: Methods and Applications,\" _Foundations and Trends(r) in Signal Processing_**7**(3-4), 197-387 (2013). * [16] M. M. Najafabadi, F. Villanustre, T. M. Khoshgoftaar, _et al._, \"Deep learning applications and challenges in big data analytics,\" _Journal of Big Data_**2**(1), 1-21 (2015). * [17] J. M. Bioucas-dias, A. Plaza, G. Camps-valls, _et al._, \"Hyperspectral Remote Sensing Data Analysis and Future Challenges,\" _IEEE Geoscience and Remote Sensing Magazine_ (June), 6-36 (2013). * [18] G. Camps-Valls and L. Bruzzone, \"Kernel-based methods for hyperspectral image classification,\" _Geoscience and Remote Sensing, IEEE Transactions on_**43**(6), 1351-1362 (2005). * [19] G. Camps-Valls, D. Tuia, L. Bruzzone, _et al._, \"Advances in hyperspectral image classification: Earth monitoring with statistical learning methods,\" _IEEE Signal Processing Magazine_**31**(1), 45-54 (2013). DOI:10.1109/MSP.2013.2279179. * [20] H. Deborah, N. Richard, and J. Y. Hardeberg, \"A comprehensive evaluation of spectral distance functions and metrics for hyperspectral image processing,\" _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_**8**(6), 3224-3234 (2015). * [21] P. Dollar, C. Wojek, B. Schiele, _et al._, \"Pedestrian detection: An evaluation of the state of the art,\" _IEEE transactions on pattern analysis and machine intelligence_**34**(4), 743-761 (2012). DOI:10.1109/TPAMI.2011.155. * [22] P. Du, J. Xia, W. Zhang, _et al._, \"Multiple classifier system for remote sensing image classification: A review,\" _Sensors_**12**(4), 4764-4792 (2012). * [23] M. Fauvel, Y. Tarabalka, J. A. Benediktsson, _et al._, \"Advances in spectral-spatial classification of hyperspectral images,\" _Proceedings of the IEEE_**101**(3), 652-675 (2013). * [24] M. Hussain, D. Chen, A. Cheng, _et al._, \"Change detection from remotely sensed images: From pixel-based to object-based approaches,\" _ISPRS Journal of Photogrammetry and Remote Sensing_**80**, 91-106 (2013). DOI:10.1016/j.isprsjprs.2013.03.006. * [25] G. Jianya, S. Haigang, M. Guorui, _et al._, \"A review of multi-temporal remote sensing data change detection algorithms,\" _The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_**37**(B7), 757-762 (2008). * [26] D. J. Lary, A. H. Alavi, A. H. Gandomi, _et al._, \"Machine learning in geosciences and remote sensing,\" _Geoscience Frontiers_**7**(1), 3-10 (2016). * [27] D. Lunga, S. Prasad, M. M. Crawford, _et al._, \"Manifold-learning-based feature extraction for classification of hyperspectral data: A review of advances in manifold learning,\" _IEEE Signal Processing Magazine_**31**(1), 55-66 (2014). * [28] S. J. Pan and Q. 
Yang, \"A survey on transfer learning,\" _IEEE Transactions on knowledge and data engineering_**22**(10), 1345-1359 (2010). DOI:10.1109/TKDE.2009.191. * [29] A. Plaza, P. Martinez, R. Perez, _et al._, \"A quantitative and comparative analysis of endmember extraction algorithms from hyperspectral data,\" _IEEE transactions on geoscience and remote sensing_**42**(3), 650-663 (2004). DOI:10.1109/TGRS.2003.820314. * [30] N. Keshava and J. F. Mustard, \"Spectral unmixing,\" _IEEE signal processing magazine_**19**(1), 44-57 (2002). * [31] N. Keshava, \"A survey of spectral unmixing algorithms,\" _Lincoln Laboratory Journal_**14**(1), 55-78 (2003). * [32] M. Parente and A. Plaza, \"Survey of geometric and statistical unmixing algorithms for hyperspectral images,\" in _Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), 2010 2nd Workshop on_, 1-4, IEEE (2010). * [33] A. Plaza, G. Martin, J. Plaza, _et al._, \"Recent developments in endmember extraction and spectral unmixing,\" in _Optical Remote Sensing_, S. Prasad, L. M. Bruce, and J. Chanussot, Eds., 235-267, Springer (2011). DOI:10.1007/978-3-642-14212-3_12. * [34] C. Shi and L. Wang, \"Incorporating spatial information in spectral unmixing: A review,\" _Remote Sensing of Environment_**149**, 70-87 (2014). DOI:10.1016/j.rse.2014.03.034. * [35] M. Chen, Z. Xu, K. Weinberger, _et al._, \"Marginalized denoising autoencoders for domain adaptation,\" _arXiv preprint arXiv:1206.4683_ (2012). * [36] ujjwalkarn, \"An Intuitive Explanation of Convolutional Neural Networks.\" [https://ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/](https://ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/) (2016). * [37] Y. LeCun, L. Bottou, Y. Bengio, _et al._, \"Gradient-based learning applied to document recognition,\" _Proceedings of the IEEE_**86**(11), 2278-2324 (1998). * [38] A. Krizhevsky, I. Sutskever, and G. E. Hinton, \"Imagenet classification with deep convolutional neural networks,\" in _Advances in neural information processing systems_, 1097-1105 (2012). * [39] M. D. Zeiler and R. Fergus, \"Visualizing and understanding convolutional networks,\" in _European conference on computer vision_, 818-833, Springer (2014). * [40] C. Szegedy, W. Liu, Y. Jia, _et al._, \"Going deeper with convolutions,\" in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, 1-9 (2015). * [41] K. He, X. Zhang, S. Ren, _et al._, \"Deep residual learning for image recognition,\" in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, 770-778 (2016). * [42] G. Huang, Z. Liu, K. Q. Weinberger, _et al._, \"Densely connected convolutional networks,\" _arXiv preprint arXiv:1608.06993_ (2016). * 507 (2006). * [44] S. Hochreiter and J. Schmidhuber, \"Long short-term memory,\" _Neural computation_**9**(8), 1735-1780 (1997). * [45] M. D. Zeiler, G. W. Taylor, and R. Fergus, \"Adaptive deconvolutional networks for mid and high level feature learning,\" in _2011 International Conference on Computer Vision_, 2018-2025 (2011). * [46] M. D. Zeiler, D. Krishnan, G. W. Taylor, _et al._, \"Deconvolutional networks,\" in _In CVPR_, (2010). * [47] H. Noh, S. Hong, and B. Han, \"Learning deconvolution network for semantic segmentation,\" _CoRR_**abs/1505.04366** (2015). * [48] O. Russakovsky, J. Deng, H. Su, _et al._, \"Imagenet large scale visual recognition challenge,\" _International Journal of Computer Vision_**115**(3), 211-252 (2015). * [49] L. 
Brown, \"Deep Learning with GPUs.\" www.nvidia.com/content/events/geoInt2015/LBrown_DL.pdf (2015). * [50] G. Cybenko, \"Approximation by superpositions of a sigmoidal function,\" _Mathematics of Control, Signals and Systems_**2**(4), 303-314 (1989). * [51] M. Telgarsky, \"Benefits of depth in neural networks,\" _arXiv preprint arXiv:1602.04485_ (2016). * [52] O. Sharir and A. Shashua, \"On the Expressive Power of Overlapping Operations of Deep Networks,\" _arXiv preprint arXiv:1703.02065_ (2017). * [53] S. Liang and R. Srikant, \"Why deep neural networks for function approximation?,\" _arXiv preprint arXiv:1610.04161_ (2016). * 60 (2015). Special Section: A Note on New Trends in Data-Aware Scheduling and Resource Provisioning in Modern HPC Systems. * [55] M. Chi, A. Plaza, J. A. Benediktsson, _et al._, \"Big data for remote sensing: Challenges and opportunities,\" _Proceedings of the IEEE_**104**, 2207-2219 (2016). * [56] R. Raina, A. Madhavan, and A. Y. Ng, \"Large-scale deep unsupervised learning using graphics processors,\" in _Proceedings of the 26th Annual International Conference on Machine Learning_, _ICML '09_, 873-880, ACM, (New York, NY, USA) (2009). * Volume Volume Two_, _IJCAI'11_, 1237-1242, AAAI Press (2011). * [58] D. Scherer, A. Muller, and S. Behnke, \"Evaluation of pooling operations in convolutional architectures for object recognition,\" in _Proceedings of the 20th International Conference on Artificial Neural Networks: Part III_, _ICANN'10_, 92-101, Springer-Verlag, (Berlin, Heidelberg) (2010). * [59] L. Deng, D. Yu, and J. Platt, \"Scalable stacking and learning for building deep architectures,\" in _2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_, 2133-2136 (2012). * [60] B. Hutchinson, L. Deng, and D. Yu, \"Tensor deep stacking networks,\" _IEEE Trans. Pattern Anal. Mach. Intell._**35**, 1944-1957 (2013). * [61] J. Dean, G. S. Corrado, R. Monga, _et al._, \"Large scale distributed deep networks,\" in _Proceedings of the 25th International Conference on Neural Information Processing Systems_, _NIPS'12_, 1223-1231, Curran Associates Inc., (USA) (2012). * [62] J. Dean, G. Corrado, R. Monga, _et al._, \"Large scale distributed deep networks,\" in _Advances in neural information processing systems_, 1223-1231 (2012). * [63] K. He, X. Zhang, S. Ren, _et al._, \"Deep residual learning for image recognition,\" _CoRR_**abs/1512.03385** (2015). * [64] R. K. Srivastava, K. Greff, and J. Schmidhuber, \"Highway networks,\" _CoRR_**abs/1505.00387** (2015). * [65] K. Greff, R. K. Srivastava, and J. Schmidhuber, \"Highway and residual networks learn unrolled iterative estimation,\" _CoRR_**abs/1612.07771** (2016). * [66] J. G. Zilly, R. K. Srivastava, J. Koutnik, _et al._, \"Recurrent highway networks,\" _CoRR_**abs/1607.03474** (2016). * [67] R. K. Srivastava, K. Greff, and J. Schmidhuber, \"Training very deep networks,\" _CoRR_**abs/1507.06228** (2015). * [68] Y. Jia, E. Shelhamer, J. Donahue, _et al._, \"Caffe: Convolutional Architecture for Fast Feature Embedding,\" in _ACM International Conference on Multimedia_, 675-678 (2014). * [69] M. Abadi, P. Barham, J. Chen, _et al._, \"TensorFlow: A system for large-scale machine learning,\" in _Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI). Savannah, Georgia, USA_, (2016). * [70] A. Vedaldi and K. 
Lenc, \"Matconvnet: Convolutional neural networks for matlab,\" in _Proceedings of the 23rd ACM international conference on Multimedia_, 689-692, ACM (2015). * [71] A. Handa, M. Bloesch, V. Patraucean, _et al._, \"Gvnn: Neural Network Library for Geometric Computer Vision,\" in _Computer Vision-ECCV 2016 Workshop_, 67-82, Springer International (2016). * [72] F. Chollet, \"Keras,\" (2015). [https://github.com/fchollet/keras](https://github.com/fchollet/keras). * [73] T. Chen, M. Li, Y. Li, _et al._, \"MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems,\" _eprint arXiv:1512.01274_, 1-6 (2015). * [74] R. Al-Rfou, G. Alain, A. Almahairi, _et al._, \"Theano: A Python framework for fast computation of mathematical expressions,\" _arXiv preprint arXiv:1605.02688_ (2016). * [75] \"Deep Learning with Torch: The 60-minute Blitz.\" [https://github.com/soumith/cvpr2015/blob/master/Deep](https://github.com/soumith/cvpr2015/blob/master/Deep) Learning with Torch.ipynb. * [76] M. Baccouche, F. Mamalet, C. Wolf, _et al._, \"Sequential deep learning for human action recognition,\" in _International Workshop on Human Behavior Understanding_, 29-39, Springer (2011). * [77] Z. Shou, J. Chan, A. Zareian, _et al._, \"Cdc: Convolutional-de-convolutional networks for precise temporal action localization in untrimmed videos,\" _arXiv preprint arXiv:1703.01515_ (2017). * [78] D. Tran, L. Bourdev, R. Fergus, _et al._, \"Learning spatiotemporal features with 3d convolutional networks,\" in _Computer Vision (ICCV), 2015 IEEE International Conference on_, 4489-4497 (2015). * [79] L. Wang, Y. Xiong, Z. Wang, _et al._, \"Temporal segment networks: towards good practices for deep action recognition,\" in _European Conference on Computer Vision_, 20-36 (2016). * [80] G. E. Dahl, D. Yu, L. Deng, _et al._, \"Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition,\" _IEEE Transactions on Audio, Speech, and Language Processing_**20**(1), 30-42 (2012). * [81] G. Hinton, L. Deng, D. Yu, _et al._, \"Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups,\" _IEEE Signal Processing Magazine_**29**(6), 82-97 (2012). * [82] A. Graves, A.-r. Mohamed, and G. Hinton, \"Speech recognition with deep recurrent neural networks,\" in _Acoustics, speech and signal processing (icassp), 2013 IEEE international conference on_, 6645-6649, IEEE (2013). * [83] W. Luo, A. G. Schwing, and R. Urtasun, \"Efficient deep learning for stereo matching,\" in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, 5695-5703 (2016). * [84] S. Levine, P. Pastor, A. Krizhevsky, _et al._, \"Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection,\" _arXiv preprint arXiv:1603.02199_ (2016). * [85] S. Venugopalan, M. Rohrbach, J. Donahue, _et al._, \"Sequence to sequence-video to text,\" in _Proceedings of the IEEE International Conference on Computer Vision_, 4534-4542 (2015). * [86] R. Collobert, \"Deep learning for efficient discriminative parsing.,\" in _AISTATS_, **15**, 224-232 (2011). * [87] S. Venugopalan, H. Xu, J. Donahue, _et al._, \"Translating videos to natural language using deep recurrent neural networks,\" _arXiv preprint arXiv:1412.4729_ (2014). * [88] Y. H. Tan and C. S. Chan, \"phi-lstm: A phrase-based hierarchical LSTM model for image captioning,\" in _13th Asian Conference on Computer Vision (ACCV)_, 101-117 (2016). * [89] A. Karpathy and L. 
Fei-Fei, \"Deep visual-semantic alignments for generating image descriptions,\" in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, 3128-3137 (2015). * [90] P. Baldi, P. Sadowski, and D. Whiteson, \"Searching for exotic particles in high-energy physics with deep learning,\" _Nature communications_**5** (2014). * [91] J. Wu, I. Yildirim, J. J. Lim, _et al._, \"Galileo: Perceiving physical object properties by integrating a physics engine with deep learning,\" in _Advances in neural information processing systems_, 127-135 (2015). * [92] A. A. Cruz-Roa, J. E. A. Ovalle, A. Madabhushi, _et al._, \"A deep learning architecture for image representation, visual interpretability and automated basal-cell carcinoma cancer detection,\" in _International Conference on Medical Image Computing and Computer-Assisted Intervention_, 403-410, Springer (2013). * [93] R. Fakoor, F. Ladhak, A. Nazi, _et al._, \"Using deep learning to enhance cancer diagnosis and classification,\" in _Proceedings of the International Conference on Machine Learning_, (2013). * [94] K. Sirinukunwattana, S. E. A. Raza, Y.-W. Tsang, _et al._, \"Locality sensitive deep learning for detection and classification of nuclei in routine colon cancer histology images,\" _IEEE transactions on medical imaging_**35**(5), 1196-1206 (2016). * [95] M. Langkvist, L. Karlsson, and A. Loufti, \"A review of unsupervised feature learning and deep learning for time-series modeling,\" _Pattern Recognition Letters_**42**, 11-24 (2014). * [96] S. Sarkar, K. G. Lore, S. Sarkar, _et al._, \"Early detection of combustion instability from hi-speed flame images via deep learning and symbolic time series analysis,\" in _Annual Conference of The Prognostics and Health Management_, (2015). * [97] T. Kuremoto, S. Kimura, K. Kobayashi, _et al._, \"Time series forecasting using a deep belief network with restricted boltzmann machines,\" _Neurocomputing_**137**, 47-56 (2014). * [98] A. Dosovitskiy, J. Tobias Springenberg, and T. Brox, \"Learning to generate chairs with convolutional neural networks,\" in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, 1538-1546 (2015). * [99] J. Zhao, M. Mathieu, and Y. LeCun, \"Energy-based generative adversarial network,\" _arXiv preprint arXiv:1609.03126_ (2016). * [100] S. E. Reed, Z. Akata, S. Mohan, _et al._, \"Learning what and where to draw,\" in _Advances in Neural Information Processing Systems_, 217-225 (2016). * [101] D. Berthelot, T. Schumm, and L. Metz, \"Began: Boundary equilibrium generative adversarial networks,\" _arXiv preprint arXiv:1703.10717_ (2017). * [102] W. R. Tan, C. S. Chan, H. Aguirre, _et al._, \"Artgan: Artwork synthesis with conditional categorial gans,\" _arXiv preprint arXiv:1702.03410_ (2017). * [103] K. Gregor, I. Danihelka, A. Graves, _et al._, \"Draw: A recurrent neural network for image generation,\" _arXiv preprint arXiv:1502.04623_ (2015). * [104] A. Radford, L. Metz, and S. Chintala, \"Unsupervised representation learning with deep convolutional generative adversarial networks,\" _arXiv preprint arXiv:1511.06434_ (2015). * [105] X. Ding, Y. Zhang, T. Liu, _et al._, \"Deep learning for event-driven stock prediction,\" in _IJCAI_, 2327-2333 (2015). * [106] Z. Yuan, Y. Lu, Z. Wang, _et al._, \"Droid-sec: deep learning in android malware detection,\" in _ACM SIGCOMM Computer Communication Review_, **44**(4), 371-372, ACM (2014). * [107] C. Cadena, A. Dick, and I. D. 
Reid, \"Multi-modal Auto-Encoders as Joint Estimators for Robotics Scene Understanding,\" in _Proceedings of Robotics: Science and Systems Conference (RSS)_, (2016). * [108] J. Feng, Y. Wang, and S.-F. Chang, \"3D Shape Retrieval Using a Single Depth Image from Low-cost Sensors,\" in _2016 IEEE Winter Conference on Applications of Computer Vision (WACV)_, 1-9 (2016). * [109] A. Haque, A. Alahi, and L. Fei-fei, \"Recurrent Attention Models for Depth-Based Person Identification,\" in _2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, 1229-1238 (2016). * [110] V. Hegde Stanford and R. Zadeh Stanford, \"FusionNet: 3D Object Classification Using Multiple Data Representations,\" _arXiv preprint arXiv:1607.05695_ (2016). * [111] J. Huang and S. You, \"Point Cloud Labeling using 3D Convolutional Neural Network,\" in _International Conference on Pattern Recognition_, (December) (2016). * [112] W. Kehl, F. Milletari, F. Tombari, _et al._, \"Deep Learning of Local RGB-D Patches for 3D Object Detection and 6D Pose Estimation,\" in _European Conference on Computer Vision_, 205-220 (2016). * [113] C. Li, A. Reiter, and G. D. Hager, \"Beyond Spatial Pooling: Fine-Grained Representation Learning in Multiple Domains,\" _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_**2**(1), 4913-4922 (2015). * [114] N. Sedaghat, M. Zolfaghari, and T. Brox, \"Orientation-boosted Voxel Nets for 3D Object Recognition,\" _arXiv preprint arXiv:1604.03351_ (2016). * [115] Z. Xie, K. Xu, W. Shan, _et al._, \"Projective Feature Learning for 3D Shapes with Multi-View Depth Images,\" in _Computer Graphics Forum_, **34**(7), 1-11, Wiley Online Library (2015). * [116] C. Chen, \"DeepDriving : Learning Affordance for Direct Perception in Autonomous Driving,\" in _Proceedings of the IEEE International Conference on Computer Vision_, 2722-2730 (2015). * [117] X. Chen, H. Ma, J. Wan, _et al._, \"Multi-View 3D Object Detection Network for Autonomous Driving,\" _arXiv preprint arXiv:1611.07759_ (2016). * [118] A. Chigorin and A. Konushin, \"A system for large-scale automatic traffic sign recognition and mapping,\" _CMRT13-City Models, Roads and Traffic_**2013**, 13-17 (2013). * [119] D. Ciresan, U. Meier, and J. Schmidhuber, \"Multi-column deep neural networks for image classification,\" in _2012 IEEE Conf. on Computer Vision and Pattern Recog. (CVPR)_, (February), 3642-3649 (2012). * [120] Y. Zeng, X. Xu, Y. Fang, _et al._, \"Traffic sign recognition using extreme learning classifier with deep convolutional features,\" in _The 2015 international conference on intelligence science and big data engineering (IScIDE 2015), Suzhou, China_, (2015). * [121] A.-B. Salberg, \"Detection of seals in remote sensing images using features extracted from deep convolutional neural networks,\" in _2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS)_, (0373), 1893-1896 (2015). * [122] W. Li, G. Wu, and Q. Du, \"Transferred Deep Learning for Anomaly Detection in Hyperspectral Imagery,\" _IEEE Geoscience and Remote Sensing Letters_**PP**(99), 1-5 (2017). * [123] J. Becker, T. C. Havens, A. Pinar, _et al._, \"Deep belief networks for false alarm rejection in forward-looking ground-penetrating radar,\" in _SPIE Defense+ Security_, 94540W-94540W, International Society for Optics and Photonics (2015). * [124] C. Bentes, D. Velotto, and S. 
Lehner, \"Target classification in oceanographic SAR images with deep neural networks: Architecture and initial results,\" in _2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS)_, 3703-3706, IEEE (2015). * [125] L. E. Besaw, \"Detecting buried explosive hazards with handheld GPR and deep learning,\" in _SPIE Defense+ Security_, 98230N-98230N, International Society for Optics and Photonics (2016). * [126] L. E. Besaw and P. J. Stimac, \"Deep learning algorithms for detecting explosive hazards in ground penetrating radar data,\" in _SPIE Defense+ Security_, 90720Y-90720Y, International Society for Optics and Photonics (2014). * [127] K. Du, Y. Deng, R. Wang, _et al._, \"SAR ATR based on displacement-and rotation-insensitive CNN,\" _Remote Sensing Letters_**7**(9), 895-904 (2016). DOI:10.1080/2150704X.2016.1196837. * [128] S. Chen and H. Wang, \"SAR target recognition based on deep learning,\" in _Data Science and Advanced Analytics (DSAA), 2014 International Conference on_, 541-547, IEEE (2014). DOI:10.1109/dsaa.2014.7058124. * [129] D. A. E. Morgan, \"Deep convolutional neural networks for ATR from SAR imagery,\" in _SPIE Defense+ Security_, 94750F-94750F, International Society for Optics and Photonics (2015). DOI:10.1117/12.2176558. * [130] J. C. Ni and Y. L. Xu, \"SAR automatic target recognition based on a visual cortical system,\" in _Image and Signal Processing (CISP), 2013 6th International Congress on_, **2**, 778-782, IEEE (2013). DOI:10.1109/cisp.2013.6745270. * [131] Z. Sun, L. Xue, and Y. Xu, \"Recognition of sar target based on multilayer auto-encoder and snn,\" _International Journal of Innovative Computing, Information and Control_**9**(11), 4331-4341 (2013). * [132] H. Wang, S. Chen, F. Xu, _et al._, \"Application of deep-learning algorithms to MSTAR data,\" in _Geoscience and Remote Sensing Symposium (IGARSS), 2015 IEEE International_, 3743-3745, IEEE (2015). DOI:10.1109/igarss.2015.7326637. * [133] L. Zhang, L. Zhang, D. Tao, _et al._, \"A multifeature tensor for remote-sensing target recognition,\" _IEEE Geoscience and Remote Sensing Letters_**8**(2), 374-378 (2011). DOI:10.1109/LGRS.2010.2077272. * [134] L. Zhang, Z. Shi, and J. Wu, \"A hierarchical oil tank detector with deep surrounding features for high-resolution optical satellite imagery,\" _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_**8**(10), 4895-4909 (2015). * [135] P. F. Alcantarilla, S. Stent, G. Ros, _et al._, \"Streetview change detection with deconvolutional networks,\" in _Robotics: Science and Systems Conference (RSS)_, (2016). * [136] M. Gong, Z. Zhou, and J. Ma, \"Change Detection in Synthetic aperture Radar Images Based on Deep Neural Networks,\" _IEEE Transactions on Neural Networks and Learning Systems_**27**(4), 2141-51 (2016). * [137] F. Pacifici, F. Del Frate, C. Solimini, _et al._, \"An Innovative Neural-Net Method to Detect Temporal Changes in High-Resolution Optical Satellite Imagery,\" _IEEE Transactions on Geoscience and Remote Sensing_**45**(9), 2940-2952 (2007). * [138] S. Stent, \"Detecting Change for Multi-View, Long-Term Surface Inspection,\" in _Proceedings of the 2015 British Machine Vision Conference (BCMV)_, 1-12 (2015). * [139] J. Zhao, M. Gong, J. Liu, _et al._, \"Deep learning to classify difference image for image change detection,\" in _Neural Networks (IJCNN), 2014 International Joint Conference on_, 411-417, IEEE (2014). 
* A Learning framework for Satellite Imagery,\" in _Proceedings of the 23rd SIGSPATIAL International Conference on Advances in Geographic Information Systems_, 1-22 (2015). * [141] Y. Bazi, N. Alajlan, F. Melgani, _et al._, \"Differential Evolution Extreme Learning Machine for the Classification of Hyperspectral Images,\" _IEEE Geoscience and Remote Sensing Letters_**11**(6), 1066-1070 (2014). * [142] J. Cao, Z. Chen, and B. Wang, \"Deep Convolutional Networks With Superpixel Segmentation for Hyperspectral Image Classification,\" in _2016 IEEE Geoscience and Remote Sensing Symposium (IGARSS)_, 3310-3313 (2016). * [143] J. Cao, Z. Chen, and B. Wang, \"Graph-Based Deep Convolutional Networks With Superpixel Segmentation for Hyperspectral Image Classification,\" in _2016 IEEE Geoscience and Remote Sensing Symposium (IGARSS)_, 3310-3313 (2016). * [144] C. Chen, W. Li, H. Su, _et al._, \"Spectral-spatial classification of hyperspectral image based on kernel extreme learning machine,\" _Remote Sensing_**6**(6), 5795-5814 (2014). * [145] G. Cheng, C. Ma, P. Zhou, _et al._, \"Scene Classification of High Resolution Remote Sensing Images Using Convolutional Neural Networks,\" in _2016 IEEE Geoscience and Remote Sensing Symposium (IGARSS)_, 767-770 (2016). * [146] F. Del Frate, F. Pacifici, G. Schiavon, _et al._, \"Use of neural networks for automatic classification from high-resolution images,\" _IEEE Transactions on Geoscience and Remote Sensing_**45**(4), 800-809 (2007). * [147] Z. Fang, W. Li, and Q. Du, \"Using CNN-based high-level features for remote sensing scene classification,\" in _IEEE Geoscience and Remote Sensing Symposium (IGARSS)_, 2610-2613 (2016). * [148] Q. Fu, X. Yu, X. Wei, _et al._, \"Semi-supervised classification of hyperspectral imagery based on stacked autoencoders,\" in _Eighth International Conference on Digital Image Processing (ICDIP 2016)_, 100332B-100332B, International Society for Optics and Photonics (2016). * [149] J. Geng, J. Fan, H. Wang, _et al._, \"High-Resolution SAR Image Classification via Deep Convolutional Autoencoders,\" _IEEE Geoscience and Remote Sensing Letters_**12**(11), 2351-2355 (2015). * [150] X. Han, Y. Zhong, B. Zhao, _et al._, \"Scene classification based on a hierarchical convolutional sparse auto-encoder for high spatial resolution imagery,\" _International Journal of Remote Sensing_**38**(2), 514-536 (2017). * [151] M. He, X. Li, Y. Zhang, _et al._, \"Hyperspectral image classification based on deep stacking network,\" in _2016 IEEE Geoscience and Remote Sensing Symposium (IGARSS)_, 3286-3289 (2016). * [152] B. Hou, X. Luo, S. Wang, _et al._, \"Polarimetric Sar Images Classification Using Deep Belief Networks with Learning Features,\" in _2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS)_, (2), 2366-2369 (2015). * [153] W. Hu, Y. Huang, L. Wei, _et al._, \"Deep convolutional neural networks for hyperspectral image classification,\" _Journal of Sensors_**2015**, 1-12 (2015). * [154] M. Iftene, Q. Liu, and Y. Wang, \"Very high resolution images classification by fine tuning deep convolutional neural networks,\" in _Eighth International Conference on Digital Image Processing (ICDIP 2016)_, 100332D-100332D, International Society for Optics and Photonics (2016). * [155] P. Jia, M. Zhang, Wenbo Yu, _et al._, \"Convolutional Neural Network Based Classification for Hyperspectral Data,\" in _2016 IEEE Geoscience and Remote Sensing Symposium (IGARSS)_, 5075-5078 (2016). * [156] P. Kontschieder, M. Fiterau, A. 
Criminisi, _et al._, \"Deep neural decision forests,\" in _Proceedings of the IEEE International Conference on Computer Vision_, 1467-1475 (2015). * [157] M. Langkvist, A. Kiselev, M. Alirezaie, _et al._, \"Classification and segmentation of satellite orthoimagery using convolutional neural networks,\" _Remote Sensing_**8**(4), 1-21 (2016). * [158] H. Lee and H. Kwon, \"Contextual Deep CNN Based Hyperspectral Classification,\" in _2016 IEEE Geoscience and Remote Sensing Symposium (IGARSS)_, **1604.03519**, 2-4 (2016). * [159] J. Li, \"Active learning for hyperspectral image classification with a stacked autoencoders based neural network,\" in _2016 IEEE Internation Conference on Image Processing (ICIP)_, 1062-1065 (2016). * [160] J. Li, L. Bruzzone, and S. Liu, \"Deep feature representation for hyperspectral image classification,\" in _2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS)_, 4951-4954 (2015). * [161] T. Li, J. Zhang, and Y. Zhang, \"Classification of hyperspectral image based on deep belief networks,\" in _2014 IEEE International Conference on Image Processing (ICIP)_, 5132-5136 (2014). * [162] W. Li, G. Wu, F. Zhang, _et al._, \"Hyperspectral image classification using deep pixel-pair features,\" _IEEE Transactions on Geoscience and Remote Sensing_**55**(2), 844-853 (2017). * [163] Y. Li, W. Xie, and H. Li, \"Hyperspectral image reconstruction by deep convolutional neural network for classification,\" _Pattern Recognition_**63**(August 2016), 371-383 (2016). * [164] Z. Lin, Y. Chen, X. Zhao, _et al._, \"Spectral-spatial classification of hyperspectral image using autoencoders,\" in _Information, Communications and Signal Processing (ICICS) 2013 9th International Conference on_, 1-5, IEEE (2013). * [165] Z. Lin, Y. Chen, X. Zhao, _et al._, \"Spectral-spatial classification of hyperspectral image using autoencoders,\" in _2013 9th International Conference on Information, Communications & Signal Processing_, (61301206), 1-5 (2013). * [166] Y. Liu, G. Cao, Q. Sun, _et al._, \"Hyperspectral classification via deep networks and superpixel segmentation,\" _International Journal of Remote Sensing_**36**(13), 3459-3482 (2015). * [167] Y. Liu, P. Lasang, M. Siegel, _et al._, \"Hyperspectral Classification via Learnt Features,\" in _International Conference on Image Processing (ICIP)_, 1-5 (2015). * [168] P. Liu, H. Zhang, and K. B. Eom, \"Active Deep Learning for Classification of Hyperspectral Images,\" _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_**10**(2), 712-724 (2016). * [169] X. Ma, H. Wang, and J. Geng, \"Spectral-spatial classification of hyperspectral image based on deep auto-encoder,\" _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_**9**(9), 4073-4085 (2016). * [170] X. Ma, H. Wang, J. Geng, _et al._, \"Hyperspectral image classification with small training set by deep network and relative distance prior,\" in _2016 IEEE Geoscience and Remote Sensing Symposium (IGARSS)_, 3282-3285, IEEE (2016). * [171] X. Mei, Y. Ma, F. Fan, _et al._, \"Infrared ultraspectral signature classification based on a restricted Boltzmann machine with sparse and prior constraints,\" _International Journal of Remote Sensing_**36**(18), 4724-4747 (2015). * [172] S. Mei, J. Ji, Q. Bi, _et al._, \"Integrating spectral and spatial information into deep convolutional Neural Networks for hyperspectral classification,\" in _2016 IEEE Geoscience and Remote Sensing Symposium (IGARSS)_, 5067-5070 (2016). * [173] A. 
Merentitis and C. Debes, \"Automatic Fusion and Classification Using Random Forests and Features Extracted with Deep Learning,\" in _2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS)_, 2943-2946 (2015). * [174] K. Nogueira, O. A. B. Penatti, and J. A. dos Santos, \"Towards better exploiting convolutional neural networks for remote sensing scene classification,\" _Pattern Recognition_**61**, 539-556 (2017). * [175] B. Pan, Z. Shi, and X. Xu, \"R-VCANet: A New Deep-Learning-Based Hyperspectral Image Classification Method,\" _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_**PP**(99), 1-12 (2017). * [176] M. Papadomanolaki, M. Vakalopoulou, S. Zagoruyko, _et al._, \"Benchmarking Deep Learning Frameworks for the Classification of Very High Resolution Satellite Multispectral Data,\" _ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences_, 83-88 (2016). * [177] S. Piramanayagam, W. Schwartzkopf, F. W. Koehler, _et al._, \"Classification of remote sensed images using random forests and deep learning framework,\" in _SPIE Remote Sensing_, 100040L-100040L, International Society for Optics and Photonics (2016). * [178] F. Qin, J. Guo, and W. Sun, \"Object-oriented ensemble classification for polarimetric SAR Imagery using restricted Boltzmann machines,\" _Remote Sensing Letters_**8**(3), 204-213 (2017). * [179] S. Rajan, J. Ghosh, and M. M. Crawford, \"An Active Learning Approach to Hyperspectral Data Classification,\" _IEEE Transactions on Geoscience and Remote Sensing_**46**(4), 1231-1242 (2008). * [180] Z. Wang, N. M. Nasrabadi, and T. S. Huang, \"Semisupervised hyperspectral classification using task-driven dictionary learning with laplacian regularization,\" _IEEE Transactions on Geoscience and Remote Sensing_**53**(3), 1161-1173 (2015). * [181] J. Yang, Y. Zhao, J. Cheung, _et al._, \"Hyperspectral Image Classification Using Two-Channel Deep Convolutional Neural Network,\" in _2016 IEEE Geoscience and Remote Sensing Symposium (IGARSS)_, 5079-5082 (2016). * [182] S. Yu, S. Jia, and C. Xu, \"Convolutional neural networks for hyperspectral image classification,\" _Neurocomputing_**219**, 88-98 (2017). * [183] J. Yue, S. Mao, and M. Li, \"A deep learning framework for hyperspectral image classification using spatial pyramid pooling,\" _Remote Sensing Letters_**7**(9), 875-884 (2016). * [184] J. Yue, W. Zhao, S. Mao, _et al._, \"Spectral-spatial classification of hyperspectral images using deep convolutional neural networks,\" _Remote Sensing Letters_**6**(6), 468-477 (2015). * [185] A. Zeggada and F. Melgani, \"Multilabel classification of UAV images with Convolutional Neural Networks,\" in _2016 IEEE Geoscience and Remote Sensing Symposium (IGARSS)_, 5083-5086, IEEE (2016). * [186] F. Zhang, B. Du, and L. Zhang, \"Scene classification via a gradient boosting random convolutional network framework,\" _IEEE Transactions on Geoscience and Remote Sensing_**54**(3), 1793-1802 (2016). * [187] H. Zhang, Y. Li, Y. Zhang, _et al._, \"Spectral-spatial classification of hyperspectral imagery using a dual-channel convolutional neural network,\" _Remote Sensing Letters_**8**(5), 438-447 (2017). * [188] W. Zhao and S. Du, \"Learning multiscale and deep representations for classifying remotely sensed imagery,\" _ISPRS Journal of Photogrammetry and Remote Sensing_**113**(March), 155-165 (2016). * [189] Y. Zhong, F. Fei, and L. 
Zhang, \"Large patch convolutional neural networks for the scene classification of high spatial resolution imagery,\" _Journal of Applied Remote Sensing_**10**(2), 25006 (2016). * [190] Y. Zhong, F. Fei, Y. Liu, _et al._, \"SatCNN: satellite image dataset classification using agile convolutional neural networks,\" _Remote Sensing Letters_**8**(2), 136-145 (2017). * [191] Y. Chen, C. Li, P. Ghamisi, _et al._, \"Deep Fusion Of Hyperspectral And Lidar Data For Thematic Classification,\" in _2016 IEEE Geoscience and Remote Sensing Symposium (IGARSS)_, 3591-3594 (2016). * [192] L. Ran, Y. Zhang, W. Wei, _et al._, \"Bands Sensitive Convolutional Network for Hyperspectral Image Classification,\" in _Proceedings of the International Conference on Internet Multimedia Computing and Service_, 268-272, ACM (2016). * [193] J. Zabalza, J. Ren, J. Zheng, _et al._, \"Novel segmented stacked autoencoder for effective dimensionality reduction and feature extraction in hyperspectral imaging,\" _Neurocomputing_**185**, 1-10 (2016). * [194] Y. Liu and L. Wu, \"Geological Disaster Recognition on Optical Remote Sensing Images Using Deep Learning,\" _Procedia Computer Science_**91**, 566-575 (2016). * [195] J. Chen, Q. Jin, and J. Chao, \"Design of deep belief networks for short-term prediction of drought index using data in the Huaihe river basin,\" _Mathematical Problems in Engineering_**2012** (2012). * [196] P. Landschutzer, N. Gruber, D. C. E. Bakker, _et al._, \"A neural network-based estimate of the seasonal to inter-annual variability of the Atlantic Ocean carbon sink,\" _Biogeosciences_**10**(11), 7793-7815 (2013). * [197] M. Shi, F. Xie, Y. Zi, _et al._, \"Cloud detection of remote sensing images by deep learning,\" in _2016 IEEE Geoscience and Remote Sensing Symposium (IGARSS)_, 701-704, IEEE (2016). * [198] X. Shi, Z. Chen, H. Wang, _et al._, \"Convolutional LSTM network: A machine learning approach for precipitation nowcasting,\" _Advances in Neural Information Processing Systems 28_, 802-810 (2015). * [199] S. Lee, H. Zhang, and D. J. Crandall, \"Predicting Geo-informative Attributes in Large-scale Image Collections using Convolutional Neural Networks,\" in _2015 IEEE Winter Conference on Applications of Computer Vision (WACV)_, 550-557 (2015). * [200] W. Kehl, F. Milletari, F. Tombari, _et al._, \"Deep Learning of Local RGB-D Patches for 3D Object Detection and 6D Pose Estimation,\" in _European Conference on Computer Vision_, 205-220 (2016). * [201] Y. Kim and T. Moon, \"Human Detection and Activity Classification Based on Micro-Doppler Signatures Using Deep Convolutional Neural Networks,\" _IEEE Geoscience and Remote Sensing Letters_**13**(1), 1-5 (2015). * [202] W. Ouyang and X. Wang, \"A discriminative deep model for pedestrian detection with occlusion handling,\" in _2012 IEEE Conf. on Computer Vision and Pattern Recog. (CVPR)_, 3258-3265, IEEE (2012). * [203] D. Tome, F. Monti, L. Baroffio, _et al._, \"Deep convolutional neural networks for pedestrian detection,\" _Signal Processing: Image Communication_**47**, 482-489 (2016). * [204] Y. Wei, Q. Yuan, H. Shen, _et al._, \"A Universal Remote Sensing Image Quality Improvement Method with Deep Learning,\" in _2016 IEEE Geoscience and Remote Sensing Symposium (IGARSS)_, 6950-6953 (2016). * [205] H. Zhang, P. Casaseca-de-la Higuera, C. Luo, _et al._, \"Systematic infrared image quality improvement using deep learning based techniques,\" in _SPIE Remote Sensing_, 100080P-100080P, International Society for Optics and Photonics (2016). * [206] D. 
Quan, S. Wang, M. Ning, _et al._, \"Using Deep Neural Networks for Synthetic Aperture Radar Image Registration,\" in _2016 IEEE Geoscience and Remote Sensing Symposium (IGARSS)_, 2799-2802 (2016). * [207] P. Ghamisi, Y. Chen, and X. X. Zhu, \"A Self-Improving Convolution Neural Network for the Classification of Hyperspectral Data,\" _IEEE Geoscience and Remote Sensing Letters_**13**(10), 1-5 (2016). * [208] N. Kussul, A. Shelestov, M. Lavreniuk, _et al._, \"Deep learning approach for large scale land cover mapping based on remote sensing data fusion,\" in _2016 IEEE Geoscience and Remote Sensing Symposium (IGARSS)_, 198-201, IEEE (2016). * [209] W. Li, H. Fu, L. Yu, _et al._, \"Stacked Autoencoder-based deep learning for remote-sensing image classification: a case study of African land-cover mapping,\" _International Journal of Remote Sensing_**37**(23), 5632-5646 (2016). * [210] H. Liu, Q. Min, C. Sun, _et al._, \"Terrain Classification With Polarimetric Sar Based on Deep Sparse Filtering Network,\" in _2016 IEEE Geoscience and Remote Sensing Symposium (IGARSS)_, 64-67 (2016). * [211] K. Makantasis, K. Karantzalos, A. Doulamis, _et al._, \"Deep Supervised Learning for Hyperspectral Data Classification through Convolutional Neural Networks,\" _2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS)_, 4959-4962 (2015). * [212] M. Castelluccio, G. Poggi, C. Sansone, _et al._, \"Land Use Classification in Remote Sensing Images by Convolutional Neural Networks,\" _arXiv preprint arXiv:1508.00092_, 1-11 (2015). * [213] G. Cheng, J. Han, L. Guo, _et al._, \"Effective and Efficient Midlevel Visual Elements-Oriented Land-Use Classification using VHR Remote Sensing Images,\" _IEEE Transactions on Geoscience and Remote Sensing_**53**(8), 4238-4249 (2015). * [214] F. P. S. Luus, B. P. Salmon, F. Van Den Bergh, _et al._, \"Multiview Deep Learning for Land-Use Classification,\" _IEEE Geoscience and Remote Sensing Letters_**12**(12), 2448-2452 (2015). * [215] Q. Lv, Y. Dou, X. Niu, _et al._, \"Urban land use and land cover classification using remotely sensed SAR data through deep belief networks,\" _Journal of Sensors_**2015** (2015). * [216] X. Ma, H. Wang, and J. Wang, \"Semisupervised classification for hyperspectral image based on multi-decision labeling and deep feature learning,\" _ISPRS Journal of Photogrammetry and Remote Sensing_**120**, 99-107 (2016). * ICONIAAC '14_, 1-7 (2014). * [218] E. Othman, Y. Bazi, N. Alajlan, _et al._, \"Using convolutional features and a sparse autoencoder for land-use scene classification,\" _International Journal of Remote Sensing_**37**(10), 1977-1995 (2016). * [219] A. B. Penatti, K. Nogueira, and J. A. Santos, \"Do Deep Features Generalize from Everyday Objects to Remote Sensing and Aerial Scenes Domains?,\" in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops_, 44-51 (2015). * [220] A. Romero, C. Gatta, and G. Camps-Valls, \"Unsupervised deep feature extraction for remote sensing image classification,\" _IEEE Transactions on Geoscience and Remote Sensing_**54**(3), 1349-1362 (2016). * [221] Y. Sun, J. Li, W. Wang, _et al._, \"Active learning based autoencoder for hyperspectral imagery classification,\" in _2016 IEEE Geoscience and Remote Sensing Symposium (IGARSS)_, 469-472 (2016). * [222] N. K. Uba, _Land Use and Land Cover Classification Using Deep Learning Techniques_. PhD thesis, Arizona State University (2016). * [223] L. 
Alexandre, \"3D Object Recognition using Convolutional Neural Networks with Transfer Learning between Input Channels,\" in _13th International Conference on Intelligent Autonomous Systems_, 889-898, Springer International (2016). * 2nd IAPR Asian Conference on Pattern Recognition, ACPR 2013_, 54-58 (2013). * [225] X. Chen and Y. Zhu, \"3D Object Proposals for Accurate Object Class Detection,\" _Advances in Neural Information Processing Systems_, 424-432 (2015). * [226] G. Cheng, P. Zhou, and J. Han, \"Learning Rotation-Invariant Convolutional Neural Networks for Object Detection in VHR Optical Remote Sensing Images,\" _IEEE Transactions on Geoscience and Remote Sensing_**54**(12), 7405-7415 (2016). * [227] M. Dahmane, S. Foucher, M. Beaulieu, _et al._, \"Object Detection in Pleiades Images using Deep Features,\" in _2016 IEEE Geoscience and Remote Sensing Symposium (IGARSS)_, 1552-1555 (2016). * [228] W. Diao, X. Sun, F. Dou, _et al._, \"Object recognition in remote sensing images using sparse deep belief networks,\" _Remote Sensing Letters_**6**(10), 745-754 (2015). * [229] G. Georgakis, M. A. Reza, A. Mousavian, _et al._, \"Multiview RGB-D Dataset for Object Instance Detection,\" in _2016 IEEE Fourth International Conference on 3D Vision (3DV)_, 426-434 (2016). * [230] D. Maturana and S. Scherer, \"3D Convolutional Neural Networks for Landing Zone Detection from LiDAR,\" _International Conference on Robotics and Automation_ (Figure 1), 3471-3478 (2015). * [231] C. Wang and K. Siddiqi, \"Differential geometry boosts convolutional neural networks for object detection,\" in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops_, 51-58 (2016). * [232] Q. Wu, W. Diao, F. Dou, _et al._, \"Shape-based object extraction in high-resolution remote-sensing images using deep Boltzmann machine,\" _International Journal of Remote Sensing_**37**(24), 6012-6022 (2016). * [233] B. Zhou, A. Khosla, A. Lapedriza, _et al._, \"Object Detectors Emerge in Deep Scene CNNs.\" [http://hdl.handle.net/1721.1/96942](http://hdl.handle.net/1721.1/96942) (2015). * [234] P. Ondruska and I. Posner, \"Deep Tracking: Seeing Beyond Seeing Using Recurrent Neural Networks,\" in _Proceedings of the 30th Conference on Artificial Intelligence (AAAI 2016)_, 3361-3367 (2016). * [235] G. Masi, D. Cozzolino, L. Verdoliva, _et al._, \"Pansharpening by Convolutional Neural Networks,\" _Remote Sensing_**8**(7), 594 (2016). * [236] W. Huang, L. Xiao, Z. Wei, _et al._, \"A new pan-sharpening method with deep neural networks,\" _IEEE Geoscience and Remote Sensing Letters_**12**(5), 1037-1041 (2015). * [237] L. Palafox, A. Alvarez, and C. Hamilton, \"Automated Detection of Impact Craters and Volcanic Rootless Cones in Mars Satellite Imagery Using Convolutional Neural Networks and Support Vector Machines,\" in _46th Lunar and Planetary Science Conference_, 1-2 (2015). * [238] M. M. Ghazi, B. Yanikoglu, and E. Aptoula, \"Plant Identification Using Deep Neural Networks via Optimization of Transfer Learning Parameters,\" _Neurocomputing_ (2017). * [239] H. Guan, Y. Yu, Z. Ji, _et al._, \"Deep learning-based tree classification using mobile LiDAR data,\" _Remote Sensing Letters_**6**(11), 864-873 (2015). * [240] P. K. Goel, S. O. Prasher, R. M. Patel, _et al._, \"Classification of hyperspectral data by decision trees and artificial neural networks to identify weed stress and nitrogen status of corn,\" _Computers and Electronics in Agriculture_**39**(2), 67-93 (2003). * [241] K. Kuwata and R. 
Shibasaki, \"Estimating crop yields with deep learning and remotely sensed data,\" in _2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS)_, 858-861 (2015). * [242] J. Rebetez, H. F. Satizabal, M. Mota, _et al._, \"Augmenting a convolutional neural network with local histograms-A case study in crop classification from high-resolution UAV imagery,\" in _European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning_, 515-520 (2016). * [243] S. Sladojevic, M. Arsenovic, A. Anderla, _et al._, \"Deep Neural Networks Based Recognition of Plant Diseases by Leaf Image Classification,\" _Computational Intelligence and Neuroscience_**2016**, 1-11 (2016). * [244] D. Levi, N. Garnett, and E. Fetaya, \"StixelNet: a deep convolutional network for obstacle detection and road segmentation,\" in _BMCV_, (2015). * [245] P. Li, Y. Zang, C. Wang, _et al._, \"Road network extraction via deep learning and line integral convolution,\" in _Geoscience and Remote Sensing Symposium (IGARSS), 2016 IEEE International_, 1599-1602, IEEE (2016). * [246] V. Mnih and G. Hinton, \"Learning to Label Aerial Images from Noisy Data,\" in _Proceedings of the 29th International Conference on Machine Learning (ICML-12)_, 567-574 (2012). * [247] J. Wang, J. Song, M. Chen, _et al._, \"Road network extraction: a neural-dynamic framework based on deep learning and a finite state machine,\" _International Journal of Remote Sensing_**36**(12), 3144-3169 (2015). * International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_**XL-5**(June), 629-632 (2014). * [249] Y. Yu, H. Guan, and Z. Ji, \"Automated Extraction of Urban Road Manhole Covers Using Mobile Laser Scanning Data,\" _IEEE Transactions on Intelligent Transportation Systems_**16**(4), 1-12 (2015). * [250] Z. Zhong, J. Li, W. Cui, _et al._, \"Fully convolutional networks for building and road extraction: preliminary results,\" in _2016 IEEE Geoscience and Remote Sensing Symposium (IGARSS)_, 1591-1594 (2016). * [251] R. Hadsell, P. Sermanet, J. Ben, _et al._, \"Learning long-range vision for autonomous off-road driving,\" _Journal of Field Robotics_**26**(2), 120-144 (2009). * [252] L. Mou and X. X. Zhu, \"Spatiotemporal scene interpretation of space videos via deep neural network and tracklet analysis,\" in _2016 IEEE Geoscience and Remote Sensing Symposium (IGARSS)_, 1823-1826, IEEE (2016). * [253] Y. Yuan, L. Mou, and X. Lu, \"Scene Recognition by Manifold Regularized Deep Learning Architecture,\" _IEEE Transactions on Neural Networks and Learning Systems_**26**(10), 2222-2233 (2015). * [254] C. Couprie, C. Farabet, L. Najman, _et al._, \"Indoor Semantic Segmentation using depth information,\" _arXiv preprint arXiv:1301.3572_, 1-8 (2013). * [255] Y. Gong, Y. Jia, T. Leung, _et al._, \"Deep Convolutional Ranking for Multilabel Image Annotation,\" _arXiv preprint arXiv:1312.4894_, 1-9 (2013). * [256] M. Kampffmeyer, A.-B. Salberg, and R. Jenssen, \"Semantic segmentation of small objects and modeling of uncertainty in urban remote sensing images using deep convolutional neural networks,\" in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops_, 1-9 (2016). * [257] P. Kaiser, \"Learning City Structures from Online Maps,\" Master's thesis, ETH Zurich (2016). * [258] A. Lagrange, B. L. Saux, A. 
Beaup, _et al._, \"Benchmarking classification of earth-observation data: from learning explicit features to convolutional networks,\" in _2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS)_, 4173-4176 (2015). * [259] D. Marmanis, M. Datcu, T. Esch, _et al._, \"Deep learning earth observation classification using ImageNet pretrained networks,\" _IEEE Geoscience and Remote Sensing Letters_**13**(1), 105-109 (2016). * [260] D. Marmanis, J. D. Wegner, S. Galliani, _et al._, \"Semantic Segmentation of Aerial Images with an Ensemble of CNNs,\" in _ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2016_, **3**, 473-480 (2016). * [261] S. Paisitkriangkrai, J. Sherrah, P. Janney, _et al._, \"Effective semantic pixel labelling with convolutional networks and Conditional Random Fields,\" in _IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops_, **2015-Oct**, 36-43 (2015). * [262] B. Qu, X. Li, D. Tao, _et al._, \"Deep Semantic Understanding of High Resolution Remote Sensing Image,\" in _International Conference on Computer, Information and Telecommunication Systems (CITS)_, 1-5 (2016). * [263] J. Sherrah, \"Fully convolutional networks for dense semantic labelling of high-resolution aerial imagery,\" _arXiv preprint arXiv:1606.02585_ (2016). * [264] C. Vaduva, I. Gavat, and M. Datcu, \"Deep learning in very high resolution remote sensing image information mining communication concept,\" in _Signal Processing Conference (EUSIPCO), 2012 Proceedings of the 20th European_, 2506-2510, IEEE (2012). * [265] M. Volpi and D. Tuia, \"Dense semantic labeling of sub-decimeter resolution images with convolutional neural networks,\" _IEEE Transactions on Geoscience and Remote Sensing_, 1-13 (2016). * [266] D. Zhang, J. Han, C. Li, _et al._, \"Co-saliency detection via looking deep and wide,\" _Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition **07-12-June**_, 2994-3002 (2015). * [267] F. I. Alam, J. Zhou, A. W.-C. Liew, _et al._, \"CRF learning with CNN features for hyperspectral image segmentation,\" in _2016 IEEE Geoscience and Remote Sensing Symposium (IGARSS)_, 6890-6893 (2016). * [268] N. Audebert, B. L. Saux, and S. Lefevre, \"How Useful is Region-based Classification of Remote Sensing Images in a Deep Learning Framework?,\" in _2016 IEEE Geoscience and Remote Sensing Symposium (IGARSS)_, 5091-5094 (2016). * [269] E. Basaeed, H. Bhaskar, P. Hill, _et al._, \"A supervised hierarchical segmentation of remote-sensing images using a committee of multi-scale convolutional neural networks,\" _International Journal of Remote Sensing_**37**(7), 1671-1691 (2016). * [270] E. Basaeed, H. Bhaskar, and M. Al-Mualla, \"Supervised remote sensing image segmentation using boosted convolutional neural networks,\" _Knowledge-Based Systems_**99**, 19-27 (2016). * [271] S. Pal, S. Chowdhury, and S. K. Ghosh, \"DCAP: A deep convolution architecture for prediction of urban growth,\" in _Geoscience and Remote Sensing Symposium (IGARSS), 2016 IEEE International_, 1812-1815, IEEE (2016). * [272] J. Wang, Q. Qin, Z. Li, _et al._, \"Deep hierarchical representation and segmentation of high resolution remote sensing images,\" in _2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS)_, 4320-4323 (2015). * [273] C. P. Schwegmann, W. Kleynhans, B. P. 
Salmon, _et al._, \"Very deep learning for ship discrimination in Synthetic Aperture Radar imagery,\" in _2016 IEEE Geoscience and Remote Sensing Symposium (IGARSS)_, 104-107, IEEE (2016). * [274] J. Tang, C. Deng, G.-b. Huang, _et al._, \"Compressed-Domain Ship Detection on Spaceborne Optical Image Using Deep Neural Network and Extreme Learning Machine,\" _IEEE Transactions on Geoscience and Remote Sensing_**53**(3), 1174-1185 (2015). * [275] R. Zhang, J. Yaoa, K. Zhanga, _et al._, \"S-CNN-Based Ship Detection From High-Resolution Remote Sensing Images,\" in _The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLI-B7_, 423-430 (2016). * [276] Z. Cui, H. Chang, S. Shan, _et al._, \"Deep network cascade for image super-resolution,\" _Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)_**8693 LNCS**(PART 5), 49-64 (2014). * [277] C. Dong, C. C. Loy, K. He, _et al._, \"Image super-resolution using deep convolutional networks,\" _IEEE transactions on pattern analysis and machine intelligence_**38**(2), 295-307 (2016). * [278] A. Ducournau and R. Fablet, \"Deep Learning for Ocean Remote Sensing : An Application of Convolutional Neural Networks for Super-Resolution on Satellite-Derived SST Data,\" in _9th Workshop on Pattern Recognition in Remote Sensing_, (October) (2016). * [279] L. Liebel and M. Korner, \"Single-Image Super Resolution for Multispectral Remote Sensing Data Using Convolutional Neural Networks,\" in _XXIII ISPRS Congress proceedings_, **XXIII**(July), 883-890 (2016). * [280] W. Huang, G. Song, H. Hong, _et al._, \"Deep architecture for traffic flow prediction: Deep belief networks with multitask learning,\" _IEEE Transactions on Intelligent Transportation Systems_**15**(5), 2191-2201 (2014). * [281] Y. Lv, Y. Duan, W. Kang, _et al._, \"Traffic Flow Prediction With Big Data: A Deep Learning Approach,\" _IEEE Transactions on Intelligent Transportation Systems_**16**(2), 865-873 (2015). * [282] M. Elawady, _Sparse Coral Classification Using Deep Convolutional Neural Networks_. PhD thesis, University of Burgundy, University of Girona, Heriot-Watt University (2014). * [283] H. Qin, X. Li, J. Liang, _et al._, \"DeepFish: Accurate underwater live fish recognition with a deep architecture,\" _Neurocomputing_**187**, 49-58 (2016). * [284] H. Qin, X. Li, Z. Yang, _et al._, \"When Underwater Imagery Analysis Meets Deep Learning : a Solution at the Age of Big Visual Data,\" in _2011 7th International Symposium on Image and Signal Processing and Analysis (ISPA)_, 259-264 (2015). * [285] D. P. Williams, \"Underwater Target Classification in Synthetic Aperture Sonar Imagery Using Deep Convolutional Neural Networks,\" in _Proc. 23rd International Conference on Pattern Recognition (ICPR)_, (2016). * [286] F. Alidoost and H. Arefi, \"Knowledge Based 3D Building Model Recognition Using Convolutional Neural Networks From Lidar and Aerial Imageries,\" _ISPRS-International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_**XLI-B3**(July), 833-840 (2016). * [287] C.-A. Brust, S. Sickert, M. Simon, _et al._, \"Efficient Convolutional Patch Networks for Scene Understanding,\" in _CVPR Workshop_, 1-9 (2015). * [288] S. De and A. Bhattacharya, \"Urban classification using PolSAR data and deep learning,\" in _2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS)_, 353-356, IEEE (2015). * [289] Z. Huang, G. Cheng, H. 
Wang, _et al._, \"Building extraction from multi-source remote sensing images via deep deconvolution neural networks,\" in _Geoscience and Remote Sensing Symposium (IGARSS), 2016 IEEE International_, 1835-1838, IEEE (2016). * [290] D. Marmanis, F. Adam, M. Datcu, _et al._, \"Deep Neural Networks for Above-Ground Detection in Very High Spatial Resolution Digital Elevation Models,\" _ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences_**II-3/W4**(March), 103-110 (2015). * [291] S. Saito and Y. Aoki, \"Building and road detection from large aerial imagery,\" in _SPIE/IS&T Electronic Imaging_, 94050K-94050K, International Society for Optics and Photonics (2015). * [292] M. Vakalopoulou, K. Karantzalos, N. Komodakis, _et al._, \"Building detection in very high resolution multispectral data with deep learning features,\" _2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS)_, 1873-1876 (2015). * [293] M. Xie, N. Jean, M. Burke, _et al._, \"Transfer learning from deep features for remote sensing and poverty mapping,\" _arXiv preprint arXiv:1510.00098_, 16 (2015). * ISPRS Archives_**41**(July), 405-410 (2016). * [295] Q. Zhang, Y. Wang, Q. Liu, _et al._, \"CNN based suburban building detection using monocular high resolution Google Earth images,\" in _Geoscience and Remote Sensing Symposium (IGARSS), 2016 IEEE International_, 661-664, IEEE (2016). * [296] Z. Zhang, Y. Wang, Q. Liu, _et al._, \"A CNN based functional zone classification method for aerial images,\" in _Geoscience and Remote Sensing Symposium (IGARSS), 2016 IEEE International_, 5449-5452, IEEE (2016). * [297] L. Cao, Q. Jiang, M. Cheng, _et al._, \"Robust vehicle detection by combining deep features with exemplar classification,\" _Neurocomputing_**215**, 225-231 (2016). * [298] X. Chen, S. Xiang, C.-L. Liu, _et al._, \"Vehicle detection in satellite images by hybrid deep convolutional neural networks,\" _IEEE Geoscience and remote sensing letters_**11**(10), 1797-1801 (2014). * [299] K. Goyal and D. Kaur, \"A Novel Vehicle Classification Model for Urban Traffic Surveillance Using the Deep Neural Network Model,\" _International Journal of Education and Management Engineering_**6**(1), 18-31 (2016). * [300] A. Hu, H. Li, F. Zhang, _et al._, \"Deep Boltzmann Machines based vehicle recognition,\" in _The 26th Chinese Control and Decision Conference (2014 CCDC)_, 3033-3038 (2014). * [301] J. Huang and S. You, \"Vehicle detection in urban point clouds with orthogonal-view convolutional neural network,\" in _2016 IEEE International Conference on Image Processing (ICIP)_, (2), 2593-2597 (2016). * [302] B. Huval, T. Wang, S. Tandon, _et al._, \"An Empirical Evaluation of Deep Learning on Highway Driving,\" _arXiv preprint arXiv:1504.01716_, 1-7 (2015). * [303] Q. Jiang, L. Cao, M. Cheng, _et al._, \"Deep neural networks-based vehicle detection in satellite images,\" in _Bioelectronics and Bioinformatics (ISBB), 2015 International Symposium on_, 184-187, IEEE (2015). * [304] G. V. Konoplich, E. O. Putin, and A. A. Filchenkov, \"Application of deep learning to the problem of vehicle detection in UAV images,\" in _Soft Computing and Measurements (SCM), 2016 XIX IEEE International Conference on_, 4-6, IEEE (2016). * [305] A. Krishnan and J. Larsson, _Vehicle Detection and Road Scene Segmentation using Deep Learning_. PhD thesis, Chalmers University of Technology (2016). * [306] S. Lange, F. Ulbrich, and D. 
Goehring, \"Online vehicle detection using deep neural networks and lidar based preselected image patches,\" _IEEE Intelligent Vehicles Symposium, Proceedings_**2016-Augus**(Iv), 954-959 (2016). * [307] B. Li, \"3D Fully Convolutional Network for Vehicle Detection in Point Cloud,\" _arXiv preprint arXiv:1611.08069_ (2016). * [308] H. Wang, Y. Cai, and L. Chen, \"A vehicle detection algorithm based on deep belief network,\" _The scientific world journal_**2014** (2014). * [309] J.-G. Wang, L. Zhou, Y. Pan, _et al._, \"Appearance-based Brake-Lights recognition using deep learning and vehicle detection,\" in _Intelligent Vehicles Symposium (IV), 2016 IEEE_, (Iv), 815-820, IEEE (2016). * [310] H. Wang, Y. Cai, X. Chen, _et al._, \"Night-Time Vehicle Sensing in Far Infrared Image with Deep Learning,\" _Journal of Sensors_**2016** (2015). * [311] R. J. Firth, _A Novel Recurrent Convolutional Neural Network for Ocean and Weather Forecasting_. PhD thesis, Louisiana State University (2016). * [312] R. Kovordanyi and C. Roy, \"Cyclone track forecasting based on satellite images using artificial neural networks,\" _ISPRS Journal of Photogrammetry and Remote Sensing_**64**(6), 513-521 (2009). * [313] C. Yang and J. Guo, \"Improved cloud phase retrieval approaches for China's FY-3A/VIRR multi-channel data using Artificial Neural Networks,\" _Optik-International Journal for Light and Electron Optics_**127**(4), 1797-1803 (2016). * [314] Y. Chen, X. Zhao, and X. Jia, \"Spectral Spatial Classification of Hyperspectral Data Based on Deep Belief Network,\" _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_**8**(6), 2381-2392 (2015). * [315] J. M. P. Nascimento and J. M. B. Dias, \"Vertex component analysis: A fast algorithm to unmix hyperspectral data,\" _IEEE transactions on Geoscience and Remote Sensing_**43**(4), 898-910 (2005). * [316] T. H. Chan, K. Jia, S. Gao, _et al._, \"PCANet: A Simple Deep Learning Baseline for Image Classification?,\" _IEEE Transactions on Image Processing_**24**(12), 5017-5032 (2015). * [317] Y. Chen, H. Jiang, C. Li, _et al._, \"Deep Feature Extraction and Classification of Hyperspectral Images Based on Convolutional Neural Networks,\" _IEEE Transactions on Geoscience and Remote Sensing_**54**(10), 6232-6251 (2016). * [318] J. Donahue, Y. Jia, O. Vinyals, _et al._, \"DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition,\" _Icml_**32**, 647-655 (2014). * [319] A. Zelener and I. Stamos, \"CNN-based Object Segmentation in Urban LIDAR With Missing Points,\" in _3D Vision (3DV), 2016 Fourth International Conference on_, 417-425, IEEE (2016). * [320] J. Long, E. Shelhamer, and T. Darrell, \"Fully convolutional networks for semantic segmentation,\" in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, 3431-3440 (2015). * [321] J. E. Ball, D. T. Anderson, and S. Samiappan, \"Hyperspectral band selection based on the aggregation of proximity measures for automated target detection,\" in _SPIE Defense+ Security_, 908814-908814, International Society for Optics and Photonics (2014). * [322] J. E. Ball and L. M. Bruce, \"Level Set Hyperspectral Image Classification Using Best Band Analysis,\" _IEEE Transactions on Geoscience and Remote Sensing_**45**(10), 3022-3027 (2007). DOI:10.1109/TGRS.2007.905629. * [323] J. E. Ball, L. M. Bruce, T. 
West, _et al._, \"Level set hyperspectral image segmentation using spectral information divergence-based best band selection,\" in _Geoscience and Remote Sensing Symposium, 2007. IGARSS 2007. IEEE International_, 4053-4056, IEEE (2007). DOI:10.1109/IGARSS.2007.4423739. * [324] D. T. Anderson and A. Zare, \"Spectral unmixing cluster validity index for multiple sets of endmembers,\" _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_**5**(4), 1282-1295 (2012). * [325] M. E. Winter, \"Comparison of approaches for determining end-members in hyperspectral data,\" in _Aerospace Conference Proceedings, 2000 IEEE_, **3**, 305-313, IEEE (2000). * [326] A. S. Charles, B. A. Olshausen, and C. J. Rozell, \"Learning sparse codes for hyperspectral imagery,\" _IEEE Journal of Selected Topics in Signal Processing_**5**(5), 963-978 (2011). * [327] A. Romero, C. Gatta, and G. Camps-Valls, \"Unsupervised deep feature extraction of hyperspectral images,\" in _Proc. 6th Workshop Hyperspectral Image Signal Process. Evol. Remote Sens.(WHISPERS)_, (2014). * [328] J. E. Ball, L. M. Bruce, and N. H. Younan, \"Hyperspectral pixel unmixing via spectral band selection and DC-insensitive singular value decomposition,\" _IEEE Geoscience and Remote Sensing Letters_**4**(3), 382-386 (2007). * [329] P. M. Atkinson and A. Tatnall, \"Introduction neural networks in remote sensing,\" _International Journal of remote sensing_**18**(4), 699-709 (1997). DOI:10.1080/014311697218700. * [330] G. Cavallaro, M. Riedel, M. Richerzhagen, _et al._, \"On understanding big data impacts in remotely sensed image classification using support vector machine methods,\" _IEEE journal of selected topics in applied earth observations and remote sensing_**8**(10), 4634-4646 (2015). DOI:10.1109/JSTARS.2015.2458855. * a review,\" _Pattern Recognition_**35**, 2279-2301 (2002). DOI:10.1016/S0031-3203(01)00178-9. * [332] G. Huang, G.-B. Huang, S. Song, _et al._, \"Trends in extreme learning machines: A review,\" _Neural Networks_**61**, 32-48 (2015). DOI:10.1016/j.neunet.2014.10.001. * [333] X. Jia, B.-C. Kuo, and M. M. Crawford, \"Feature mining for hyperspectral image classification,\" _Proceedings of the IEEE_**101**(3), 676-697 (2013). DOI:10.1109/JPROC.2012.2229082. * [334] D. Lu and Q. Weng, \"A survey of image classification methods and techniques for improving classification performance,\" _International journal of Remote sensing_**28**(5), 823-870 (2007). DOI:10.1080/01431160600746456. * [335] H. Petersson, D. Gustafsson, and D. Bergstrom, \"Hyperspectral image analysis using deep learning-A review,\" in _Image Processing Theory Tools and Applications (IPTA), 2016 6th International Conference on_, 1-6, IEEE (2016). DOI:10.1109/ipta.2016.7820963. * [336] A. Plaza, J. A. Benediktsson, J. W. Boardman, _et al._, \"Recent advances in techniques for hyperspectral image processing,\" _Remote sensing of environment_**113**, S110-S122 (2009). DOI:10.1016/j.rse.2007.07.028. * [337] W. Wang, N. Yang, Y. Zhang, _et al._, \"A review of road extraction from remote sensing images,\" _Journal of Traffic and Transportation Engineering (English Edition)_**3**(3), 271-282 (2016). * [338] \"2013 IEEE GRSS Data Fusion Contest.\" [http://www.grss-ieee.org/community/technical-committees/data-fusion/](http://www.grss-ieee.org/community/technical-committees/data-fusion/). 
* [339] \"2015 IEEE GRSS Data Fusion Contest.\" [http://www.grss-ieee.org/community/technical-committees/data-fusion/](http://www.grss-ieee.org/community/technical-committees/data-fusion/). * [340] \"2016 IEEE GRSS Data Fusion Contest.\" [http://www.grss-ieee.org/community/technical-committees/data-fusion/](http://www.grss-ieee.org/community/technical-committees/data-fusion/). * [341] \"Indian Pines Dataset.\" [http://dynamo.ecn.purdue.edu/biehl/MultiSpec](http://dynamo.ecn.purdue.edu/biehl/MultiSpec). * [342] \"Kennedy Space Center.\" [http://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes#Kennedy_Space_Center_28KSC.29](http://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes#Kennedy_Space_Center_28KSC.29). * [343] \"Pavia Dataset.\" [http://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes#Pavia_Centre_and_University](http://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes#Pavia_Centre_and_University). * [344] \"Salinas Dataset.\" [http://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes#Salinas](http://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes#Salinas). * [345] \"Washington DC Mall.\" [https://engineering.purdue.edu/](https://engineering.purdue.edu/) biehl/MultiSpec/hyperspectral.html. * [346] G. Cheng and J. Han, \"A survey on object detection in optical remote sensing images,\" _ISPRS Journal of Photogrammetry and Remote Sensing_**117**, 11-28 (2016). DOI:10.1016/j.isprsjprs.2016.03.014. * [347] Y. Chen, Z. Lin, X. Zhao, _et al._, \"Deep Learning-Based Classification of Hyperspectral Data,\" _Ieee Journal of Selected Topics in Applied Earth Observations and Remote Sensing_**7**(6), 2094-2107 (2014). * [348] V. Slavkovikj, S. Verstockt, W. De Neve, _et al._, \"Hyperspectral image classification with convolutional neural networks,\" in _Proceedings of the 23rd ACM international conference on Multimedia_, 1159-1162, ACM (2015). * [349] C. Tao, H. Pan, Y. Li, _et al._, \"Unsupervised Spectral-Spatial Feature Learning With Stacked Sparse Autoencoder for Hyperspectral Imagery Classification,\" _IEEE Geoscience and Remote Sensing Letters_**12**(12), 2438-2442 (2015). * [350] Y. LeCun, \"Learning invariant feature hierarchies,\" in _Computer vision-ECCV 2012. Workshops and demonstrations_, 496-505, Springer (2012). * [351] M. Pal, \"Kernel methods in remote sensing: a review,\" _ISH Journal of Hydraulic Engineering_**15**(sup1), 194-215 (2009). * [352] E. M. Abdel-Rahman and F. B. Ahmed, \"The application of remote sensing techniques to sugarcane (Saccharum spp. hybrid) production: a review of the literature,\" _International Journal of Remote Sensing_**29**(13), 3753-3767 (2008). * [353] I. Ali, F. Greifeneder, J. Stamenkovic, _et al._, \"Review of machine learning approaches for biomass and soil moisture retrievals from remote sensing data,\" _Remote Sensing_**7**(12), 16398-16421 (2015). * [354] E. Adam, O. Mutanga, and D. Rugege, \"Multispectral and hyperspectral remote sensing for identification and mapping of wetland vegetation: A review,\" _Wetlands Ecology and Management_**18**(3), 281-296 (2010). * [355] S. L. Ozesmi and M. E. Bauer, \"Satellite remote sensing of wetlands,\" _Wetlands ecology and management_**10**(5), 381-402 (2002). DOI:10.1023/A:1020908432489. * [356] W. A. Dorigo, R. Zurita-Milla, A. J. W. 
de Wit, _et al._, \"A review on reflective remote sensing and data assimilation techniques for enhanced agroecosystem modeling,\" _International journal of applied earth observation and geoinformation_**9**(2), 165-193 (2007). DOI:10.1016/j.jag.2006.05.003. * [357] C. Kuenzer, M. Ottinger, M. Wegmann, _et al._, \"Earth observation satellite sensors for biodiversity monitoring: potentials and bottlenecks,\" _International Journal of Remote Sensing_**35**(18), 6599-6647 (2014). DOI:10.1080/01431161.2014.964349. * [358] K. Wang, S. E. Franklin, X. Guo, _et al._, \"Remote sensing of ecology, biodiversity and conservation: a review from the perspective of remote sensing specialists,\" _Sensors_**10**(11), 9647-9667 (2010). DOI:10.3390/s101109647. * [359] F. E. Fassnacht, H. Latifi, K. Sterenczak, _et al._, \"Review of studies on tree species classification from remotely sensed data,\" _Remote Sensing of Environment_**186**, 64-87 (2016). DOI:10.1016/j.rse.2016.08.013. * [360] I. Ali, F. Greifeneder, J. Stamenkovic, _et al._, \"Review of machine learning approaches for biomass and soil moisture retrievals from remote sensing data,\" _Remote Sensing_**7**(12), 16398-16421 (2015). DOI:10.3390/rs71215841. * [361] A. Mahendran and A. Vedaldi, \"Understanding deep image representations by inverting them,\" in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, 5188-5196 (2015). * [362] J. Yosinski, J. Clune, A. Nguyen, _et al._, \"Understanding neural networks through deep visualization,\" _arXiv preprint arXiv:1506.06579_ (2015). * [363] K. Simonyan, A. Vedaldi, and A. Zisserman, \"Deep inside convolutional networks: Visualising image classification models and saliency maps,\" _arXiv preprint arXiv:1312.6034_ (2013). * [364] D. Erhan, Y. Bengio, A. Courville, _et al._, \"Visualizing higher-layer features of a deep network,\" _University of Montreal_**1341**, 3 (2009). * [365] J. A. Benediktsson, J. Chanussot, and W. W. Moon, \"Very High-Resolution Remote Sensing : Challenges and Opportunities,\" _Proceedings of the IEEE_**100**(6), 1907-1910 (2012). DOI:10.1109/JPROC.2012.2190811. * [366] J. Fohringer, D. Dransch, H. Kreibich, _et al._, \"Social media as an information source for rapid flood inundation mapping,\" _Natural Hazards and Earth System Sciences_**15**(12), 2725-2738 (2015). * [367] V. Frias-Martinez and E. Frias-Martinez, \"Spectral clustering for sensing urban land use using Twitter activity,\" _Engineering Applications of Artificial Intelligence_**35**, 237-245 (2014). * [368] T. Kohonen, \"The self-organizing map,\" _Neurocomputing_**21**(1), 1-6 (1998). * [369] S. E. Middleton, L. Middleton, and S. Modafferi, \"Real-time crisis mapping of natural disasters using social media,\" _IEEE Intelligent Systems_**29**(2), 9-17 (2014). * [370] V. K. Singh, \"Social Pixels : Genesis and Evaluation,\" in _Proceedings of the 18th ACM international conference on Multimedia (ACMM)_, 481-490 (2010). * [371] D. Sui and M. Goodchild, \"The convergence of GIS and social media: challenges for GIScience,\" _International Journal of Geographical Information Science_**25**(11), 1737-1748 (2011). * [372] A. J. Pinar, J. Rice, L. Hu, _et al._, \"Efficient multiple kernel classification using feature and decision level fusion,\" _IEEE Transactions on Fuzzy Systems_**PP**(99), 1-1 (2016). * [373] D. T. Anderson, T. C. Havens, C. 
Wagner, _et al._, \"Extension of the fuzzy integral for general fuzzy set-valued information,\" _IEEE Transactions on Fuzzy Systems_**22**, 1625-1639 (2014). * [374] P. Ghamisi, B. Hfle, and X. X. Zhu, \"Hyperspectral and lidar data fusion using extinction profiles and deep convolutional neural network,\" _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_**PP**(99), 1-14 (2016). * [375] J. Ngiam, A. Khosla, M. Kim, _et al._, \"Multimodal deep learning,\" in _ICML_, L. Getoor and T. Scheffer, Eds., 689-696, Omnipress (2011). * [376] Z. Kira, R. Hadsell, G. Salgan, _et al._, \"Long-Range Pedestrian Detection using stereo and a cascade of convolutional network classifiers,\" in _IEEE International Conference on Intelligent Robots and Systems_, 2396-2403 (2012). * [377] F. Rottensteiner, G. Sohn, J. Jung, _et al._, \"The ISPRS benchmark on urban object classification and 3D building reconstruction,\" _ISPRS Ann. Photogramm. Remote Sens. Stat. Inf. Sci_**1**, 293-298 (2012). * [378] J. Ngiam, Z. Chen, S. A. Bhaskar, _et al._, \"Sparse filtering,\" in _Advances in neural information processing systems_, 1125-1133 (2011). * [379] Y. Freund and R. E. Schapire, \"A desicion-theoretic generalization of on-line learning and an application to boosting,\" in _European conference on computational learning theory_, 23-37, Springer (1995). * [380] Q. Zhao, M. Gong, H. Li, _et al._, \"Three-Class Change Detection in Synthetic Aperture Radar Images Based on Deep Belief Network,\" in _Bio-Inspired Computing-Theories and Applications_, 696-705, Springer (2015). * [381] A. P. Tewkesbury, A. J. Comber, N. J. Tate, _et al._, \"A critical synthesis of remotely sensed optical image change detection techniques,\" _Remote Sensing of Environment_**160**, 1-14 (2015). * [382] C. Brekke and A. H. S. Solberg, \"Oil spill detection by satellite remote sensing,\" _Remote sensing of environment_**95**(1), 1-13 (2005). [DOI:10.1016/j.rse.2004.11.015]. * [383] H. Mayer, \"Automatic object extraction from aerial imagerya survey focusing on buildings,\" _Computer vision and image understanding_**74**(2), 138-149 (1999). DOI:10.1006/cviu.1999.0750. * [384] B. Somers, G. P. Asner, L. Tits, _et al._, \"Endmember variability in spectral mixture analysis: A review,\" _Remote Sensing of Environment_**115**(7), 1603-1616 (2011). DOI:10.1016/j.rse.2011.03.003. * [385] D. Tuia, C. Persello, and L. Bruzzone, \"Domain Adaptation for the Classification of Remote Sensing Data: An Overview of Recent Advances,\" _IEEE Geoscience and Remote Sensing Magazine_**4**(2), 41-57 (2016). DOI:10.1109/MGRS.2016.2548504. * [386] Y. Yang and S. Newsam, \"Bag-of-visual-words and spatial extensions for land-use classification,\" in _Proceedings of the 18th SIGSPATIAL international conference on advances in geographic information systems_, 270-279, ACM (2010). * [387] V. Risojevic, S. Momic, and Z. Babic, \"Gabor descriptors for aerial image classification,\" in _International Conference on Adaptive and Natural Computing Algorithms_, 51-60, Springer (2011). * [388] K. Chatfield, K. Simonyan, A. Vedaldi, _et al._, \"Return of the devil in the details: Delving deep into convolutional nets,\" _arXiv preprint arXiv:1405.3531_ (2014). * [389] G. Sheng, W. Yang, T. Xu, _et al._, \"High-resolution satellite scene classification using a sparse coding based multiple feature combination,\" _International journal of remote sensing_**33**(8), 2395-2412 (2012). * 13 (2017). * [391] A. Joly, H. Goeau, H. 
Glotin, _et al._, \"LifeCLEF 2016: multimedia life species identification challenges,\" in _International Conference of the Cross-Language Evaluation Forum for European Languages_, 286-310, Springer (2016). * [392] S. H. Lee, C. S. Chan, P. Wilkin, _et al._, \"Deep-plant: Plant identification with convolutional neural networks,\" in _2015 IEEE International Conference on Image Processing (ICIP)_, 452-456 (2015). * [393] Z. Ding, N. Nasrabadi, and Y. Fu, \"Deep transfer learning for automatic target classification: MWIR to LWIR,\" in _SPIE Defense+ Security_, 984408, International Society for Optics and Photonics (2016). * [394] Y. Bengio, P. Simard, and P. Frasconi, \"Learning long-term dependencies with gradient descent is difficult,\" _IEEE transactions on neural networks_**5**(2), 157-166 (1994). * [395] X. Glorot and Y. Bengio, \"Understanding the difficulty of training deep feedforward neural networks.,\" in _Aistats_, **9**, 249-256 (2010). * [396] R. K. Srivastava, K. Greff, and J. Schmidhuber, \"Training very deep networks,\" in _Advances in neural information processing systems_, 2377-2385 (2015). * [397] J. Sokolic, R. Giryes, G. Sapiro, _et al._, \"Robust large margin deep neural networks,\" _arXiv preprint arXiv:1605.08254_ (2016). * [398] G. E. Hinton, S. Osindero, and Y.-W. Teh, \"A fast learning algorithm for deep belief nets,\" _Neural computation_**18**(7), 1527-1554 (2006). * [399] D. Erhan, A. Courville, and P. Vincent, \"Why Does Unsupervised Pre-training Help Deep Learning?,\" _Journal of Machine Learning Research_**11**, 625-660 (2010). DOI:10.1145/1756006.1756025. * [400] K. He, X. Zhang, S. Ren, _et al._, \"Delving deep into rectifiers: Surpassing human-level performance on imagenet classification,\" in _Proceedings of the IEEE International Conference on Computer Vision_, 1026-1034 (2015). * [401] N. Srivastava, G. E. Hinton, A. Krizhevsky, _et al._, \"Dropout: A Simple Way to Prevent Neural Networks from Overfitting,\" _Journal of Machine Learning Research (JMLR)_**15**(1), 1929-1958 (2014). * [402] S. Ioffe and C. Szegedy, \"Batch normalization: Accelerating deep network training by reducing internal covariate shift,\" _arXiv preprint arXiv:1502.03167_ (2015). * [403] Q. V. Le, A. Coates, B. Prochnow, _et al._, \"On Optimization Methods for Deep Learning,\" in _Proceedings of The 28th International Conference on Machine Learning (ICML)_, 265-272 (2011). * [404] S. Ruder, \"An overview of gradient descent optimization algorithms,\" _arXiv preprint arXiv:1609.04747_ (2016). * [405] J. Duchi, E. Hazan, and Y. Singer, \"Adaptive subgradient methods for online learning and stochastic optimization,\" _Journal of Machine Learning Research_**12**(Jul), 2121-2159 (2011). * [406] M. D. Zeiler, \"Adadelta: an adaptive learning rate method,\" _arXiv preprint arXiv:1212.5701_ (2012). * [407] D. Kingma and J. Ba, \"Adam: A method for stochastic optimization,\" _arXiv preprint arXiv:1412.6980_ (2014). * [408] T. Schaul, S. Zhang, and Y. LeCun, \"No more pesky learning rates.,\" _ICML (3)_**28**, 343-351 (2013). * [409] R. K. Srivastava, K. Greff, and J. Schmidhuber, \"Highway networks,\" _arXiv preprint arXiv:1505.00387_ (2015). * [410] K. He, X. Zhang, S. Ren, _et al._, \"Deep residual learning for image recognition,\" _arXiv preprint arXiv:1512.03385_ (2015). * [411] D. Balduzzi, B. McWilliams, and T. Butler-Yeoman, \"Neural taylor approximations: Convergence and exploration in rectifier networks,\" _arXiv preprint arXiv:1611.02345_ (2016). \\begin{tabular}{c} John E. 
Ball is an Assistant Professor of Electrical and Computer Engineering at Mississippi State University (MSU), USA. He received the Ph.D. degree in Electrical Engineering from Mississippi State University in 2007, with a certificate in remote sensing. He is a co-director of the Sensor Analysis and Intelligence Laboratory (SAIL) at MSU and the director of the Simrall Radar Laboratory. He is the author of 45 journal and conference papers, and 22 technical tutorials, white papers, and technical reports, and has written one book chapter. He received the best research paper of the year award from Veterinary and Comparative Orthopaedics and Traumatology in 2016 and the technical paper of the year award from the Georgia Tech Research Institute in 2012. His current research interests include deep learning, remote sensing, machine learning, digital signal and image processing, and radar systems. Dr. Ball is an associate editor for the SPIE Journal of Applied Remote Sensing. **Derek T. Anderson** received the Ph.D. in electrical and computer engineering (ECE) in 2010 from the University of Missouri, Columbia, MO, USA. He is currently an Associate Professor and the Robert D. Guyton Chair in ECE at Mississippi State University (MSU), USA, an Intermittent Faculty Member with the Naval Research Laboratory, co-director of the Sensor Analysis and Intelligence Laboratory (SAIL) at MSU, and an Associate Editor for the IEEE Transactions on Fuzzy Systems. His research interests include new frontiers in data/information fusion for pattern recognition and automated decision making in signal/image understanding and computer vision, with an emphasis on uncertainty and heterogeneity. Prof. Anderson's primary research contributions to date include multi-source (sensor, algorithm and human) fusion, Choquet integrals (extensions, embeddings, learning), signal/image feature learning, multi-kernel learning, cluster validation, hyperspectral image understanding, and linguistic summarization of video. He has published 100+ (journal, conference and book chapter) articles, is the program co-chair of FUZZ-IEEE 2019, co-authored the 2013 best student paper in Automatic Target Recognition at SPIE, received the best overall paper award at the IEEE International Conference on Fuzzy Systems (FUZZ-IEEE) 2012, and received the 2008 FUZZ-IEEE best student paper award. **Chee Seng Chan** received the Ph.D. degree from the University of Portsmouth, Hampshire, U.K., in 2008. He is currently a Senior Lecturer with the Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur, Malaysia. His current research interests include computer vision and fuzzy qualitative reasoning, with an emphasis on image and video understanding. Dr. Chan was a recipient of the Institution of Engineering and Technology (Malaysia) Young Engineer Award in 2010, the Hitachi Research Fellowship in 2013, and the Young Scientist Network-Academy of Sciences Malaysia award in 2015. He is the Founding Chair of the IEEE Computational Intelligence Society, Malaysia Chapter, and the Founder of the Malaysian Image Analysis and Machine Intelligence Association. He is a Chartered Engineer of the Institution of Engineering and Technology, U.K.

List of Figures

1 Block diagrams of DL architectures. (a) AE. (b) CNN. (c) DBN. (d) RNN.

List of Tables

1 Acronym list.
2 Some popular DL tools.
3 DL paper subject areas in remote sensing.
4 Representative DL and FL Survey papers.
5 HSI Dataset Usage.
6 HSI Overall Accuracy Results in percent. IP = Indian Pines, KSC = Kennedy Space Center, PaCC = Pavia City Center, Pau = Pavia University, Sal = Salinas, DCM = Washington DC Mall. Results higher than 99% are in bold. \\begin{table} \\begin{tabular}{|p{142.3pt}|p{284.5pt}|} \\hline **Tool \\& Citation** & **Tool Summary and Website** \\\\ \\hline AlexNet[38] & A large-scale CNN with non-saturating neurons and a very efficient GPU parallel implementation of the convolution operation to make training faster. Website: [http://code.google.com/p/cuda-convnet/](http://code.google.com/p/cuda-convnet/) \\\\ \\hline Caffe[68] & C++ library with Python and Matlab interfaces. Website: [http://caffe.berkeleyvision.org/](http://caffe.berkeleyvision.org/) \\\\ \\hline cuda-convnet2[38] & The DL tool cuda-convnet2 is a fast C++/CUDA CNN implementation, and can also model any directed acyclic graphs. Training is performed using back-propagation. Offers faster training on Kepler-generation GPUs and multi-GPU training support. Website: [https://code.google.com/p/cuda-convnet2/](https://code.google.com/p/cuda-convnet2/) \\\\ \\hline gvnn[71] & The DL package gvnn is a NN library in Torch aimed towards bridging the gap between classic geometric computer vision and DL. This DL package is used for recognition, end-to-end visual odometry, depth estimation, etc. Website: [https://github.com/ankurhanda/gvnn](https://github.com/ankurhanda/gvnn) \\\\ \\hline Keras[72] & Keras is a high-level Python NN library capable of running on top of either TensorFlow or Theano and was developed with a focus on enabling fast experimentation. Keras (1) allows for easy and fast prototyping, (2) supports both convolutional networks and recurrent networks, (3) supports arbitrary connectivity schemes, and (4) runs seamlessly on CPUs and GPUs. Website: [https://keras.io/](https://keras.io/) and [https://github.com/fchollet/keras](https://github.com/fchollet/keras) \\\\ \\hline MatConvNet[70] & A Matlab toolbox implementing CNNs with many pre-trained CNNs for image classification, segmentation, etc. Website: [http://www.vlfeat.org/matconvnet/](http://www.vlfeat.org/matconvnet/) \\\\ \\hline MXNet[73] & MXNet is a DL library. Features include declarative symbolic expression with imperative tensor computation and differentiation to derive gradients. MXNet runs on mobile devices to distributed GPU clusters. Website: [https://github.com/dmlc/mxnet/](https://github.com/dmlc/mxnet/) \\\\ \\hline TensorFlow[69] & An open source software library for tensor data flow graph computation. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile devices. Website: [https://www.tensorflow.org/](https://www.tensorflow.org/) \\\\ \\hline Theano[74] & A Python library that allows you to define, optimize, and efficiently evaluate mathematical expressions involving multi-dimensional arrays. Theano features (1) tight integration with NumPy, (2) transparent use of a GPU, (3) efficient symbolic differentiation, and (4) dynamic C code generation. Website: [http://deeplearning.net/software/theano](http://deeplearning.net/software/theano) \\\\ \\hline Torch[75] & Torch is an embeddable scientific computing framework with GPU optimizations, which uses the LuaJIT scripting language and a C/CUDA implementation. Torch includes (1) optimized linear algebra and numeric routines, (2) neural network and energy-based models, and (3) GPU support.
Website: [http://torch.ch/](http://torch.ch/) \\\\ \\hline \\end{tabular} \\end{table} Table 2: Some popular DL tools. \\begin{table} \\begin{tabular}{|l|l|l|l|} \\hline **Area** & **References** & **Area** & **References** \\\\ \\hline 3D (depth and shape) analysis & 107–115 & Advanced driver assistance systems & 116–120 \\\\ \\hline Animal detection & 121 & Anomaly detection & 122 \\\\ \\hline Automated Target Recognition & 123–134 & Change detection & 135–139 \\\\ \\hline Classification & 140–190 & Data fusion & 191 \\\\ \\hline Dimensionality reduction & 192, 193 & Disaster analysis/assessment & 194 \\\\ \\hline Environment and water analysis & 195–198 & Geo-information extraction & 199 \\\\ \\hline Human detection & 200–203 & Image denoising/enhancement & 204, 205 \\\\ \\hline Image Registration & 206 & Land cover classification & 207–211 \\\\ \\hline Land use/classification & 212–222 & Object recognition and detection & 223–233 \\\\ \\hline Object tracking & 234, 235 & Pansharpening & 236 \\\\ \\hline Planetary studies & 237 & Plant and agricultural analysis & 238–243 \\\\ \\hline Road segmentation/extraction & 244–250 & Scene understanding & 251–253 \\\\ \\hline Semantic segmentation/annotation & 254–266 & Segmentation & 267–272 \\\\ \\hline Ship classification/detection & 273–275 & Super-resolution & 276–279 \\\\ \\hline Traffic flow analysis & 280, 281 & Underwater detection & 282–285 \\\\ \\hline Urban/building & 286–296 & Vehicle detection/recognition & 297–310 \\\\ \\hline Weather forecasting & 311–313 & & \\\\ \\hline \\end{tabular} \\end{table} Table 3: DL paper subject areas in remote sensing. \\begin{table} \\begin{tabular}{|l|l|} \\hline **Ref.** & **Paper Contents** \\\\ \\hline 7 & A survey paper on DL. Covers CNNs, DBNs, etc. \\\\ \\hline 329 & Brief intro to neural networks in remote sensing. \\\\ \\hline 13 & Overview of unsupervised feature learning and deep learning. Provides overview of probabilistic models (undirected graphical, RBM, AE, SAE, DAE, contractive autoencoders, manifold learning, difficulty in training deep networks, handling high-dimensional inputs, evaluating performance, etc.) \\\\ \\hline 330 & Examines big-data impacts on SVM machine learning. \\\\ \\hline 1 & Covers about 170 publications in the area of scene classification and discusses limitations of datasets and problems associated with high-resolution imagery. They discuss limitations of handcrafted features such as texture descriptors, GIST, SIFT, HOG. \\\\ \\hline 2 & A good overview of architectures, algorithms, and applications for DL. Three important reasons for DL success are (1) GPU units, (2) recent advances in DL research. In addition, we note that (3) would be success of DL in many image processing challenges. DL is at the intersection of machine learning, Neural Networks, optimization, graphical modeling, pattern recognition, probability theory and signal processing. They discuss generative, discriminative, and hybrid deep architectures. They show there is vast room to improve the current optimization techniques in DL. \\\\ \\hline 331 & Overview of NN in image processing. \\\\ \\hline 332 & Discusses trends in extreme learning machines, which are linear, single hidden layer feedforward neural networks. ELMs are comparable or better than SVMs in generalization ability. In some cases, ELMs have comparable performance to DL approaches. They generally have high generalization capability, are universal approximators, don’t require iterative learning, and have a unified learning theory. 
\\\\ \\hline 333 & Provides overview of feature reduction in remote sensing imagery. \\\\ \\hline 8 & A survey of deep neural networks, including the AE, the CNN, and applications. \\\\ \\hline 334 & Survey of image classification methods in remote sensing. \\\\ \\hline 335 & Short survey of DL in hyperspectral remote sensing. In particular, in one study, there was a definite sweet spot shown in the DL depth. \\\\ \\hline 336 & Overview of shallow HSI processing. \\\\ \\hline 29 & Overview of shallow endmember extraction algorithms. \\\\ \\hline 10 & An in-depth historical overview of DL. \\\\ \\hline 4 & History of DL. \\\\ \\hline 337 & A review of road extraction from remote sensing imagery. \\\\ \\hline 9 & A review of DL in signal and image processing. Comparisons are made to shallow learning, and DL advantages are given. \\\\ \\hline 3 & Provides a general framework for DL in remote sensing. Covers four RS perspectives: (1) image processing, (2) pixel-based classification, (3) target recognition, and (4) scene understanding. \\\\ \\hline \\end{tabular} \\end{table} Table 4: Representative DL and FL Survey papers. \\begin{table} \\begin{tabular}{|l|c|} \\hline **Dataset and Reference** & **Number of uses** \\\\ \\hline \\hline IEEE GRSS 2013 Data Fusion Contest[338] & 4 \\\\ \\hline IEEE GRSS 2015 Data Fusion Contest[339] & 1 \\\\ \\hline IEEE GRSS 2016 Data Fusion Contest[340] & 2 \\\\ \\hline Indian Pines[341] & 27 \\\\ \\hline Kennedy Space Center[342] & 8 \\\\ \\hline Pavia City Center[343] & 13 \\\\ \\hline Pavia University[343] & 19 \\\\ \\hline Salinas[344] & 11 \\\\ \\hline Washington DC Mall[345] & 2 \\\\ \\hline \\end{tabular} \\end{table} Table 5: HSI Dataset Usage.
In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, natural language processing, etc. Whereas remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, RS inevitably draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent new developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing the DL.

Keywords: Remote Sensing, Deep Learning, Hyperspectral, Multispectral, Big Data, Computer Vision.

*John E. Ball, [email protected]
# Data Fusion, the Core Technology for Future On-Board Data Processing System

Wang Chao, Director; Qu Jishuang, Doctoral Student; **Liu Zhi**, Associate Professor
Institute of Remote Sensing Applications, Chinese Academy of Sciences, Beijing, China, 100101
[email protected], [email protected], [email protected]

###### Abstract

Data fusion has been widely used to process earth observation data on the ground, where it generates data of higher quality and extracts better information from multisource or multitemporal data. Data fusion can likewise be used to extract better information on board the satellite; at the same time, redundant data are largely eliminated, which accelerates processing and reduces the volume of data that must be stored and transmitted. On-board data fusion, however, faces additional difficulties. Chief among them is that an on-board data processing system must be completely autonomous, which makes procedures such as image registration, feature extraction, change detection, and object recognition considerably harder, whereas on the ground these steps, although still difficult, can rely on manual intervention. The tremendous advantages of data fusion for on-board processing will nevertheless motivate investigators to remove the obstacles on the road to on-board, fusion-based information extraction.

## 1 Current Status, Trend and Strategic Direction of On-Board Data Processing

With the rapid development of information technology, users' information requirements are shifting from static, non-real-time products to dynamic, real-time ones. Dynamic, real-time information acquired by remote sensing has been used successfully in urban planning, precision farming, vegetation coverage mapping, ocean observation, disaster monitoring, and other areas. How to exploit the enormous data volumes and the ever-increasing resolution of earth observation has therefore become a focus for both users and investigators. Currently, earth observation data are processed only after being transmitted to a ground data processing center (GDPC). At the GDPC the data are denoised and corrected for geometric and radiometric bias to obtain accurate images; image classification, object categorization, target recognition, and decision generation are then performed, and the processed results are finally distributed to the end units. Because of the earth's curvature, the observation data and processed results must sometimes be relayed through ground relay stations. This traditional processing mode makes data processing and maintenance very complicated, owing to the complex link routes, data types, and data formats involved. Distribution of the processed data is also an intricate problem: the results sometimes have to be uplinked to a communication satellite before they can be distributed to the end users. The growing number and depth of the link routes not only increase the complexity of data maintenance but also reduce the reliability of the data processing system and lengthen the processing and transmission time, working against real-time data processing and distribution. Furthermore, with the rapid increase in data volume and image resolution, users are increasingly eager to obtain information that detects object changes and recognizes targets.
Figure 1: Future on-board data processing and distributing system of the IEOS. The results produced by the data processing system on each earth observation satellite are distributed to the end users (aircraft, ships, vehicles, and so on) through communication satellites, and a bi-directional link connects the on-board data processing system with the ground monitoring center.

Figure 1 exhibits such an on-board data processing and distributing system. After the earth observation system acquires data, the on-board data processing system first preprocesses the raw data: denoising, geometric and radiometric correction, registration, and so on. To increase the efficiency of on-board processing and meet real-time requirements, and to support change detection and target classification and recognition, the preprocessed images then undergo object detection and object segmentation (here we regard an "object" as different from a "target": the former is an abstract concept that includes targets, areas, and so on, whereas the latter is a concrete entity). First, regions of interest (ROIs) and objects of interest (OOIs) are detected, and each region or object is segmented from the background; this step alone reduces the data volume greatly. Features are then extracted from the ROIs or OOIs and extended into a new feature space by combining the features of the same region or object observed from multiple platforms and sensors. Feature-level fusion is performed with the help of auxiliary information according to the requirements of the user's task. The fused features are applied to object change detection and target recognition, and decision-level fusion is carried out by combining auxiliary knowledge with an expert system, so that decision information is extracted from the raw data and the auxiliary knowledge. Finally, the decision information is distributed to the end users via communication satellites. At the same time, a bi-directional link connects the on-board data processing system with the ground monitoring center (GMC): necessary processed data are transmitted to the GMC on the downlink, the GMC monitors the on-board processing procedure, and whenever the bias of an on-board result exceeds the permitted bound, correction information is sent back to the on-board system on the uplink.

Storage, transmission, and real-time processing of such huge data volumes on board remain tremendous challenges. Data fusion technology can extract higher-quality information by fully exploiting the complementarity among multisource data while eliminating their redundancy; it can therefore extract valid information from large data volumes at high speed and greatly reduce the amount of data to be processed, stored, and transmitted, as well as the processing time. Data fusion is thus highly suited to on-board data processing.
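To make the processing chain above concrete, the following is a minimal, self-contained sketch in Python/NumPy of the stages just described: preprocessing, OOI detection, per-sensor feature extraction, feature-level fusion by stacking the sensor features, and a toy decision rule. The sensor names, synthetic scene, filters, and thresholds are assumptions made purely for illustration; they are not the actual on-board system described in this paper.

```python
import numpy as np

def denoise(img, k=3):
    """Simple k x k mean filter as a stand-in for on-board denoising."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def detect_ooi(img, n_sigma=2.0):
    """Flag pixels that stand out from the background as the object of interest."""
    return img > img.mean() + n_sigma * img.std()

def extract_features(img, mask):
    """Per-object features from one sensor: area fraction, mean and max intensity."""
    vals = img[mask]
    if vals.size == 0:
        return np.zeros(3)
    return np.array([mask.mean(), vals.mean(), vals.max()])

# Two co-registered images of the same scene from two hypothetical sensors.
rng = np.random.default_rng(0)
optical = rng.normal(0.2, 0.05, (64, 64)); optical[20:30, 20:30] += 0.8
sar     = rng.normal(0.1, 0.05, (64, 64)); sar[20:30, 20:30] += 0.5

features = []
for img in (optical, sar):
    img = denoise(img)          # preprocessing
    mask = detect_ooi(img)      # OOI detection / segmentation from background
    features.append(extract_features(img, mask))

fused = np.concatenate(features)   # feature-level fusion: stack sensor features
decision = "target present" if fused[0] > 0.01 and fused[3] > 0.01 else "no target"
print(fused, decision)
```

In a real on-board system every stage would be far more elaborate (full geometric and radiometric correction, co-registration, learned classifiers, expert knowledge for the decision step), but the order of operations follows the flow of Figure 1.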
Although data fusion is distinct from object detection, change detection, and target classification and recognition, they share so much that they intersect to a great extent. Feature-level and decision-level fusion, for instance, are often used to increase the accuracy of object detection, change detection, and target classification and recognition, and the methods employed by data fusion, such as statistical theory, neural networks, fuzzy logic, and expert systems, are also widely applied to these tasks. Indeed, some investigators place image classification and change detection within the data fusion category.

To process the tremendous volumes of data observed by on-board sensors, increasingly complicated, intelligent, and powerful data processing systems, both software and hardware, are being developed for specific tasks alongside the exploration of novel technology (Marchisio, 1999; [http://research.hq.nasa.gov/code_y](http://research.hq.nasa.gov/code_y)). However, the increase in system complexity leads to rapidly rising development costs, longer development periods, and accordingly greater risk. Therefore, besides powerful and fast data processing capability, a future on-board data processing system should pursue the following objectives:

* Reduce the development risk, cost, and period of space-based systems for information acquisition, processing, and distribution;
* Make full and valid use of earth science data;
* Be able to take up new earth observation data, information products, and data processing modules;
* Provide an adequate security mechanism;
* Reduce system mass and save satellite payload; and
* Be robust against a harsh environment, such as disturbance and radiation.

NASA's OES has proposed a series of programs ([http://research.hq.nasa.gov/code_y](http://research.hq.nasa.gov/code_y)), such as the Advanced Information Systems Technology (AIST) program, the Instrument Incubator Program (IIP), and the New Millennium Program (NMP), in which on-board data processing is an important subsystem.

## 2 Requirements for On-Board Data Processing

Processing earth observation images directly, pixel by pixel, would consume tremendous time and system resources and is clearly unfavourable for real-time on-board processing. In fact, users' task requirements generally focus on object detection, change detection, object classification, target recognition, and the like, whereas a large share of the observed data is environmental background. Eliminating this background before further processing saves considerable time and resources and greatly improves the performance of the on-board data processing system; it is an important step towards real-time processing. Figure 2 gives the users' requirements and the corresponding execution flows. The elimination is implemented by object detection, covering both ROIs and OOIs (in fact, this paper regards an ROI as a kind of OOI). When detecting OOIs, scale-space methods are very useful for accelerating the processing: with the help of auxiliary information the system can compute the largest scale at which the object is not filtered out, and OOI detection at that scale is much more efficient than detection at the scale of the original detailed image (a minimal sketch of this coarse-to-fine strategy follows Figure 2). Once the OOIs have been detected, they are segmented and passed on for further processing.

Figure 2: Users' requirements for earth observation. The dashed lines represent the users' requirement flow, and the solid lines show the execution flow.
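As a rough illustration of the coarse-to-fine idea just mentioned, here is a minimal NumPy sketch: the image is block-averaged down to a coarse scale chosen so that an object of known size survives, candidate OOI blocks are found there, and only those blocks are re-examined at full resolution. The scale rule, thresholds, and synthetic scene are assumptions made for the example, not the scale-space procedure of the original paper.

```python
import numpy as np

def downsample(img, factor):
    """Block-average the image by an integer factor (a crude scale-space level)."""
    h, w = img.shape
    h2, w2 = h - h % factor, w - w % factor
    blocks = img[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
    return blocks.mean(axis=(1, 3))

def coarse_to_fine_ooi(img, obj_size, n_sigma=2.0):
    """Detect candidate OOI pixels on a coarse level, then refine only there."""
    # Heuristic: a block size of half the object size will not filter it out.
    factor = max(1, obj_size // 2)
    coarse = downsample(img, factor)
    candidates = coarse > coarse.mean() + n_sigma * coarse.std()
    mask = np.zeros_like(img, dtype=bool)
    for r, c in zip(*np.nonzero(candidates)):
        block = img[r * factor:(r + 1) * factor, c * factor:(c + 1) * factor]
        mask[r * factor:(r + 1) * factor, c * factor:(c + 1) * factor] = (
            block > img.mean() + n_sigma * img.std())
    return mask

rng = np.random.default_rng(1)
scene = rng.normal(0.2, 0.05, (256, 256))
scene[100:120, 60:80] += 0.7          # a bright 20 x 20 "object"
mask = coarse_to_fine_ooi(scene, obj_size=20)
print(mask.sum(), "candidate OOI pixels")
```

Only the candidate blocks are revisited at full resolution, which is what makes the coarse-scale pass cheaper than scanning the detailed image everywhere.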
With the growing requirement for change monitoring of climate, environment, ecosystems, precision agriculture, and so on, space-based multisensor data processing systems are widely used for earth observation and change detection because of their expansive coverage and the all-weather capability of some sensors (Lunetta, 1999). Two detection modes are currently used to implement change detection: the post-classification approach and the pre-classification approach. The former first classifies the multitemporal images by thematic features and then compares the classification results to obtain the changes. This mode has been considered a reliable approach to change detection and is regarded as a standard technique for the quantitative evaluation of image differences, but its computational cost and consumption of system resources, the need for consistency between classifications, and the propagation of classification errors all constrain it. The pre-classification approach first compares the multitemporal images and then classifies the change features. Its main variants include:

* the composite analysis method, which applies standard pattern recognition and spectral classification techniques to the combined multitemporal data;
* the image differencing method, which compares the spectral values of the multitemporal images directly and declares a change when the difference exceeds a threshold;
* the principal components analysis method, which removes redundancy so as to reduce the feature dimension;
* the change vector analysis method, which uses the temporal trend of the mean vectors to categorize object changes; and
* the spectral mixture analysis method, which decomposes and analyzes the spectra of multispectral images.

(A minimal sketch of the image differencing method is given at the end of this section.)

Object classification is another principal objective of on-board data processing (Solaiman, 1999); it assigns objects in images to thematic classes such as urban areas, vegetation areas, and agricultural areas. Furthermore, as more on-board earth observation data are acquired and image resolution keeps rising, with the ground resolution of commercial optical and SAR sensors now in the range of 10 m to 1 m, it becomes possible to recognize concrete targets, and the recognition of targets such as roads, ships, harbors, and aerodromes has been widely investigated and advanced to some extent. Object classification methods are generally divided into supervised and unsupervised classification. Supervised methods define the different kinds of objects in an image as numerical descriptors obtained by analyzing specified training samples, whereas unsupervised methods directly group objects into clusters according to their features. The two families proceed in opposite directions: the former defines the class information first and then assigns objects to those classes, whereas the latter first groups objects into clusters and then uses other information to merge or delete clusters until no further action can be taken. Conventional statistics-based supervised classification methods assume that the spectral features of different objects can be described by a probability distribution in the spectral feature space, typically a normal (Gaussian) distribution. These methods include the maximum likelihood method, the minimum distance method, the parallelepiped method, and the context classification method.
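Returning to the pre-classification change detection methods listed above, the image differencing method is the simplest to state precisely: difference two co-registered, radiometrically comparable images and declare change where the difference magnitude exceeds a threshold. The sketch below uses a synthetic image pair and a simple data-driven threshold; both are assumptions made for the example.

```python
import numpy as np

def image_differencing(img_t1, img_t2, n_sigma=2.0):
    """Pre-classification change detection: difference two co-registered,
    radiometrically comparable images and threshold the magnitude."""
    diff = img_t2.astype(float) - img_t1.astype(float)
    threshold = n_sigma * diff.std()          # simple data-driven threshold
    change_mask = np.abs(diff - diff.mean()) > threshold
    return change_mask, diff

rng = np.random.default_rng(2)
before = rng.normal(0.3, 0.04, (128, 128))
after = before + rng.normal(0.0, 0.04, (128, 128))
after[40:60, 70:90] += 0.5                    # a simulated land-cover change

mask, diff = image_differencing(before, after)
print("changed pixels:", int(mask.sum()))     # roughly the 20 x 20 changed patch
```

Change vector analysis generalizes the same idea to multiband data by thresholding the magnitude of the per-pixel difference vector instead of a single band difference.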
## 3 Challenges of On-Board Data Processing

Accuracy and false alarm rate are the two principal performance indexes of on-board intelligent data processing, and they are difficult to guarantee when only a single data source is processed. Multisource or multitemporal data fusion can address these difficulties effectively (Benediktsson, 1999; Gateguille, 2000). Different sensors, such as visible, infrared, microwave radar, multispectral, and hyperspectral instruments, capture different features of an object (Myler, 2000), and extracting features from the raw data greatly reduces the amount of data to be handled, which is very advantageous for accelerating processing and for transmitting data between satellites (Petrou, 1999). Some information is inevitably lost in this procedure, however, which degrades the result; fusing the extracted features therefore improves both the accuracy and the false alarm rate.

The principal objects of change detection are those appearing in multitemporal images. Multitemporal data processing is itself an important part of data fusion, and this determines how tightly data fusion can be integrated with change detection. In the post-classification mode, data fusion can improve the classification accuracy, which obviously benefits the subsequent change detection. Moreover, when detecting changes, fusing the features of the changed objects and thereby enhancing the feature information makes the detection more accurate and markedly reduces false alarms. In the pre-classification mode, raw-data-level fusion can be used to enhance the images and provide higher-quality inputs, and in the later change classification step, extracting and fusing the features of the changed objects likewise improves the classification accuracy.

Figure 3: Challenges during on-board intelligent data processing, for which data fusion can provide valid solutions.

As users demand ever higher accuracy and intelligence in object classification and target recognition, knowledge-based methods such as neural networks and fuzzy logic are used more and more, because many data cannot be described accurately by models or statistical theory. Traditional supervised and unsupervised classification methods are normally applied to a single image, so when the image quality is high, steps such as OOI extraction, segmentation, and feature extraction succeed (McConnell, 1999). When earth observation is carried out in all weather conditions, however, a great deal of data of reduced quality is produced, which lowers the accuracy and raises the false alarm rate of object classification and target recognition; in severe cases, classification and recognition fail altogether. Data fusion offers a better classification and recognition strategy: first apply raw-data-level fusion to the multisensor or multitemporal images to enhance the object features; then extract OOI features from the fused images; next perform feature-level fusion to strengthen the object features and decision-level fusion [12] by combining an external knowledge database with an expert system; finally, use the fused results to classify objects and recognize targets. In this way, data-fusion-based classification and recognition methods are formed, which improve the classification and recognition performance.
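The feature-level step of this fusion-based classification chain can be illustrated with a small sketch: per-object features from two hypothetical sensors are stacked into one extended feature vector, and a minimum-distance (nearest-centroid) rule, one of the conventional statistical classifiers mentioned earlier, is applied in the fused feature space. The sensor names, feature values, and classes below are synthetic, chosen only to show the mechanics.

```python
import numpy as np

rng = np.random.default_rng(3)

def sensor_features(n, centre, spread=0.1):
    """Per-object feature vectors (e.g. mean intensity, texture, size) from one sensor."""
    return rng.normal(centre, spread, size=(n, 3))

# Training objects of two classes, each observed by an optical and a SAR sensor.
opt_water, sar_water = sensor_features(20, 0.2), sensor_features(20, 0.3)
opt_urban, sar_urban = sensor_features(20, 0.7), sensor_features(20, 0.6)

# Feature-level fusion: stack the per-sensor features into one extended vector.
water = np.hstack([opt_water, sar_water])
urban = np.hstack([opt_urban, sar_urban])
centroids = {"water": water.mean(axis=0), "urban": urban.mean(axis=0)}

def classify(fused_vec):
    """Minimum-distance classification in the fused feature space."""
    return min(centroids, key=lambda c: np.linalg.norm(fused_vec - centroids[c]))

# A new object seen by both sensors.
new_obj = np.hstack([sensor_features(1, 0.68)[0], sensor_features(1, 0.58)[0]])
print(classify(new_obj))   # expected: "urban"
```

The point of the extended feature space is that an object which is ambiguous to one sensor can still be separated once the complementary sensor's features are appended.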
The data-fusion-based classification and recognition methods currently in use are statistical methods, neural network methods, fuzzy logic methods, and so on. In practice these methods are usually combined so that they compensate for one another's weaknesses. For example, a neural network combined with a statistical method can classify objects or recognize targets in data whose statistical properties are fully known; the combination yields more accurate results than either method alone, and the neural network needs less training time because part of the statistical information is already available.

## 4 Data Fusion Framework

As the core technology of future on-board data processing, data fusion touches many research areas. Building on digital signal processing, control theory, artificial intelligence, and pattern recognition, incorporating spectral analysis and reliability theory, and drawing on many mathematical and statistical tools, data fusion combines statistical analysis, wavelet analysis, neural networks, fuzzy logic, and expert systems with applications such as image processing, object detection, change detection, and target recognition [13]. After fusing a large amount of remote sensing data and analyzing the experimental results, L. Wald [14] gave the following definition: data fusion is a formal framework that expresses the means and tools for the alliance of data originating from different sources; it aims at obtaining information of greater quality, where the exact definition of "greater quality" depends upon the application. This definition exhibits the two main purposes of data fusion:

* to combine data into information of higher quality; and
* to refine the data and eliminate redundancy.

Data fusion technology developed together with multisensor data processing methods and is widely applied in remote sensing image processing, robotic vision, industrial process monitoring, medical image processing, and so on [15]. In particular, data fusion has been applied successfully to remote sensing data from the Landsat series, the SPOT series, ADEOS, SIR-C/L, ERS-1 and 2, JERS-1, and Radarsat-1. Application analysis and design for data fusion are generally carried out at different levels, and data fusion is accordingly divided into three levels: raw data level fusion, feature level fusion, and decision level fusion. Multisensor data always contain some noise and distortion, which is disadvantageous to later processing, so preprocessing such as denoising, geometric correction, and radiometric correction has to be performed first. At every fusion level, external knowledge is used as auxiliary information to increase accuracy, and the information extracted during processing can in turn be saved to the external knowledge database. Figure 4 gives a framework for data fusion: when images enter the framework they are first preprocessed (denoising, geometric correction, radiometric correction, and so on), and raw data level fusion can then be executed once the multisource images have been co-registered.
Raw data level fusion is the most mature of the three fusion levels and has produced a rich set of effective fusion methods, which generally fall into three types:

* color transformation methods, which exploit the possibility of presenting data in different color channels; a typical example is the IHS method;
* statistical and numerical methods, which use arithmetic and other operators to combine images of different bands, such as the PCA and PCS methods (a minimal sketch of this variant is given at the end of this section); and
* multiresolution analysis methods, which use multiple scales to decompose, fuse, and restore multisource images.

Applications of raw data level fusion are typically image enhancement, image classification, and image compression, which help analysts interpret images or provide better input images for feature level fusion. The processed results form image products, which the on-board data processing system stores in the image product database. After features are extracted from raw data, the data fusion system saves them to the object feature database; in parallel, the features from multiple platforms and sensors are fused to obtain attributes of higher quality and much smaller volume. Feature extraction [1, 13] means extracting all kinds of characteristics from the data and transforming them into attributes that are more convenient for processing. These features form a feature space in which investigators can fuse, identify, and classify them in support of object detection and target recognition. In general, the features of earth observation data include:

* geometric features, such as lines, curves, edges, ridges, and corners;
* structural features, for example area and relative orientation;
* statistical features, including the number of an object's surfaces, the perimeter of a plane, and texture characteristics; and
* spectral features, such as color and spectral signature.

Figure 4: Framework design for the three-level data fusion structure.

Feature level fusion methods include hypothesis- and statistics-based methods and knowledge-based methods. The former include Bayesian statistical methods, D-S evidence methods, and correlation clustering methods; the latter comprise neural network learning methods [14], fuzzy logic methods for handling uncertainty [15], expert knowledge methods, etc. The fused features are then used for further processing:

* they are saved to the object feature database;
* they serve as input information for feature identification; and
* they are applied to image classification, change detection, and target detection and recognition.

Decision level fusion is the highest of the fusion levels, in the sense that the fused result can directly support decisions by human users. Decision level fusion methods are generally divided into identity-based and knowledge-based methods. The former use hypotheses and probabilities to classify objects, including the MAP, ML, BC, and D-S methods; the latter use logic templates, syntax rules, and context to fuse data, comprising expert knowledge methods, neural network methods, fuzzy logic methods, etc.
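As a concrete illustration of the statistical raw data level fusion family listed above, the following sketch applies PCA component substitution to a co-registered multispectral/panchromatic pair. It is only a minimal example: the array shapes, the mean/standard-deviation matching of the panchromatic band (a simple stand-in for histogram matching), and the use of NumPy are illustrative assumptions, not part of the system described here.

```python
import numpy as np

def pca_substitution_fusion(ms, pan):
    """Minimal PCA-substitution fusion of two co-registered inputs.

    ms:  (H, W, B) multispectral cube, already resampled to the pan grid.
    pan: (H, W) high-resolution panchromatic band.
    Returns an (H, W, B) fused cube.
    """
    h, w, b = ms.shape
    x = ms.reshape(-1, b).astype(np.float64)
    mean = x.mean(axis=0)
    xc = x - mean
    # Principal components of the multispectral pixels.
    eigvals, eigvecs = np.linalg.eigh(np.cov(xc, rowvar=False))
    eigvecs = eigvecs[:, np.argsort(eigvals)[::-1]]
    scores = xc @ eigvecs
    # Match the mean and standard deviation of the pan band to the first
    # component, then substitute it.
    p = pan.reshape(-1).astype(np.float64)
    p = (p - p.mean()) / (p.std() + 1e-12) * scores[:, 0].std() + scores[:, 0].mean()
    scores[:, 0] = p
    return (scores @ eigvecs.T + mean).reshape(h, w, b)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fused = pca_substitution_fusion(rng.random((64, 64, 4)), rng.random((64, 64)))
    print(fused.shape)  # (64, 64, 4)
```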
The underlying principle of these methods is almost the same as that of the feature level fusion methods, with the following differences:

* the objects being fused are not the same: feature level fusion operates on a feature space, while decision level fusion operates on a decision/action space; and
* the extent to which they depend on supporting knowledge differs: decision level fusion relies entirely on external knowledge and requires much more inference and decision making derived from it than feature level fusion does.

Results produced by decision level fusion can likewise be used to classify images, detect changes, and detect and recognize targets; at the same time they are saved to the information product database, from which the products can be distributed to end users by the on-board distribution system.

Figure 5 shows an integrated global logical architecture of the on-board data processing system. The base kernel, the space from the global origin to the solid curve, comprises elementary data processing procedures and methods such as denoising, geometric and radiometric correction, and image co-registration. Outside the base kernel, the space between the solid curve and the dashed curve is data fusion processing. As the core of the whole system, it provides powerful tools and functions to all kinds of applications, such as enhancing images, improving image resolution, extracting object features, and executing feature level and decision level fusion, so as to provide inference and decision support for the outer applications. Outside the data fusion level, the space between the two dotted curves contains the basic application frameworks such as object classification, change detection, and target recognition. These applications overlap with each other and with the inner data fusion level, indicating that many methods and applications are shared among them. On the surface of the global architecture, the space from the dashed curve to the surface, there are many concrete application processes, such as land cover classification, urban change detection, and road recognition.

Figure 6 presents a framework of the on-board data processing system with data fusion technology as its core, which is driven by user requirements and supplied with data by an autonomous earth observation subsystem. First, end users (such as vehicles, ships, and aircraft) send a task requirement to the on-board data processing system; the task decomposition subsystem then decomposes the task, interprets the user's requirement, and analyzes the resources, data resolution, data quantity, etc. needed to accomplish it. Afterwards, it drives the earth observation execution system (including the autonomous satellite control subsystem, the autonomous sensor control subsystem, and the earth observation subsystem) to acquire the required data. After the data have been acquired and preprocessed, ROI and OOI detection is executed.

Figure 5: The global logical structure of data fusion integrated with applications; it represents a progression from abstract processing to concrete applications along every core-to-surface direction. 1. (Space from the global origin to the solid curve) The framework's base kernel, including preprocessing such as denoising, correction, and co-registration. 2. (Space between the solid curve and the dashed curve) Data fusion processing. 3. (Space between the two dotted curves) Basic application frameworks such as classification, object detection, change detection, and target recognition; since they use many methods in common with data fusion, their space overlaps with the data fusion space. 4. (Space from the dashed curve to the surface) Concrete applications, including image classification (e.g., urban classification), object detection (e.g., moving target detection), change detection (e.g., urban change detection), and target recognition (e.g., harbor recognition); they also overlap with the basic application frameworks.
Figure 6: Structure of the on-board data processing system with a data fusion engine at its core.

The features of the detected ROIs and OOIs are then extracted; on the one hand they are saved to the feature database, and on the other hand they are fed into the data fusion description module, where they are interpreted by the data fusion lexicon or turned into a description model by the data fusion model. The description model is then fused by the fusion inference engine, including multiplatform and multisensor data fusion, so as to enhance the object's characteristics. At the same time, the fusion of the description model is decomposed and an optimal algorithm is scheduled for execution so as to satisfy the task requirement as quickly as possible, which helps ensure real-time execution of the task to the greatest possible extent; more complicated algorithms would provide more accurate results but are apparently too costly for real-time processing. In addition, the image feature database and the external information database are used during the fusion inference procedure, together with the fusion tools. The results of fusion inference are enhanced features and decision information, which can then be used to detect object changes and to classify and recognize targets. Afterwards, the detection and recognition results are turned into the information products required by users, and finally these products are distributed to the end users.

## 6 Conclusion

Currently, more and more data are acquired by various kinds of on-board sensors, and the question is how to extract information from these data intelligently, process them in real time, and distribute the results to all kinds of users. It is undoubtedly more convenient to process data on-board and distribute information from the satellite directly to end users than to process and distribute on the ground, because the latter requires many ground stations to relay, process, and distribute the data, or a return link from a ground station through a communication satellite. Therefore, on-board data processing and information distribution are the future development trend for exploiting earth observation data. Users' application requirements for earth observation focus on object detection, change detection, image classification, target recognition, etc.; providing solutions to these applications is therefore a crucial capability of an on-board intelligent processing system. Moreover, on-board data processing will face further challenges such as autonomous processing, real-time processing, accuracy and false alert, and safety. Data fusion is a mature and widely used technology for earth observation data processing that has provided powerful tools to its users. Divided into three levels, each level can provide solutions to certain applications.
Raw data level fusion serves image enhancement, image classification, image coding and compression, and change detection; feature level fusion, by utilizing image features, supports object detection, change detection, and target classification and recognition. Decision level fusion provides further solutions on top of the feature level applications, so that more accurate information with a lower false alert rate can be generated. Many data fusion methods exist and are becoming increasingly intelligent, such as neural network methods with the ability to learn, fuzzy logic methods that can describe uncertainty, and expert systems that combine human knowledge. As a highly integrated information processing system, covering preprocessing, data fusion, abstract applications, and concrete applications, a data fusion system can be integrated into the whole on-board data processing system and provide powerful and effective tools for earth observation.

## Acknowledgements

The authors are grateful to Guo Ziqi, Gao Xing, Zhang Hong, Zhang Weiguo, and Ge Jianjun, members of the Radar Group, Remote Sensing Information Key Lab, Institute of Remote Sensing Applications, Chinese Academy of Sciences.

## References

* Benediktsson (1999) Benediktsson, J. A. and Kanellopoulos, I. (1999). Information extraction based on multisensor data fusion and neural network, _Information processing for remote sensing_, pp. 369-395, World Scientific.
* Gatepaille (2000) Gatepaille, S., Brunessaux, S. and Abdulrab, H. (2000). Data fusion multi-agent framework, _Proceedings of SPIE_, vol. 4051, pp. 172-179.
* Gee (2000) Gee, L. A. and Abidi, M. A. (2000). Multisensor fusion for decision-based control cues, _Proceedings of SPIE_, vol. 4052, pp. 249-257.
* Haralick (1973) Haralick, R. M., Shanmugam, K. and Dinstein, I. (1973). Textural features for image classification, _IEEE Transactions on Systems, Man and Cybernetics_, vol. 3, pp. 610-621.
* Jousselme (2000) Jousselme, A. L., Grenier, D. and Bosse, E. (2000). Conceptual exploration package for data fusion, _Proceedings of SPIE_, vol. 4051, pp. 203-214.
* Lunetta (1999) Lunetta, R. S. and Elvidge, C. D. (1999). _Remote sensing change detection: environmental monitoring methods and applications_, Taylor & Francis Ltd., London.
* Mahler (2000) Mahler, R. (2000). Optimal/robust distributed data fusion: A unified approach, _Proceedings of SPIE_, vol. 4052, pp. 128-138.
* Marchisio (1999) Marchisio, G. B. and Li, A. Q. (1999). Intelligent system technologies for remote sensing repositories, _Information processing for remote sensing_, pp. 541-562, World Scientific.
* McConnell (1999) McConnell, I. and Oliver, C. (1999). Segmentation-based target detection in SAR, _Proc. SPIE_, vol. 3869, Florence, Italy, pp. 45-54.
* Myler (2000) Myler, H. R. (2000). Characterization of disagreement in multiplatform and multisensor fusion analysis, _Proceedings of SPIE_, vol. 4052, pp. 240-248.
* Petrou (1999) Petrou, M. and Stassopoulou, A. (1999). Advanced techniques for fusion of information in remote sensing: An overview, _Proc. SPIE_, vol. 3871, Florence, Italy, pp. 264-275.
* Pigeon (2000) Pigeon, L., Solaiman, B., Toutin, T. and Thomson, K. P. B. (2000). Modeling for multi-sensors fusion in remote sensing, _Proceedings of SPIE_, vol. 4051, pp. 420-427.
* Solaiman (1999) Solaiman, B. (1999). Information fusion for multispectral image classification post-processing, _Information processing for remote sensing_,
World Scientific.
* Solaiman et al. (2000) Solaiman, B., Lecornu, L. and Roux, C. (2000). Edge detection through information fusion using fuzzy and evidential reasoning concepts, _Proceedings of SPIE_, vol. 4051, pp. 267-278.
* Wald (1999) Wald, L. (1999). Some terms of reference in data fusion, _IEEE Transactions on Geoscience and Remote Sensing_, vol. 37, no. 3, pp. 1190-1193.
* Zhang and Sun (2000) Zhang, Z. and Sun, S. (2000). Image fusion based on the self-organizing feature map neural networks, _Proceedings of SPIE_, vol. 4052, pp. 270-275.
Currently, more and more earth observation data are acquired by many kinds of sensors on different platforms, such as optical, microwave, infrared, and hyperspectral sensors. Because storing and transmitting these tremendous data volumes requires enormous resources, making the cost very high and the efficiency low, investigators are compelled to process them on-board as far as possible. So far, on-board data processing has been limited to simple preprocessing, such as correction, denoising, and compensation. Information extraction is not only the objective of earth observation; it also distills the data so that the amount that needs to be stored and transmitted is greatly reduced. Feature extraction, change detection, and object recognition executed on-board will provide an efficient information extraction system for earth observation.
# Robust Moving Objects Detection in Lidar Data Exploiting Visual Cues Gheorghii Postica\\({}^{1}\\) Andrea Romanoni\\({}^{1}\\) Matteo Matteucci\\({}^{1}\\) \\({}^{1}\\)Politecnico di Milano, Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB), Milano, Italy [email protected]@polimi.it (corresponding author) [email protected] ## I Introduction Moving object detection has been acknowledged to be a crucial step in many applications (e.g., autonomous driving, advanced driver assistance systems, robot navigation, video surveillance, etc.) where specific targets such as people, vehicles, or animals, have to be detected before operating more complex processes. In robotics the observer is moving while it operates in the environment, and it becomes hard to distinguish which object is moving with respect to the static scene due to egomotion effects; this affects all sensors used in mobile robotics being these laser range finders or cameras. An example of moving object detection in laser data is the work by Azim and Aycard [1]; in their work, the authors propose to store perceived point clouds in an octree-based occupancy grid, and look for inconsistencies between subsequent scans. Each voxel (octree cell) of the occupancy grid is classified as free or occupied through ray tracing; voxels classified as both occupied and free in different scans, are called as dynamic. Dynamic voxels are then clustered and filtered such that clusters whose bounding box shape differs significantly from fixed size boxes, are removed. In the authors scenario, fixed sized boxes represent cars, trucks and pedestrians, therefore, the approach was targeted at a specific set of objects. The former example is one of the few cases of laser-based moving objects detection algorithm. Indeed an extended laser-based literature focuses on the closely related, and possibly simpler, change detection problem [2, 3, 4, 5]. Change detection aims at detecting changes in an observed scene with respect to a previously stored map of the environment, e.g., to understand if an object appears or disappears. Conversely, in moving objects detection, the map is unknown a-priori and the moving objects can only partially disappear, i.e., between two consecutive observations a region of a moving object remains occupied, therefore appearing as a static item. Andreasson _et al._[3] and Nunes _et al._[6] represent laser scans through a set of distributions, respectively the Normal Distribution Transform and the Gaussian Mixture Model, to detect changes where the distributions differ significantly. Vieira _et al._[2] cluster the laser points into implicit volumes an through Boolean operators detect the regions of change. Xiao _et al._[5] model the physical scanning mechanism using Dempster-Shafer Theory (DST), and provide sound statistical tools to evaluate the occupancy of a scan and to compare the consistency among scans, i.e., to detect the moving object. Our contribution, in this paper, is inspired by the latter work. As far as cameras are concerned, classical image-based methods to detect moving objects in a video sequence are based on the difference between a model of the background, i.e., the static scene, and the current frame (see [7] and [8]). Such algorithms require the camera to be fixed. Some extensions are able to handle jittering or moving cameras by registering the images against the background model [9, 10, 11, 12]. 
However this class of algorithms needs information about the appearance of the background scene and in most cases, e.g., with a surveying vehicle, this assumption does not hold. Other approaches cluster optical flow vectors [13], or rely on deep learning [14]. Laser range finders and cameras have complementary features; the former are able to provide 3D 360-degree accurate measurements of the environment, the latter capture the appearance of the environment. Only few authors proposed hybrid approaches to combine laser data with the visual information provided by a camera in moving object detection. Premebida _et al._[15] proposed to join two classifiers based on laser camera features to detect pedestrians moving in front of the observer; in this case the scope was limited and the proposed algorithm would need a not trivial extension of the training to deal with general moving objects. Vallet _et al._[16] extended the change detection algorithm presented by Xiao _et al._ in [5] to detect moving objects. Moreover they exploit visual information by projecting into the image the laser 3D points and by segmenting the moving objects through a graph cut algorithm that takes into account laser label consistency, a smoothness term, and a penalization inthe labeling where the image shows edges. In this paper, we propose a novel hybrid approach to improve the accuracy of state-of-the-art laser-based moving objects estimation and speed up its computation thanks to a novel ground plane detection algorithm and octree representation; in addition we propose an image based validation test to diminish false positives detection. Section II introduces the novel laser-based moving objects detection method. In Section III we show how we improve its robustness against false positive detection by exploiting image information. In Section IV we illustrate the results of our algorithm over the KITTI [17] public dataset, and in Section V we provide some insights on future developments in the paper conclusion. ## II Laser-based Moving Objects Detection In the following we focus on the laser-based moving object detection setting in which we process a sequence of 3D point-clouds incrementally; as an example, consider a Velodyne lidar on the top of a car moving in a urban area with the aim of building a map of it. We keep a model of the static scene, initialized from the first point cloud, in the form of a 3D map and we update it by fusing subsequent point clouds after dynamic objects removal. The reference pipeline for this task is depicted in the upper part of Fig. 1 where, in the filtering block, we include also the novel ground plane removal algorithm. ### _Point cloud registration and filtering_ As a new point cloud is generated by the laser range finder, we align it to an existing map, initialized with the first scan, through the Generalized Iterative Closest Point (GICP) algorithm [18]. After point cloud alignment, we remove the points having a distance from the point cloud center greater than a given threshold \\(\\tau=30m\\), along any of the three main axis to neglect points too faraway from the sensor. Directly adding the aligned points to the map would lead to a very dense result, with possibly repeated points; instead, we compare the new point cloud with the last \\(W=10\\) point clouds we recently aggregated, and we add each point only if no other close point exists already in the map. This fills the gaps in point clouds and makes the global cloud free of duplicates1. 
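A minimal sketch of the cropping and duplicate-avoidance steps described above follows. The registration itself (GICP) is assumed to be performed by an external library beforehand; for brevity the check is made against the whole aggregated map rather than only the last W=10 clouds, and the 0.05 m duplicate radius, the box crop around the sensor origin, and the use of SciPy's KD-tree are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def crop_to_box(points, half_side=30.0):
    """Drop points farther than half_side (m) from the sensor along any axis."""
    return points[np.all(np.abs(points) <= half_side, axis=1)]

def add_new_points(map_points, scan_points, min_dist=0.05):
    """Append only the scan points with no map neighbour closer than min_dist (m),
    so the aggregated cloud does not accumulate near-duplicate samples."""
    if len(map_points) == 0:
        return scan_points.copy()
    dist, _ = cKDTree(map_points).query(scan_points, k=1)
    return np.vstack([map_points, scan_points[dist > min_dist]])
```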
Once the new points have been selected for addition, we further simplify the point cloud by ground plane removal. Footnote 1: The use of the term duplicate, in this context, is improper since it is very unlikely the lidar samples exactly the very same point, but, assuming the sampling beam has non negligible size, we have overlapping regions sampled repeatedly and this would induce an unnecessary oversampling of the environment. ### _Ground plane removal_ Subsequent laser measurements that lay on the ground plane convey redundant and negligible information about moving objects since the ground plane is expected to be mostly static. Therefore, as a further filtering step, we classify and remove ground plane points. A naive approach to do that would discard all points which are under a certain negative height from the laser sensor. A slightly better approach fits a horizontal plane, e.g., with RANSAC, and removes points which lay on it. The drawback of both approaches arises whenever we deal with a non-planar ground surface, as in Fig. 2, or errors in extrinsic sensor calibration. In this paper we propose to remove the unnecessary ground points by modeling the ground as a Markov Random Fields and applying belief propagation as it follows. First, we divide the point cloud according to a 2D grid of \\(0.4mx0.4m\\) tiles on the \\(XY\\) plane, where \\(X\\) represents the forward direction, and \\(Y\\) points to the left side of the moving vehicle. Starting from the cell at the origin of this grid, supposedly being ground, we move iteratively to the surrounding cells in order to propagate the ground height and to classify the tiles between ground and non-ground (see Fig. 3). Let consider the cell \\(C_{ij}\\) and the set \\(P_{ij}\\) of the points projecting on this cell. We define \\(\\hat{h}_{G}^{ij}=max\\left\\{h_{G}^{N}\\right\\}\\) where \\(N\\) is the set of neighboring cells, belonging to the inner ring, that propagate to \\(C_{ij}\\); then \\(H_{ij}=max\\left\\{P_{ij}^{z}\\right\\}\\) and \\(h_{ij}=min\\left\\{P_{ij}^{z}\\right\\}\\) are the maximum and minimum heights of the points in the cell (recall that coordinate \\(z\\) represents the height of a Fig. 1: Moving Object Detection process. Fig. 3: Schema of ground height propagation. Fig. 2: Non trivial example in which naive and plane fitting based ground removal fail. point). Given a maximum expected slope of \\(22\\%\\) of the cell dimension, which is about \\(s=0.09m\\); a cell is classified as ground plane if and only if: \\[H_{ij}-h_{ij}<s\\qquad\\text{and}\\qquad H_{ij}<\\hat{h}_{G}^{ij}+s. \\tag{1}\\] Then the current propagated ground height \\(h_{G}^{ij}\\) is \\(H_{ij}\\) if \\(C_{ij}\\) is ground, otherwise it is \\(\\hat{h}_{G}^{ij}\\). In Fig. 4 we illustrate an example of the ground points detected in a single scan. ### _Moving Points detection_ After registration, point filtering, and ground removal we apply the laser-based moving object detection algorithm, which borrows some ideas from [5] and [16]. From the former we borrow the use of Dempster-Shafer Theory (DST) for occupancy space representation and the Dempster combination rule for intra-scan evidence fusion; from the latter we borrow the idea of using previous and future scans. At first, we evaluate the occupancy of a point \\(P\\) belonging to scan \\(S_{\\text{k}}\\) induced by another scan \\(S_{\\text{i}}\\) by representing the occupancy space using DST. 
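Before turning to the occupancy computation, the ground-cell test of Eq. (1) above can be summarized with the following sketch. The grid bookkeeping and the breadth-first propagation order (which only approximates the ring-by-ring schedule of Fig. 3) are simplifications, and taking the maximum propagated height from a single visited neighbour rather than over the whole inner ring is an assumption of this sketch.

```python
import numpy as np
from collections import deque

def ground_cells(points, cell=0.4, s=0.09):
    """Classify XY grid cells as ground following Eq. (1).

    points: (N, 3) array with z as height; the sensor is at the origin.
    Returns a dict mapping (i, j) cell index -> True (ground) / False.
    """
    ij = np.floor(points[:, :2] / cell).astype(int)
    cells = {}
    for idx, key in enumerate(map(tuple, ij)):
        cells.setdefault(key, []).append(points[idx, 2])

    label, h_ground = {}, {}
    origin = (0, 0)
    if origin not in cells:
        return label
    queue = deque([origin])
    h_ground[origin] = max(cells[origin])
    label[origin] = True                      # the origin cell is taken as ground
    while queue:
        c = queue.popleft()
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                n = (c[0] + di, c[1] + dj)
                if n == c or n not in cells or n in label:
                    continue
                z = cells[n]
                H, h = max(z), min(z)
                hg = h_ground[c]              # height propagated from the inner ring
                is_ground = (H - h < s) and (H < hg + s)   # Eq. (1)
                label[n] = is_ground
                h_ground[n] = H if is_ground else hg
                queue.append(n)
    return label
```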
The space occupancy is represented using a set \\(X=\\{empty,occupied\\}\\); the DST operates on the power set of \\(X\\), i.e., \\(2^{X}=\\{\\emptyset\\},\\{empty\\},\\{occupied\\},\\{empty,occupied\\}\\}\\), where the subset \\(\\{empty,occupied\\}\\) represents the \\(unknown\\) state, i.e., the space not reached by the beams. DST defines a degree of belief \\(m(\\cdot)\\) for each subset: for the empty set it is 0 and for the other subsets they are within the range of \\([0,1]\\) and they add up to a total of 1. Let \\(e=m(\\{empty\\})\\), \\(o=m(\\{occupied\\})\\) and \\(u=m(\\{unknown\\})\\) be the degrees of belief for the three possible labels such that \\(e+o+u=1\\). Let \\(OQ\\) be a laser beam of \\(S_{i}\\), and \\(r=length(P^{\\prime}Q)\\), where \\(P^{\\prime}\\) is the projection of \\(P\\) on \\(OQ\\) (see Fig. 5); then we define the degree of belief \\(e_{r}\\) and \\(o_{r}\\) parametrized over \\(r\\) as it follows: \\[e_{r} =\\begin{cases}1&\\text{ if }Q\\text{ is behind }P^{\\prime}\\\\ 0&\\text{otherwise}\\end{cases}, \\tag{2}\\] \\[o_{r} =\\begin{cases}e^{-\\frac{r^{2}}{2}}&\\text{ if }P^{\\prime}\\text{ is behind }Q\\\\ 0&\\text{otherwise}\\end{cases}. \\tag{3}\\] The occupancy values at point \\(P\\) due to the beam \\(OQ\\) becomes then: \\[m(P,Q)=\\left\\{\\begin{array}{c}e\\\\ o\\\\ u\\end{array}\\right\\}=\\left\\{\\begin{array}{c}f_{\\theta}\\cdot e_{r}\\\\ o_{r}\\\\ 1{-}e-o\\end{array}\\right\\} \\tag{4}\\] where \\(f_{\\theta}=e^{-\\frac{\\theta^{2}}{2\\lambda_{\\theta}^{2}}}\\) is the rotation occupancy function, \\(\\lambda_{\\theta}\\) is the angular resolution of the sensor, and \\(\\theta\\) is the angle between rays \\(OP\\) and \\(OQ\\). To embed uncertainty in this framework, we propose to model noise as a Gaussian variable, then we define \\(\\sigma_{m}\\), \\(\\sigma_{r}\\) and \\(\\sigma_{\\theta}\\) as, respectively, measurement, registration, and angle standard deviations (with \\(\\sigma_{m}=0.05m\\), \\(\\sigma_{r}=0.15m\\), \\(\\sigma_{\\theta}=0.1\\pi rad\\)). By defining \\(g(m)=\\mathcal{N}(0,\\sigma_{m}^{2})\\), \\(g(r)=\\mathcal{N}(0,\\sigma_{r}^{2})\\), \\(F=g(m)\\otimes g(r)\\), we modify (4) as it follows: \\[m^{\\prime}(P,Q)=\\left\\{\\begin{array}{c}e^{\\prime}\\\\ o^{\\prime}\\\\ u^{\\prime}\\end{array}\\right\\}=\\left\\{\\begin{array}{c}f_{\\theta}\\cdot(e_{r} \\otimes F)\\\\ o_{r}\\otimes F\\\\ 1{-}e^{\\prime}-o^{\\prime}\\end{array}\\right\\} \\tag{5}\\] where \\(\\otimes\\) represents the convolution operator. We aggregate the occupancy induced by two beams through the Dempster combination rule applied to the occupancy induced by two beams with \\(m(P,Q_{1})=(e_{1},o_{1},u_{1})\\) and \\(m(P,Q_{2})=(e_{2},o_{2},u_{2})\\): \\[\\left\\{\\begin{array}{c}e_{1}\\\\ o_{1}\\\\ u_{1}\\end{array}\\right\\}\\oplus\\left\\{\\begin{array}{c}e_{2}\\\\ o_{2}\\\\ u_{2}\\end{array}\\right\\}=\\frac{1}{1-K}\\left\\{\\begin{array}{c}e_{1}\\cdot e_{2}+ e_{1}\\cdot u_{2}+u_{1}\\cdot e_{2}\\\\ o_{1}\\cdot o_{2}+o_{1}\\cdot u_{2}+u_{1}\\cdot o_{2}\\\\ u_{1}\\cdot u_{2}\\end{array}\\right\\} \\tag{6}\\] where \\(\\oplus\\) is the fusion operator defined by DST which is commutative and associative, and \\(K=o_{1}\\cdot e_{2}+e_{1}\\cdot o_{2}\\). From this, the overall occupancy at location \\(P\\) due to the \\(I\\) neighboring rays \\(Q_{i}\\) is then given by \\(m(P)=\\bigoplus_{i\\in I}m(P,Q_{i})\\). 
To classify a point \\(P\\) belonging to a scan \\(S_{\\text{k}}\\) as static or moving, we compute and combine its occupancy values due to previous and future2 scans \\(\\mathbb{S}=\\{S_{\\text{k-k}},\\dots,S_{\\text{k-1}},S_{\\text{k+1}},\\dots,S_{ \\text{k+k}}\\}\\). By comparing two scans having the degree of belief \\(m(P,Q_{1})\\) and \\(m(P,Q_{2})\\), a moving object corresponds to the not consistent degree of belief. To this extent, we compute: Footnote 2: We use a time window of \\(2K\\) scans around the current one, with \\(K=10\\) in our experiments, introducing a K scans delay in the whole pipeline. \\[Conf =e_{1}\\cdot o_{2}+o_{1}\\cdot e_{2} \\tag{7}\\] \\[Cons =e_{1}\\cdot e_{2}+o_{1}\\cdot o_{2}+u_{1}\\cdot u_{2}\\] \\[Unc =u_{1}\\cdot(e_{2}+o_{2})+u_{2}\\cdot(e_{1}+o_{1})\\] where \\(Conf\\) means conflicting, \\(Cons\\) is consistent and \\(Unc\\) uncertain. Moving points regions are those where \\(Conf>Cons\\) and \\(Conf>Unc\\). We have extended this procedure, Fig. 4: A scan (left) and the ground points to be removed (right). Fig. 5: Occupancy at point \\(P\\) computed with respect to the beam \\(OQ\\). originally proposed in [5] for 2 scans, to compare \\(2K\\) subsequent scans. To do so, we propose to change the occupancy computation procedure in order to make the classification more robust by a novel discretized version of the original DST approach we just explained. Let consider the most distant point \\(B\\) in each scan \\(S_{\\text{i}}\\in\\mathbb{S}\\), we approximate the occupancy values of \\(P\\) with respect to \\(S_{\\text{i}}\\) in the following way. Let's define \\[l=r_{sup}-\\delta r\\frac{||\\overrightarrow{OP}||}{||\\overrightarrow{OB}||}\\] where \\(r_{sup}\\) and \\(r_{inf}\\) are user defined upper and lower bounds and \\(\\delta r=r_{sup}-r_{inf}\\) (in our case \\(r_{sup}=0.8\\) and \\(r_{inf}=0.6\\)); \\(l\\) is used to define a belief stronger in the neighborhood of the sensor. Then, from the original occupancy \\(m(P,Q)=(e,o,u)\\) we derive the new occupancy of \\(P\\) for any \\(Q\\in S_{\\text{i}}\\): \\[e_{\\text{new}}=\\begin{cases}l&\\text{ if }e>o\\wedge e>u\\\\ 0&\\text{ otherwise}\\end{cases} \\tag{8}\\] \\[o_{\\text{new}}=\\begin{cases}l&\\text{ if }o>e\\wedge o>u\\\\ 0&\\text{ otherwise}\\end{cases} \\tag{9}\\] \\[u_{\\text{new}}=1-e_{\\text{new}}-o_{\\text{new}}. \\tag{10}\\] This way the occupancy value of each point is discretized based on its distance from the sample scan origin. With these discretized values we apply again the Dempster combination rule among the set \\(\\mathbb{S}\\) of scans, and the outcome of this combination defines the classification of the point: if its prevalent occupancy state is \\(empty\\) then the point is considered to be dynamic, otherwise it is a static point. Testing every point \\(P\\) from a scan \\(S_{\\text{k}}\\) against every neighboring ray in the \\(\\mathbb{S}\\) scans is a very expensive procedure. We avoid such expensive computations by indexing the \\(S_{\\text{k}}\\) point cloud with an octree data structure, with a resolution of 0.3m, and we perform the tests only for a small set of points in its nodes, i.e., in the neighborhood of the point \\(P\\). Since dynamic points in real world are not sparse, as they are part of a moving object, we assume they have neighboring dynamic points, and, if a small set of neighboring points are classified as dynamic, their neighbors should also be considered dynamic as well. 
Thus, to improve the performance of our algorithm, for each leaf of the octree, we perform the moving object detection test on a random subset of points (\\(\\frac{1}{6}\\) of the total amount) and if there are at least half of the tested points classified as dynamic, then we classify all the points in the leaf as such. Otherwise, the leaf is assumed to contain only static points. If the number of points in a leaf is small, i.e., less than \\(\\tau_{np}=6\\), they are sparse and they all get tested. By doing this way, we do not only improve the computational efficiency of the algorithm, but we also reduces the amount of misclassified dynamic points in static objects. ## III Image-based Moving Object Validation Even if the outcome of our laser-based moving object detection algorithm is often satisfying, some false positives may arise due to the noise in the laser measurements and the inaccuracy of the point cloud registration step. We propose thus two additional image-based validation tests to filter out false positives: both tests compare image patches around the projection of 3D points classified as moving objects respectively in the (color) images corresponding to the scans in \\(\\mathbb{S}\\) and in the depth-maps estimated from the point clouds themselves. If a candidate point passes these tests, then it is confirmed to be a moving point. In the proposed tests a 3D point \\(P_{i}\\) is projected into a pixel \\(p_{ik}\\) of the (color) image \\(I_{k}\\) and depth-map \\(D_{k}\\) by using the camera calibration matrix. A squared image patch \\(patch_{ik}\\) around this pixel is selected having a side length \\(b_{ik}\\), measured in number of pixels, according to the following formula: \\[b_{ik}=\\frac{h}{d_{ik}}f_{xy}, \\tag{11}\\] where the \\(h=0.15m\\) parameter refers to the patch height in the real world, \\(d_{ik}\\) is the distance of the point \\(P_{i}\\) from the camera \\(k\\) corresponding to pixel \\(p_{ik},\\) and \\(f_{xy}\\) is the focal length of the camera (see Fig. 6). Since each comparison needs two patches of the same size, one of the two patches, in turn, is resized. Before computing any similarity between images patches, these are checked for uniformity in their intensities. If their intensity standard deviation is above a certain threshold (0.02 in our case), then we compare the color patches, otherwise, the test would fail, so we compare the depth-maps. ### _Image patch test_ Once patches are extracted and resized, we test their similarity through Normalized Cross Correlation (NCC) on each color channel independently. Let \\(\\pi_{i}\\) and \\(\\pi_{j}\\) be the two patches and \\(NCC_{c}(u,v)\\) the NCC between two images at location \\((u,v)\\) computed for the \\(c\\)-th channel, then we define: \\[E_{c}(\\pi_{i},\\pi_{j})=1-\\max_{u,v}NCC_{c}(u,v). \\tag{12}\\] Two patches are considered to be similar if, for all cameras in \\(\\mathbb{S}\\), \\(E_{c}<\\tau\\) (in our case \\(\\tau=0.1\\)). Fig. 6: Example of two color patches compared via NCC. We opted for NCC measure with respect to the classical Sum of Squared Differences (SSD), since it is not affected by illumination changes issue, even thought it is computationally more expensive than SSD. Note that, if one of the patches has one of its channels flat, the formula above fails, as we get a division by zero problem for the NCC computation. Our system handles this case with the test on the depth-maps. 
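A minimal sketch of the patch-size rule of Eq. (11) and the per-channel similarity test of Eq. (12) is given below. For brevity the correlation is evaluated only at zero shift between two equally sized (already resized) patches, whereas the test above maximizes NCC over all offsets; the flat-patch case simply signals that the depth-map test should be used. These simplifications, and the NumPy implementation, are assumptions of the sketch.

```python
import numpy as np

def patch_side(h_world, depth, focal):
    """Patch side length in pixels, Eq. (11): b = (h / d) * f."""
    return int(round(h_world / depth * focal))

def ncc_zero_shift(a, b):
    """Zero-mean normalized cross-correlation of two equally sized patches."""
    a = a.astype(np.float64).ravel(); a -= a.mean()
    b = b.astype(np.float64).ravel(); b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0:                      # flat channel: the colour test is unreliable
        return None
    return float(a @ b / denom)

def patches_similar(p1, p2, tau=0.1):
    """Per-channel test E_c = 1 - NCC < tau, cf. Eq. (12)."""
    for c in range(p1.shape[2]):
        ncc = ncc_zero_shift(p1[..., c], p2[..., c])
        if ncc is None:
            return None                 # fall back to the depth-map test
        if 1.0 - ncc >= tau:
            return False
    return True
```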
### _Depth-map patch test_ When the color image patch test fails, we apply a comparison between the depth-maps extracted from the lidar data as it follows. We project lidar points into the \\(k\\)-th camera, by using the camera matrix, and we create a sparse depth-map having for each pixel the camera-to-point distance. Then we apply a disk shaped dilation such that we close all the gaps between close points. The resulting image is a rough estimation of the depth-map of the lidar points, nevertheless its computation is fast and the result has been sufficiently discriminative in our tests (Fig. 7 shows an example of a depth-map extracted from the laser data). Since the laser sensor is moving between the two scans, we need to correct the depth-maps for this movement before we can actually compare the selected patches. To perform this correction, we assume a small motion between two cameras and we use the following formula: \\[D_{l}^{\\prime}=D_{l}+\\frac{||\\mathbf{t}_{k}-\\mathbf{t}_{l}||}{d_{l,max}} \\tag{13}\\] where \\(D_{l}^{\\prime}\\) and \\(D_{l}\\) are respectively the new and old intensity values of depth-map \\(l\\), \\(\\mathbf{t}_{k}\\) and \\(\\mathbf{t}_{l}\\) are translation vectors, with respect to a global reference frame, for cameras \\(k\\) and \\(l\\) respectively, and \\(d_{l,max}\\) is the maximum depth distance from the camera \\(l\\). Then patches are extracted and resized the same way we do for (color) images, but in the depth-map comparison we use the SSD metric. This metric is suitable in this case because depth-maps are not affected by illumination changes. ## IV Experimental results To the best our knowledge a dataset with surveying camera having annotated moving objects is not available, so we tested the proposed algorithm with three sequences of the KITTI [17] dataset, which provides 1392x512px images, camera calibration information, and Velodyne HDL-64E point clouds, where we manually annotated the moving object regions on the images. To evaluate the accuracy of the classification, we project the 3D points of each point cloud on the corresponding image plane and we check if points classified as dynamic objects project into the manually annotated masks. An example of the comparison between the resulting dynamic points and ground truth mask is shown in Fig. 8. We run the tests on a Intel Core i7-3537u (2 Cores), 2GHz with 8GB of DDR3 RAM. We compare our approach with the state-of-the-art Vallet et al. [16] which is the approach closer to the proposed. In Table I we list the precision/recall results; our laser-based algorithm, by paying a very small decrease in recall, it increases significantly the precision of [16], and image validation refines the results further. In Table II we show that the proposed algorithm detects or partially detects a higher number of moving object. Here partially detected means that a subset of the moving points remains in the final global cloud. In Fig. 9 we report the Receiver Operating Characteristics (ROC) curve obtained with the 0095 sequence of the KITTI dataset: here we compare the ground truth mask against an image-based mask of moving objects obtained with a simple dilation of the points classified as moving and projected in the image plane, to have a result similar to classic background subtraction algorithms. 
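The depth-map fallback can be sketched in the same spirit: the small-motion correction follows Eq. (13) and the comparison uses SSD as stated above. The decision threshold on the SSD value is not specified in the text, so it is left to the caller; the NumPy implementation is again only an assumption.

```python
import numpy as np

def correct_depth(depth_l, t_k, t_l, d_max):
    """Shift depth-map l toward camera k for small motions, Eq. (13)."""
    return depth_l + np.linalg.norm(np.asarray(t_k) - np.asarray(t_l)) / d_max

def ssd(a, b):
    """Sum of squared differences between two equally sized depth patches."""
    d = a.astype(np.float64) - b.astype(np.float64)
    return float(np.sum(d * d))
```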
In the ROC curve, the highest the area subtended by the curve, the better the classifier performance; precision and recall reported in the plot are obtained by varying the \\(\\sigma_{r}\\) and \\(\\sigma_{\\theta}\\) parameters such that \\(0.1m<\\sigma_{r}<0.45m\\) and \\(0.0035rad<\\sigma_{\\theta}<0.0088rad\\). The results enforce the conclusion that the proposed approach performs better that the algorithm by Vallet et al. already in the lidar-based only version and shows the overall improvement with the image-based validation. The novel laser-based pipeline and the image validation procedure improve significantly the precision of the proposed algorithm, in particular the validation has been able to discard a huge number of false positive in moving objects detection. Discretization and diffusion of occupancy information lead to smoother and more precise results. Our algorithm outperforms the work by Vallet et al. also in terms of computing speed, thanks to the use of octree indexing and subsampling. Indeed, our algorithm takes, on average, 0.6 seconds per point cloud, while Vallet et al. approach takes 4.9 seconds. Timing does not include the image-based validation step, this has been implemented as a prototype in MATLAB and, at the current stage, it works off-line at 25 seconds per frame to estimate the depth-map and it requires 1-2 seconds to validate the moving points. Nevertheless, this step can be easily parallelized on GPU leading to real time computations. Fig. 8: Points classified as moving projected against the ground-truth mask. Fig. 7: Example of a depth-map computed from a lidar point cloud. ## V Conclusion and Future Work In this paper we propose a novel moving objects detection algorithm which improves the current state-of-the-art in laser-based approaches. The proposed approach relies on Demspter-Shafer Theory to model the occupancy induced by the data in a point cloud and to detect which point is dynamic or static. Moreover, we added a novel image validation step to remove false positive detections. Experiments show that ground plane removal and scan comparison discretization improve on precision with respect to current state-of-the-art with a speed-up in the execution thanks to the use of an efficient indexing data structure. As a future work we aim at applying the proposed approach to an existing urban reconstruction method [19] and refine it with [20] in order to obtain a 3D urban map without moving objects, while still improving on the speed up of the visual pipeline of our proposal. ## Acknowledgments This work has been supported by the POLISOCIAL Grant \"Maps for Easy Paths (MEP)\", the \"Interaction between Driver Road Infrastructure Vehicle and Environment (I.DRIVE)\" Inter-department Laboratory Grant from Politecnico di Milano and Nvidia who kindly support our research through the Hardware Grant Program. ## References * [1] A. Azim and O. Aycard, \"Detection, classification and tracking of moving objects in a 3d environment,\" in _Intelligent Vehicles Symposium (IV), 2012 IEEE_. IEEE, 2012, pp. 802-807. * [2] A. W. Vieira, P. L. Drews, and M. F. Campos, \"Spatial density patterns for efficient change detection in 3d environment for autonomous surveillance robots,\" _Automation Science and Engineering, IEEE Transactions on_, vol. 11, no. 3, pp. 766-774, 2014. * [3] H. Andreasson, M. Magnusson, and A. Lilienthal, \"Has somethong changed here\" autonomous difference detection for security patrol robots,\" in _Intelligent Robots and Systems, 2007. IROS 2007. 
IEEE/RSJ International Conf. on_. IEEE, 2007, pp. 3429-3435. * [4] P. Drews, S. da Silva Filho, L. Marcolino, and P. Nunez, \"Fast and adaptive 3d change detection algorithm for autonomous robots based on gaussian mixture models,\" in _Robotics and Automation (ICRA), 2013 IEEE International Conf. on_. IEEE, 2013, pp. 4685-4690. * [5] W. Xiao, B. Vallet, and N. Paparoditis, \"Change detection in 3d point clouds acquired by a mobile mapping system,\" _ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences_, vol. 1, no. 2, pp. 331-336, 2013. * [6] P. Nunez, P. Drews Jr, A. Bandera, R. Rocha, M. Campos, and J. Dias, \"Change detection in 3d environments based on gaussian mixture model and robust structural matching for autonomous robotic applications,\" in _Intelligent Robots and Systems (IROS), 2010 IEEE/RSJ International Conf. on_. IEEE, 2010, pp. 2633-2638. * [7] M. Piccardi, \"Background subtraction techniques: a review,\" in _Systems, man and cybernetics, 2004 IEEE international conference on_, vol. 4. IEEE, 2004, pp. 3099-3104. * [8] A. Sobral and A. Vacavant, \"A comprehensive review of background subtraction algorithms evaluated with synthetic and real videos.\" _Computer Vision and Image Understanding_, vol. 122, p. 421, 2014. * [9] P. Azzari, L. D. Stefano, and A. Bevilacqua, \"An effective real-time mosaicing algorithm apt to detect motion through background subtraction using a ptz camera,\" in _Advanced Video and Signal Based Surveillance, 2005. AVSS 2005. IEEE Conf. on_. IEEE, 2005, pp. 511-516. * [10] A. Romanoni, M. Matteucci, and D. G. Sorrenti, \"Background subtraction by combining temporal and spatio-temporal histograms in the presence of camera movement,\" _Machine vision and applications_, vol. 25, no. 6, pp. 1573-1584, 2014. * [11] S. W. Kim, K. Yun, K. M. Yi, S. J. Kim, and J. Y. Choi, \"Detection of moving objects with a moving camera using non-panoramic background model,\" _Machine vision and applications_, vol. 24, no. 5, pp. 1015-1028, 2013. * [12] M. Shakeri and H. Zhang, \"Detection of small moving objects using a moving camera,\" in _Intelligent Robots and Systems (IROS 2014), 2014 IEEE/RSJ International Conf. on_. IEEE, 2014, pp. 2777-2782. * [13] I. Markovic, F. Chaumette, and I. Petrovic, \"Moving object detection, tracking and following using an omnidirectional camera on a mobile robot,\" in _Robotics and Automation (ICRA), 2014 IEEE International Conf. on_. IEEE, 2014, pp. 5630-5635. * [14] T.-H. Lin and C.-C. Wang, \"Deep learning of spatio-temporal features with geometric-based moving point detection for motion segmentation,\" in _Robotics and Automation (ICRA), 2014 IEEE International Conf. on_. IEEE, 2014, pp. 3058-3065. * [15] C. Premebida, O. Ludwig, and U. Nunes, \"Lidar and vision-based pedestrian detection system,\" _Journal of Field Robotics_, vol. 26, no. 9, pp. 696-711, 2009. * [16] B. Vallet, W. Xiao, and M. Bredif, \"Extracting mobile objects in images using a velodyne lidar point cloud,\" _ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences_, vol. 1, pp. 247-253, 2015. * [17] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, \"Vision meets robotics: The kitti dataset,\" _International Journal of Robotics Research (IJRR)_, 2013. * [18] A. Segal, D. Haehnel, and S. Thrun, \"Generalized-icp.\" in _Robotics: Science and Systems_, vol. 2, no. 4, 2009. * [19] A. Romanoni and M. 
Matteucci, \"Incremental reconstruction of urban environments by edge-points delaunay triangulation,\" in _Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on_, Sept 2015, pp. 4473-4479. * [20] A. Romanoni, A. Delaunoy, M. Pollefeys, and M. Matteucci, \"Automatic 3d reconstruction of manifold meshes via delaunay triangulation and mesh sweeping,\" in _2016 IEEE Winter Conference on Applications of Computer Vision (WACV)_, March 2016, pp. 1-8. Fig. 9: ROC of [16] and our algorithms.
Detecting moving objects in dynamic scenes from sequences of lidar scans is an important task in object tracking, mapping, localization, and navigation. Many works focus on changes detection in previously observed scenes, while a very limited amount of literature addresses moving objects detection. The state-of-the-art method exploits Dempster-Shafer Theory to evaluate the occupancy of a lidar scan and to discriminate points belonging to the static scene from moving ones. In this paper we improve both speed and accuracy of this method by discretizing the occupancy representation, and by removing false positives through visual cues. Many false positives lying on the ground plane are also removed thanks to a novel ground plane removal algorithm. Efficiency is improved through an octree indexing strategy. Experimental evaluation against the KITTI public dataset shows the effectiveness of our approach, both qualitatively and quantitatively with respect to the state-of-the-art.
# Unsupervised Sparse Dirichlet-Net for Hyperspectral Image Super-Resolution Ying Qu1*, Hairong Qi1, Chiman Kwan2 1The University of Tennessee, Knoxville, TN 2 Applied Research LLC, Rockville, MD [email protected] [email protected] [email protected] ## 1 Introduction Hyperspectral image (HSI) analysis has become a thriving and active research area in computer vision with a wide range of applications [7, 5], including, for example, object recognition and classification [24, 12, 53, 31], tracking [44, 13, 42, 43], environmental monitoring [40, 35], and change detection [25, 6]. Compared to multispectral images (MSI with around 10 spectral bands) or conventional color images (RGB with 3 bands), HSI collects hundreds of contiguous bands which provide finer details of spectral signature of different materials. However, its spatial resolution becomes significantly lower than MSI or RGB due to hardware limitations [20, 3]. On the contrary, although MSI or RGB has high spatial resolution, their spectral resolution is relatively low. Very often, to yield better recognition and analysis results, images with both high spectral and spatial resolution are desired [46]. A natural way to generate such images is to fuse hyperspectral images with multispectral images or conventional color images. This procedure is referred to as _hyperspectral image super-resolution (HSI-SR)_[3, 27, 8] as shown in Fig. 1. The problem of HSI-SR originates from _multispectral pan-sharpening (MSI-PAN)_ in the remote sensing field, where the spatial resolution of MSI is further improved by a high-resolution panchromatic image (PAN). Note that, in general, resolution refers to the spatial resolution. Usually, MSI has much higher resolution than HSI, but PAN has even higher resolution than MSI. We use LR to denote low spatial resolution and HR for high spatial resolution. There are roughly two groups of MSI-PAN methods, namely, the component substitution (CS) [41, 38, 2] and the multi-resolution analysis (MRA) based approaches [1]. Although MSI-PAN has been well developed through decades of innovations [41, 29, 54], they cannot be readily adopted to solve the HSI-SR problems. On one hand, the amount of spectral information to be preserved for HSI-SR is much higher than that of MSI-PAN, thus it is easier to introduce spectral distortion, _i.e_., the output image does not preserve Figure 1: General procedure of HSI-SR. the accurate spectral information [29, 51, 3, 8]. On the other hand, HSI possesses much lower resolution than that of MSI, making it more challenging to improve the spatial resolution. There have been few methods specifically designed for HSI-SR, including mainly Bayesian based and matrix factorization based approaches [29, 51, 10]. The unique framework of Bayesian offers a convenient way to regularize the solution space of HR HSI by employing a proper prior distribution such as Gaussian [47]. Simoes _et al_. proposed HySure [39], which applied a total variation regularization to smooth the image. Akhtar _et al_. [3] introduced a non-parametric Bayesian strategy to extract spectral dictionary and spatial coefficients from LR HSI and HR MSI, respectively. Matrix factorization based approaches have been actively studied recently [20, 52, 10, 27, 45]. Yokoya _et al_. [52] decomposed both the LR HSI and HR MSI alternatively to achieve the optimal non-negative bases and coefficients that used to generate HR HSI. Lanaras _et al_. [27] further improved the fusion results by introducing a sparse constraint. 
However, most existing HSI-SR approaches generally assume that the downsampling function between the spatial coefficients of HR HSI and LR HSI are known beforehand. This assumption is not always true due to the distortions caused by both the sensors and complex environmental conditions [3]. HSI-SR is also closely related to the natural image super-resolution (SR) problem, which has been extensively studied and achieved excellent performance through the state-of-the-art _deep learning_[9, 30, 37, 21, 22, 28, 26, 16]. The main principle of SR is to learn a mapping function between LR images and HR images in a supervised fashion. Natural image SR methods usually work on up to \\(4\\times\\) upscaling. There have been three attempts to address the MSI-PAN problem with deep learning where the mapping function is learned using different frameworks including tied-weights denoising/ autoencoder [19], SRCNN [32], and deep residual network [16, 48]. These deep learning based methods, including natural image SR and MSI-PAN are all supervised, making their adoption on HSI-SR a challenge due to two reasons. First, they are designed to find an end-to-end mapping function between the LR images and HR images under the assumption that the mapping function is the same for different images. However, the mapping function may not be the same for images acquired with different sensors. Even for the data collected from the same sensor, the mapping function for different spectral bands may not be the same. Thus the assumption may cause severe spectral distortion. Second, training a mapping function is a supervised solution which requires a large dataset, the down-sampling function, and the availability of the HR HSI, that are not realistic for HSI. In this paper, we propose an _unsupervised_ network structure to address the challenges of HSI-SR. To the best of our knowledge, this is the first effort to solving the HSI-SR problem with deep learning in an unsupervised fashion. The novelty of this work is three-fold. First, the network extracts both the spectral and spatial information from LR HSI and HR MSI with two deep learning networks which share the same decoder weights, as illustrated in Fig. 2. Second, in order to incorporate the two physical constraints of HSI and MSI data representation, i.e., sum-to-one and sparsity, the network encourages the representations from both modalities to follow a Dirichlet distribution which naturally incorporates the sum-to-one property. Since each pixel of the image only consists of a few spectral bases, the sparsity of the representations is guaranteed by minimizing their entropy function. Third, to address the challenge of spectral distortion, instead of adopting the down-sampling function (as an estimated mapping function) to relate the representations of the two modalities, we minimize the angular difference of these representations such that they have similar patterns. In this way, the spectral distortion is largely reduced. The proposed method is referred to as uSDN. 
## 2 Problem Formulation Given the LR HSI, \\(\\bar{\\mathbf{Y}}_{h}\\in\\mathbb{R}^{m\\times n\\times L}\\), where \\(m\\), \\(n\\) and \\(L\\) denote the width, height and number of spectral bands of the HSI, respectively, and the corresponding HR MSI, \\(\\bar{\\mathbf{Y}}_{m}\\in\\mathbb{R}^{M\\times N\\times l}\\), where \\(M\\), \\(N\\) and \\(l\\) denote the width, height and number of spectral bands of the MSI, respectively, the goal is to estimate the HR HSI, \\(\\bar{\\mathbf{X}}\\in\\mathbb{R}^{M\\times N\\times L}\\), with both high spatial and spectral resolution. In general, MSI has much higher spatial resolution than HSI, _i.e_., \\(M\\gg m\\), \\(N\\gg n\\), and HSI has much higher spectral resolution than MSI, _i.e_., \\(L\\gg l\\). To facilitate the subsequent processing, we unfold the 3D images into 2D matrices, _i.e_., each row of the 2D matrix denotes the spectral reflectance of a given pixel. The unfolded matrices are written as \\(\\mathbf{Y}_{h}\\in\\mathbb{R}^{mn\\times L}\\), \\(\\mathbf{Y}_{m}\\in\\mathbb{R}^{MN\\times l}\\) and \\(\\mathbf{X}\\in\\mathbb{R}^{MN\\times L}\\). This is illustrated in Fig. 1. Assuming that each row of \\(\\mathbf{Y}_{h}\\) is a linear combination of \\(c\\) basis vectors (or spectral signatures), as expressed in Eq. (1), where \\(\\mathbf{\\Phi}_{h}\\in\\mathbb{R}^{c\\times L}\\) and each row of which denotes the spectral basis that preserves the spectral information and \\(\\mathbf{S}_{h}\\in\\mathbb{R}^{mn\\times c}\\) is the corresponding proportional coefficients (referred to as _representations_ in deep learning). Since the coefficients indicate how the spectral bases are mixed at specific spatial locations, they preserve the spatial structure of HSI. Similarly, \\(\\mathbf{Y}_{m}\\) can be expressed as Eq. (2), where \\(\\mathbf{\\Phi}_{m}\\in\\mathbb{R}^{c\\times l}\\) and each row of which indicates the spectral basis of MSI. \\(\\mathcal{R}\\in\\mathbb{R}^{L\\times l}\\) is the transformation matrix given as a prior from the sensor [20, 52, 47, 29, 39, 46, 27, 8], which describes the relationship between HSI and MSI bases. With \\(\\mathbf{\\Phi}_{h}\\in\\mathbb{R}^{c\\times L}\\) carrying the high spectral information and \\(\\mathbf{S}_{m}\\in\\mathbb{R}^{MN\\times c}\\) carrying the high spatial information, the desired HR HSI, \\(\\mathbf{X}\\), is generated by Eq. (3). See Fig. 1. \\[\\mathbf{Y}_{h}=\\mathbf{S}_{h}\\mathbf{\\Phi}_{h}, \\tag{1}\\] \\[\\mathbf{Y}_{m}=\\mathbf{S}_{m}\\mathbf{\\Phi}_{m},\\quad\\mathbf{\\Phi} _{m}=\\mathbf{\\Phi}_{h}\\mathcal{R}\\] (2) \\[\\mathbf{X}=\\mathbf{S}_{m}\\mathbf{\\Phi}_{h}. \\tag{3}\\] The problem of HSI-SR can be described mathematically as \\(P(\\mathbf{X}|\\mathbf{Y}_{h},\\mathbf{Y}_{m})\\). Since the ground truth \\(\\mathbf{X}\\) is not available, the problem should be solved in an unsupervised fashion. The key to addressing this problem is to take advantage of the shared information, _i.e._, \\(\\mathbf{\\Phi}_{h}\\in\\mathbb{R}^{c\\times L}\\), to extract desired high spectral bases \\(\\mathbf{\\Phi}_{h}\\) and spatial representations \\(\\mathbf{S}_{m}\\) from two different modalities. In addition, three unique requirements of HSI-SR need to be given special consideration. First, in representing HSI or MSI as a linear combination of spectral signatures, the representation vectors should be non-negative and sum-to-one. 
That is, \\(\\sum_{j=1}^{c}s_{ij}=1\\), where \\(\\mathbf{s}_{i}\\) is the row vector of either \\(\\mathbf{S}_{h}\\) or \\(\\mathbf{S}_{m}\\) [20, 52, 10, 27, 45]. Second, due to the fact that each pixel of the image only consists of a few spectral bases, the representations should be sparse. Third, spectral distortion should be largely reduced in the process in order to preserve the spectral information of HR HSI while gaining spatial resolution. ## 3 Proposed Approach We propose an unsupervised architecture as shown in Fig. 2. We highlight its three unique structural characteristics here. First, the architecture consists of two deep networks, for the representation learning of the LR HSI and HR MSI, respectively. These two networks share the same decoder weights, enabling the extraction of both spectral and spatial information from the two modalities in an unsupervised fashion. Second, in order to satisfy the sum-to-one constraint of the representations, both \\(\\mathbf{S}_{h}\\) and \\(\\mathbf{S}_{m}\\) are encouraged to follow a Dirichlet distribution, in which the sum-to-one property is naturally incorporated in the network, together with a further sparsity constraint. Third, to address the challenge of spectral distortion, the representations of the two modalities are encouraged to have similar patterns by minimizing their angular difference. ### Network Architecture As shown in Fig. 2, the network reconstructs both the LR HSI \\(\\mathbf{Y}_{h}\\) and the HR MSI \\(\\mathbf{Y}_{m}\\) in a coupled fashion. Taking the LR HSI network (the top network) as an example, the network consists of an encoder \\(\\text{E}_{h}(\\theta_{he})\\), which maps the input data to low-dimensional representations (latent variables on the bottleneck hidden layer), _i.e._, \\(p_{\\theta_{he}}(\\mathbf{S}_{h}|\\mathbf{Y}_{h})\\), and a decoder \\(\\text{D}_{h}(\\theta_{hd})\\), which reconstructs the data from the representations, _i.e._, \\(p_{\\theta_{hd}}(\\hat{\\mathbf{Y}}_{h}|\\mathbf{S}_{h})\\). Both the encoder and decoder are constructed with multiple fully-connected layers. Note that the bottleneck hidden layer \\(\\mathbf{S}_{h}\\) behaves as the representation layer that reflects the spatial information, while the weights \\(\\theta_{hd}\\) of the decoder \\(\\text{D}_{h}(\\theta_{hd})\\) serve as \\(\\mathbf{\\Phi}_{h}\\) in Eq. (1). This correspondence is further elaborated below. The HSI is reconstructed by \\(\\hat{\\mathbf{Y}}_{h}=f_{k}(\\mathbf{W}_{dk}f_{k-1}(\\cdots(f_{1}(\\mathbf{S}_{h}\\mathbf{W}_{d1}+b_{1})\\cdots)+b_{k-1})+b_{k})\\), where \\(\\mathbf{W}_{dk}\\) denotes the weights in the \\(k\\)th layer. To extract the spectral basis from the LR HSI, the latent variables of the representation layer \\(\\mathbf{S}_{h}\\) act as the proportional coefficients, where \\(\\mathbf{S}_{h}\\) follows a Dirichlet distribution with the sum-to-one property naturally incorporated. Supposing that the activation functions are identity functions and there is no bias in the decoder, we have \\(\\theta_{hd}=\\mathbf{W}_{1}\\mathbf{W}_{2}\\cdots\\mathbf{W}_{k}\\). That is, the weights \\(\\theta_{hd}\\) of the decoder correspond to the spectral basis \\(\\mathbf{\\Phi}_{h}\\) in Eq. (1), and \\(\\mathbf{\\Phi}_{h}=\\theta_{hd}\\). In this way, \\(\\mathbf{\\Phi}_{h}\\) preserves the spectral information of the LR HSI, and the latent variables \\(\\mathbf{S}_{h}\\) preserve the spatial information effectively. 
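To make Eqs. (1)-(3) and the role of the shared decoder concrete, the following is a minimal NumPy sketch of the linear mixing model. The sizes and the randomly drawn factors are purely illustrative assumptions; in the actual network \\(\\mathbf{\\Phi}_{h}\\) is realized by the bias-free, identity-activation decoder weights rather than drawn at random, and \\(\\mathcal{R}\\) comes from the sensor.

```python
import numpy as np

m, n, L = 16, 16, 31       # LR HSI: width, height, number of spectral bands (illustrative)
M, N, l = 512, 512, 3      # HR MSI: width, height, number of spectral bands (illustrative)
c = 10                     # number of spectral basis vectors

def unfold(cube):
    """Unfold a (width, height, bands) cube so that each row holds one pixel's spectrum,
    mirroring the 3D-to-2D unfolding used in the problem formulation."""
    w, h, b = cube.shape
    return cube.reshape(w * h, b)

# Hypothetical factors of the mixing model; rows of S_h and S_m are non-negative and sum to one.
Phi_h = np.random.rand(c, L)                         # spectral basis of the HSI
R     = np.random.rand(L, l)                         # spectral response / transformation matrix (given prior)
S_h   = np.random.dirichlet(np.ones(c), size=m * n)  # LR spatial coefficients
S_m   = np.random.dirichlet(np.ones(c), size=M * N)  # HR spatial coefficients

Y_h = S_h @ Phi_h          # Eq. (1): LR HSI,         shape (mn, L)
Y_m = S_m @ (Phi_h @ R)    # Eq. (2): HR MSI,         shape (MN, l), since Phi_m = Phi_h R
X   = S_m @ Phi_h          # Eq. (3): desired HR HSI, shape (MN, L)
```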
Similarly, the bottom network reconstructs the HR MSI with an encoder \\(\\text{E}_{m}(\\theta_{me})\\) and a decoder \\(\\text{D}_{m}(\\theta_{md})\\). However, since \\(l\\leq c\\leq L\\), _i.e_., the number of latent variables, \\(c\\), is much larger than the number of input nodes, \\(l\\), the MSI network is very unstable and hard to train. On the other hand, because the spectral basis of the HR MSI can be transformed from that of the LR HSI, which possesses more spectral information, the decoder of the MSI network is designed to share its weights with that of the HSI network in the sense that \\(\\theta_{md}=\\mathbf{\\Phi}_{m}=\\theta_{hd}\\mathcal{R}=\\mathbf{\\Phi}_{h}\\mathcal{R}\\). The reconstructed HR MSI can then be obtained by \\(\\hat{\\mathbf{Y}}_{m}=\\mathbf{S}_{m}\\mathbf{\\Phi}_{h}\\mathcal{R}\\). In this way, only the encoder \\(\\text{E}_{m}(\\theta_{me})\\) of the MSI network is updated during the optimization, and the HR spatial information \\(\\mathbf{S}_{m}\\) is extracted from the MSI. Eventually, the desired HR HSI is generated directly by \\(\\mathbf{X}=\\mathbf{S}_{m}\\mathbf{\\Phi}_{h}\\). Note that the dashed lines in the figure show the path of backpropagation, which will be elaborated in Sec. 3.4. Figure 2: Simplified architecture of uSDN. Figure 3: Details of the encoder nets. ### Sparse Dirichlet-Net with Dense Connectivity To extract stable spectral information, we need to enforce the proportional coefficients \\(\\mathbf{S}=(\\mathbf{s}_{1},\\mathbf{s}_{2},\\cdots,\\mathbf{s}_{i},\\cdots,\\mathbf{s}_{p})^{T}\\) of each pixel to sum to one [52, 49, 27], _i.e_., \\(\\sum_{j=1}^{c}s_{ij}=1\\). Without loss of generality, \\(\\mathbf{S}\\) represents either \\(\\mathbf{S}_{h}\\) with \\(p=mn\\) or \\(\\mathbf{S}_{m}\\) with \\(p=MN\\). In addition, due to the fact that only a few spectral bases actually contribute to the linear combination of the spectral reflectance of each pixel, the coefficients should also be sparse. In the proposed architecture, the latent variables (or representations) of the hidden layer \\(\\mathbf{S}_{h}\\) or \\(\\mathbf{S}_{m}\\) correspond to the proportional coefficients in Eqs. (1) and (2). To naturally incorporate the sum-to-one property, the representations are encouraged to follow a Dirichlet distribution, which is accomplished with a stick-breaking process as illustrated in Fig. 3. Furthermore, an entropy function is adopted to reinforce the sparsity of the representations. The stick-breaking process was first proposed by Sethuraman [36] back in 1994. It is used to generate random vectors \\(\\mathbf{s}\\) following a Dirichlet distribution. The process can be illustrated as breaking a unit-length stick into \\(c\\) pieces, the lengths of which follow a Dirichlet distribution. Assuming that the generated vector is denoted as \\(\\mathbf{s}=(s_{1},\\cdots,s_{j},\\cdots,s_{c})\\), we have \\(0\\leq s_{j}\\leq 1\\), and the variables in the vector sum to one, _i.e_., \\(\\sum_{j=1}^{c}s_{j}=1\\). Mathematically [36], a single variable \\(s_{j}\\) is defined as \\[s_{j}=\\left\\{\\begin{array}{ll}v_{1}&\\text{for}\\quad j=1\\\\ v_{j}\\prod_{o<j}(1-v_{o})&\\text{for}\\quad j>1,\\end{array}\\right. \\tag{4}\\] where \\(v_{o}\\) is drawn from a Beta distribution, _i.e_., \\(v_{o}\\sim\\text{Beta}(u,\\alpha,\\beta)\\). Nalisnick and Smyth successfully coupled the expressiveness of generative networks with a Bayesian nonparametric model through the stick-breaking process [33]. 
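As a quick illustration, Eq. (4) amounts to a cumulative product and can be sketched in a few lines of PyTorch; the batch-of-pixels layout below is an assumption. Note that with a finite number of pieces the weights sum to \\(1-\\prod_{o}(1-v_{o})\\), i.e., the sum-to-one property holds up to the leftover piece of the stick, which training drives towards zero.

```python
import torch

def stick_breaking(v):
    """Eq. (4): map draws v of shape (batch, c) to weights s of the same shape,
    with s_1 = v_1 and s_j = v_j * prod_{o<j} (1 - v_o) for j > 1."""
    remainder = torch.cumprod(1.0 - v, dim=-1)             # prod_{o<=j} (1 - v_o)
    shifted = torch.cat([torch.ones_like(v[..., :1]),      # empty product for j = 1
                         remainder[..., :-1]], dim=-1)     # prod_{o<j} (1 - v_o)
    return v * shifted
```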
The network uses a Kumaraswamy distribution [23] as an approximate posterior, which takes in samples from a randomly generated uniform distribution during the training procedure. Different from the generative network, we aim to find shared representations that better reconstruct the data. Therefore, the weights of the network should be changed according to the input data instead of a randomly generated distribution. It has been proved that when \\(v_{o}\\sim\\text{Beta}(u,1,\\beta)\\), \\(\\mathbf{s}\\) follows a Dirichlet distribution. Since it is difficult to draw samples directly from the Beta distribution, we draw samples from the inverse transform of the Kumaraswamy distribution, as shown in Eq. (5), which is equivalent to the Beta distribution when \\(\\alpha=1\\) or \\(\\beta=1\\), \\[\\text{kuma}(u,\\alpha,\\beta)=\\alpha\\beta u^{\\alpha-1}(1-u^{\\alpha})^{\\beta-1} \\tag{5}\\] where \\(\\alpha>0\\), \\(\\beta>0\\) and \\(u\\in(0,1)\\). The benefit of the Kumaraswamy distribution is that it has a closed-form CDF, whose inverse transform is defined as \\[v_{o}\\sim(1-(1-u^{\\frac{1}{\\beta}})^{\\frac{1}{\\alpha}}). \\tag{6}\\] With \\(\\alpha=1\\), the parameters \\(u\\) and \\(\\beta\\) are learned through the network as illustrated in Fig. 3. Because \\(\\beta>0\\), a softplus is adopted as the activation function [11] at the \\(\\beta\\) layer. Similarly, a sigmoid [15] is used to map \\(u\\) into the \\((0,1)\\) range at the \\(\\mathbf{u}\\) layer. To avoid vanishing gradients and to increase the representation power of the proposed method, the encoder of the network is densely connected, _i.e_., each layer is fully connected with all its subsequent layers [17]. To further increase the variability of \\(u\\) and \\(\\beta\\) (theoretically, we want the learned \\(u\\) and \\(\\beta\\) to be able to take any value within their range), instead of concatenating all the preceding layers, the input of the \\(k\\)th layer is the summation of all the preceding layers \\(x_{0},\\ x_{1},\\ \\ldots,\\ x_{k-1}\\) with their own weights, _i.e_., \\(\\mathbf{W}_{0}x_{0}+\\mathbf{W}_{1}x_{1}+\\cdots+\\mathbf{W}_{k-1}x_{k-1}\\). In this way, fewer layers are required to learn the optimal representations. Although the stick-breaking structure encourages the representations to follow a Dirichlet distribution, it does not guarantee the sparsity of the representations. In addition, the widely used \\(l_{1}\\) regularization or Kullback-Leibler divergence [14] will not encourage the representation layer to be sparse either, because they promote sparsity by reducing the mean activation value, _i.e_., the mean of the representation layer. However, due to the stick-breaking structure, the mean of \\(\\mathbf{S}_{h}\\) or \\(\\mathbf{S}_{m}\\) is almost one. Therefore, we introduce a generalized Shannon entropy function [18] to reinforce the sparsity of the representation layer, which works effectively even with the sum-to-one constraint. The entropy function was first proposed in the compressive sensing field to solve the signal recovery problem. It is defined as \\[\\mathcal{H}_{p}(\\mathbf{s})=-\\sum_{j=1}^{N}\\frac{|s_{j}|^{p}}{\\|\\mathbf{s}\\|_{p}^{p}}\\log\\frac{|s_{j}|^{p}}{\\|\\mathbf{s}\\|_{p}^{p}}. \\tag{7}\\] Compared to the more popular Shannon entropy, the entropy function of Eq. (7) decreases monotonically when the data become sparse. To illustrate the effect, we show the phenomena with 2D variables in Fig. 4. Figure 4: Shannon entropy (L) and Shannon entropy function (R). 
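Both ingredients are short to implement. The sketch below assumes \\(u\\) and \\(\\beta\\) are the sigmoid- and softplus-activated encoder outputs described above, and implements the \\(p=1\\) case of Eq. (7) that is used later.

```python
import torch

def kumaraswamy_inverse(u, beta, alpha=1.0):
    """Eq. (6): inverse-transform draw v from a Kumaraswamy(alpha, beta) distribution,
    with u in (0, 1) from the u-layer and beta > 0 from the softplus beta-layer."""
    return 1.0 - (1.0 - u.pow(1.0 / beta)).pow(1.0 / alpha)

def entropy_function(s, p=1, eps=1e-12):
    """Eq. (7): generalized Shannon entropy function along the last dimension;
    unlike the Shannon entropy, it keeps decreasing as s becomes sparser,
    even when the entries of s are constrained to sum to one."""
    w = s.abs().pow(p)
    w = w / (w.sum(dim=-1, keepdim=True) + eps)
    return -(w * torch.log(w + eps)).sum(dim=-1)
```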
Shannon entropy is small when both \\(x_{1}\\) and \\(x_{2}\\) are small or large. But for Shannon entropy function, the local minimum only occurs at the boundaries of the quadrants. This nice property guarantees the sparsity of arbitrary data even the data are with the sum-to-one constraint. Due to the stick-breaking structure, the latent variables at the representation layer are positive. We choose \\(p=1\\) which is more efficient and will encourage the variables to be sparse. ### Angle Similarity Extracting spatial information from HR MSI is quite challenging and easy to introduce spectral distortion in the subsequent HR HSI results. The main cause to this problem is that the number of the representations \\(c\\) (number of nodes in the representation layer) is much larger than the dimension of the MSI, _i.e_., \\(c\\gg l\\). Previous researchers assume the down-sampling function is available _a-priori_ to build a relationship between the representations of HSI and MSI. However, the down-sampling function is usually unknown for real applications. Therefore, instead of taking the down-sampling function as a prior, we encourage the representations \\(\\mathbf{S}_{h}\\) and \\(\\mathbf{S}_{m}\\) of the two networks following a similar pattern to prevent spectral distortion. And such similarity is measured by the angular difference between the two representations. Spectral angle mapper (SAM) is employed to measure this angular difference. SAM is a spectral evaluation method in remote sensing [29, 51, 34], which measures the angular difference between the estimated image and the ground truth image. The lower the SAM score, the smaller the spectral angle difference, and the more similar the two representations. Since the HSI and MSI networks share the same decoder weights, the representations should have similar angle in order to generate high quality image with less spectral distortion. Besides encouraging the representation layer to follow a sparse Dirichlet distribution, we further reduce the angular difference of the representations of HSI and MSI during the optimization procedure. In the network, representations \\(\\mathbf{S}_{h}\\in\\mathbb{R}^{mn\\times c}\\) and \\(\\mathbf{S}_{m}\\in\\mathbb{R}^{MN\\times c}\\), from two different modalities have different dimensions. To minimize the angular difference, we increase the size of the low-dimensional \\(\\mathbf{S}_{h}\\) by duplicating its values at each pixel to its nearest neighborhood. Then the duplicated representations \\(\\tilde{\\mathbf{S}_{h}}\\in\\mathbb{R}^{MN\\times c}\\) have the same dimension as \\(\\mathbf{S}_{m}\\). With vectors of equal size, the angular difference is defined as \\[\\mathcal{A}(\\tilde{\\mathbf{S}_{h}},\\mathbf{S}_{m})=\\frac{1}{MN}\\sum_{i=1}^{MN }\\arccos(\\frac{\\tilde{\\mathbf{s}_{h}}^{i}\\cdot\\mathbf{s}_{m}^{i}}{\\|\\tilde{ \\mathbf{s}_{h}}^{i}\\|_{2}\\|\\mathbf{s}_{m}^{i}\\|_{2}}) \\tag{8}\\] To map the range of the angle within \\((0,1)\\), Eq. (8) is divided by the circular constant \\(\\pi\\). \\[\\mathcal{J}(\\tilde{\\mathbf{S}_{h}},\\mathbf{S}_{m})=\\frac{\\mathcal{A}(\\tilde{ \\mathbf{S}_{h}},\\mathbf{S}_{m})}{\\pi} \\tag{9}\\] ### Optimization and Implementation Details To prevent over-fitting, we applied an \\(l_{2}\\) norm on the decoder weights. 
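Before the full objectives are assembled, the angular-difference term of Eqs. (8)-(9) can be sketched as follows; `S_h_up` stands for the LR representations after the nearest-neighbour duplication described above, so both inputs have \\(MN\\) rows.

```python
import math
import torch
import torch.nn.functional as F

def angle_similarity_loss(S_h_up, S_m, eps=1e-7):
    """Eqs. (8)-(9): mean spectral angle between matching rows of the two
    representation matrices, divided by pi so the loss lies in (0, 1)."""
    cos = F.cosine_similarity(S_h_up, S_m, dim=-1)
    angles = torch.acos(cos.clamp(-1.0 + eps, 1.0 - eps))
    return angles.mean() / math.pi
```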
The objective functions of the proposed network architecture can then be expressed as: \\[\\mathcal{L}(\\theta_{he},\\theta_{hd})=\\frac{1}{2}\\|\\mathbf{Y}_{h}(\\theta_{he},\\theta_{hd})-\\hat{\\mathbf{Y}}_{h}(\\theta_{he},\\theta_{hd})\\|_{F}^{2}+\\lambda\\mathcal{H}_{1}(\\mathbf{S}_{h}(\\theta_{he}))+\\mu\\|\\theta_{hd}\\|_{F}^{2}, \\tag{10}\\] \\[\\mathcal{L}(\\theta_{me})=\\frac{1}{2}\\|\\mathbf{Y}_{m}(\\theta_{me},\\theta_{hd})-\\hat{\\mathbf{Y}}_{m}(\\theta_{me},\\theta_{hd})\\|_{F}^{2}+\\lambda\\mathcal{H}_{1}(\\mathbf{S}_{m}(\\theta_{me})), \\tag{11}\\] \\[\\mathcal{L}(\\theta_{me})=\\mathcal{J}(\\tilde{\\mathbf{S}}_{h}(\\theta_{he}),\\mathbf{S}_{m}(\\theta_{me})), \\tag{12}\\] where \\(\\lambda\\) and \\(\\mu\\) are parameters that balance the trade-off between the reconstruction error and, respectively, the sparsity loss and the weight loss. The proposed architecture consists of two sparse Dirichlet-Nets, which extract the spectral information \\(\\Phi_{h}\\) from the HSI and the spatial information \\(\\mathbf{S}_{m}\\) from the MSI. The network is optimized with back-propagation following the procedure described below, also illustrated in Fig. 2 with the dashed lines. Step 1: Since the decoder weights \\(\\theta_{hd}\\) of the HSI network preserve the spectral information \\(\\Phi_{h}\\), we first update the HSI network, given the objective function in Eq. (10), to find the optimal \\(\\theta_{hd}\\). To prevent over-fitting, an \\(l_{2}\\) norm is applied on the decoder of the HSI network. Step 2: The estimated decoder weights \\(\\theta_{hd}\\) are fixed and shared with the decoder of the MSI network, and the encoder weights \\(\\theta_{me}\\) of the MSI network are updated given the objective function in Eq. (11). Step 3: To reduce spectral distortion, every 10 iterations we minimize the angular difference between the representations of the two modalities given the objective function in Eq. (12). Since we already have \\(\\theta_{he}\\) from the first step, only the encoder \\(\\theta_{me}\\) of the MSI network is updated during this optimization. For all the experiments, both the input and output of the HSI network have 31 nodes, representing the number of spectral bands in the data. The numbers of densely-connected layers and nodes of the encoder are shown in Table 1. There are 3 layers in the HSI network and each layer contains 10 nodes. The MSI network has 5 layers, with the number of nodes increasing from \\(4\\) to \\(10\\). The \\(\\mathbf{v}_{h}\\)/\\(\\mathbf{v}_{m}\\) are drawn with Eq. (6) given \\(\\mathbf{u}_{h}\\)/\\(\\mathbf{u}_{m}\\) and \\(\\beta_{h}\\)/\\(\\beta_{m}\\), which are learned by back-propagation. Both \\(\\beta_{h}\\) and \\(\\beta_{m}\\) have only one node, denoting the distribution parameter of each pixel. The representation layers \\(\\mathbf{S}_{h}\\) and \\(\\mathbf{S}_{m}\\), each with 10 nodes, are constructed with \\(\\mathbf{v}_{h}\\) and \\(\\mathbf{v}_{m}\\), respectively, according to Eq. (4). The two networks share a decoder with 2 layers, each of which has 10 nodes. Since different images have different spectral bases and representations, the network is trained on each pair of LR HSI and HR MSI to reconstruct each image accurately. ## 4 Experiments and Results ### Datasets and Experimental Setup The proposed uSDN has been thoroughly evaluated on two widely used benchmark datasets, CAVE [50] and Harvard [7]. The CAVE dataset consists of 32 HR HSI images, each of which has a dimension of \\(512\\times 512\\) with 31 spectral bands. 
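Returning briefly to the three optimization steps above, one alternating update could look like the sketch below. Here `hsi_net` and `msi_net` are hypothetical modules returning `(representations, reconstruction)`, `upsample` stands for the nearest-neighbour duplication of the LR representations, `opt_h`/`opt_m` hold the HSI parameters and the MSI encoder parameters respectively, and `entropy_function` / `angle_similarity_loss` are the helpers sketched earlier; none of these names come from the authors' released code.

```python
def train_step(hsi_net, msi_net, Y_h, Y_m, opt_h, opt_m, step, lam=1e-6, mu=1e-6):
    # Step 1: update the HSI encoder/decoder with Eq. (10).
    S_h, Y_h_hat = hsi_net(Y_h)
    loss_h = (0.5 * (Y_h - Y_h_hat).pow(2).sum()
              + lam * entropy_function(S_h).sum()
              + mu * sum(w.pow(2).sum() for w in hsi_net.decoder.parameters()))
    opt_h.zero_grad(); loss_h.backward(); opt_h.step()

    # Step 2: the decoder weights (Phi_h R) are shared and kept fixed; only the MSI
    # encoder parameters in opt_m are updated with Eq. (11).
    S_m, Y_m_hat = msi_net(Y_m)
    loss_m = 0.5 * (Y_m - Y_m_hat).pow(2).sum() + lam * entropy_function(S_m).sum()
    opt_m.zero_grad(); loss_m.backward(); opt_m.step()

    # Step 3: every 10 iterations, shrink the angular difference of Eq. (12),
    # again updating only the MSI encoder.
    if step % 10 == 0:
        S_h_ref, _ = hsi_net(Y_h)
        S_m_new, _ = msi_net(Y_m)
        loss_a = angle_similarity_loss(upsample(S_h_ref.detach()), S_m_new)
        opt_m.zero_grad(); loss_a.backward(); opt_m.step()
```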
These spectral images are taken within the wavelength range 400 \\(\\sim\\) 700nm with an interval of 10 nm. The Harvard dataset includes 50 HR HSI images with both indoor and outdoor scenes. The dimension of the images in this dataset is \\(1392\\times 1040\\), with 31 bands taken at an interval of 10nm within the wavelength range of 420 \\(\\sim\\) 720nm. Note that for this dataset, the top left corner of size \\(1024\\times 1024\\times 31\\) is cropped as the HR HSI. For the two benchmark datasets, the LR HSI \\(\\mathbf{Y}_{h}\\) is obtained by averaging the HR HSI over \\(32\\times 32\\) disjoint blocks. The HR MSI images with 3 bands are generated by multiplying the HR HSI with the given spectral response matrix \\(\\mathcal{R}\\) of Nikon D700. All the images are normalized between 0 and 1. Note that the CAVE dataset is in general considered a more challenging set than Harvard since images in Harvard usually contain more smooth reflections; and since the images have higher spatial resolution, pixels within close vicinity usually have similar spectral reflectance. Hence, even the images are down-sampled by the \\(32\\times 32\\) kernel, most spectral information is still preserved in the LR HSI. The results of the proposed method on individual images are compared with seven state-of-the-art methods, _i.e_., CS based [2], MRA based [1], CNMF [52], Bayesian Sparse (BS) [47], HySure [39], Lanaras's 15 (CSU) [27], and Akhtar's 15 (BSR) [3] that belong to different categories of approaches described in Sec. 1. These methods also reported the best performance [29, 3, 27], with the original code made available by the authors. We also directly list results [4] from Akhtar's 16 (HBPG) since the code is not available. The average results on the complete dataset is also reported to evaluate the robustness of the proposed method. For quantitative comparison, the root mean squared error (RMSE) and spectral angle mapper (SAM) are applied to evaluate the reconstruction error and the amount of spectral distortion, respectively. ### Experimental Results Tables 2 and 3 show the experimental results of 7 groups of images from the CAVE and Harvard datasets, which are commonly benchmarked by existing literature [20, 3, 4]. We observe that traditional CS-based and MRA-based methods suffer from spectral distortion, thus could not achieve competitive performance. The Bayesian based approach, BS [47], fails due to the fact that it assumes the representation \\(\\mathbf{S}_{m}\\) follows a Gaussian distribution, which is not always true. However, the Bayesian non-parametric based method BSR [3] outperforms BS because it estimates the spectra through non-parametric learning. The matrix-based approaches,CNMF [52] and CSU [27], are not as competitive on the CAVE dataset due to their predefined down-sampling function, although they perform much better on the Harvard dataset. We also observe that some methods like Hysure can achieve better RMSE, but worse SAM scores, that is because they cannot preserve the spectral information properly which has caused large spectral distortion. Based on the experiments, the proposed uSDN powered by the unique sparse Dirichlet-net outperforms all of the other approaches in terms of both RMSE and SAM, and it is quite stable for different types of input images. BSR estimates the representations separately from the spectral bases, although it can achieve good RMSE scores, its SAM scores are not promising. 
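The simulated inputs and the two scores used in this comparison are straightforward to reproduce; the sketch below assumes the HR HSI is stored as an (H, W, L) array normalized to [0, 1] and that `R` is the \\(L\\times l\\) spectral response matrix mentioned above (the exact scaling behind the reported score values is not restated here).

```python
import numpy as np

def make_lr_hsi(X_hr, block=32):
    """Simulate the LR HSI by averaging the HR HSI over disjoint block x block windows."""
    H, W, L = X_hr.shape
    return X_hr.reshape(H // block, block, W // block, block, L).mean(axis=(1, 3))

def make_hr_msi(X_hr, R):
    """Simulate the HR MSI by projecting the HR HSI onto l bands with the response matrix R."""
    return X_hr @ R

def rmse(X_est, X_ref):
    return float(np.sqrt(np.mean((X_est - X_ref) ** 2)))

def sam_degrees(X_est, X_ref, eps=1e-12):
    """Mean per-pixel spectral angle (in degrees) between two (pixels, bands) arrays."""
    num = (X_est * X_ref).sum(axis=1)
    den = np.linalg.norm(X_est, axis=1) * np.linalg.norm(X_ref, axis=1) + eps
    return float(np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0))).mean())
```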
While CSU relates the representations with a predefined down-sampling function, and thus achieves better results on the Harvard dataset, it generates worse results on the CAVE dataset. Both methods may cause spectral distortion in different scenarios. The proposed approach consistently outperforms the other methods in terms of both RMSE and SAM as reported in Table 4. We also make two further observations. First, since the Harvard dataset is less challenging than the CAVE dataset, the improvement on the former is not as apparent as that on the latter. This, on the other hand, demonstrates that uSDN can handle challenging scenarios much better than state-of-the-art. Second, uSDN is very effective in preserving the spectral signature of the reconstructed HR HSI, showing much improved performance especially on SAM on CAVE. The main reason that contributes to the success of the proposed approach is that it relates the representations \\(\\mathbf{S}_{h}\\) and \\(\\mathbf{S}_{m}\\) with statistics and angular difference, _i.e_., both representations are encouraged to follow a Dirichlet distribution, and their angular difference is enforced to be small. In this way, both the reconstruction error and spectral distortion are effectively reduced. Since the representation is enforced to be sparse Dirichlet over each pixel, not the entire image, the proposed structure is capable of recovering different pixels individually. And the total number of the recovered samples, that equals the number of pixels, is large. This demonstrates the representation capacity of the proposed structure. To visualize the results, we show the reconstructed samples from CAVE and Harvard taken at wavelengths 460, 540, and 670 nm in Fig. 5. The first through fourth columns show the LR images, reconstructed images from our method, ground truth images, and the absolute difference between the images at the second and third columns, respectively. We also compare the proposed method with CSU and BSR on the challenge dataset CAVE and show the results in Fig. 6. The effectiveness of the proposed method can be readily observed from the difference images, where the proposed approach is able to preserve both the spectral and spatial information. **Ablation Study:** Taking the 'pompom' image from the CAVE dataset as an example, we further evaluate 1) the necessity of enforcing the representation to follow sparse Dirichlet and 2) the usage of angle similarity loss. Fig. 7 illustrates the RMSE of the reconstructed HR HSI using 4 different network structures, _i.e_., autoencoder (AE) without any constraints, AE with the sparsity constraint (SAE), a simple Dirichlet-Net without any constraints, and the proposed sparse Dirichlet-Net. We observe that the adoption of Dirichlet-Net significantly reduces RMSE as compared to AE and SAE; and the proposed sparse Dirichlet-Net reduces RMSE even further, especially as the number of iterations increases. Fig. 9 shows the summation of elements in \\(\\mathbf{s}_{j}\\) averaged over all pixels in the image, where \\begin{table} \\begin{tabular}{l|c c|c c} \\hline \\multirow{2}{*}{Methods} & \\multicolumn{2}{c|}{CAVE} & \\multicolumn{2}{c}{Harvard} \\\\ \\cline{2-5} & RMSE & SAM & RMSE & SAM \\\\ \\hline CSU[27] & 9.96 & 15.63 & 3.37 & 5.35 \\\\ BSR[3] & 5.29 & 13.63 & 2.61 & 4.46 \\\\ uSDN & **4.09** & **6.95** & **1.78** & **4.05** \\\\ \\hline \\end{tabular} \\end{table} Table 4: The average RMSE and SAM scores over complete benchmarked datasets. 
Figure 5: Reconstructed images from the CAVE (top) and Harvard dataset (bottom) at wavelength 460, 540 and 620 nm. First column: LR images (\\(16\\times 16\\)). Second: estimated images (\\(512\\times 512\\)). Third: ground truth images. Fourth: absolute difference. \\begin{table} \\begin{tabular}{l|c c c c|c c|c} \\hline \\multirow{2}{*}{Methods} & \\multicolumn{4}{c|}{CAVE} & \\multicolumn{2}{c}{Harvard} \\\\ \\cline{2-7} & balloon & CD & cloth & photospool & img1 imgb5 \\\\ \\hline CS & 19 & 17 & 17 & 82 & 48 & 15 & 14 \\\\ MRA & 12 & 9 & 11 & 14 & 15 & 13 & 15 \\\\ BS & 11 & 16 & 10 & 18 & 24 & 17 & 18 \\\\ Hysure & 18 & 24 & 18 & 19 & 38 & 18 & 19 \\\\ BSR & 11.9 & 17.9 & 6 & 14 & 16 & 1.9 & 3.4 \\\\ CNMF & 10 & 9 & 7 & 11 & 20 & 10 & 13 \\\\ CSU & 8.9 & 25 & 12.6 & 10 & 17 & 1.8 & 2.8 \\\\ uSDN & **4.7** & **10** & **4.8** & **5.4** & **13** & **1.6** & **1.7** \\\\ \\hline HBPG & 7.6 & 10.6 & 5.0 & – & – & 2.5 & 2.1 \\\\ \\hline \\end{tabular} \\end{table} Table 3: Benchmarked results in terms of SAM. we observe that representations \\(\\mathbf{s}_{j}\\) are sum-to-one almost surely after around 300 iterations with Dirichlet-Net. Fig. 8 demonstrates the spectral angle mapper (SAM) of the reconstructed HR HSI using 4 different loss functions when the MSI network is updated, _i.e_., only with angle similarity loss, only with reconstruction loss, reconstruction loss with MSE similarity, and the proposed reconstruction loss with angle similarity, respectively. We observe that reconstruction loss significantly stabilizes/regulates the convergence process; and reconstruction loss with angle similarity presents the lowest SAM and fastest convergence speed. **Convergence Study:** During the optimization, both the HSI and MSI networks converge smoothly as shown in Fig. 10. The MSI network has a little bit fluctuation caused by the angular difference which is minimized every 10 iterations between the representations of two modalities. **Effect of Free Parameters:** There are two free parameters in the algorithm design, i.e., \\(\\mu\\) for the decoder weight loss and \\(\\lambda\\) for the sparsity control, as shown in Eq. (10). We keep \\(\\mu=1e^{-6}\\) during the experiments. Fig. 11 shows how RMSE is decreasing when we increase \\(\\lambda\\) from \\(2\\times 10^{-7}\\) to \\(1\\times 10^{-6}\\). We set \\(\\lambda=1\\times 10^{-6}\\) in the experiments. **Visualizing \\(\\mathbf{S}_{m}\\) and \\(\\mathbf{\\Phi}_{h}\\):** The proposed structure is based on the assumption that the LR HSI, HR MSI, and HR HSI can be formulated as a linear combination of their corresponding spectral bases. Here, we would like to provide visualization results of the spatial representation, \\(\\mathbf{S}_{m}\\), its sparsity property, and the spectral bases, \\(\\mathbf{\\Phi}_{h}\\). We use the pompom image from the CAVE dataset as the testing image to generate all the visualization. In order to visually see if the linear combination assumption is valid or not, we project the estimated bases, \\(\\mathbf{\\Phi}_{m}\\) into a 3D space using singular value decomposition. In Fig. 12, we observe that the learned bases from CSU is a little bit far away from the data, while the bases from BSR cluster with each other and do not cover all the data. The bases from our method circumscribe the entire data, indicating a more effective representation of the data. We also study if \\(\\mathbf{S}_{m}\\) is indeed sparse or not. The histogram of the learned representations \\(\\mathbf{S}_{m}\\) is shown in Fig. 
13, where the sparsity is clearly evident. ## 5 Conclusion We proposed an unsupervised sparse Dirichelet-Net (uSDN) to solve the problem of hyperspectral image super-resolution (HSI-SR). To the best of our knowledge, this is the first effort to solving the problem of HSI-SR in an unsupervised fashion. The network extracts the spectral basis from LR HSI with rich spectral information and spatial representations from HR MSI with high spatial information through a shared decoder. The representations from two modalities are encouraged to follow a sparse Dirichlet distribution. In addition, the angular difference of two representations is minimized during the optimization to reduce spectral distortion. Extensive experiments on two benchmark datasets demonstrate the superiority of the proposed approach over state-of-the-art. **Acknowledgement:** This work was supported in part by NASA NNX12CB05C and NNX16CP38P. Figure 12: Spectral basis. Figure 13: Histogram of \\(\\mathbf{S}_{m}\\). ## References * [1] B. Aiazzi, L. Alparone, S. Baronti, A. Garzelli, and M. Selva. Mtf-tailored multiscale fusion of high-resolution ms and pan imagery. _Photogrammetric Engineering & Remote Sensing_, 72(5):591-596, 2006. * [2] B. Aiazzi, S. Baronti, and M. Selva. Improving component substitution pansharpening through multivariate regression of ms+ pan data. _IEEE Transactions on Geoscience and Remote Sensing_, 45(10), 2007. * [3] N. Akhtar, F. Shafait, and A. Mian. Bayesian sparse representation for hyperspectral image super resolution. _The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 3631-3640, 2015. * [4] N. Akhtar, F. Shafait, and A. Mian. Hierarchical beta process with gaussian process prior for hyperspectral image super resolution. _European Conference on Computer Vision_, pages 103-120, 2016. * [5] J. M. Bioucas-Dias, A. Plaza, N. Dobigeon, M. Parente, Q. Du, P. Gader, and J. Chanussot. Hyperspectral unmixing overview: Geometrical, statistical, and sparse regression-based approaches. _Selected Topics in Applied Earth Observations and Remote Sensing, IEEE Journal of_, 5(2), 2012. * [6] M. Borengasser, W. S. Hungate, and R. Watkins. _Hyperspectral remote sensing: principles and applications_. 2007. * [7] A. Chakrabarti and T. Zickler. Statistics of real-world hyperspectral images. _The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 193-200, 2011. * [8] R. Dian, L. Fang, and S. Li. Hyperspectral image super-resolution via non-local sparse tensor factorization. _The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 5344-5353, 2017. * [9] C. Dong, C. C. Loy, K. He, and X. Tang. Image super-resolution using deep convolutional networks. _IEEE transactions on pattern analysis and machine intelligence_, 38(2):295-307, 2016. * [10] W. Dong, F. Fu, G. Shi, X. Cao, J. Wu, G. Li, and X. Li. Hyperspectral image super-resolution via non-negative structured sparse representation. _IEEE Transactions on Image Processing_, 25(5):2337-2352, 2016. * [11] C. Dugas, Y. Bengio, F. Belisle, C. Nadeau, and R. Garcia. Incorporating second-order functional knowledge for better option pricing. _Advances in neural information processing systems_, pages 472-478, 2001. * [12] M. Fauvel, Y. Tarabalka, J. A. Benediktsson, J. Chanussot, and J. C. Tilton. Advances in spectral-spatial classification of hyperspectral images. _Proceedings of the IEEE_, 101(3):652-675, 2013. * [13] Y. Fu, Y. Zheng, I. Sato, and Y. Sato. 
Exploiting spectral-spatial correlation for coded hyperspectral image restoration. _The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, June 2016. * [14] I. Goodfellow, Y. Bengio, and A. Courville. _Deep Learning_. MIT Press, 2016. [http://www.deeplearningbook.org](http://www.deeplearningbook.org). * [15] J. Han and C. Moraga. The influence of the sigmoid function parameters on the speed of backpropagation learning. _From Natural to Artificial Neural Computation_, pages 195-201, 1995. * [16] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. _The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 770-778, 2016. * [17] G. Huang, Z. Liu, K. Q. Weinberger, and L. van der Maaten. Densely connected convolutional networks. _arXiv preprint arXiv:1608.06993_, 2016. * [18] S. Huang and T. D. Tran. Sparse signal recovery via generalized entropy functions minimization. _arXiv preprint arXiv:1703.10556_, 2017. * [19] W. Huang, L. Xiao, Z. Wei, H. Liu, and S. Tang. A new pan-sharpening method with deep neural networks. _IEEE Geoscience and Remote Sensing Letters_, 12(5):1037-1041, 2015. * [20] R. Kawakami, Y. Matsushita, J. Wright, M. Ben-Ezra, Y.-W. Tai, and K. Ikeuchi. High-resolution hyperspectral imaging via matrix factorization. _The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 2329-2336, 2011. * [21] J. Kim, J. Kwon Lee, and K. Mu Lee. Accurate image super-resolution using very deep convolutional networks. _The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, June 2016. * [22] J. Kim, J. Kwon Lee, and K. Mu Lee. Deeply-recursive convolutional network for image super-resolution. _The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, June 2016. * [23] P. Kumaraswamy. A generalized probability density function for double-bounded random processes. _Journal of Hydrology_, 46(1-2):79-88, 1980. * [24] C. Kwan, B. Ayhan, G. Chen, J. Wang, B. Ji, and C.-I. Chang. A novel approach for spectral unmixing, classification, and concentration estimation of chemical and biological agents. _IEEE Transactions on Geoscience and Remote Sensing_, 44(2):409-419, 2006. * [25] H. Kwon and N. M. Nasrabadi. Kernel matched signal detectors for hyperspectral target detection. _IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)_, pages 6-6, 2005. * [26] W.-S. Lai, J.-B. Huang, N. Ahuja, and M.-H. Yang. Deep laplacian pyramid networks for fast and accurate super-resolution. _The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, June 2017. * [27] C. Lanaras, E. Baltsavias, and K. Schindler. Hyperspectral super-resolution by coupled spectral unmixing. _The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 3586-3594, 2015. * [28] C. Ledig, L. Theis, F. Huszar, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, et al. Photo-realistic single image super-resolution using a generative adversarial network. _arXiv preprint arXiv:1609.04802_, 2016. * [29] L. Loncan, L. B. de Almeida, J. M. Bioucas-Dias, X. Brittet, J. Chanussot, N. Dobigeon, S. Fabre, W. Liao, G. A. Licciardi, M. Simoes, et al. Hyperspectral pansharpening:a review. _IEEE Geoscience and Remote Sensing Magazine_, 3(3), 2015. * [30] J. Lu and D. Forsyth. Sparse depth super resolution. _The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, June 2015. * [31] E. Maggiori, G. Charpiat, Y. Tarabalka, and P. 
Alliez. Recurrent neural networks to correct satellite image classification maps. _IEEE Transactions on Geoscience and Remote Sensing_, 55(9):4962-4971, Sept 2017. * [32] G. Masi, D. Cozzolino, L. Verdoliva, and G. Scarpa. Pansharpening by convolutional neural networks. _Remote Sensing_, 8(7):594, 2016. * [33] E. Nalisnick and P. Smyth. Deep generative models with stick-breaking priors. _ICML_, 2017. * [34] S. Ozkan, B. Kaya, E. Esen, and G. B. Akar. Endnet: Sparse autoencoder network for endmember extraction and hyperspectral unmixing. _arXiv preprint arXiv:1708.01894_, 2017. * [35] A. Plaza, Q. Du, J. M. Bioucas-Dias, X. Jia, and F. A. Kruse. Foreword to the special issue on spectral unmixing of remotely sensed data. _IEEE Transactions on Geoscience and Remote Sensing_, 49(11):4103-4110, 2011. * [36] J. Sethuraman. A constructive definition of dirichlet priors. _Statistica sinica_, pages 639-650, 1994. * [37] W. Shi, J. Caballero, F. Huszar, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. _The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, June 2016. * [38] S. C. Sides, J. A. Anderson, et al. Comparison of three different methods to merge multiresolution and multispectral data- landsat tm and spot panchromatic. _Photogrammetric Engineering and remote sensing_, 57(3):295-303, 1991. * [39] M. Simoes, J. Bioucas-Dias, L. B. Almeida, and J. Chanussot. A convex formulation for hyperspectral image super-resolution via subspace-based regularization. _IEEE Transactions on Geoscience and Remote Sensing_, 53(6):3373-3388, 2015. * [40] L. H. Spangler, L. M. Dobeck, K. S. Repasky, A. R. Nehrir, S. D. Humphries, J. L. Barr, C. J. Keith, J. A. Shaw, J. H. Rouse, A. B. Cunningham, et al. A shallow subsurface controlled release facility in bozeman, montana, usa, for testing near surface co2 detection techniques and transport models. _Environmental Earth Sciences_, 60(2):227-239, 2010. * [41] C. Thomas, T. Ranchin, L. Wald, and J. Chanussost. Synthesis of multispectral images to high spatial resolution: A critical review of fusion methods based on remote sensing physics. _IEEE Transactions on Geoscience and Remote Sensing_, 46(5):1301-1312, 2008. * [42] B. Uzkent, M. J. Hoffman, and A. Vodacek. Real-time vehicle tracking in aerial video using hyperspectral features. _The IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)_, June 2016. * [43] B. Uzkent, A. Rangnekar, and M. Hoffman. Aerial vehicle tracking by adaptive fusion of hyperspectral likelihood maps. _The IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)_, July 2017. * [44] H. Van Nguyen, A. Banerjee, and R. Chellappa. Tracking via object reflectance using a hyperspectral video camera. _IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)_, pages 44-51, 2010. * [45] M. A. Veganzones, M. Simoes, G. Licciardi, N. Yokoya, J. M. Bioucas-Dias, and J. Chanussot. Hyperspectral super-resolution of locally low rank images from complementary multisource data. _IEEE Transactions on Image Processing_, 25(1):274-288, 2016. * [46] G. Vivone, L. Alparone, J. Chanussot, M. Dalla Mura, A. Garzelli, G. A. Licciardi, R. Restaino, and L. Wald. A critical comparison among pansharpening algorithms. _IEEE Transactions on Geoscience and Remote Sensing_, 53(5), 2015. * [47] Q. Wei, J. Bioucas-Dias, N. Dobigeon, and J.-Y. Tourneret. 
Hyperspectral and multispectral image fusion based on a sparse representation. _IEEE Transactions on Geoscience and Remote Sensing_, 53(7), 2015. * [48] Y. Wei, Q. Yuan, H. Shen, and L. Zhang. Boosting the accuracy of multi-spectral image pan-sharpening by learning a deep residual network. _arXiv preprint arXiv:1705.07536_, 2017. * [49] E. Wycoff, T.-H. Chan, K. Jia, W.-K. Ma, and Y. Ma. A non-negative sparse promoting algorithm for high resolution hyperspectral imaging. _Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on_, pages 1409-1413, 2013. * [50] F. Yasuma, T. Mitsunaga, D. Iso, and S. K. Nayar. Generalized assorted pixel camera: postcapture control of resolution, dynamic range, and spectrum. _IEEE Transactions on Image Processing_, 19(9):2241-2253, 2010. * [51] N. Yokoya, C. Grohnfeldt, and J. Chanussot. Hyperspectral and multispectral data fusion: A comparative review of the recent literature. _IEEE Geoscience and Remote Sensing Magazine_, 5(2):29-56, June 2017. * [52] N. Yokoya, T. Yairi, and A. Iwasaki. Coupled nonnegative matrix factorization unmixing for hyperspectral and multispectral data fusion. _IEEE Transactions on Geoscience and Remote Sensing_, 50(2):528-537, 2012. * [53] F. Zhang, B. Du, and L. Zhang. Scene classification via a gradient boosting random convolutional network framework. _IEEE Transactions on Geoscience and Remote Sensing_, 54(3):1793-1802, 2016. * [54] J. Zhou, C. Kwan, and B. Budavari. Hyperspectral image super-resolution: a hybrid color mapping approach. _Journal of Applied Remote Sensing_, 10(3):035024-035024, 2016.
In many computer vision applications, obtaining images of high resolution in both the spatial and spectral domains are equally important. However, due to hardware limitations, one can only expect to acquire images of high resolution in either the spatial or spectral domains. This paper focuses on hyperspectral image super-resolution (HSI-SR), where a hyperspectral image (HSI) with low spatial resolution (LR) but high spectral resolution is fused with a multispectral image (MSI) with high spatial resolution (HR) but low spectral resolution to obtain HR HSI. Existing deep learning-based solutions are all supervised that would need a large training set and the availability of HR HSI, which is unrealistic. Here, we make the first attempt to solving the HSI-SR problem using an unsupervised encoder-decoder architecture that carries the following uniquenesses. First, it is composed of two encoder-decoder networks, coupled through a shared decoder; in order to preserve the rich spectral information from the HSI network. Second, the network encourages the representations from both modalities to follow a sparse Dirichlet distribution which naturally incorporates the two physical constraints of HSI and MSI. Third, the angular difference between representations are minimized in order to reduce the spectral distortion. We refer to the proposed architecture as unsupervised Sparse Dirichlet-Net, or uSDN. Extensive experimental results demonstrate the superior performance of uSDN as compared to the state-of-the-art.
arxiv-format/2409_08589v5.md
# Domain-Invariant Representation Learning of Bird Sounds Ilyass Moummad\\({}^{1}\\) Romain Serizel\\({}^{2}\\) Emmanouil Benetos\\({}^{3}\\) Nicolas Farrugia\\({}^{1}\\) \\({}^{1}\\)IMT Atlantique, CNRS, Lab-STICC, Brest, France \\({}^{2}\\)Universite de Lorraine, CNRS, Inria, Loria, Nancy, France \\({}^{3}\\)C4DM, Queen Mary University of London, London, UK ## I Introduction Passive Acoustic Monitoring (PAM) is a non-invasive method for studying wildlife through sound. By using acoustic recorders, researchers can gather data on animal behavior, migration, and population trends without disturbance [1]. PAM is useful for monitoring endangered species, offering long-term insights for conservation. In recent years, deep learning models have emerged as a powerful tool to process and analyze complex bioacoustic data [2]. A key source of training data for these models comes from citizen science platforms like Xeno-Canto (XC) [3], which contains over one million annotated vocalizations from more than 10,000 species, primarily birds. These citizen-led initiatives have significantly expanded the availability of labeled wildlife sound data, enabling the development of more robust and accurate deep learning models [4]. However, a challenge arises from the difference between data collected on platforms like XC and PAM recordings. In citizen science platforms, recordings are typically focal--where the recorder is aimed directly at the species of interest. In contrast, PAM systems passively capture sounds within natural soundscapes, leading to recordings that contain a mix of species vocalizations and background environmental noise. This difference in recording conditions creates a domain shift, complicating the ability of models trained on focal data to generalize to soundscape recordings in PAM. In practice, we require models that can perform well across diverse and potentially unseen environments. Supervised contrastive learning (SupCon) [5], a supervised learning framework for training robust feature extractors, has demonstrated strong generalization capabilities for transfer learning in bioacoustics, particularly in few-shot classification [6] and detection [7]. However, these studies have been limited to settings where both training and testing rely on focal recordings, and therefore do not address the domain shift challenge associated with testing on PAM recordings when models are trained on focal recordings. Domain Generalization (DG) [8] aims to develop models that learn robust features that are domain-invariant, i.e. capable of generalizing to new unseen domains without prior knowledge or access to target domain data during training. SupCon offers a promising approach for DG. In SupCon, the objective is to learn an embedding space where same-class examples are pulled together and different-class examples are pushed apart. This clustering can promote domain-invariance when sufficient domain diversity is present in the dataset, allowing the model to focus on features that are domain-invariant. In contrast, its self-supervised counterpart SimCLR [9] lacks this explicit mechanism for domain-invariance, as it relies solely on augmentations to create positive pairs. Without label information, SimCLR requires carefully designed augmentations that account for domain shift [10]. Despite its effectiveness, SupCon is computationally expensive due to the need for pairwise similarity calculations between all examples. 
To address this, we introduce ProtoCLR (Prototypical Contrastive Learning of Representations), a more efficient variant. By analyzing SupCon's gradient and drawing inspiration from the generalization capabilities of prototypical networks [11] in few-shot learning, ProtoCLR replaces pairwise comparisons with a prototypical contrastive loss that compares examples to class prototypes, retaining the original objective while significantly reducing computational complexity. We propose a new few-shot classification evaluation based on the BIRB [12] benchmark to evaluate the generalizationcapabilities of models trained on XC's focal recordings and tested on diverse soundscape datasets. This benchmark is designed to assess how well models can generalize across domains in challenging few-shot scenarios. We validate our proposed loss ProtoCLR on this benchmark, demonstrating its effectiveness in improving DG in bird sound classification. In this work, we make the following contributions: * We establish a large-scale few-shot benchmark for bird sound classification using BIRB datasets, evaluating model generalization from focal to soundscape recordings. * We introduce ProtoCLR, a novel supervised contrastive loss that reduces computational complexity of SupCon by using class prototypes instead of pairwise comparisons. ## II Related Work Nolasco et al. [13] reformulate bioacoustic sound event detection using a few-shot learning approach to recognize species from a few labeled examples, making it suitable for rare species but limited to single-species detection per task. Heggan et al. [14] introduce MetaAudio, a few-shot benchmark for audio classification, including BirdCLEF 2020 [15], which focuses on generalizing to new classes but only includes focal recordings. To address the generalization challenge from focal to soundscape recordings, the BIRB [12] benchmark focuses on few-shot retrieval, retrieving labeled sounds from large, unlabeled datasets. BirdSet [16] emphasizes transfer learning, evaluating models across various downstream classification tasks. DG [8] has emerged as a critical approach to tackle domain shift, where the test data distribution differs from the training data. It aims to learn robust, domain-invariant representations using only source domain data, without requiring access to target domain data during training [8]. DG methods typically focus on learning domain-invariant representations, using techniques like domain alignment [17], meta-learning [18], and data augmentation [19]. These approaches help the model learn features that remain consistent across varying domains. In bioacoustics, DG is especially important due to the difficulty in collecting annotated soundscape recordings compared to focal data [12, 16]. Invariant learning has gained attention, where models are trained to learn features that remain invariant across different variations in data, such as augmented versions in self-supervised learning [9, 20, 21, 22] or same-class examples in supervised learning [5]. This approach has proven effective for learning robust features in bird sound classification [6]. ## III Method ### _Supervised Contrastive Loss (SupCon)_ Given a batch with two views (transformations) of each example, let \\(I\\) denote the set of indices of all examples in the batch, \\(P(i)\\) represent the set of indices of positive examples for examples \\(i\\), and \\(A(i)=I\\setminus\\{i\\}\\) represent the set of all other indices excluding \\(i\\). 
For an example \\(i\\), let \\(z_{i}\\) be its \\(l_{2}\\)-normalized embedding and \\(\\tau\\) the temperature parameter. The SupCon [5] loss is defined as: \\[\\mathcal{L}^{\\text{SupCon}}=\\sum_{i\\in I}\\frac{-1}{|P(i)|}\\sum_{p\\in P(i)} \\log\\frac{\\exp{(z_{i}\\cdot z_{p}/\\tau)}}{\\sum_{a\\in A(i)}\\exp{(z_{i}\\cdot z_{a }/\\tau)}}, \\tag{1}\\] The gradient of SupCon loss for an example \\(i\\) with respect to its embedding \\(z_{i}\\) is (please refer to [5] for more details): \\[\ abla_{z_{i}}\\ell_{i}^{\\text{SupCon}}=\\frac{1}{|P(i)|}\\sum_{p\\in P(i)}\\frac{ 1}{\\tau}z_{p}-\\frac{1}{\\tau}\\frac{\\sum_{a\\in A(i)}S_{ia}z_{a}}{\\sum_{a\\in A(i )}S_{ia}}, \\tag{2}\\] where \\(S_{ia}=\\exp(z_{i}\\cdot z_{a}/\\tau)\\) is the similarity between \\(z_{i}\\) and \\(z_{a}\\). This gradient consists of two terms: a positive term \\(\\frac{1}{|P(i)|}\\sum_{p\\in P(i)}\\frac{1}{\\tau}z_{p}\\) pulling the embedding \\(z_{i}\\) towards its class centroid and a negative term \\(-\\frac{1}{\\tau}\\frac{\\sum_{a\\in A(i)}S_{ia}z_{a}}{\\sum_{a\\in A(i)}S_{ia}}\\) pushing it away from other examples. ### _Prototypical Contrastive Loss (ProtoCLR)_ Motivated by the gradient of SupCon and drawing inspiration from prototypical networks [11], we propose ProtoCLR (**Prototypical**C**ontrastive **L**earning of **R**epresentations), which introduces class-level centroids into contrastive learning. The centroid for each class \\(y\\) in the batch is computed as \\(c_{y}=\\frac{1}{|C(y)|}\\sum_{i\\in C(y)}z_{i}\\), where \\(C(y)\\) is the set of indices of examples with label \\(y\\) and \\(|C(y)|\\) is its size. We define the ProtoCLR loss as: \\[\\mathcal{L}^{\\text{ProtoCLR}}=\\sum_{i\\in I}\\frac{-1}{|P(i)|}\\log\\frac{\\exp{( z_{i}\\cdot c_{y_{i}}/\\tau)}}{\\sum_{y\\in Y}\\exp{(z_{i}\\cdot c_{y}/\\tau)}}, \\tag{3}\\] where \\(c_{y_{i}}\\) is the centroid of the class to which example \\(i\\) belongs, and \\(Y\\) is the set of all classes in the batch. Similarly to SupCon, the gradient for ProtoCLR is: \\[\ abla_{z_{i}}\\mathcal{L}_{i}^{\\text{ProtoCLR}}=\\frac{1}{\\tau}c_{y_{i}}- \\frac{1}{\\tau}\\frac{\\sum_{y\\in Y}S_{iy}c_{y}}{\\sum_{y\\in Y}S_{iy}}, \\tag{4}\\] where \\(S_{iy}=\\exp(z_{i}\\cdot c_{y}/\\tau)\\) is the similarity between \\(z_{i}\\) and \\(c_{y}\\). The positive term remains the same as in the gradient of SupCon, pulling the embeddings \\(\\mathbf{z}_{i}\\) towards the centroids of their respective classes. The difference is the negative term \\(-\\frac{1}{\\tau}\\frac{\\sum_{y\\in Y}S_{iy}c_{y}}{\\sum_{y\\in Y}S_{iy}}\\): in ProtoCLR, the embeddings are pushed away from the weighted average of the centroids as opposed to the individual embeddings in SupCon. ### _ProtoCLR vs SupCon_ In order to comprehensively assess the efficacy of ProtoCLR, we conduct a comparative analysis with SupCon. #### Iii-C1 Complexity SupCon has a computational cost of \\(\\mathcal{O}(N^{2})\\) due to computing dot products between all pairs of examples in a batch of size \\(N\\), independent of the number of classes \\(C\\). In contrast, ProtoCLR reduces this to \\(\\mathcal{O}(N\\times C)\\) by computing dot products with class prototypes. Since \\(C\\) is usually smaller than \\(N\\), ProtoCLR is more efficient, particularly for large batches. #### Iii-B2 Variance SupCon relies on pairwise comparisons within the same class, which can lead to higher variance due to intra-class variability. 
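Both losses are a few lines of PyTorch. The sketch below assumes `z` is an \\((N,d)\\) batch of \\(l_{2}\\)-normalized embeddings with integer `labels`, mirrors Eqs. (1) and (3), and is not the authors' released implementation; for brevity the per-example \\(1/|P(i)|\\) weight of Eq. (3) is dropped, since it only rescales each example's term.

```python
import torch
import torch.nn.functional as F

def supcon_loss(z, labels, tau=0.1):
    """SupCon, Eq. (1): every example is contrasted against all other examples."""
    sim = z @ z.t() / tau                                               # pairwise similarities
    self_mask = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    denom = torch.logsumexp(sim.masked_fill(self_mask, float('-inf')), dim=1, keepdim=True)
    log_prob = sim - denom                                              # log softmax over A(i)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    mean_log_prob_pos = (log_prob * pos_mask.float()).sum(1) / pos_mask.sum(1).clamp(min=1)
    return -mean_log_prob_pos.mean()

def protoclr_loss(z, labels, tau=0.1):
    """ProtoCLR, Eq. (3): each example is compared with the class prototypes only."""
    classes, inv = labels.unique(return_inverse=True)
    onehot = F.one_hot(inv, num_classes=classes.numel()).float()        # (N, C)
    protos = (onehot.t() @ z) / onehot.sum(0).unsqueeze(1)              # class centroids c_y, (C, d)
    logits = z @ protos.t() / tau                                       # similarities to every centroid
    return F.cross_entropy(logits, inv)
```

Computing the logits against \\(C\\) centroids instead of \\(N-1\\) other embeddings is exactly what brings the cost from \\(\\mathcal{O}(N^{2})\\) down to \\(\\mathcal{O}(N\\times C)\\).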
In contrast, ProtoCLR compares embeddings \\(z_{i}\\) with class prototypes \\(c_{y_{i}}\\), reducing variance as the prototype variance \\(\\text{Var}(c_{y_{i}})\\) decreases by \\(N_{y_{i}}^{2}\\), where \\(N_{y_{i}}\\) is the number of examples in class \\(y_{i}\\). This leads to lower noise and more stable gradients in ProtoCLR: \\[\\text{Var}(c_{y_{i}})=\\frac{\\text{Var}(\\sum_{j\\in y_{i}}z_{j})}{N_{y_{i}}^{2}}.\\] #### Iii-B3 Near Convergence Equivalence Near convergence, both SupCon and ProtoCLR promote intra-class compactness with embeddings clustering tightly around class centroids: \\(z_{i}\\approx c_{y_{i}}\\) for all \\(i\\in I\\). In SupCon, the negative can be rewritten as: \\[\\frac{S_{ia}z_{a}}{S_{ia}}\\approx\\frac{\\exp(z_{i}\\cdot z_{a}/\\tau)z_{a}}{\\exp( z_{i}\\cdot z_{a}/\\tau)}\\approx\\frac{\\exp(z_{i}\\cdot c_{y_{a}}/\\tau)c_{y_{a}}}{ \\exp(z_{i}\\cdot c_{y_{a}}/\\tau)}\\approx\\frac{S_{iy_{a}}c_{y_{a}}}{S_{iy_{a}}}. \\tag{5}\\] Thus, SupCon and ProtoCLR converge to similar strategies, ensuring intra-class compactness and inter-class separability in the final learned representations. ## IV Experiments ### _Few-Shot Classification Benchmark_ The original task of BIRB [12] is information retrieval in bioacoustics, focused on retrieving bird vocalizations from passively recorded datasets using focal recordings for training. Downstream datasets may contain long audio recordings where events of interest are sparse. Individual bird events are detected using peak detection [23], which identifies frames with high energy to used for the task. We build on our previous work [6], which shows that using a pretrained classifier on AudioSet [24] is a good proxy for selecting windows with the highest bird class activation. Following the MetaAudio framework, we focus on a multi-class classification task to define a few-shot evaluation for assessing the generalization capabilities of pre-trained models [25]. BIRB provides a large training set, XC (Xeno-Canto), consisting of focal recordings from nearly 10,000 species, while the validation and test datasets contain soundscape recordings. The benchmark details are provided in Table I. Following BioCLIP [26], we sample \\(k\\)-shot learning tasks by randomly selecting \\(k\\) examples for each class and obtain the audio embeddings from the audio encoder of the pre-trained models. We then compute the average feature vector of the \\(k\\) embeddings as the training prototype for each class. All the examples left in the dataset are used for testing. To make predictions, we employ SimpleShot [27] by applying mean subtraction and L2-normalization to both centroids and test feature vectors. We then select the class whose centroid is closest to the test vector as the prediction. We repeat each few-shot experiment 10 times with different random seeds and report the mean and standard deviation accuracy in Table II. ### _Reference Systems_ To compare supervised contrastive approaches SupCon and ProtoCLR for addressing domain generalization, we train reference systems using cross-entropy (CE) loss as a supervised baseline and SimCLR as a self-supervised contrastive baseline. Additionally, we evaluate large-scale, state-of-the-art models in bioacoustics: BirdAVES [28], a transformer model based on the speech model HuBERT, trained in a self-supervised manner based on the AVES framework (explained in Section **?**). 
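The SimpleShot prediction step of the protocol above can be sketched as follows; the tensors are assumed to be embeddings from a frozen encoder, and the centring statistic defaults to the support mean here as a stand-in (SimpleShot originally centres with the mean feature of the base training set).

```python
import torch
import torch.nn.functional as F

def simpleshot_predict(support, support_labels, queries, base_mean=None):
    """Nearest-centroid prediction with mean subtraction and L2 normalization."""
    classes = support_labels.unique()
    centroids = torch.stack([support[support_labels == c].mean(0) for c in classes])
    if base_mean is None:
        base_mean = support.mean(0)                       # assumed centring statistic
    centroids = F.normalize(centroids - base_mean, dim=-1)
    queries = F.normalize(queries - base_mean, dim=-1)
    pred = torch.cdist(queries, centroids).argmin(dim=1)  # closest centroid wins
    return classes[pred]
```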
We evaluate two versions of BirdAVES: biox-base, a 12-layer transformer with 768 units, trained on Xeno-Canto [7] and animal sounds from AudioSet [24] and VGGSound [7], and bioxn-large, a 24-layer transformer with 1024 units, trained on the same datasets plus iNaturalist (Footnote 2). BirdAVES outputs one embedding vector per time frame, which we average across the time dimension to produce a single vector representing the entire audio clip. We also evaluate the encoder of BioLingual [29], an HT-SAT transformer pre-trained on AudioSet and fine-tuned using contrastive language-audio training to align animal sounds with text captions describing the class, across a large collection of data including Xeno-Canto, iNaturalist, Animal Sound Archive, etc.; and the encoder of Perch [12], an EfficientNet-B1 [30] trained on Xeno-Canto for species classification as well as the taxonomic ranks genus, family, and order. We evaluate only models trained specifically on bird sounds, as general-purpose audio models have been shown to perform poorly in this domain [25]. Similarly, Hamer et al. [12] found that general-purpose models can, in some cases, perform worse than simply averaging mel spectrogram features without any learning. Footnote 2: [https://github.com/gvanhom38/iNatSounds](https://github.com/gvanhom38/iNatSounds) ### _Pre-training Details_ We follow the same preprocessing as Moummad et al. [6]. We train all models with CvT-13 [31], an efficient 2D transformer architecture with 20M parameters, on XC for 100 epochs using the AdamW optimizer with a batch size of 256 and a weight decay of \\(1\\times 10^{-6}\\). Following Moummad et al. [6], we apply the augmentations found to be effective for bird sound representations: circular time shift [32], SpecAugment [33], and spectrogram mixing [34]. These models are trained with a projector of dimension 128. For the CE loss, we only apply circular time shift and SpecAugment as augmentations, excluding spectrogram mixing, as it prevented the model from converging. The learning rate for CE and ProtoCLR is set to \\(5\\times 10^{-4}\\), while for SupCon and SimCLR we use a learning rate of \\(1\\times 10^{-4}\\). We tune hyperparameters by monitoring _k_-NN accuracy on the POW dataset, which is split randomly into a training and a validation subset. ### _Results and Discussion_ Table II presents the performance of different models on one-shot and five-shot bird sound classification tasks. ProtoCLR slightly outperforms SupCon in both one-shot and five-shot classification. Additionally, ProtoCLR is more computationally efficient; for one training epoch with a batch size of 256, SupCon computes 80.4B MACs, while ProtoCLR computes only 28.3B. On the other hand, CE outperforms ProtoCLR when trained for 100 epochs. This could be due to the fact that contrastive methods require more training iterations than cross-entropy for optimal convergence [5], which should be investigated. For self-supervised methods, SimCLR outperforms BirdAVES-biox-base and BirdAVES-bioxn-large in both one-shot and five-shot learning, suggesting that invariant learning is more effective than self-prediction methods for few-shot scenarios, consistent with findings in computer vision [35]. On average, BioLingual outperforms CE in one-shot learning but underperforms in five-shot learning. However, when examining individual datasets, BioLingual performs worse on PER, NES, UHH, and HSN, yet significantly better on SSW and SNE. 
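For concreteness, the pre-training recipe above could be set up roughly as follows; the encoder is passed in as a module, and the two-layer projector head and the embedding width are assumptions rather than the authors' actual code.

```python
import torch

def build_optimizer(encoder: torch.nn.Module, embed_dim: int, lr: float = 5e-4):
    """Attach an assumed 128-d projector and the AdamW settings from the pre-training details.
    `encoder` would be the CvT-13 backbone; lr = 5e-4 for CE/ProtoCLR, 1e-4 for SupCon/SimCLR."""
    projector = torch.nn.Sequential(
        torch.nn.Linear(embed_dim, 512), torch.nn.ReLU(), torch.nn.Linear(512, 128))
    params = list(encoder.parameters()) + list(projector.parameters())
    return projector, torch.optim.AdamW(params, lr=lr, weight_decay=1e-6)
```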
Interestingly, certain models appear to specialize in specific species, suggesting a promising research direction: distilling these specialized models into a single general model that can leverage the strengths of each. Perch significantly outperforms all models in both one-shot and five-shot classification across all datasets, except for PER, where it performs slightly worse than CE. This suggests that incorporating taxonomic ranks as auxiliary tasks may enhance the discriminative capabilities of the feature extractor.
## V Conclusion
In this work, we addressed the challenge of domain generalization for bird sound classification in few-shot scenarios, with a focus on the domain shift from focal to soundscape recordings. We proposed a new few-shot evaluation, derived from the BIRB datasets, to evaluate the generalization capabilities of models trained on focal recordings and tested on soundscapes. Additionally, we introduced ProtoCLR, a computationally efficient alternative to SupCon, inspired by prototypical networks. ProtoCLR maintains performance levels comparable to SupCon, and in some cases, slightly outperforms it. Future work will explore scaling training to longer epochs to maximize the potential of contrastive methods, as well as integrating taxonomic ranks into the learning process. Another promising direction is to investigate domain adaptation techniques, particularly source-free adaptation, which is well-suited for few-shot bioacoustics when access to source data is limited or retraining is too costly.
Passive acoustic monitoring (PAM) is crucial for bioacoustic research, enabling non-invasive species tracking and biodiversity monitoring. Citizen science platforms like Xeno-Canto provide large annotated datasets from focal recordings, where the target species is intentionally recorded. However, PAM requires monitoring in passive soundscapes, creating a domain shift between focal and passive recordings, which challenges deep learning models trained on focal recordings. To address this, we leverage supervised contrastive learning to improve domain generalization in bird sound classification, enforcing domain invariance across same-class examples from different domains. We also propose ProtoCLR (Prototypical Contrastive Learning of Representations), which reduces the computational complexity of the SupCon loss by comparing examples to class prototypes instead of pairwise comparisons. Additionally, we present a new few-shot classification evaluation based on BIRB, a large-scale bird sound benchmark to evaluate bioacoustic pre-trained models.1 Footnote 1: Models and code: [https://github.com/ilyassmouumad/ProtoCLR](https://github.com/ilyassmouumad/ProtoCLR) Supervised Contrastive Learning, Domain Generalization, Few-shot Learning, Bioacoustics.
# Photogrammetric Technique for Teeth Occlusion Analysis in Dentistry
V. A. Knyaz, S. Yu. Zheltov — State Research Institute of Aviation Systems (GosNIIAS), 125319 Moscow, Russia - (knyaz,zhl)@gosniias.ru
## 1 Introduction
For successful dental treatment and denture making it is important to have information about the relative position of the upper and lower jaws and to know the distances between corresponding teeth in a given section. The determination of these distances is a problem for a dentist because there is no means of performing the required measurements either in the mouth or on plaster casts due to teeth occlusion. The only information which a dentist can get about teeth position is the presence (or absence) of contact between upper and lower teeth. This information can be obtained using a thin colour sheet of paper to mark off the places of contact on the teeth when closing the jaws. A photogrammetric technique for the solution of this problem is proposed. It supposes generating 3D models of the jaws and positioning them in a given posture for analysis. The procedure includes the following stages. At first, plaster models of the upper and lower jaws are made. Then 3D models of the upper and lower jaws are generated using a photogrammetric system based on two CCD cameras and a structured light projector. For whole-jaw 3D model generation, a set of partial 2.5D models is scanned and then merged using original software implementing the iterative closest point (ICP) algorithm. The next step is bringing the 3D models of the upper and lower jaws into a given position according to their location and orientation in the mouth. For this purpose, the plaster models are installed in the given posture using a dental articulator or a special plastic mould. Then the arrangement of the jaws is registered. Two methods of registration are developed and tested. The first method uses a set of reference points on the upper and lower jaws. Images of the jaws are captured by the photogrammetric system, and then the 3D coordinates of the reference points are calculated for determining the mutual jaw position. When applying the second method, the surface of the upper and lower teeth rows is scanned in the given position of the jaw plaster models. Then the position of the jaw 3D models is determined using this set of surface scans as a reference surface with the iterative closest point algorithm. This technique allows performing the necessary analysis of mutual teeth location in different jaw attitudes. A dentist can study the mutual position of the teeth and measure distances between given teeth at various jaw sections. This technique also allows virtual study of how a denture will interact with other teeth.
## 2 Photogrammetric System for Non-contact Measurements
### System outline
For surface 3D reconstruction an original photogrammetric system is used. The system (Figure 1) is PC-based and it includes the following hardware:
* Two SONY XC-75 CCD cameras
* Structured light projector
* Multi-channel frame grabber
Figure 1: The photogrammetric system
The system is designed for dentistry applications and it has to provide a measurement accuracy of about 0.04 mm. To meet this requirement with the given cameras, the working space of the system is chosen as 160x160x160 mm. Cameras are used for non-contact 3D measurement of surface coordinates using a photogrammetric approach.
For automated identification of a given surface point in the images from the left and right cameras (correspondence problem solution), stripe structured light is used.
### System calibration
The metric characteristics of the 3D models produced by the system are provided by system calibration. The calibration procedure gives the estimation of the interior orientation parameters based on a set of images of a special test field with known spatial coordinates of reference points. The problem of parameter determination is solved as an estimation of unknown parameters based on observations. The additional terms describing non-linear distortion in the co-linearity equations for perspective projection are taken in the form: \[\Delta x=a\overline{y}+\overline{x}r^{2}K_{1}+\overline{x}r^{4}K_{2}+\overline{x}r^{6}K_{3}+(r^{2}+2\overline{x}^{2})P_{1}+2\overline{x}\overline{y}P_{2}\] \[\Delta y=a\overline{x}+\overline{y}r^{2}K_{1}+\overline{y}r^{4}K_{2}+\overline{y}r^{6}K_{3}+2\overline{x}\overline{y}P_{1}+(r^{2}+2\overline{y}^{2})P_{2}\] \[\overline{x}=m_{x}\left(x-x_{p}\right);\quad\overline{y}=-m_{y}(y-y_{p});\quad r=\sqrt{\overline{x}^{2}+\overline{y}^{2}}\] where \(x_{p},y_{p}\) are the coordinates of the principal point, \(m_{x},m_{y}\) are the scales in the \(x\) and \(y\) directions, \(a\) is the affinity factor, \(K_{1},K_{2},K_{3}\) are the coefficients of radial symmetric distortion, and \(P_{1},P_{2}\) are the coefficients of decentring distortion. Image interior orientation and image exterior orientation (\(X_{i}\), \(Y_{i}\), \(Z_{i}\) for location and \(\alpha_{i}\), \(\omega_{i}\), \(\kappa_{i}\) for angular position in the given coordinate system) are determined as a result of calibration. The calibration process is fully automated due to the use of a two-axis positioning stage and coded targets for reference point marking (Knyaz, 2002). The calibration program captures a set of test field images at different positions controlled by the positioning stage (Figure 2). Original calibration software controls the test field orientation for image acquisition and the test field image capturing at the given positions. Then the camera orientation parameters are estimated based on the image reference point coordinates as observations of points with known spatial coordinates. The results of system calibration are given in Table 1, where \(\sigma_{x}\), \(\sigma_{y}\), \(\sigma\) are the residuals of the co-linearity conditions for the reference points after least-mean-square estimation, serving as the precision criterion for calibration.
## 3 Point-based occlusion registration
### Plaster model setup
The first method for occlusion registration is aimed at registering the relative displacement of the upper and lower jaws. It uses a set of reference points on the upper and the lower jaw which determine the local coordinate system of each jaw. The stereo pair of the jaw plaster models with reference points is shown in Figure 3. Four reference points (#1 - #4) on the lower jaw and four reference points (#5 - #8) on the upper jaw are used for occlusion analysis. Reference points are marked with coded targets for accurate automated point identification and sub-pixel measurement.
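As a numerical illustration of the distortion model from the calibration subsection above, the sketch below evaluates \(\Delta x,\Delta y\) for a measured image point; the parameter values are placeholders and are not the calibrated values of this system.

```python
import numpy as np

def distortion_terms(x, y, params):
    """Evaluate the Brown-style distortion terms (dx, dy) for an image point (x, y).

    `params` holds the interior orientation: principal point (xp, yp),
    scales (mx, my), affinity a, radial coefficients K1..K3 and
    decentring coefficients P1, P2. Values used below are illustrative only.
    """
    xb = params["mx"] * (x - params["xp"])          # centred, scaled x
    yb = -params["my"] * (y - params["yp"])         # centred, scaled y (sign as in the model)
    r2 = xb**2 + yb**2                              # squared radial distance
    radial = params["K1"] * r2 + params["K2"] * r2**2 + params["K3"] * r2**3
    dx = (params["a"] * yb + xb * radial
          + (r2 + 2 * xb**2) * params["P1"] + 2 * xb * yb * params["P2"])
    dy = (params["a"] * xb + yb * radial
          + 2 * xb * yb * params["P1"] + (r2 + 2 * yb**2) * params["P2"])
    return dx, dy

# Placeholder parameters -- not the calibrated values of the dental system.
cam = dict(xp=320.0, yp=240.0, mx=1.0, my=1.0, a=1e-5,
           K1=1e-8, K2=1e-14, K3=0.0, P1=1e-7, P2=1e-7)
print(distortion_terms(400.0, 300.0, cam))
```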
\begin{table} \begin{tabular}{|l|c|c|c|} \hline & \(\sigma_{x}\), mm & \(\sigma_{y}\), mm & \(\sigma\), mm \\ \hline _Left camera_ & 0.0071 & 0.0052 & 0.0086 \\ \hline _Right camera_ & 0.0061 & 0.0067 & 0.0092 \\ \hline \end{tabular} \end{table} Table 1: Results of system calibration
Figure 3: The stereo pair of the jaw plaster models with reference points
Figure 2: Calibration test field on the positioning stage
### Point-based estimation
The spatial coordinates of the reference points calculated by the photogrammetric system are determined in the object coordinate system referred to the test field. The X and Y axes are situated in the plane of the test field, the Y axis being directed up and the X axis to the right. The Z axis is directed toward the observer. The object coordinate system is defined during exterior orientation of the photogrammetric system. At the first step, the jaw plaster models are installed in the initial (reference) position according to natural central occlusion, and the spatial coordinates of the reference points are measured automatically by the photogrammetric system. For reproducing the position of the real jaws in a new (non-central) occlusion, a special plastic mass is used. The mould of the teeth shape made of this plastic mass is used for installing the jaw plaster copies according to the position of the real jaws. It is placed between the plaster jaw models, and the spatial coordinates of the reference points in the new position are measured automatically by the photogrammetric system. Because of the necessity to place the plastic mould between the jaw models, the initial position of the lower jaw model also changes. So, to find the relative displacement of the upper jaw, it is first required to determine the new position of the lower jaw. The transition matrix is determined by least-mean-square estimation of the translation and rotation parameters which provides the best fitting of the lower jaw reference points at the initial (reference) and new positions of the lower jaw: \[\mathrm{X_{n}=A(\alpha,\omega,\kappa)\cdot X+b}\] where \(X\) are the initial reference point coordinates, \(X_{n}\) are the new reference point coordinates, \(A(\alpha,\omega,\kappa)\) is the rotation matrix, and \(b\) is the translation vector. Table 2 presents the parameters of the lower jaw displacement and the errors at the reference points after installing the registration plastic cast for one of the experiments. Then the new upper jaw reference point coordinates for the reference position are calculated using the transition matrix found at the previous step. These new coordinates and the upper jaw reference point coordinates measured by the photogrammetric system for the jaw models with the registration plastic mould are used for upper jaw displacement estimation. Table 3 presents the upper jaw displacement relative to its reference position due to the installation of the registration plastic cast. At each step of the procedure, the errors of superposition of corresponding reference points are calculated. They are regarded as a criterion of registration quality. The low level of the errors indicates that the mutual position of the reference point cluster has not changed and the results of the occlusion registration are valid. The developed technique allows installing the jaw 3D models in the position corresponding to their real position in a given occlusion and estimating the parameters of translation and rotation of the upper jaw relative to the lower jaw.
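The displacement estimation described above is a standard rigid-body fitting problem. The sketch below shows one common way to solve it, the SVD-based closed form for the orthogonal Procrustes problem, rather than the authors' own least-squares implementation; the point coordinates are made up for illustration.

```python
import numpy as np

def fit_rigid_transform(X, Xn):
    """Find rotation A and translation b such that Xn ≈ A @ X + b (least squares).

    X, Xn: (N, 3) arrays of corresponding reference points before/after displacement.
    Uses the SVD-based closed-form (Kabsch/Horn) solution.
    """
    cX, cXn = X.mean(axis=0), Xn.mean(axis=0)
    H = (X - cX).T @ (Xn - cXn)                      # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # avoid reflections
    A = Vt.T @ D @ U.T
    b = cXn - A @ cX
    residuals = Xn - (X @ A.T + b)                   # per-point superposition errors
    return A, b, residuals

# Hypothetical reference-point coordinates (mm), not the values from the tables.
rng = np.random.default_rng(0)
X = rng.uniform(-30, 30, size=(4, 3))
Xn = X + np.array([9.09, 0.61, 2.76])                # a pure translation for the demo
A, b, res = fit_rigid_transform(X, Xn)
print(np.round(b, 3), np.abs(res).max())             # recovered translation, max residual
```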
This technique for occlusion registration can also be applied for adjacent teeth surface analysis by using the spatial reference point coordinates along with the jaw 3D models.
## 4 Surface-based Occlusion Registration
### Plaster models scanning
The photogrammetric system allows measuring the 3D coordinates of any point on the object surface observed by both cameras simultaneously. For an object of complicated shape such as a jaw plaster model, it is impossible to view all necessary points of the object, so a set of partial object scans has to be obtained to generate the 3D model of the whole object surface. Every partial object scan is made in its local coordinate system specified by the test field.
\begin{table} \begin{tabular}{|c|c|c|c|} \hline & X & Y & Z \\ \hline Translation, mm & 0.0235632 & 1.09383 & 0.0994412 \\ \hline Rotation, \({}^{\circ}\) & 0.270525 & 0.296401 & -0.669212 \\ \hline \multicolumn{4}{|c|}{Error for reference points} \\ \hline 5 & 0.0331227 & -0.00402294 & 0.00468284 \\ \hline 6 & 0.025844 & 0.0150891 & -0.0193283 \\ \hline 7 & -0.0324345 & -0.0176837 & 0.0218483 \\ \hline 8 & -0.0265322 & 0.00661761 & -0.0072028 \\ \hline \end{tabular} \end{table} Table 4: The parameters of upper jaw displacement
\begin{table} \begin{tabular}{|c|c|c|c|} \hline & X & Y & Z \\ \hline Translation, mm & 9.08579 & 0.609782 & 2.763461 \\ \hline Rotation, \({}^{\circ}\) & 0.453748 & 0.0724305 & 1.70743 \\ \hline \multicolumn{4}{|c|}{Error for reference points} \\ \hline 1 & 0.00936135 & 0.013985 & -0.0072014 \\ \hline 2 & 0.0118679 & -0.00149339 & 0.0292316 \\ \hline 3 & 0.00416825 & -0.0239597 & -0.0159276 \\ \hline 4 & -0.0253975 & 0.0114681 & -0.0061026 \\ \hline \end{tabular} \end{table} Table 2: The parameters of lower jaw displacement
\begin{table} \begin{tabular}{|c|c|c|c|} \hline & \multicolumn{3}{c|}{Displacement of reference points} \\ \hline & X & Y & Z \\ \hline 5 & -0.702834 & -1.66721 & 0.213742 \\ \hline 6 & -0.342446 & -1.25036 & 0.185768 \\ \hline 7 & -0.402836 & -1.07549 & 0.311374 \\ \hline 8 & -0.745331 & 0.216462 & 0.773238 \\ \hline \end{tabular} \end{table} Table 3: The displacement of upper jaw reference points
For whole 3D model generation from a set of scans, several techniques can be applied, such as designing a reference point network, applying a precise positioning stage with a known axis position, or fragment merging using the iterative closest point algorithm (Besl, 1992). The latter technique is applied here for fragment merging with a point-to-surface metric. This technique requires that the merged scans have significant overlap for proper operation of the iterative closest point algorithm. To generate the whole 3D model of an upper jaw presented in Figure 5, a set of 28 scans is used. Various scans are shown in different colours to visually control the quality of merging. The mean error of scan merging is at the level of 0.02 mm. The resulting 3D model, consisting of a significant number of overlapping surfaces, is not suitable for the purposes of jaw occlusion analysis, so this 3D model is transformed into a single mesh using an interpolating mesh algorithm (Curless, 1996). The result of single mesh generation is shown in Figure 6.
### Occlusion registration
After generating the 3D models of both the upper and lower jaws, it is necessary to install these 3D models according to the real position of the jaws in the mouth. At first, registration of the jaw position is performed using a dental articulator or silicon moulds made in the mouth.
Then the plaster jaw models are installed in the registered position using the obtained silicon moulds. In this position, the front surface of the upper and lower teeth rows is scanned. The scan of the teeth front surface is then used as a reference surface for translating the jaw 3D models into the attitude corresponding to the real jaw occlusion. The resulting reference surface is presented in Figure 7. Then the 3D models of the upper and lower jaws are brought into the registered position using the iterative closest point algorithm.
Figure 4: Two partial scans of a jaw
Figure 5: Upper jaw 3D model from 28 scans
Figure 6: Jaw 3D model in single mesh form
Figure 7: Jaws position registration
## 5 Occlusion Analysis
Jaw 3D models installed in the position according to their real occlusion allow investigating the relative location of hidden teeth surfaces, which is important for teeth treatment and denture manufacturing. For occlusion analysis, original software is developed. It supports the following functions:
* 3D model visualization in different modes,
* making a given plane section of the 3D model,
* section contour visualization in the form of 2D or 3D curves,
* measuring given parameters in the section plane.
Figure 8 presents the user interface of the developed software. The user can make a section manually by defining the plane position, or can choose anthropometric points on the teeth and analyze the occlusion in the plane corresponding to these points. Figure 9 shows the 3D section contours (the surface of the upper jaw is hidden). Figure 10 presents a section of the jaw 3D models and the measurement of the distance between upper and lower teeth in the section plane.
## 6 Conclusion
A photogrammetric technique for occlusion analysis is proposed. It allows installing upper and lower jaw 3D models according to their original position in the mouth by applying a point-based or a surface-based technique. 3D modelling of occlusion provides a dentist with a valuable means for the analysis of hidden teeth surfaces. The accuracy of 3D model reconstruction and arrangement for both methods is sufficient for dentistry applications. The developed technique can also be successfully applied for the analysis of joint parts of aggregates and mechanisms for industrial use.
## References
* Knyaz (2002) Knyaz, V., 2002. Method for on-line calibration for automobile obstacle detection system. In: _The International Archives of Photogrammetry and Remote Sensing_, Proceedings of ISPRS Commission V Symposium \"CLOSE-RANGE IMAGING, LONG-RANGE VISION\", Vol. XXXIV, part 5, Commission V, 2002, Corfu, Greece, pp. 48-53.
* Besl & McKay (1992) Besl, P.J., McKay, N.D., 1992. A method for registration of 3-D shapes. In: _IEEE Transactions on Pattern Analysis and Machine Intelligence_, vol. 14, no. 2, pp. 239-256.
* Curless & Levoy (1996) Curless, B., Levoy, M., 1996. A Volumetric Method for Building Complex Models from Range Images. In: _SIGGRAPH 96 Conference Proceedings_, pp. 303-312.
Figure 8: Software user interface
Figure 10: Jaws 2D section and occlusion measurement
Figure 9: The 3D section contours
Significant number of applications has a demand for measurement of mutual location of two or more adjacent parts. Measuring outer parameters of adjacent parts is relatively simple task. But it is a problem to determine relative attitude of inner surfaces of adjacent parts and to measure given parameters of conjunction. This problem is very important for dentistry where accurate measurements of relative upper and corresponding lower teeth position (occlusion) are essential for efficient treatment. A photogrammetric technique for analysis of jaws positioning and hidden surfaces attitude analysis is proposed. It uses original photogrammetric system for non-contact 3D measurements and surface 3D reconstruction. Two techniques are proposed for jaws 3D models mutual arrangement according their original position in a mouth. The first one is point-based and it allows to determine jaw displacement concerning a set of reference points. The second technique is surface-based and it uses a scan of teeth occlusion for jaws positioning. Both methods provide adequate accuracy for dentistry. Photogrammetry, Calibration, Non-contact measurements
# Airport Digital Twins for Resilient Disaster Management Response
Eva Agapaki\({}^{1}\)
\({}^{1}\)M.E. Rinker, Sr. School of Construction Management, University of Florida, 573 Newell Dr, Gainesville, 32603, Florida, USA. Corresponding author(s). E-mail(s): [email protected];
## 1 Introduction
The complexity of airport operations, regardless of their size, extends beyond airside operations. Natural disasters, climate change threats, high annual passenger demand, large volumes of cargo and baggage, concessionaires and vendors, as well as other airport tenants may extend an airport's operations beyond capacity or disrupt operations. Figure 1 shows the airport systems and stakeholders in an airport. The American Society of Civil Engineers (ASCE) has rated airport infrastructure with grade \"D\", and this finding is based on the anticipated higher passenger demand compared to infrastructure capacity (Bureau of Transportation Statistics, 2019). Moreover, irregular operations and disruptions due to internal or external threats can have serious consequences in cities [1]. For example, in December 2017, one of the busiest airports in the world, Atlanta's Hartsfield-Jackson airport, suffered an 11-hour-long power outage, which disrupted the airport's operations and also incurred economic losses [2]. These incidents necessitate a resilient airport. Resilience incorporates the ability to (a) anticipate, prepare for, and adapt to changing conditions, (b) absorb, (c) withstand and respond to, and (d) recover rapidly from disruptions. The implementation of resilient solutions in airports can be performed by preventing or mitigating disruptive events to air traffic operations [3; 4; 5]. Those events can be severe weather hazards (e.g., dense fog, flooding, snow, drought, tornado, wildfire, hurricane), threats (e.g., equipment outages, political changes, economic downturn, pandemics, cyber-attacks, physical attacks), or vulnerabilities (e.g., equipment outages, lack of staff). Resilience can be quantified by analyzing risks to an airport. We adopt the definition of airport risks by the National Infrastructure Protection Plan (NIPP) [6], where risk is defined by the likelihood and the associated consequences of an unexpected event. Those risks are the hazards most likely to occur, potential threats, and vulnerabilities. Hazards and threats refer to incidents that can damage, destroy, or disrupt a site or asset. The difference between hazards and threats is that the former can happen unexpectedly, typically outside of an airport's control, whereas the latter happen purposefully and are usually manmade. Some examples of hazards are natural hazards (e.g., hurricanes, earthquakes, wildfire), technological (e.g., infrastructure failure, poor workmanship, or design), or human-caused threats (e.g., accidents, cyberattacks, political upheaval). The consequences associated with the vulnerabilities of an airport, as a result of a hazard or threat being realized, are one way to measure the impacts associated with risks. Therefore, risk is defined in (1) by:
Figure 1: Airport Systems and Stakeholders
\[\text{Risk}=\text{Consequence}\times\text{Probability}\times\text{Vulnerability}.\tag{1}\]
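As a minimal illustration of Eq. (1), the sketch below scores a few hypothetical hazard scenarios and ranks them; the scenario names, scales and numeric values are invented for demonstration and are not taken from the paper.

```python
# Toy risk scoring following Eq. (1): risk = consequence * probability * vulnerability.
# Scenario names and all numbers are hypothetical placeholders.
scenarios = {
    "hurricane":    {"consequence": 9, "probability": 0.30, "vulnerability": 0.7},
    "power_outage": {"consequence": 6, "probability": 0.10, "vulnerability": 0.5},
    "cyber_attack": {"consequence": 8, "probability": 0.05, "vulnerability": 0.6},
}

def risk_score(s):
    return s["consequence"] * s["probability"] * s["vulnerability"]

# Rank scenarios so the highest-risk items can inform planning and investment decisions.
for name, s in sorted(scenarios.items(), key=lambda kv: risk_score(kv[1]), reverse=True):
    print(f"{name:13s} risk = {risk_score(s):.2f}")
```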
\\tag{1}\\] Resilience analysis includes both the time before (planning capability), during (absorbing capability) and after a disruption event occurs (recovery and adaptation capability), including the actions taken to minimize the system damage or degradation, and the steps taken to build the system back stronger than before. Figure 2(a) shows this timeline and the planning [7; 8; 9; 10; 11; 12; 13; 14], absorbing [7; 10; 11; 12; 13; 14; 15; 16], recovering [7; 12; 17; 18; 19], and adapting [7; 11; 12; 14; 17; 18] phases of a resilience event. As shown in Figure 2, the system initially is in a steady state. After the disruptive event occurs at \\(t_{d}\\), the system's performance starts decreasing and then a contingency plan is implementing at time \\(t_{c}\\). Then, there are four \"recovery\" scenarios. In the first scenario (blue line), the performance of the system gradually recovers without any outside intervention until it reaches the original steady state. In the second scenario (purple, dashed line), the system first reaches a new steady state, but eventually returns to its originally state. For example, temporary routes and measures are taken to meet immediate needs of airport operations when the system is damaged due to a hurricane. However, it may take weeks or months for the system to fully recover. The worst scenario is when the system cannot recover (red dashed line). The last scenario is to reach the recovery state earlier by using a holistic Digital Twin (DT) framework (green line), which will be discussed in the last section of this paper. Figure 2(b) showcases the risk assessment adoption framework in airport operations [20]. This paper targets to identify the areas of highest risks for an airport, so that these can serve as indicators to inform policies and investment decisions. ### Background on airport resilience In recent years, there has been a lot of research on resilience in airport operations. Multiple studies have implemented Cost-Benefit Analysis (CBA) tools to estimate costs and benefits after implementing security measures in their security risk assessment policies. However, CBA analysis cannot validate the estimated airport security costs, therefore multiple simulation experiments are needed to investigate the interdependencies between different stakeholders and systems [21]. To overcome these limitations, the ATHENA project investigated a framework to evaluate curbside traffic management control measures and traffic scenarios at the Dallas-Fort Worth International Airport (DFW) [22], optimization of shuttle operations that can lead to 20% energy reductions [23] and traffic demand forecasting [24]. Researchers have also focused on risk assessment models of airport security. Lykou et al. (2019) [25] developed a model for smart airport network security with the objective to mitigate malicious cyberattacks and threats. Zhou and Chen (2020) [26] proposed a method to evaluate an airport's resilient performance under extreme weather events. Their results demonstrated that airport resilience greatly varies based on the level of modal substitution, airport capacity and weather conditions. Previous research has greatly focusedon airport security protection [4; 27; 28]. Agent-based modeling has been used to represent sociotechnical elements of an airport's security system and identify states and behaviors of its agents such as weather, pilots, aircrafts, control tower operators [29]. Recently, Huang et al. 
(2021) [30] proposed a Bayesian Best Worst Method that identifies the optimal criteria weights with a modified Preference Ranking Organization method for Enrichment evaluations (modified PROMETHEE) to make pairwise comparisons between alternatives for each criterion. They evaluated their method in three airports in Taiwan. This system relies on the judgement of experts for the evaluation of multiple, even overlapping, criteria based on pre-determined evaluation scales. However, a comprehensive framework for resilient management response for airports has not yet been developed. This is a complex and difficult Multiple Criteria Decision-Making (MCDM) problem. The objective of MCDM is to identify an optimal solution when taking into account multiple overlapping or conflicting criteria. This study intends to develop a framework that leverages recent advances in digital twin technologies and identifies the criteria and metrics which will act as guidance for a holistic disaster management response.
Figure 2: (a) Resilience framework overview with and without the use of Digital Twins and (b) risk assessment adoption in airports (modified from Crosby et al., 2020).
### Resilience Indexes
Previous literature has focused on identifying multiple metrics (resilience indexes) related to aviation and airport safety and operations [1; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33]. Figure 3 summarizes the most widely used resilience metrics for each resilience phase as described above. These metrics take into account the airport's physical facilities, personnel, equipment and the disaster response phases.
Figure 3: Literature review on metrics for each airport resilience phase
Resilience indexes and web-based platforms have been widely developed for communities. Two widely adopted tools are the National Risk Index (FEMA, 2022) and the ASCE Hazard Tool (ASCE, 2022). The former calculates risk index scores for each US county based on 18 natural hazards by computing the expected annual loss due to natural disasters multiplied by the social vulnerability divided by the community resilience. The ASCE Hazard Tool provides reports on natural disasters with widely known parameters. These tools need to be taken into account when considering the exposure of airports to natural disasters. The uncertainty of disruption occurrences, in conjunction with the complexity of airport infrastructure and operations (Figure 1), necessitates a unified digital platform to integrate information related to airport operations as well as interactions between airport sub-systems and their accurate representation.
## 2 Airport Digital Twin Framework
Technological innovations have the potential to: (a) capture the detailed geometry of the physical infrastructure and generate the asset's digital twin, (b) enrich the geometric digital twin with real-time sensor data, (c) update, maintain and communicate with the digital twin and (d) leverage the digital twin to monitor the asset's performance and improve decision-making by planning interventions well before the time of need. The goal of this research is to develop a foundational digital twin template that can be implemented across airports of all sizes. This leverages the use of digital twin technologies to explore alternative future scenarios for a more resilient airport. The foundational digital twin will be used as the main framework and specific systems of the digital twin will be investigated.
In particular, the key objectives are the following:
* O1: Airport Digital Twin (ADT) definition in the context of operational airport systems.
* O2: Airport Digital Twin (ADT) generation and maintenance. We propose a framework for static and dynamic information curation based on existing sensor data and airport systems.
We introduce the concept of the foundational airport digital twin (Figure 4), which incorporates an umbrella of twins that could interact with each other; these twins include, but are not limited to, the geometric, financial, operations, social and environmental twins. If successful implementation of the foundational twin is achieved, then it can lead to improved efficiency and operations as well as better planning in the presence of irregular events that are of paramount importance for airport executives and stakeholders. The Foundational Digital Twin will serve as a resilient data backbone for airport infrastructure systems and will enable the implementation of more advanced twins such as the adaptive/planning and intelligent twins. As illustrated in Figure 4, the adaptive twin encapsulates simulated scenarios towards a proactive plan of operating an airport, where planned interventions will be more sophisticated than ever before. The data collection, modeling and intervention will become increasingly automated. That level of automation will lead to the Intelligent Twin, where we envision an informed decision-making system with minimal human intervention. We expand on the definition of the Foundational Digital Twin in Figure 5. The geometric twin entails spatial data collection as well as its intelligent processing, Building Information Modeling (BIM) and validation with laser-scanned data, and GIS data integration. The process of laser scanning to BIM has already been applied to other complex infrastructure assets and is named geometric digital twinning [34]. The Financial twin should have the capacity to: (a) simulate the allocation of funding from a variety of sources and visualize it in different physical assets at the airport, (b) visualize potential conflicts in funding utilization in real-time and (c) facilitate the fiscal management of airport expansion and renewal projects. The Social twin should visualize the human demand on infrastructure and predict social behaviors based on historical data. In particular, it should: (a) integrate geospatial and airport-specific data (e.g., area and airport infrastructure reachability in correlation to the number of runways, taxiways), (b) integrate and process demographic data such as urban indexes and population around the airport to predict human demand and (c) integrate geographic and urban data related to the existing built environment surroundings that can affect passenger demand. Lastly, the environmental twin should account for natural hazards, energy consumption, occupancy rates, pollution and air volume for airports to be on track to achieve net-zero carbon infrastructure by 2050 (UN Environment Program, 2020).
Figure 4: Digital Twin framework
As presented in Figure 5, the foundational digital twin will be integrated with existing infrastructure Asset Management (AM) software and the configuration parameters (asset lifecycle, risk management, consequence of failure, probability of failure) will be predicted based on the capabilities of the twin.
The asset will be registered in a Common Data Environment (CDE) and the infrastructure needs will be assessed based on the existing capital program of the airport (in-year or multi-year capital program) as well as the financial system (e.g., existing asset reports, tangible capital asset and government accounting standards board). All the data is expected to be aggregated in a data warehouse (data lake), which will be hosted in the airports' Operations Center facilities.
Figure 5: Definition of foundational digital twin
### Exploration of threats and hazards for Airport Digital Twins
The threats and hazards are grouped into each Foundational Twin and are summarized in Figure 6. Each group is then analyzed below.
**Geometric.** The majority of aging industrial facilities lacks accurate drawings and documentation [34]. Without capturing the existing geometry of an asset accurately, the incurred information loss throughout an asset's lifecycle would be immense.
**Financial.** Budget reductions and overruns, economic downturns, and inefficient funding allocations are some of the financial threats airport authorities may encounter [35; 36].
**Operational.** Equipment maintenance is critical for airport operations (SMS Pilot Studies; FAA, 2019). The condition of passenger boarding bridges and ancillary equipment is another critical component that needs to be reliably assessed and maintained.
**Social.** Airports are at risk of malicious events that can put their operations in jeopardy. Such incidents have been reported at the Los Angeles International Airport in 2002 and 2013, when assailants killed airport personnel during attacks [37; 38]. In 2014, an intoxicated passenger attacked another passenger in a hate crime at the Dallas airport [39], indicating the need for vigilance at all levels. Incidents of cyberattacks continue to increase and airport infrastructure is a target for them. Zoonotic diseases1 will likely increase in their extent and frequency in the future [40]. We need to look into the diseases that have the highest likelihood of infecting regions where the airports operate. For example, in Texas, the most common vectors include ticks and mosquitoes carrying spotted fever rickettsiosis and West Nile virus, respectively [41]. Other unknown zoonotic diseases could arise in the future in the same way that the novel coronavirus is thought to have emerged as a zoonotic disease and then rapidly spread around the world, primarily via airports and air travel [42]. In such scenarios, the airport's operations could facilitate the spread of new zoonotic diseases from one person to another, including passengers, flight crews, airport workers, transportation personnel, and others. Other societal threats are demographic changes (e.g., sudden population growth) and unprecedented industrial or staff accidents.
Footnote 1: A zoonotic disease is one in which an animal acts as intermediary for disease transmission between the vector and the infected human (e.g., Lyme disease occurs in white-footed mice, but it is transmitted to humans via ticks).
**Environmental.** These threats include natural hazards and impacts of climate change that can significantly alter meteorological conditions, which may affect airport operations. Extreme natural events can lead to power system and flight disruptions. Studies have estimated that these events will be exacerbated to a modest degree by climate change [43; 44].
Although current research cannot definitively conclude whether climate change will increase or decrease the frequency of tornadoes in every situation, overall warmer temperatures will likely reduce the potential for wind shear conditions that lead to tornadoes [45; 46]. Severe wind could become more common because previous research has shown that climate change is responsible for a gradual increase in world average wind speeds [47]. Fewer and/or less powerful tornadoes could help improve operations, because airport structures and visitors are less likely to be damaged or harmed by airborne debris caused by tornadoes. On the other hand, stronger overall winds could lead to more difficult weather conditions for pilots of arriving or departing aircraft. Climate change also has the potential to increase the annual average temperatures and to make the temperature swings more extreme. This can result in an increase in electricity demand as HVAC systems respond to temperature changes. This increased electricity demand can stress system components in both daily operations and in heat wave and cold snap events. There are three features of the twin that have the potential to assist decision making better than any other technology tool. Those are: (a) interoperability between software tools, which facilitates better communication between multiple stakeholders, (b) relationship mapping between both static (e.g., static infrastructure) and dynamic entities (e.g., humans, moving infrastructure) and (c) semantics that allow Artificial Intelligence (AI) tools and data analytics to forecast future scenarios. An example of airport stakeholders involved in the operational twin is presented in Figure 7. Figure 8 shows semantics in the geometric digital twin applied to heavy industrial facilities. We will also investigate potential data sources for the generation and maintenance of DTs in the next section.
Figure 6: Definition of Foundational DTs in relation to hazards and threats
### Data sources for the generation and maintenance of airport DTs
A sample of data sources and existing datasets for each Foundational Twin is summarized in Figures 9 and 10. Existing literature provides the insight that there is an abundance of systems related to air traffic control (e.g., ADS-B, SWIM, ERAM). The FAA's goal with the Next Generation Air Transportation System (NextGen) is a national upgrade of the air traffic control systems. The System Wide Information Management (SWIM) is the first implementation of this vision, and it integrates a variety of data systems, including weather, communication radar information, traffic flow management systems, en route flight plan changes, arrival and departure procedures, microburst information, NOTAMs, storm cells, wind shear and terminal area winds aloft. However, the systems related to operating, maintaining and predicting failures and irregular operations on the landside and in terminal operations are limited. Airport Computer Maintenance Management Systems (CMMS) have been proposed for use in airports (National Academies, 2015) to streamline maintenance operations by using work schedules, maintaining inventory and spare parts at optimal levels, and tracking historical records.
Figure 8: Automated geometric digital twinning on industrial assets (Agapaki, 2020)
Figure 7: Airport Stakeholders of the Operational Twin
However, based on the results of the above-mentioned report, a total of 15 different CMMS software packages were used by airports that have implemented these systems, which makes data governance and interoperability a challenge. In addition, although the exact location and tracking of air traffic is managed thoroughly through a variety of systems, the movement of passengers and occupants of the terminals is not monitored. To give an example, an airline becomes aware of a passenger being at the terminal only when they check in their luggage or pass TSA control.
Figure 9: Data sources and existing datasets for the geometric, operations, financial and social foundational twin
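To illustrate the interoperability problem raised above (many incompatible CMMS exports feeding one data lake), the sketch below normalizes work-order records from two hypothetical CMMS packages into a single schema; all field names and values are invented for the example and do not refer to any real CMMS product.

```python
import datetime as dt

# Two hypothetical CMMS exports with incompatible field names and date formats.
cmms_a = [{"wo_id": "A-101", "asset": "Jet bridge 12", "opened": "2023-07-14", "status": "OPEN"}]
cmms_b = [{"ticket": 5532, "equipment": "Baggage belt 3", "created_on": "14/07/2023", "state": "closed"}]

def normalize_a(rec):
    return {"work_order": rec["wo_id"], "asset": rec["asset"],
            "opened": dt.date.fromisoformat(rec["opened"]), "open": rec["status"] == "OPEN"}

def normalize_b(rec):
    day, month, year = map(int, rec["created_on"].split("/"))
    return {"work_order": f"B-{rec['ticket']}", "asset": rec["equipment"],
            "opened": dt.date(year, month, day), "open": rec["state"].lower() != "closed"}

# One common schema makes cross-system queries (e.g., all open work orders) trivial.
data_lake = [normalize_a(r) for r in cmms_a] + [normalize_b(r) for r in cmms_b]
print([r["work_order"] for r in data_lake if r["open"]])
```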
Those are: cooling degree days, heating degree days, average daily mean temperature, annual number of days with minimum and maximum temperatures beyond threshold values, annual precipitation, dry days, and days with more than 2\" precipitation. Cooling and heating degree days refer to the number of hours per year and the degrees above or below 65\"F as detailed by the NOAA methodology (US Department Figure 12: Distribution of percentage of cancelled (a) arrival and (b) departure flights for ATL, MCO, RSW, CLT and DCA airports. Figure 13: Per month percentage of cancelled flights in 2017. of Commerce 2022). This methodology means that the cooling degree days and heating degree days can exceed 365 as they are multiplied with the number of hours per year and the magnitude of temperatures above or below 65\\({}^{\\circ}\\)F. Temperature has the potential to impact human health. It can also affect power, transport, and water system resilience. For example, at the DFW airport, the number of cooling degree days is projected to increase from 2600 to 3400 and the number of heating degree days is projected to decrease from 2400 to 1900 (both according to the low emissions scenario). Cooling degree days will increase electricity consumption from air conditioning, increase on site water consumption, and can exacerbate the airport's peak electricity demand which was analyzed in a subsequent section. Both maximum and minimum ambient temperature can affect electricity demand. The average high and low temperatures at 2 meters above the ground, the average precipitation and average wind speed are compared for ATL, CLT, RSW, DFW and HOU. The average precipitation is computed by accumulating rainfall over the course of a sliding 31-day period and the average wind speed is computed as the mean hourly wind speed at 10 meters above the ground. Aside from the average temperature, wind and precipitation values, heat waves and cold spells can have an acute effect on airport operations. Using the SAFRAN methodology (EPA, 2021), those can be identified in future work. The overall amount of precipitation, dry days, and days with significant rainfall (\\(>2\\)\" per day) is expected to change very little from present time through 2050. We used DFW as an example, where there have been several historical events where extreme precipitation has led to power system and flight disruptions. These events have included snowfall (US Department of Commerce 2021a; US Department of Commerce 2021b; Narvekar 2011; NBCDFW 2021; Lindsey 2021; L'Heureux 2021), drough Figure 14: (a) Average high and low temperature (b) average wind speed and (c) average monthly rainfall. Aircraft Digital Twins for Resilient Disaster Management Response Hegewisch and Abatzoglou (2021) Centers for Disease Control and Prevention National Environmental Public Health Tracking (2020), and floods (USGCRP 2018; First Street Foundation 2021). ## 4 Discussion When dealing with risk, an airport organization has the choice to either accept the risk, avoiding the risk by planning and preparing for interrupted activities or working to eliminate the risk through mitigation. This research reviewed the existing risks in the context of gathering available data sources, grouped them into categories (geometric, financial, social and environmental) and developed a unified framework for the assessment of risks using multiple criteria. 
We particularly emphasized environmental threats and provided metrics as well as open-source existing databases for the evaluation of those risks. ## 5 Conclusion There have been many studies investigating airport resilience frameworks, however a unified framework that identifies and then combines multiple data sources has not yet been investigated. The aim of this study was to use a foundational digital twins in order to identify key threats and hazards. We then suggested metrics and data sources for the environmental digital twin that can be used as guidance for the development of a unified and integrated data framework for the DT development. Future research directions include the foundational digital twin implementation on airport case studies. ## References * (1) Metzner, N.: A comparison of agent-based and discrete event simulation for assessing airport terminal resilience. Transportation Research Procedia **43**, 209-218 (2019) * (2) Sun, X., Wandelt, S., Zhang, A.: Resilience of cities towards airport disruptions at global scale. Research in Transportation Business & Management **34**, 100452 (2020) * (3) Clark, K.L., Bhatia, U., Kodra, E.A., Ganguly, A.R.: Resilience of the us national airspace system airport network. IEEE Transactions on Intelligent Transportation Systems **19**(12), 3785-3794 (2018) * (4) Yanjun, W., Jianming, Z., Xinhua, X., Lishuai, L., Ping, C., Hansen, M.: Measuring the resilience of an airport network. Chinese Journal of Aeronautics **32**(12), 2694-2705 (2019)* [5] Pishdar, M., Ghasemzadeh, F., Maskeliunaite, L., Braziunas, J.: The influence of resilience and sustainability perception on airport brand promotion and desire to reuse of airport services: the case of iran airports. Transport **34**(5), 617-627 (2019) * [6] US Department of Homeland Security: National infrastructure protection plan, 29-33 (2013) * [7] Yang, C.-L., Yuan, B.J., Huang, C.-Y.: Key determinant derivations for information technology disaster recovery site selection by the multi-criterion decision making method. Sustainability **7**(5), 6149-6188 (2015) * [8] Huizer, Y., Swaan, C., Leitmeyer, K., Timen, A.: Usefulness and applicability of infectious disease control measures in air travel: a review. Travel medicine and infectious disease **13**(1), 19-30 (2015) * [9] Humphries, E., Lee, S.-J.: Evaluation of pavement preservation and maintenance activities at general aviation airports in texas: practices, perceived effectiveness, costs, and planning. Transportation Research Record **2471**(1), 48-57 (2015) * [10] Skorupski, J., Uchronski, P.: A fuzzy system to support the configuration of baggage screening devices at an airport. Expert Systems with Applications **44**, 114-125 (2016) * [11] Chen, W., Li, J.: Safety performance monitoring and measurement of civil aviation unit. Journal of Air Transport Management **57**, 228-233 (2016) * [12] Zhao, J.-n., Shi, L.-n., Zhang, L.: Application of improved unascertained mathematical model in security evaluation of civil airport. International Journal of System Assurance Engineering and Management **8**(3), 1989-2000 (2017) * [13] Singh, V., Sharma, S.K., Chadha, I., Singh, T.: Investigating the moderating effects of multi group on safety performance: The case of civil aviation. Case studies on transport policy **7**(2), 477-488 (2019) * [14] Ergun, N., Bulbul, K.G.: An assessment of factors affecting airport security services: an ahp approach and case in turkey. 
Security Journal **32**(1), 20-44 (2019) * [15] Tahmasebi Birgani, Y., Yazdandoost, F.: An integrated framework to evaluate resilient-sustainable urban drainage management plans using a combined-adaptive mcdm technique. Water Resources Management **32**(8), 2817-2835 (2018)* (16) Willemsen, B., Cadee, M.: Extending the airport boundary: Connecting physical security and cybersecurity. Journal of Airport Management **12**(3), 236-247 (2018) * (17) Wallace, M., Webber, L.: The Disaster Recovery Handbook: A Step-by-step Plan to Ensure Business Continuity and Protect Vital Operations, Facilities, and Assets. Amacom,?? (2017) * (18) Zhou, L., Wu, X., Xu, Z., Fujita, H.: Emergency decision making for natural disasters: An overview. International journal of disaster risk reduction **27**, 567-576 (2018) * (19) Bao, D., Zhang, X.: Measurement methods and influencing mechanisms for the resilience of large airports under emergency events. Transportmetrica A: Transport Science **14**(10), 855-880 (2018) * (20) Crosby, Missouri, L.: Airport security vulnerability assessments. Program for applied research in airport security (2020) * (21) Stewart, M.G., Mueller, J.: Cost-benefit analysis of airport security: Are airports too safe? Journal of Air Transport Management **35**, 19-28 (2014) * (22) Ugirumurera, J., Severino, J., Ficenec, K., Ge, Y., Wang, Q., Williams, L., Chae, J., Lunacek, M., Phillips, C.: A modeling framework for designing and evaluating curbside traffic management policies at dallas-fort worth international airport. Transportation Research Part A: Policy and Practice **153**, 130-150 (2021). [https://doi.org/10.1016/j.tra.2021.07.013](https://doi.org/10.1016/j.tra.2021.07.013) * a case study from dallas fort worth international airport. Journal of Air Transport Management **94**, 102077 (2021). [https://doi.org/10.1016/j.jairtraman.2021.102077](https://doi.org/10.1016/j.jairtraman.2021.102077) * (24) Lunacek, M., Williams, L., Severino, J., Ficenec, K., Ugirumurera, J., Eash, M., Ge, Y., Phillips, C.: A data-driven operational model for traffic at the dallas fort worth international airport. Journal of Air Transport Management **94**, 102061 (2021). [https://doi.org/10.1016/j.jairtraman.2021.102061](https://doi.org/10.1016/j.jairtraman.2021.102061) * (25) Lykou, G., Anagnostopoulou, A., Gritzalis, D.: Smart airport cybersecurity: Threat mitigation and cyber resilience controls. Sensors **19**(1), 19 (2018) * (26) Zhou, L., Chen, Z.: Measuring the performance of airport resilience to severe weather events. Transportation research part D: transport and environment **83**, 102362 (2020)* [27] Zhou, Y., Wang, J., Yang, H.: Resilience of transportation systems: concepts and comprehensive review. IEEE Transactions on Intelligent Transportation Systems **20**(12), 4262-4276 (2019) * [28] Thompson, K.H., Tran, H.T.: Operational perspectives into the resilience of the us air transportation network against intelligent attacks. IEEE Transactions on Intelligent Transportation Systems **21**(4), 1503-1513 (2019) * [29] Stroeve, S.H., Everdij, M.H.: Agent-based modelling and mental simulation for resilience engineering in air transport. Safety science **93**, 29-49 (2017) * [30] Huang, C.-N., Liou, J.J., Lo, H.-W., Chang, F.-J.: Building an assessment model for measuring airport resilience. 
Journal of Air Transport Management **95**, 102101 (2021) * [31] Bruneau, M., Chang, S.E., Eguchi, R.T., Lee, G.C., O'Rourke, T.D., Reinhorn, A.M., Shinozuka, M., Tierney, K., Wallace, W.A., Von Winterfeldt, D.: A framework to quantitatively assess and enhance the seismic resilience of communities. Earthquake spectra **19**(4), 733-752 (2003) * [32] Damgacioglu, H., Celik, N., Guller, A.: A route-based network simulation framework for airport ground system disruptions. Computers & Industrial Engineering **124**, 449-461 (2018) * [33] Janic, M.: Modeling the resilience of an airline cargo transport network affected by a large scale disruptive event. Transportation Research Part D: Transport and Environment **77**, 425-448 (2019) * [34] Agapaki, E.: Automated object segmentation in existing industrial facilities. PhD thesis, University of Cambridge (2020) * [35] Graham, A., Morrell, P.: Airport Finance and Investment in the Global Economy. Routledge,?? (2016) * [36] Urazova, N., Kotelnikov, N., Martynyuk, A.: Infrastructure project planning. In: IOP Conference Series: Materials Science and Engineering, vol. 880, p. 012105 (2020). IOP Publishing - u.s. news. (2013) * los angeles airport shooting kills 3 - july 5, 2002. (2004) * [39] Post, W.: Man tackled at dallas airport after homophobic attack. (2014) * [40] James, M., Gage, K., Khan, A.: Potential influence of climate change on * [41] Centers for Disease Control and Prevention Division of Vector-Borne Diseases: Texas: Vector-borne diseases profile (2004-2018) (2020) * [42] Van Beusekom, M.: Studies trace covid-19 spread to international flights. (2020) * key findings. (2017) * [44] Future Climate Dashboard' Web Tool. Climate Toolbox.: Future climate dashboard: Location: 32.9025o n, 97.0433o w. (2021) * [45] Center for Climate and Energy Solutions: Tornadoes and climate change (2019) * [46] Hausfather, Z.: Tornadoes and climate change: What does the science say? (2019) * [47] Climate Central: Extreme weather and climate change (2021) * [48] Bazargan, M., Vasigh, B.: Size versus efficiency: a case study of us commercial airports. Journal of air transport management **9**(3), 187-193 (2003)
Airports constantly face a variety of hazards and threats, from natural disasters to cybersecurity attacks, and airport stakeholders are confronted with making operational decisions under irregular conditions. We introduce the concept of the _foundational twin_, which can serve as a resilient data platform, incorporating multiple data sources and enabling interaction between an umbrella of twins. We then focus on providing data sources and metrics for each foundational twin, with an emphasis on the environmental airport twin for major US airports.
Keywords: airports, digital twins, resilience
arxiv-format/1612_08879v3.md
# MARTA GANs: Unsupervised Representation Learning for Remote Sensing Image Classification Daoyu Lin, Kun Fu, Yang Wang, Guangluan Xu, and Xian Sun This work was supported in part by the National Natural Science Foundation of China under Grant No.41501485 and No.61331017. (Corresponding author: Kun Fu.)The authors are with the Key Laboratory of Technology in Geo-spatial Information Processing and Application System, Chinese Academy of Sciences, Beijing, 100190, China (e-mail: [email protected]; [email protected]; [email protected]; [email protected]; [email protected]). ## I Introduction As satellite imaging techniques improve, an ever-growing number of high-resolution satellite images provided by special satellite sensors have become available. It is urgent to be able to interpret these massive image repositories in automatic and accurate ways. In recent decades, scene classification has become a hot topic and is now a fundamental method for land-resource management and urban planning applications. Compared with other images, remote sensing images have several special features. For example, even in the same category, the objects we are interested in usually have different sizes, colors and angles. Moreover, other materials around the target area cause high intra-class variance and low inter-class variance. Therefore, learning robust and discriminative representations from remotely sensed images is difficult. Previously, the bag of visual words (BoVW) [1] method was frequently adopted for remote sensing scene classification. BoVW includes the following three steps: feature detection, feature description, and codebook generation. To overcome the problems of the orderless bag-of-features image representation, the spatial pyramid matching (SPM) model [2] was proposed, which works by partitioning the image into increasingly fine sub-regions and computing histograms of local features found inside each sub-region. The above-mentioned methods have comprised the state of the art for several years in the remote sensing community [3], but they are based on hand-crafted features, which are difficult, time-consuming, and require domain expertise to produce. Deep learning algorithms can learn high-level semantic features automatically rather than requiring handcrafted features. Some approaches [4, 5] based on convolutional neural networks (CNNs) [6] have achieved success in remote sensing scene classification, but those methods usually require an enormous amount of labeled training data or are fine-tuned from pre-trained CNNs. Several unsupervised representation learning algorithms have been based on the autoencoder [7, 8], which receives corrupted data as input and is trained to predict the original, uncorrupted input. Although training the autoencoder requires only unlabeled data, input reconstruction may not be the ideal metric for learning a general-purpose representation. The concept of Generative Adversarial Networks (GANs) [9] is one of the most exciting unsupervised algorithm ideas to appear in recent years; its purpose is to learn a generative distribution of data through a two-player minimax game. In subsequent work, a deep convolutional GAN (DCGAN) [10] achieved a high level of performance on image synthesis tasks, showing that its latent representation space captures important variation factors. GANs is a promising unsupervised learning method, yet thus far, it has rarely been applied in the remote sensing field. 
Due to the tremendous volume of remote sensing images, it would be prohibitively time-consuming and expensive to label all the data. To tackle this issue, GANs are an excellent choice because they constitute an unsupervised learning method in which the required quantities of training data are provided by the generator. Therefore, in this paper, we propose a multiple-layer feature-matching generative adversarial networks (MARTA GANs) model to learn the representation of remote sensing images using unlabeled data.

Figure 1: Overview of the proposed approach. The discriminator (2) learns to make classifications between real and synthesized images, while the generator (1) learns to fool the discriminator.

Although based on DCGAN, our approach differs in the following aspects. 1) DCGAN can, at most, produce images with a \(64\times 64\) resolution, while our approach can produce remote sensing images with a resolution of \(256\times 256\) by adding two deconvolutional layers in the generator; 2) To avoid the problem of such deconvolutional layers producing checkerboard artifacts, the kernel sizes of our networks are \(4\times 4\), while those in DCGAN are \(5\times 5\); 3) We propose a multi-feature layer to aggregate the mid- and high-level information; 4) We combine both the perceptual loss and the feature-matching loss to produce more accurate fake images. Based on these improvements, our method learns better representations of remote sensing images than the compared methods. Fig. 1 shows the overall model. The contributions of this paper are the following: 1. To our knowledge, this is the first time that GANs have been applied to the unsupervised classification of remote sensing images. 2. The results of experiments on the UC-Merced Land-use and Brazilian Coffee Scenes datasets show that the proposed algorithm outperforms state-of-the-art unsupervised algorithms in terms of overall classification accuracy. 3. We propose a multi-feature layer and combine the perceptual loss with the feature-matching loss to learn better image representations.

## II Method

A GAN is most straightforward to apply when the involved models are both multilayer perceptrons; however, to apply a GAN to remote sensing images, we used CNNs for both the generator and the discriminator in this work. The generator network directly produces samples \(x=G(z;\theta_{g})\) with parameters \(\theta_{g}\) and \(z\), where \(z\) obeys a prior noise distribution \(p_{z}(z)\). Its adversary, the discriminator network, attempts to distinguish between samples drawn from the training data and samples created by the generator. The discriminator emits a probability value denoted by \(D(x;\theta_{d})\) with parameters \(\theta_{d}\), indicating the probability that \(x\) is a real training example rather than a fake sample drawn from the generator. During the classification task, the discriminative model \(D\) is regarded as the feature extractor, and the generative model \(G\) provides additional training data so that the discriminator can learn a better representation.

### _Training the discriminator_

When training the discriminator, the weights of the generator are fixed. The goals of training the discriminator \(D(x)\) are as follows: 1. Maximize \(D(x)\) for every image from the real training examples. 2. Minimize \(D(x)\) for every image from the fake samples drawn from the generator.
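As a concrete illustration of these two goals, the following minimal sketch performs one discriminator update in PyTorch-style Python. The paper's actual implementation uses TensorLayer/TensorFlow, so the function and variable names here are our own assumptions rather than the authors' code; the sketch assumes that `D` ends in a sigmoid and therefore outputs probabilities, and it uses the binary cross-entropy form whose minimization is equivalent to maximizing the objective formalized in Eqn. (1) below.

```python
import torch
import torch.nn.functional as F

def discriminator_step(D, G, real_images, z, d_optimizer):
    """One discriminator update: push D(x) toward 1 on real images and
    D(G(z)) toward 0 on synthesized images, with the generator weights fixed."""
    d_optimizer.zero_grad()
    # Goal 1: maximize D(x) for real training examples.
    d_real = D(real_images)
    loss_real = F.binary_cross_entropy(d_real, torch.ones_like(d_real))
    # Goal 2: minimize D(G(z)) for fake samples (no gradient flows into G here).
    with torch.no_grad():
        fake_images = G(z)
    d_fake = D(fake_images)
    loss_fake = F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    # Minimizing this summed cross-entropy maximizes the objective in Eqn. (1).
    loss = loss_real + loss_fake
    loss.backward()
    d_optimizer.step()
    return loss.item()
```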
Formally, the objective of training the discriminator is therefore to maximize:

\[\mathbb{E}_{x\sim p_{\text{data}}(x)}\log D(x)+\mathbb{E}_{z\sim p_{z}(z)}[\log(1-D(G(z)))]\,. \tag{1}\]

### _Training the generator_

When training the generator, the weights of the discriminator are fixed. The goal of training the generator \(G(z)\) is to produce samples that fool \(D\). The output of the generator is an image that can be used as the input for the discriminator. Therefore, the generator wants to maximize \(D(G(z))\) (or equivalently, minimize \(1-D(G(z))\)) because \(D\) is a probability estimate that ranges only between 0 and 1. We call this concept perceptual loss; it encourages the reconstructed image to be similar to the samples drawn from the training set by minimizing the perceptual loss.

\[\ell_{perceptual}=\mathbb{E}_{z\sim p_{z}(z)}[\log(1-D(G(z)))]. \tag{2}\]

In summary, when the discriminator \(D\) is shown an image produced by the generator \(G\), it adjusts its parameters to make its output \(D(G(z))\) smaller, whereas \(G\) trains itself to produce images that fool \(D\) into thinking they are real. It does this by using the gradient of \(D\) with respect to each sample it produces. In other words, \(G\) tries to minimize the value function while \(D\) tries to maximize it; consequently, it is a minimax game that is defined as follows:

\[\min_{G}\max_{D}V(D,G)=\mathbb{E}_{x\sim p_{\text{data}}(x)}\log D(x)+\mathbb{E}_{z\sim p_{z}(z)}[\log(1-D(G(z)))]\,. \tag{3}\]

To make the images generated by the generator more similar to the real images, we train the generator to match the expected values of the features in the multi-feature layer of the discriminator. Letting \(f(x)\) denote the activations on the multi-feature layer of the discriminator, the feature-matching loss for the generator is defined as follows:

\[\ell_{feature\_match}=||\mathbb{E}_{x\sim p_{\text{data}}(x)}f(x)-\mathbb{E}_{z\sim p_{z}(z)}f(G(z))||_{2}^{2}\,. \tag{4}\]

Therefore, our final objective (the combination of Eqn. 2 and Eqn. 4) for training the generator is to minimize Eqn. 5.

\[\ell_{final}=\ell_{perceptual}+\ell_{feature\_match}. \tag{5}\]

Figure 2: Network architectures of the generator and the discriminator: (a) the MARTA GANs generator used for the UC-Merced Land-use dataset. The input is a 100-dimensional uniform distribution \(p_{z}(z)\) and the output is a \(256\times 256\)-pixel RGB image; (b) the MARTA GANs discriminator used for the UC-Merced Land-use dataset. The discriminator is treated as a feature extractor that extracts features from the multi-feature layer.

### _Network architectures_

The details of the generator and discriminator in MARTA GANs are as follows: The generator takes 100 random numbers drawn from a uniform distribution as input. Then, the result is reshaped into a four-dimensional tensor. We used six deconvolutional layers in our generator to learn its own spatial upsampling and upsample the \(4\times 4\) feature maps to \(256\times 256\) remote sensing images. Fig. 2(a) shows a visualization of the generator. For the discriminator, the first layer takes input images, including both real and synthesized images. We use convolutions in our discriminator, which allows it to learn its own spatial downsampling. As shown in Fig. 2(b),
by performing \(4\times 4\) max pooling, \(2\times 2\) max pooling and the identity function separately on the last three convolutional layers, we can produce feature maps that have the same spatial size, \(4\times 4\). Then, we concatenate the \(4\times 4\) feature maps along the channel dimension in the multi-feature layer. Finally, the multi-feature layer is flattened and fed into a single sigmoid output. The multi-feature layer serves two functions: 1) the features used for classification are extracted from the flattened multi-feature layer; 2) when training the generator, we use the feature-matching loss (Eqn. 4) to evaluate the similarity between the features of the fake and real images in the flattened multi-feature layer. We set the kernel sizes to \(4\times 4\) and the stride to 2 in all the convolutional and deconvolutional layers, because the deconvolutional layers avoid uneven overlap when the kernel size is divisible by the stride [11]. In the generator, all layers use ReLU activation except for the output layer, which uses the tanh function. We use LeakyReLU activation in the discriminator for all the convolutional layers; the slope of the leak was set to 0.2. We used batch normalization in both the generator and the discriminator, and the decay factor was 0.9.

## III Experiments

To verify the effectiveness of the proposed method, we trained MARTA GANs on two datasets: the UC Merced Land Use dataset [12] and the Brazilian Coffee Scenes dataset [4]. We carried out experiments on both datasets using a 5-fold cross-validation protocol and a regularized linear L2-SVM as the classifier. We implemented MARTA GANs in TensorLayer 1, a deep learning and reinforcement learning library extended from Google TensorFlow [13]. We scaled the input images to the range of [-1, 1] before training. All the models were trained with a batch size of 64, using the Adam optimizer with a learning rate of 0.0002 and a momentum term \(\beta_{1}\) of 0.5. Footnote 1: [http://tensorlayer.readthedocs.io/en/latest/](http://tensorlayer.readthedocs.io/en/latest/)

### _UC Merced dataset_

This dataset consists of images of 21 land-use classes (100 \(256\times 256\)-pixel images for each class). Some of the images from this dataset are shown in Fig. 3(a). We used moderate data augmentation on this dataset, flipping images horizontally and vertically and rotating them by 90 degrees, to increase the effective training set size. Training takes approximately 4 hours on a single NVIDIA GTX 1080 GPU. To evaluate the quality of the representations learned by the multi-feature layer, we trained on the UC-Merced data and extracted the features from different multi-feature layers. For clarity, we use \(f_{1}\) to denote the features from the last convolutional layer, \(f_{2}\) to denote the combined features from the last two convolutional layers, and so on. Based on the results shown in Fig. 4, we found that \(f_{3}\) achieved the highest accuracy.

Figure 3: Part of exemplary images. (a) Ten random images from the UC-Merced dataset. (b) Exemplary images produced by the generator trained on UC-Merced using the \(\ell_{final}\) (Eqn. 5) objective.

These results can be explained by two reasons. First, \(f_{3}\) has the same high-level information as \(f_{1}\) and \(f_{2}\), but it has more mid-level information compared with \(f_{1}\) and \(f_{2}\).
However, \\(f_{4}\\) has too much low-level information, which leads to the \"curse of dimensionality.\" Therefore, the features extracted from the last three convolutional layers in the discriminator resulted in the highest accuracy. As shown in Fig. 4, data augmentation is an effective way to reduce overfitting when training a large deep network. Augmentation generates more training image samples by rotating and flipping patches from original images. We also evaluated the performance between two types of loss: \\(\\ell_{perceptual}\\) (Eqn. 2) and \\(\\ell_{final}\\) (Eqn. 5) and found that using \\(\\ell_{final}\\) achieved the best performance. Synthesized remote sensing images when using \\(\\ell_{final}\\) are shown in Fig. (b)b. Fig. 5 depicts the confusion matrix of classification results for the two GAN architectures, DCGAN and MARTA GAN. DCGAN and MARTA GAN reached an overall accuracy of \\(87.76\\pm 0.64\\)% and \\(94.86\\pm 0.80\\)%, respectively. MARTA GAN is approximately 7% better because it used the multi-feature layer to merge the mid-level and global features. To improve the comparison, the accuracy classification performances of the methods for each class are shown in Table I. Compared to DCGAN, MARTA GAN achieves 100.00% accuracy in some scene categories (e.g., Beach, Airplane, etc.). Moreover, MARTA GAN also achieves higher accuracy in some very close classes, such as dense residential, building, medium residential, sparse residential. In addition, we visualized the global image representations encoded via MARTA GANs features of the UC-Merced dataset. We computed the features for all the scenes of the dataset and then used the t-SNE algorithm to embed the high-dimensional features in 2-D space. The final results are shown in Fig. 6. This visualization shows that features extracted from the multi-feature layer contain abstract semantic information because those close classes are also very close in 2-D space. Compared with the results of other tested methods, the method proposed in this work achieves the highest classification accuracy among the unsupervised methods. As shown in Table II, our method outperforms the SCMF [14] (a sparse coding based multiple-feature fusion method) by 3.82%. When the classification accuracy of our method is compared with LRFF [15] (an improved unsupervised feature learning algorithm based on spectral clustering), our method outperforms LRFF by more than 4%. While some of the supervised methods [4, 5] achieved an accuracy above 99%, these methods are fine-tuned from pre-trained models, which are usually trained with a large amount of labeled data (such as ImageNet). Compared with those methods, our unsupervised method requires fewer parameters. 
\begin{table} \begin{tabular}{c|c c c c c c c c c c c} \hline Class & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 \\ \hline DCGAN & \(85\pm 5.0\) & \(94\pm 2.2\) & \(89\pm 4.2\) & \(95\pm 3.5\) & \(82\pm 2.7\) & \(91\pm 2.2\) & \(78\pm 2.7\) & \(83\pm 2.7\) & \(88\pm 2.7\) & \(90\pm 0.0\) & \(79\pm 2.2\) \\ MARTA GAN & \(95\pm 3.5\) & \(100\pm 0.0\) & \(96\pm 4.2\) & \(100\pm 0.0\) & \(89\pm 4.2\) & \(99\pm 2.2\) & \(86\pm 6.5\) & \(97\pm 2.7\) & \(98\pm 2.7\) & \(94\pm 2.2\) & \(89\pm 2.2\) \\ \hline Class & \(12\) & \(13\) & \(14\) & \(15\) & \(16\) & \(17\) & \(18\) & \(19\) & \(20\) & \(21\) & \\ \hline DCGAN & \(89\pm 4.2\) & \(88\pm 2.7\) & \(95\pm 3.5\) & \(78\pm 4.5\) & \(93\pm 2.7\) & \(88\pm 2.7\) & \(97\pm 2.7\) & \(77\pm 2.7\) & \(95\pm 5.0\) & \(89\pm 4.2\) & \\ \hline MARTA GAN & \(94\pm 4.2\) & \(98\pm 2.7\) & \(100\pm 0.0\) & \(85\pm 5.0\) & \(100\pm 0.0\) & \(93\pm 2.7\) & \(100\pm 0.0\) & \(87\pm 5.7\) & \(97\pm 5.5\) & \(95\pm 5.0\) & \\ \hline \end{tabular} \end{table} Table I: Classification accuracy (%), given as means \(\pm\) standard deviations, of DCGAN and MARTA GAN for every class. The class labels are as follows: 1 = Mobile home park, 2 = Beach, 3 = Tennis courts, 4 = Airplane, 5 = Dense residential, 6 = Harbor, 7 = Buildings, 8 = Forest, 9 = Intersection, 10 = River, 11 = Sparse residential, 12 = Runway, 13 = Parking lot, 14 = Baseball diamond, 15 = Agricultural, 16 = Storage tanks, 17 = Chaparral, 18 = Golf course, 19 = Freeway, 20 = Medium residential, and 21 = Overpass.

Figure 4: Performance comparison using different features: (a) \(f_{1}\); (b) \(f_{2}\); (c) \(f_{3}\); (d) \(f_{4}\). Red curves: training with \(\ell_{final}\) and with data augmentation; cyan curves: training with \(\ell_{perceptual}\) and with data augmentation; yellow curves: training with \(\ell_{final}\) and without data augmentation; green curves: training with \(\ell_{perceptual}\) and without data augmentation.

Figure 5: Confusion matrices of (a) DCGAN and (b) MARTA GAN. The class labels are the same as in Table I.

Figure 6: 2-D feature visualization of global image representations of the UC-Merced dataset. The class labels are the same as in Table I.

### _Brazilian Coffee Scenes dataset_

To evaluate the generalization power of our model, we also performed experiments using the Brazilian Coffee Scenes dataset [4], which is a composition of scenes taken by the SPOT sensor in the green, red, and near-infrared bands. This dataset has 2,876 multispectral high-resolution scenes. It includes 1,438 tiles of coffee and 1,438 tiles of non-coffee with a \(64\times 64\)-pixel resolution. Fig. 7(a) shows some examples from this dataset. We did not use data augmentation on this dataset because it contains sufficient data to train the network. Table II shows the results obtained with the proposed method. In general, the results are significantly worse than those on the UC-Merced dataset, despite reducing the classification from a 21-class to a 2-class problem. Brazilian Coffee Scenes is a challenging dataset because of the high intra-class variability caused by different crop management techniques, different plant ages, and spectral distortions and shadows. Nevertheless, our results are better than those of BIC [4].

## IV Conclusion

This paper introduced a representation learning algorithm called MARTA GANs.
In contrast to previous approaches that require supervision, MARTA GANs is completely unsupervised; it can learn interpretable representations even from challenging remote sensing datasets. In addition, MARTA GANs introduces a new multiple-feature-matching layer that learns multi-scale spatial information for high-resolution remote sensing. Other possible future extensions to the work described in this paper include producing high-quality samples of remote sensing images using the generator and classifying remote sensing images in a semi-supervised manner to improve classification accuracy.

## References

* [1] J. Sivic and A. Zisserman, "Video Google: A text retrieval approach to object matching in videos," in _Proceedings of the IEEE International Conference on Computer Vision_, 2003. * [2] S. Lazebnik, C. Schmid, and J. Ponce, "Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories," in _Computer vision and pattern recognition, 2006 IEEE computer society conference on_, vol. 2. IEEE, 2006, pp. 2169-2178. * [3] M. Lienou, H. Maitre, and M. Datcu, "Semantic annotation of satellite images using latent dirichlet allocation," _IEEE Geoscience and Remote Sensing Letters_, vol. 7, no. 1, pp. 28-32, 2010. * [4] O. A. B. Penatti, K. Nogueira, and J. A. dos Santos, "Do deep features generalize from everyday objects to remote sensing and aerial scenes domains?" in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops_, 2015, pp. 44-51. * [5] K. Nogueira, O. A. B. Penatti, and J. A. dos Santos, "Towards better exploiting convolutional neural networks for remote sensing scene classification," _Pattern Recognition_, vol. 61, pp. 539-556, 2017. * [6] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "Imagenet classification with deep convolutional neural networks," in _Advances in neural information processing systems_, 2012, pp. 1097-1105. * [7] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol, "Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion," _Journal of Machine Learning Research_, vol. 11, no. Dec, pp. 3371-3408, 2010. * [8] A. Makhzani and B. Frey, "K-sparse autoencoders," _arXiv preprint arXiv:1312.5663_, 2013. * [9] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," in _Advances in neural information processing systems_, 2014, pp. 2672-2680. * [10] A. Radford, L. Metz, and S. Chintala, "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks," _arXiv_, pp. 1-15, 2015. [Online]. Available: [http://arxiv.org/abs/1511.06434](http://arxiv.org/abs/1511.06434) * [11] A. Odena, V. Dumoulin, and C. Olah, "Deconvolution and checkerboard artifacts," _Distill_, 2016. [Online]. Available: [http://distill.pub/2016/deconv-checkerboard](http://distill.pub/2016/deconv-checkerboard) * [12] Y. Yang and S. Newsam, "Bag-of-visual-words and spatial extensions for land-use classification," in _Proceedings of the 18th SIGSPATIAL international conference on advances in geographic information systems_. ACM, 2010, pp. 270-279. * [13] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin _et al._, "Tensorflow: Large-scale machine learning on heterogeneous distributed systems," _arXiv preprint arXiv:1603.04467_, 2016. * [14] G. Sheng, W. Yang, T. Xu, and H.
Sun, \"High-resolution satellite scene classification using a sparse coding based multiple feature combination,\" _International journal of remote sensing_, vol. 33, no. 8, pp. 2395-2412, 2012. * [15] F. Hu, G.-S. Xia, Z. Wang, X. Huang, L. Zhang, and H. Sun, \"Unsupervised feature learning via spectral clustering of multidimensional patches for remotely sensed scene classification,\" _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, vol. 8, no. 5, 2015. \\begin{table} \\begin{tabular}{c|l l l l} \\hline DataSet & Method & Description & Parameters & Accuracy \\\\ \\hline \\multirow{4}{*}{UC-Merced} & SCMF [4] & Unsupervised & - & \\(91.03\\pm 0.48\\) \\\\ & UFL-SC [15] & Unsupervised & - & \\(90.26\\pm 1.51\\) \\\\ & OverFeat\\_1 + Caffe [4] & Supervised & \\(205\\)M & \\(99.43\\pm 0.27\\) \\\\ & GoogLeNet [5] & Supervised & \\(3\\)M & \\(99.47\\pm 0.50\\) \\\\ & **MARTA GANs** & **Unsupervised** & **2.8M** & \\(94.86\\pm 0.80\\) \\\\ \\hline \\multirow{4}{*}{Coffee} & BIC [4] & Unsupervised & - & \\(87.03\\pm 1.07\\) \\\\ & OverFeat\\_1+OverFeat\\_5 [4] supervised & \\(29\\)M & \\(83.04\\pm 2.00\\) \\\\ \\cline{1-1} & CaffeNet [5] & Supervised & \\(60\\)M & \\(94.45\\pm 1.20\\) \\\\ \\cline{1-1} & **MARTA GANs** & **Unsupervised** & **0.18M** & \\(89.86\\pm 0.98\\) \\\\ \\hline \\end{tabular} \\end{table} Table II: Overall classification accuracy (%) of reference and proposed methods on the UC-Merced dataset and Coffee Scenes dataset. Our result is in bold. Figure 7: Parts of exemplary images: (a) ten random images from the Brazilian Coffee Scenes dataset; (b) exemplary images produced by a generator trained on the Brazilian Coffee Scenes dataset using the \\(\\ell_{final}\\) (Eqn. 5) objective.
With the development of deep learning, supervised learning has frequently been adopted to classify remotely sensed images using convolutional networks (CNNs). However, due to the limited amount of labeled data available, supervised learning is often difficult to carry out. Therefore, we proposed an unsupervised model called multiple-layer feature-matching generative adversarial networks (MARTA GANs) to learn a representation using only unlabeled data. MARTA GANs consists of both a generative model \(G\) and a discriminative model \(D\). We treat \(D\) as a feature extractor. To fit the complex properties of remote sensing data, we use a fusion layer to merge the mid-level and global features. \(G\) can produce numerous images that are similar to the training data; therefore, \(D\) can learn better representations of remotely sensed images using the training data provided by \(G\). The classification results on two widely used remote sensing image databases show that the proposed method significantly improves the classification performance compared with other state-of-the-art methods.
Keywords: unsupervised representation learning, generative adversarial networks, scene classification.
arxiv-format/2011_05418v2.md
# Self-supervised Learning of LiDAR Odometry for Robotic Applications Julian Nubert\\({}^{1,2}\\), Shehryar Khattak\\({}^{1}\\) and Marco Hutter\\({}^{1}\\) *This work is supported in part by the Max Planck ETH Center for Learning Systems, the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme grant agreement No.852044, the Swiss National Science Foundation through the National Centre of Competence in Research Robotics (NCCR) and the Swiss National Science Foundation (SNSF) as part of project No.188596.\\({}^{1}\\)The authors are with the Robotic Systems Lab, ETH Zurich, {nubertj, skhattak, mahutter}@.ethz.ch.\\({}^{2}\\)The author is with the Max Planck ETH Center for Learning Systems, Germany/Switzerland. ## I Introduction Reliable and accurate pose estimation is one of the core components of most robot autonomy pipelines, as robots rely on their pose information to effectively navigate in their surroundings and to efficiently complete their assigned tasks. In the absence of external pose estimates, e.g. provided by GPS or motion-capture systems, robots utilize on-board sensor data for the estimation of their pose. Recently, 3D LiDARs have become a popular choice due to reduction in weight, size, and cost. LiDARs can be effectively used to estimate the 6-DOF robot pose as they provide direct depth measurements, allowing for the estimation at scale while remaining unaffected by certain environmental conditions, such as poor illumination and low-texture. To estimate the robot's pose from LiDAR data, established model-based techniques such as Iterative Closest Point (ICP) [1, 2] typically perform a scan-to-scan alignment between consecutive LiDAR scans. However, to maintain real-time operation, in practice only a subset of available scan data is utilized. This subset of points is selected by either down-sampling or by selecting salient scan points deemed to contain the most information [3]. However, such data reduction techniques can lead to a non-uniform spatial distribution of points, as well as to an increase in sensitivity of the underlying estimation process to factors such as the mounting orientation of the sensor. More complex features [4, 5, 6] can be used to make the point selection process invariant to sensor orientation and robot pose, however high-computational cost makes them unsuitable for real-time robot operation. Furthermore, although using all available scan data may not be necessary, yet it has been shown that utilizing more scan data up to a certain extent can improve the quality of the scan-to-scan alignment process [7]. In order to utilize all available scan data efficiently, learning-based approaches offer a potential solution for the estimation of the robot's pose directly from LiDAR data. Similar approaches have been successfully applied to camera data and have demonstrated promising results [8]. However, limited work has been done in the field of learning-based robot pose estimation using LiDAR data, in particular for applications outside the domain of autonomous driving. Furthermore, most of the proposed approaches require labelled or supervision data for their training, making them limited in scope as annotating LiDAR data is particularly time consuming [9], and obtaining accurate ground-truth data for longer missions, especially indoors, is particularly difficult. 
Motivated by the challenges mentioned above, this work presents a self-supervised learning-based approach that utilizes LiDAR data for robot pose estimation. Due to the self-supervised nature of the proposed approach, it does not require any labeled or ground-truth data during training. In contrast to previous work, arbitrary methods can be utilized for performing the normal computation on the training set; in this work PCA is used. Furthermore, the presented approach does not require expensive pre-processing of the data during inference; instead, only data directly available from the LiDAR is utilized. As a result, the proposed approach is computationally lightweight and is capable of operating in real-time on a mobile-class CPU. The performance of the proposed approach is verified and compared against existing methods on driving datasets. Furthermore, the suitability towards complex real-world robotic applications is demonstrated for the first time by conducting autonomous mapping missions with the quadrupedal robot ANYmal [10], shown in operation in Figure 1, as well as evaluating the mapping performance on DARPA Subterranean (SubT) Challenge datasets [11]. Finally, the code of the proposed method is publicly available for the benefit of the robotics community1. Footnote 1: [https://github.com/leggedrobotics/DeLORA](https://github.com/leggedrobotics/DeLORA)

Fig. 1: ANYmal during an autonomous exploration and mapping mission at ETH Zurich, with the height-colored map overlaid on top of the image. The lack of environmental geometric features as well as rapid rotation changes due to the motions of walking robots make the mission challenging.

## II Related Work

To estimate robot pose from LiDAR data, traditional or model-based approaches, such as ICP [1, 2], typically minimize either point-to-point or point-to-plane distances between points of consecutive scans. In addition, to maintain real-time performance, these approaches choose to perform such minimization on only a subset of available scan data. Naively, this subset can be selected by sampling points in a random or uniform manner. However, this approach can either fail to maintain uniform spatial scan density or inaccurately represent the underlying local surface structure. As an alternative, works presented in [12] and [13] aggregate the depth and normal information of local point neighborhoods and replace them by more compact voxel and surfel representations, respectively. The use of such representations has shown improved real-time performance; nevertheless, real scan data needs to be maintained separately as it gets replaced by its approximation. In contrast, approaches such as [3, 14] choose to extract salient points from individual LiDAR scan-lines in order to reduce the input data size while utilizing original scan data and maintaining a uniform distribution. These approaches have demonstrated excellent results, yet such scan-line point selection makes these approaches sensitive to the mounting orientation of the sensor, as only depth edges perpendicular to the direction of the LiDAR scan can be detected. To select salient points invariant to sensor orientation, [15] proposes to find point pairs across neighboring scan lines. However, such selection comes at increased computational cost, requiring random sampling of a subset of these point pairs for real-time operation. To efficiently utilize all available scan data without sub-sampling or hand-crafted feature extraction, learning-based approaches can provide a potential solution.
In [16, 17], the authors demonstrate the feasibility of using learned feature points for LiDAR scan registration. Similarly, for autonomous driving applications, [18] and [19] deploy supervised learning techniques for scan-to-scan and scan-to-map matching purposes, respectively. However, these approaches use learning as an intermediate feature extraction step, while the estimation is obtained via a geometric transformation [18] and by solving a classification problem [19], respectively. To estimate robot pose in an end-to-end manner from LiDAR data, [20] utilizes convolutional neural networks to estimate the relative translation between consecutive LiDAR scans, which is then separately combined with relative rotation estimates from an IMU. In contrast, [21] demonstrates the application of learning-based approaches towards full 6-DOF pose estimation directly from LiDAR data alone. However, it should be noted that all these techniques are supervised in nature, and hence rely on the provision of ground-truth supervision data for training. Furthermore, these techniques are primarily targeted towards autonomous driving applications which, as noted by [20], are very limited in their rotational pose component. Unsupervised approaches have shown promising results with camera data [8, 22, 23]. However, the only related work similar to the proposed approach and applied to LiDAR scans is presented in [24], which, while performing well for driving use-cases, does not demonstrate its performance on more complex robotic applications. Moreover, it requires a simplified normal computation due to its network and loss design, as well as an additional field-of-view loss in order to avoid divergence of the predicted transformation. In this work, a self-supervised learning-based approach is presented that can estimate the 6-DOF robot pose directly from consecutive LiDAR scans, while being able to operate in real-time on a mobile-class CPU. Furthermore, due to a novel design, arbitrary methods can be used for the normal computation, without the need for explicit regularization during training. Finally, the application of the proposed work is not limited to autonomous driving, and experiments with legged and tracked robots as well as three different sensors demonstrate the variety of real-world applications.

## III Proposed Approach

In order to cover a large spatial area around the sensor, one common class of LiDARs measures point distances while rotating about its own yaw axis. As a result, a data flow of detected 3D points is generated, often bundled by the sensor as full point cloud scans \(\mathcal{S}\). This work proposes a robot pose estimator which is self-supervised in nature and only requires LiDAR point cloud scans \(\mathcal{S}_{k},\mathcal{S}_{k-1}\) from the current and previous time steps as its input.

### _Problem Formulation_

At every time step \(k\in\mathbb{Z}^{+}\), the aim is to estimate a relative homogeneous transformation \(T_{k-1,k}\in SE(3)\), which transforms poses expressed in the sensor frame at time step \(k\) into the previous sensor frame at time step \(k-1\). As an observation of the world, the current and previous point cloud scans \(\mathcal{S}_{k}\in\mathbb{R}^{n_{k}\times 3}\) and \(\mathcal{S}_{k-1}\in\mathbb{R}^{n_{k-1}\times 3}\) are provided, where \(n_{k}\) and \(n_{k-1}\) are the numbers of point returns in the corresponding scans. Additionally, as a pre-processing step and only for training purposes, normal vectors \(\mathcal{N}_{k}(\mathcal{S}_{k})\) are extracted.
Due to measurement noise, the non-static nature of environments and the motion of the robot in the environment, the relationship between the transformation \(T_{k-1,k}\) and the scans can be described by the following unknown conditional probability density function: \[p(T_{k-1,k}|\mathcal{S}_{k-1},\mathcal{S}_{k}). \tag{1}\] In this work, it is assumed that a unique deterministic map \(\mathcal{S}_{k-1},\mathcal{S}_{k}\mapsto T_{k-1,k}\) exists, of which an approximation \(\tilde{T}_{k-1,k}(\theta,\mathcal{S}_{k-1},\mathcal{S}_{k})\) is modeled by a deep neural network. Here, \(\theta\in\mathbb{R}^{P}\) denotes the weights and biases of the network, with \(P\) being the number of trainable parameters. During training, the values of \(\theta\) are obtained by optimizing a geometric loss function \(\mathcal{L}\), s.t. \(\theta^{*}=\operatorname*{argmin}_{\theta}\mathcal{L}(\tilde{T}_{k-1,k}(\theta ),\mathcal{S}_{k-1},\mathcal{S}_{k},\mathcal{N}_{k-1},\mathcal{N}_{k})\), which will be discussed in more detail in Sec. III-D.

### _Network Architecture and Data Flow_

As this work focuses on general robotic applications, priority in the approach's design is given to achieving real-time performance on hardware that is commonly deployed on robots. For this purpose, computationally expensive pre-processing operations, such as the calculation of normal vectors as done in [21], are avoided. Furthermore, during inference the proposed approach only requires raw sensor data for its operation. An overview of the proposed approach is presented in Figure 2, with red letters _a), b), C., D._ providing references to the following subsections and paragraphs.

_Data Representation:_ There are three common techniques to perform neural network operations on point cloud data: i) mapping the point cloud to an image representation and applying \(2D\) techniques and architectures [25, 26], ii) performing \(3D\) convolutions on voxels [25, 27], and iii) performing operations directly on unordered point cloud scans [28, 29]. Due to PointNet's [28] invariance to rigid transformations and the high memory requirements of \(3D\) voxels for sparse LiDAR scans, this work utilizes the \(2D\) image representation of the scan as the input to the network, similar to DeepLO [24]. To obtain the image representation, a geometric mapping of the form \(\phi:\mathbb{R}^{n\times 3}\rightarrow\mathbb{R}^{4\times H\times W}\) is applied, where \(H\) and \(W\) denote the height and width of the image, respectively. Coordinates \((u,v)\) of the image are calculated by discretizing the azimuth and polar angles in spherical coordinates, while making sure that only the nearest point is kept at each pixel location. A natural choice for \(H\) is the number of vertical scan-lines of the sensor, whereas \(W\) is typically chosen to be smaller than the number of points per ring, in order to obtain a dense image (cf. _a)_ in Figure 2). In addition to the \(3D\) point coordinates, the range is also added, yielding \((x,y,z,r)^{\intercal}\) for each valid pixel of the image, given as \(\mathcal{I}=\phi(\mathcal{S})\).

_Network:_ In order to estimate \(\tilde{T}_{k-1,k}(\theta,\mathcal{I}_{k-1},\mathcal{I}_{k})\), a network architecture consisting of a combination of convolutional, adaptive average pooling, and fully connected layers is deployed, which produces a fixed-size output independent of the input dimensions of the image.
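To make the mapping \(\phi\) concrete, the following NumPy sketch projects a point cloud onto an \(H\times W\) image by discretizing the azimuth and elevation angles, keeping only the nearest return per pixel and storing \((x,y,z,r)\) in each valid pixel. This is our own illustration rather than the released DeLORA code, and the vertical field-of-view bounds are assumptions that would have to be set to match the actual sensor.

```python
import numpy as np

def project_to_range_image(points, H=16, W=720, fov_up_deg=15.0, fov_down_deg=-15.0):
    """Map an (n, 3) point cloud to a (4, H, W) image of (x, y, z, range).

    The field-of-view bounds are illustrative; pixels without a return stay zero.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    azimuth = np.arctan2(y, x)                      # in [-pi, pi)
    elevation = np.arcsin(z / np.maximum(r, 1e-8))  # angle above the sensor's x-y plane

    fov_up, fov_down = np.radians(fov_up_deg), np.radians(fov_down_deg)
    u = ((fov_up - elevation) / (fov_up - fov_down) * (H - 1)).round().astype(int)
    v = ((azimuth + np.pi) / (2 * np.pi) * (W - 1)).round().astype(int)
    valid = (u >= 0) & (u < H) & (v >= 0) & (v < W)

    image = np.zeros((4, H, W), dtype=np.float32)
    nearest = np.full((H, W), np.inf)
    for i in np.flatnonzero(valid):
        if r[i] < nearest[u[i], v[i]]:              # keep only the nearest return per pixel
            nearest[u[i], v[i]] = r[i]
            image[:, u[i], v[i]] = (x[i], y[i], z[i], r[i])
    return image
```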
To estimate this transformation, \(8\) ResNet [30]-like blocks, which have proven to work well for image-to-value/label mappings, constitute the core of the architecture. In total, the network employs approximately \(10^{7}\) trainable parameters. After generating a feature map of \((N,512,\frac{H}{2},\frac{W}{32})\) dimensions, adaptive average pooling along the height and width of the feature map is performed to obtain a single value for each channel. The resulting feature vector is then fed into a single multi-layer perceptron (MLP), before splitting into two separate MLPs for predicting the translation \(t\in\mathbb{R}^{3}\) and the rotation in the form of a quaternion \(q\in\mathbb{R}^{4}\). Throughout all convolutional layers, circular padding is applied in order to achieve the same behavior as for a true (imaginary) \(360^{\circ}\) circular image. After normalizing the quaternion, \(\bar{q}=\frac{q}{|q|}\), the transformation matrix \(\tilde{T}_{k-1,k}(\bar{q}(\theta,\mathcal{S}_{k-1},\mathcal{S}_{k}),t(\theta, \mathcal{S}_{k-1},\mathcal{S}_{k}))\) is computed.

Fig. 2: Visualization of the proposed approach. The letters _a)_, _b)_, _C._ and _D._ correspond to the identically named subsections in Sec. III. Starting from the previous and current sensor inputs \(\mathcal{S}_{t-1}\) and \(\mathcal{S}_{t}\), two LiDAR range images \(\mathcal{I}_{t-1},\mathcal{I}_{t}\) are created, which are then fed into the network. The output of the network is a geometric transformation, which is applied to the source scan and normals \(\mathcal{S}_{k},\mathcal{N}_{k}\). After finding target correspondences with the aid of a KD-Tree, a geometric loss is computed, which is then back-propagated to the network during training.

### _Normals Computation_

Learning rotation and translation at once is a difficult task [20], since both impact the resulting loss independently and can potentially make the training unstable. However, recent works [21, 24] that have utilized normal vector estimates in their loss functions have demonstrated good estimation performance. Nevertheless, utilizing normal vectors for loss calculation is not trivial, and due to the difficulty of integrating _"direct optimization approaches into the learning process"_ [24], DeepLO computes its normal estimates with simple averaging methods by explicitly computing the cross product of vertex-points in the image. In the proposed approach, no loss-gradient needs to be back-propagated through the normal vector calculation (i.e., the eigen-decomposition), as normal vectors are calculated in advance. Instead, normal vectors computed offline are simply rotated using the rotational part of the computed transformation matrix, allowing for simple and fast gradient flow with arbitrary normal computation methods. Hence, in this work normal estimates are computed via a direct optimization method, namely principal component analysis (PCA) of the estimated covariance matrix of neighborhoods of points as described in [31], allowing for more accurate normal vector predictions. Furthermore, normals are only computed for points that have a minimum number of valid neighbors, where the validity of neighbors depends on their depth difference from the point of interest \(x_{i}\), i.e. \(|\text{range}(x_{i})-\text{range}(x_{nb})|_{2}\leq\alpha\), with \(\alpha\) empirically set to \(0.5\) m in the conducted experiments.
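The PCA-based normal computation, together with the neighbor-validity test above, can be sketched as follows. This is a simplified NumPy illustration with an assumed k-nearest-neighbor neighborhood and assumed parameter values (other than \(\alpha=0.5\) m); the paper only specifies that PCA of the local covariance matrix is used, so the remaining details are our own choices.

```python
import numpy as np
from scipy.spatial import cKDTree

def pca_normals(points, k=10, alpha=0.5, min_valid=5):
    """Estimate one unit normal per point from the eigen-decomposition of its
    local covariance matrix. Neighbors whose range differs from the query point
    by more than alpha are discarded; points with too few valid neighbors are
    marked with NaN (no normal)."""
    tree = cKDTree(points)
    ranges = np.linalg.norm(points, axis=1)
    normals = np.full(points.shape, np.nan)
    _, idx = tree.query(points, k=k + 1)             # first neighbor is the point itself
    for i, nbrs in enumerate(idx):
        nbrs = nbrs[1:]
        nbrs = nbrs[np.abs(ranges[nbrs] - ranges[i]) <= alpha]
        if len(nbrs) < min_valid:
            continue
        centered = points[nbrs] - points[nbrs].mean(axis=0)
        cov = centered.T @ centered / len(nbrs)
        _, eigvecs = np.linalg.eigh(cov)             # eigenvalues in ascending order
        normals[i] = eigvecs[:, 0]                   # direction of smallest variance
    return normals
```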
### _Geometric Loss_

In this work, a combination of geometric losses akin to the cost functions in model-based methods [2] is used, namely a point-to-plane and a plane-to-plane loss. The rigid body transformation \(\tilde{T}_{k-1,k}\) is applied to the source scan, s.t. \(\hat{\mathcal{S}}_{k-1}=\tilde{T}_{k-1,k}\odot\mathcal{S}_{k}\), and its rotational part to all source normal vectors, s.t. \(\hat{\mathcal{N}}_{k-1}=\text{rot}(\tilde{T}_{k-1,k})\odot\mathcal{N}_{k}\), where \(\odot\) denotes an element-wise matrix multiplication. The loss function then incentivizes the network to generate a \(\tilde{T}_{k-1,k}\) s.t. \(\hat{\mathcal{S}}_{k-1},\hat{\mathcal{N}}_{k-1}\) match \(\mathcal{S}_{k-1},\mathcal{N}_{k-1}\) as closely as possible.

_Correspondence Search:_ In contrast to [21, 24], where image pixel locations are used as correspondences, this work utilizes a full correspondence search in \(3D\) using a KD-Tree [32] among the transformed source and target. This has two main advantages: First, as opposed to [24], there is no need for an additional field-of-view loss, since correspondences are also found for points that are mapped to regions outside of the image boundaries. Second, this allows for the handling of cases close to sharp edges, which, when using discretized pixel locations only [24], can lead to wrong correspondences for points with large depth deviations. Once point correspondences have been established, the following two loss functions can be computed.

_Point-to-Plane Loss:_ For each point \(\hat{s}_{b}\) in the transformed source scan \(\hat{\mathcal{S}}_{k-1}\), the distance to the associated point \(s_{b}\) in the target scan is computed and projected onto the target surface at that position, i.e.,

\[\mathcal{L}_{\text{p2n}}=\frac{1}{n_{k}}\sum_{b=1}^{n_{k}}|(\hat{s}_{b}-s_{b})\cdot n_{b}|_{2}^{2}, \tag{2}\]

where \(n_{b}\) is the target normal vector. If no normal exists either at the source or at the target point, the point is considered invalid and omitted from the loss calculation.

_Plane-to-Plane Loss:_ In the second loss term, the surface orientation around the two points is compared. Let \(\hat{n}_{b}\) and \(n_{b}\) be the normal vectors at the transformed source and target locations; then the loss is computed as follows:

\[\mathcal{L}_{n2n}=\frac{1}{n_{k}}\sum_{b=1}^{n_{k}}|\hat{n}_{b}-n_{b}|_{2}^{2}. \tag{3}\]

Again, point correspondences are only selected for the loss computation if normals are present at both point locations. The final loss is then computed as \(\mathcal{L}=\lambda\cdot\mathcal{L}_{p2n}+\mathcal{L}_{n2n}\). The ratio \(\lambda\) did not significantly impact the performance, with both terms \(\mathcal{L}_{p2n}\) and \(\mathcal{L}_{n2n}\) converging independently. For the conducted experiments, \(\lambda\) was set to \(1\); a compact sketch of the correspondence search and loss computation is given below.

## IV Experimental Results

To thoroughly evaluate the proposed approach, testing is performed on three robotic datasets using different robot types, different LiDAR sensors and sensor mounting orientations. First, using the quadrupedal robot ANYmal, the suitability of the proposed approach for real-world autonomous missions is demonstrated by integrating its pose estimates into a mapping pipeline and comparing against a state-of-the-art model-based approach [3].
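The sketch below summarizes the correspondence search and the two loss terms of Section III-D. It is written in plain NumPy/SciPy for readability, so it is not differentiable as shown; in the actual training pipeline the same quantities would be evaluated on the network's tensors so that gradients can flow back into \(\tilde{T}_{k-1,k}\). Missing normals are assumed to be encoded as NaN rows, as in the normal-estimation sketch above, and averaging over valid correspondences is our simplification of the \(1/n_{k}\) normalization in Eqns. (2) and (3).

```python
import numpy as np
from scipy.spatial import cKDTree

def geometric_loss(src_pts, src_normals, tgt_pts, tgt_normals, T, lam=1.0):
    """Point-to-plane (Eqn. 2) plus plane-to-plane (Eqn. 3) loss for a 4x4
    homogeneous transform T mapping the source scan S_k into the frame of the
    target scan S_{k-1}."""
    R, t = T[:3, :3], T[:3, 3]
    src_tf = src_pts @ R.T + t                    # transformed source points
    nrm_tf = src_normals @ R.T                    # rotated source normals

    _, corr = cKDTree(tgt_pts).query(src_tf)      # nearest target index per source point
    n_tgt = tgt_normals[corr]
    valid = ~(np.isnan(nrm_tf).any(axis=1) | np.isnan(n_tgt).any(axis=1))

    diff = src_tf[valid] - tgt_pts[corr][valid]
    p2n = np.mean(np.sum(diff * n_tgt[valid], axis=1) ** 2)             # Eqn. (2)
    n2n = np.mean(np.sum((nrm_tf[valid] - n_tgt[valid]) ** 2, axis=1))  # Eqn. (3)
    return lam * p2n + n2n
```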
Next, the reliability of the proposed approach is demonstrated by applying it to datasets from the DARPA SubT Challenge [11], collected using a tracked robot, and comparing the built map against the ground-truth map. Finally, to aid numerical comparison with existing work, an evaluation is conducted on the KITTI odometry benchmark [33]. The proposed approach is implemented using _PyTorch_ [34], utilizing the KD-Tree search component from _SciPy_. For testing, the model is embedded into a ROS [35] node. The full implementation is made publicly available1. Footnote 1: [https://github.com/leggedrobotics/DeLORA](https://github.com/leggedrobotics/DeLORA)

### _ANYmal: Autonomous Exploration Mission_

To demonstrate the suitability for complex real-world applications, the proposed approach is tested on data collected during autonomous exploration and mapping missions conducted with the ANYmal quadrupedal robot [10]. In contrast to wheeled robots and cars, ANYmal with its learning-based controller [36] has more variability in roll and pitch angles during walking. Additionally, rapid large changes in yaw are introduced due to the robot's ability to turn on the spot. During these experiments, the robot was tasked to autonomously explore [37] and map [38] a previously unknown indoor environment and autonomously return to its start position. The experiments were conducted in the basement of the CLA building at ETH Zurich, containing long tunnel-like corridors, as shown in Figure 1, and during each mission ANYmal traversed an average distance of \(250\) meters. For these missions ANYmal was equipped with a Velodyne VLP-16 Puck Lite LiDAR. In order to demonstrate the robustness of the proposed method, during the test mission the LiDAR was mounted in an upside-down orientation, while during training it was mounted in the normal upright orientation. To record the training set, two missions were conducted with the robot starting from the right side-entrance of the main course. For testing, the robot started its mission from the previously unseen left side-entrance, located on the opposing end of the main course. During training and test missions, the robot never visited the starting locations of the other mission as they were physically closed off. To demonstrate the utility of the proposed method for mapping applications, the estimated robot poses were combined with the mapping module of LOAM [3]. Figure 4 shows the created map, the robot path during the test, as well as the starting locations for the training and test missions. During testing, a single prediction takes about \(48\)ms on an _i7-8565U_ low-power laptop CPU, and \(13\)ms on a small _GeForce MX250_ laptop GPU, with \(n_{k}\approx 32,000\), \(H=16\), \(W=720\). Upon visual inspection it can be noted that the created map is consistent with the environmental layout. Moreover, to facilitate a quantitative evaluation in the absence of external ground-truth, the relative pose estimates of the proposed method are compared against those provided by a popular open-source LOAM [3] implementation2. The quantitative results are presented in Table I, with corresponding error plots shown in Figure 5. A very low difference can be observed between the pose estimates produced by the proposed approach and those provided by LOAM, hence demonstrating its suitability for real-world mapping applications.
Footnote 2: [https://github.com/laboshinl/loam_velodyne](https://github.com/laboshinl/loam_velodyne)

### _DARPA SubT Challenge Urban Circuit_

Next, the proposed approach is tested on the DARPA SubT Challenge Urban Circuit datasets [11]. These datasets were collected using an iRobot PackBot Explorer tracked robot carrying an Ouster OS1-64 LiDAR at Satsop Business Park in Washington, USA. The dataset divides the scans of the nuclear power plant facility into Alpha and Beta courses with a further partition into upper and lower floors, with a map of each floor provided as ground-truth. It is worth noting that again a different LiDAR sensor is deployed in this dataset. To test the approach's operational generalization, training was performed on scans from the Alpha course, with testing being done on the Beta course. Similar to before, the robot pose estimates were combined with the LOAM mapping module. The created map is compared with the LOAM implementation2 and the ground-truth maps in Figure 3. Due to the complex and narrow nature of the environment, as well as the ability of the ground robot to make fast in-spot yaw rotations, it can be noted that the LOAM map becomes inconsistent. In contrast, the proposed approach is not only able to generalize and operate in the new environment of the test set, but it also provides more reliable pose estimates and produces a more consistent map when compared to the DARPA-provided ground-truth map.

Fig. 3: Comparison of maps created by using pose estimates from the proposed approach and the LOAM2 implementation against the ground-truth map, as provided in the DARPA SubT Urban Circuit dataset. More consistent mapping results can be noted when comparing the proposed map with the ground-truth.

Fig. 4: Map created for an autonomous test mission of the ANYmal robot. The robot path during the mission is shown in green, with the triangles highlighting the different starting positions for the training and test sets.

Fig. 5: Relative translation and rotation deviation plots for each axis between the proposed approach with mapping and the LOAM2 implementation.

### _KITTI: Odometry Benchmark_

To demonstrate real-world performance quantitatively and to aid the comparison to existing work, the proposed approach is evaluated on the KITTI odometry benchmark dataset [33]. The dataset is split into a training set (Sequences 00-08) and a test set (Sequences 09, 10), as also done in DeepLO [24] and most other learning-based works. The results of the proposed approach are presented in Table II, and are compared to model-based approaches [3, 13], supervised LiDAR odometry approaches [20, 21] and unsupervised visual odometry methods [8, 22, 23]. Only the 00-08 mean of the numeric results of LO-Net and Velas _et al._ needed to be adapted, since both were only trained on 00-06, yet the results remain very similar to the originally reported ones. Results are presented both for the pure proposed LiDAR scan-to-scan method and for the version that is combined with a LOAM [3] mapping module, as also used in Section IV-A and Section IV-B. Qualitative results of the trajectories generated by the predicted odometry estimates, as well as by the map-refined ones, are shown in Figure 6. The proposed approach provides good estimates with little drift, even on challenging sequences with dynamic objects (01) and sequences previously unobserved during training (09, 10). Nevertheless, especially for the test set, the scan-to-map refinement helps to achieve even better and more consistent results.
Quantitatively, the proposed method achieves similar results to the only other self-supervised LiDAR odometry approach [24], and outperforms it when combined with mapping, while also outperforming all other unsupervised visual odometry methods [8, 22, 23]. Similarly, by integrating the scan-to-map refinement, results close to the overall state of the art [3, 13, 21] are achieved. Furthermore, to understand the benefit of utilizing both geometric losses, two networks were trained from scratch on a different training/test split of the KITTI dataset. The results are presented in Table III and demonstrate the benefit of combining plane-to-plane (pl2pl) loss and point-to-plane (p2pl) loss over using the latter one alone, as done in [24]. ## V Conclusions This work presented a self-supervised learning-based approach for robot pose estimation directly from LiDAR data. The proposed approach does not require any ground-truth or labeled data during training and selectively applies geometric losses to learn domain-specific features while exploiting all available scan information. The versatility and suitability of the proposed approach towards real-world robotic applications is demonstrated by experiments conducted using legged, tracked and wheeled robots operating in a variety of indoor and outdoor environments. In future, integration of multi-modal sensory information, such as IMU data, will be explored to improve the quality of the estimation process. Furthermore, incorporating temporal components into the network design can potentially make the estimation process robust against local disturbances, which can especially be beneficial for robots traversing over rougher terrains. Fig. 6: Qualitative results of the proposed odometry, as well as the scan-to-map refined version of it. From left to right the following sequences are shown: \\(01,07\\) (training set), \\(09,10\\) (validation set). ## Acknowledgment The authors are thankful to Marco Tranzatto, Samuel Zimmermann and Timon Homberger for their assistance with ANYmal experiments. ## References * [1]A. Segal, D. Haehnel, and S. Thrun (2009) Generalized-icp. In Robotics: science and systems, Vol. 2, pp. 435. Cited by: SSI. [MISSING_PAGE_POST] . Gong, M. Niessner, M. Fisher, J. Xiao, and T. Funkhouser (2017) 3dmatch: learning local geometric descriptors from rgb-d reconstructions. In CVPR, Cited by: SSII-A. * [42]A. Gilhot, I. Vizzo, J. Behley, and C. Stachniss (2019) Rangenet++: fast and accurate lidar semantic segmentation. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Cited by: SSII-A. * [43]A. Milioto, I. Vizzo, J. Behley, and C. Stachniss (2019) Rangenet++: fast and accurate lidar semantic segmentation. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Cited by: SSII-A. * [44]A. Milioto, I. Vizzo, J. Behley, and C. Stachniss (2019) Rangenet++: fast and accurate lidar semantic segmentation. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Cited by: SSII-A. * [45]M. Melussiere, J. Lacoste, and F. Pomerleau (2020) Geometry preserving sampling method based on spectral decomposition for large-scale environments. Frontiers in Robotics and AI7, pp. 134. Cited by: SSII-A. * [46]M. Melussiere, J. Lacoste, and F. Pomerleau (2020) Geometry preserving sampling method based on spectral decomposition for large-scale environments. Frontiers in Robotics and AI7, pp. 134. Cited by: SSI. * [47]M. Melussiere, J. Lacoste, and F. 
* [53] W. Lu, Y. Zhou, G. Wan, S. Hou, and S. Song (2019) L3-Net: towards learning based lidar localization
Reliable robot pose estimation is a key building block of many robot autonomy pipelines, with LiDAR localization being an active research domain. In this work, a versatile self-supervised LiDAR odometry estimation method is presented in order to enable the efficient utilization of all available LiDAR data while maintaining real-time performance. The proposed approach selectively applies geometric losses during training, being cognizant of the amount of information that can be extracted from scan points. In addition, no labeled or ground-truth data is required, hence making the presented approach suitable for pose estimation in applications where accurate ground-truth is difficult to obtain. Furthermore, the presented network architecture is applicable to a wide range of environments and sensor modalities without requiring any network or loss function adjustments. The proposed approach is thoroughly tested for both indoor and outdoor real-world applications through a variety of experiments using legged, tracked and wheeled robots, demonstrating the suitability of learning-based LiDAR odometry for complex robotic applications.
Provide a brief summary of the text.
199
arxiv-format/2304_05530v1.md
# SceneCalib: Automatic Targetless Calibration of Cameras and Lidars in Autonomous Driving Ayon Sen, Gang Pan, Anton Mitrokhin, Ashraful Islam The authors are with NVIDIA Corporation, Santa Clara, CA 95051 USA. Corresponding author: Ayon Sen ([email protected]) ## I Introduction Autonomous vehicles typically incorporate several different sensor types for maximum coverage of their surroundings and robustness to different environmental conditions, as shown in Fig. 1. An array of cameras with various fields-of-view provides high resolution visual information, whereas lidar sensors provide direct measurements of depth at a sparser set of points around the vehicle. In order to provide a consistent description of the vehicle's surroundings, all sensors must be accurately registered to the same coordinate system, and intrinsic sensor properties must be adequately captured. A common approach to solving this problem involves a _static calibration_ process in which the vehicle is parked in a garage and views of a specially designed calibration target are captured [1, 2, 3, 4]. The target is designed with features that can be easily detected in each sensor, establishing correspondences used for calibration. Static calibration does not scale to a large fleet of vehicles due to the time and/or human effort involved. Periodic re-calibration may also be required if the sensors shift slightly between drives. To address these issues, several approaches exist for performing calibration automatically using data collected from the environment that do not require specific targets or driving behavior. These methods typically require extracting specific types of features that can be reliably detected in both camera images and lidar point clouds to establish correspondence. Alternatively, they may involve maximizing correlation between signals in very different domains. Outdoor environments may have a significant amount of variation in terms of the landmarks available, so cross-sensor correspondence requirements can degrade the robustness of these approaches. Furthermore, many existing techniques only calibrate a single pair of sensors or assume camera intrinsic parameters are already known. Our method, termed SceneCalib, does not require finding cross-modal correspondences and can jointly calibrate all extrinsic parameters and camera intrinsic parameters in a multi-camera/single-lidar system. This is achieved by: * relying only on image correspondences without assuming _a priori_ knowledge of which scene points they correspond to, * demonstrating a reliable method for finding cross-camera image correspondences, and * minimizing a purely geometric loss function between image feature pairs that constrains structure estimates to surfaces derived from lidar point clouds. ## II Related Work Many approaches for performing targetless camera-to-lidar calibration (e.g., without targets specifically placed in the environment) exist in the literature, and they can be broadly categorized into two groups. _Correspondence-based methods_ seek to find calibration parameters that maximize alignment between features that have a detectable signal in both the camera images and the lidar point clouds. One common approach is to extract straight line or edge features from images and assume they must correspond to sharp discontinuities in the lidar depth. 
Levinson _et al_[5] and Kang _et al_[6] construct differentiable loss functions penalizing edge misalignment, and Cui _et al_[7] specifically explores line-based alignment for panoramic cameras. Ma _et al_[8] searches for lane edges and poles as a source of line features. Munoz-Banon _et al_[9] adds some preprocessing logic to extract object edges and associated direction vectors to create a signature that can be aligned between domains. Yuan _et al_[10] introduces a method to mitigate the effects of occlusion and bloom when extracting edges in lidar point clouds. Figure 1: A typical autonomous vehicle sensor configuration (NVIDIA's Hyperon 8.1 platform). Camera FOVs are shown in gray, radar FOVs in green, and a front grille-mounted lidar in purple. Planar surfaces can also be used for establishing correspondence; Tamas _et al_[11] constructs an algebraic error that penalizes non-overlap in the camera image projection and is able to determine intrinsic parameters if two non-coplanar pairs are found (which are assumed to be known _a priori_). Another correspondence-based method is direct comparison of photometric information from both sensors. Some lidar sensors collect intensity information along with depth, and changes in these values should be correlated to changes in pixel intensity across camera images. Pandey _et al_[12] presents a mutual information maximization method based on exactly this principle, and Taylor _et al_[13] adds lidar plane normals as another source of information. Shi _et al_[14] introduces an occlusion filtering method on top of the mutual information maximization approach. Other methods combine multiple types of correspondences to form a multi-objective optimization problem [15, 16]. End-to-end learning-based methods for calibration exist as well. The networks in these approaches directly estimate the amount of miscalibration error given camera-to-lidar extrinsics. These can be considered correspondence-based methods since the network contains layers for explicitly extracting correspondences between the lidar data and camera data [17, 18]. Finally, some methods have been developed for extracting correspondences based on high-level object detection. Semantic segmentation of the scene in both the image domain and lidar domain is performed to find where specific object types appear, allowing correspondences to be established. Nagy _et al_[19, 20] and Yoon _et al_[21] extract objects from a structure-from-motion point cloud derived from only camera frames to align to the same objects extracted from the lidar point cloud. Liu _et al_[22] performs the alignment between the lidar object projections and the segmented camera image pixels. Because correspondence-based methods attempt to find features in two very different sensing modalities, it can be challenging to obtain accurate correspondences reliably, and it is a relatively strict requirement that a feature can be detected in both modalities. While straight line features may be prevalent in many environments, there is no guarantee that the same edges appear in both sensors. Objects in the lidar point cloud can look larger than they really are due to bloom, so the depth discontinuity will be offset from the edge in image space [10]; occlusion due to different sensor mounting positions also means the same features may not be visible in all sensors, and therefore occlusion handling is required.
Photometric matching across sensor domains requires the presence of lidar intensity information that is not always available or accurate. Object-based correspondence is the most restrictive approach, requiring the use of pre-trained detectors for known object classes in both images and point clouds, so objects that the detectors have not been trained upon cannot be used. To avoid correspondence detection altogether, some _correspondence-free methods_ have been developed. One method of registration between the camera and lidar coordinate systems is to use an odometry trajectory derived independently from both sensors and determine coarse alignment by solving the hand-eye calibration problem. Typical structure-from-motion algorithms are used to solve for camera odometry (up to scale), and ICP is used to solve for lidar odometry. Taylor _et al_[23] uses per-sensor odometry as the primary constraint across each pair of sensors (allowing for camera-to-lidar and camera-to-camera constraints), but they combine this approach with photometric correspondence for finer alignment. Park _et al_[24] solves a bundle adjustment problem where the camera-to-lidar pose offset is constrained by the odometry alignment. On the other hand, Hu _et al_[25] iteratively solves for camera-to-lidar alignment with a geometric error and camera-to-camera alignment with photometric error for a stereo pair. The geometric error comes from misalignment between the stereo camera point cloud and the lidar point cloud, and the two objectives are alternately and iteratively refined until convergence. While these methods avoid explicit cross-sensor correspondence, they do not allow all sensor measurements to be used simultaneously to constrain all calibration parameters. In addition, no existing methods solve for the extrinsic and intrinsic parameters for generic multi-camera configurations without requiring explicit camera-to-lidar feature correspondence. ## III Methods We propose a method, illustrated in Fig. 2, that relies only upon image feature correspondences between camera frames and the relatively weak assumption that image features are locally planar. We construct an optimization problem that minimizes a geometric loss function that encodes the notion that corresponding image features are views of the same point on a locally planar surface (surfel or mesh) reconstructed from lidar scans, essentially removing structure estimation from the optimization problem. Figure 2: Overall flow of SceneCalib algorithm; (a) per-camera processing, (b) joint-camera processing. ### _Preliminaries_ We assume that the system consists of \\(N\\) cameras with different intrinsic parameters and one lidar sensor. We require an invertible camera model for backprojection of image pixels as rays. _Monocular structure-from-motion_: A monocular SfM method, such as ORB-SLAM [26], is applied to images from each camera to extract feature tracks - lists of pixels across multiple frames that correspond to the same world point. We want to emphasize that monocular SfM is used for extracting high-quality 2D feature tracks only, and we do not use its 3D structure or motion results. _Lidar mapping_: Lidar scans provide a 3D point cloud, usually at a lower frame rate than cameras. At the lidar mapping step, we accumulate the lidar scans after ICP alignment (initialized by an IMU-based egomotion estimate). Moving objects are removed by checking for outliers when aligning scans.
From the accumulated lidar scans, we build a ground surface mesh using Poisson Surface Reconstruction [27] and surfels [28] for structures above the ground. The ground mesh and structure surfels together provide the locally planar surface representation of the scene. ### _Geometric Loss Function_ We define a geometric loss function based on two-view feature matching and known structures in the lidar map (see Figure 3). Let \\(u_{i,t_{1}}\\) be a pixel in the \\(i\\)th camera frame at timestamp \\(t_{1}\\), and let \\(u_{j,t_{2}}\\) be a pixel in the \\(j\\)th camera's frame at timestamp \\(t_{2}\\). If \\(u_{i,t_{1}}\\) and \\(u_{j,t_{2}}\\) are different views of the same point in the scene, their backprojected image rays should intersect at that point on a locally planar surface. In the case where there is misalignment, the spread of ray-to-plane intersection points quantifies the degree of misalignment. This metric can be expressed in image space to control for the effect of distance. Let \\(\\Pi_{i}\\) represent the function that projects a camera-centered 3D point to its image pixel for the \\(i\\)th camera; this is a function only of the camera's intrinsic parameters. Let \\(T_{i}^{L}\\in SE(3)\\) represent the pose that transforms the camera-centered coordinate system for the \\(i\\)th camera to the lidar-centered coordinate system \\(L\\); similarly, let \\(T_{L}^{W}(t)\\in SE(3)\\) represent the pose that transforms \\(L\\) at timestamp \\(t\\) into the map coordinate system \\(W\\). Finally, let the normal and position \\((n^{W},p^{W})\\) parameterize a specific plane in the map coordinate system. The operation \\(\\Gamma\\) transforms the image ray into map coordinates via \\(T_{L}^{W}(t_{1})T_{i}^{L}\\) and finds the point of intersection \\(x^{W}\\) with the plane \\((n^{W},p^{W})\\). Then: \\[x^{W}=\\Gamma\\left(\\Pi_{i}^{-1}(u_{i,t_{1}}),T_{L}^{W}(t_{1})T_{i}^{L},n^{W},p^{W}\\right) \\tag{1}\\] \\[e_{i,j,t_{1},t_{2}}=\\left\\|\\Pi_{j}\\left((T_{L}^{W}(t_{2})T_{j}^{L})^{-1}x^{W}\\right)-u_{j,t_{2}}\\right\\|^{2} \\tag{2}\\] \\[e=\\sum_{i,j,t_{1},t_{2}}e_{i,j,t_{1},t_{2}} \\tag{3}\\] The unknowns are the \\(N\\) camera-to-lidar poses and camera intrinsic parameters \\(\\{(T_{i}^{L},\\Pi_{i})\\}_{i=1}^{N}\\). Note that \\(T_{L}^{W}(t)\\) is known from the lidar trajectory obtained in the lidar mapping step. Since we assume that the association of the plane with the image ray is known _a priori_, the operation \\(\\Gamma\\) is differentiable. This formulation allows refinement of the unknowns while constraining the structure points to the mesh surface, but without prescribing their 3D positions. It also makes no distinction between whether the pixel correspondences come from the same camera (e.g., \\(i=j\\)) or not. ### _Track-to-Map Association_ Since the plane normals for the features must be known in the loss function, they are derived from the planar surface element that the backprojected image rays intersect with. This process is performed using initial estimates of the camera-to-lidar poses and camera intrinsics, so the accuracy of this association depends on how poor those initial estimates are. However, it also depends on how large the planar surface elements are. In a typical outdoor scene, there are features of many scales - the road surface, for example, may be a very coarse mesh, but foliage on trees can manifest as a collection of very small surfels with very different normal directions.
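To make the residual in equations (1) and (2) concrete, the following is a minimal sketch, not the authors' implementation, of the ray-plane intersection and reprojection for a single correspondence. For readability it substitutes a simple pinhole intrinsic matrix K for the general invertible projection used in the paper, and all function and variable names are illustrative.

```python
import numpy as np

def ray_plane_intersection(origin, direction, n_w, p_w):
    """Intersect the ray origin + s*direction with the plane (n_w, p_w)."""
    s = np.dot(n_w, p_w - origin) / np.dot(n_w, direction)
    return origin + s * direction

def pair_residual(u1, u2, K1, K2, T_c1_to_map, T_c2_to_map, n_w, p_w):
    """Squared image-space residual of eq. (2) for one feature pair.

    T_c*_to_map are 4x4 camera-to-map transforms, i.e. the compositions
    T_L^W(t) T_i^L appearing in the text; K1, K2 are pinhole stand-ins
    for the camera models Pi_i, Pi_j."""
    # backproject u1 to a ray in camera 1 and express it in map coordinates
    ray_c1 = np.linalg.inv(K1) @ np.array([u1[0], u1[1], 1.0])
    R1, t1 = T_c1_to_map[:3, :3], T_c1_to_map[:3, 3]
    x_w = ray_plane_intersection(t1, R1 @ ray_c1, n_w, p_w)          # eq. (1)
    # transform the intersection point into camera 2 and project it
    T_map_to_c2 = np.linalg.inv(T_c2_to_map)
    x_c2 = T_map_to_c2[:3, :3] @ x_w + T_map_to_c2[:3, 3]
    proj = K2 @ x_c2
    proj = proj[:2] / proj[2]
    return float(np.sum((proj - np.asarray(u2, dtype=float)) ** 2))  # eq. (2)
```

Summing such residuals over all associated feature pairs gives the total loss of equation (3); in the paper this sum is minimized with a nonlinear least-squares solver rather than evaluated point-wise as here.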
To utilize the information present at multiple scales, we progressively build a set of tracks associated with large-to-small feature scales while performing a coarse-to-fine refinement of the calibration parameters and intersection points. Specific details are given in Appendix A. ### _Per-Camera Optimization_ We first implement a single-camera-to-lidar extrinsics optimization loop to improve upon the initial estimates, reject outlier tracks, and estimate feature point positions for use in cross-camera correspondence detection. At each iteration, track-to-map association is performed, low-score tracks are removed, and RANSAC is used to find an inlier set of tracks (using the per-track RMSE of (2)) and corresponding extrinsic parameters. Track-to-map association thresholds are made tighter, and the process is repeated, adding the new tracks to the inlier set until the extrinsic parameters have converged. Fig. 3: Geometric loss function minimized during automatic calibration. The 3D point x* is obtained by intersecting the backprojected ray with locally planar surfaces in the lidar map. The loss characterizes the difference between the feature transferred from another frame and the independent observation of the feature in the current frame. ### _Cross-Camera Correspondences_ Cross-camera correspondences are obtained in two steps: cross-camera track association, and image space match refinement. The processing flow is illustrated in Fig. 4. The result is a list of pixel correspondences between pairs of camera images from different cameras that were derived independently from the initially estimated camera-to-lidar poses (due to the image-space alignment). Note that this cross-camera feature matching is done without requiring simultaneously overlapping views. _Cross-camera track association_: Via backprojection of feature tracks, the previous per-camera optimization step yields a set of \\(M_{i}\\) feature points \\(\\{x_{k}^{W}\\}_{k=1}^{M_{i}}\\) in the map coordinate system for each of the \\(i=1, ,N\\) cameras. A nearest-neighbor search (limited to a small radius) between the point clouds for camera pair \\((i,j)\\) allows us to find which pairs of feature tracks (one from the \\(i\\)th camera, and one from the \\(j\\)th camera) are likely to be views of the same feature point. _Image space match refinement_: Consider a single map point \\(x^{W}\\) derived from the nearest-neighbor search. Let the pixel \\(u_{i,\\ell_{1}}\\) be the projection of \\(x^{W}\\) into the \\(i\\)th camera's image at timestamp \\(t_{1}\\) (taken from the feature track for the \\(i\\)th camera). Following the assumption that map features are locally planar, we assume that a small neighborhood of pixels \\(S_{\\ell,t_{1}}\\) centered upon \\(u_{\\ell,t_{1}}\\)(e.g., a 65x65 pixel patch) is an image of a plane. Then \\[H_{i,j,t_{1},t_{2}}=\\Pi_{j}(T_{L}^{W}(t_{2})T_{j}^{L})^{-1}T_{L}^{W}(t_{1})T_{ i}^{L}\\Pi_{i}^{-1} \\tag{4}\\] defines a planar homography. The values for \\(T_{L}^{L},T_{j}^{L},\\Pi_{i}\\), and \\(\\Pi_{j}\\) come from the individual single-camera-to-lidar optimization problems solved earlier, so this homography is not exact but can be used to correct most of the planar distortion. The last step is using simple cross-correlation between the image patches \\(H_{i,j,t_{1},t_{2}}\\)(\\(S_{\\ell,t_{1}}\\)) and \\(S_{j,t_{2}}\\)to find a more precise pixel correspondence. 
In practice, this involves converting the patches to grayscale, equalizing their intensity histograms, and computing a subpixel-accurate alignment by finding the center of mass of the cross-correlation values. Figure 4: Cross-camera correspondence detection. Feature tracks from two cameras are associated via the lidar map. One viewpoint is warped to the other using an approximate planar homography and a more precise subpixel match is found using cross-correlation. ### _Joint Optimization_ Using the single- and cross-camera correspondences, we can now minimize the geometric loss function given in equation (3) with constraints between all sensor pairs. Cross-camera matches with wider cross-correlation peaks are removed, and RANSAC is used to determine which correspondences are inliers. Some cameras will naturally produce many more feature matches than others due to their placement and FOV, so the total sum of all residuals involving projection into a given camera is normalized to 1. Both camera-to-lidar extrinsic parameters and camera intrinsic parameters are optimized. Ceres Solver is used to implement the loss function and perform nonlinear least-squares optimization [29]. ## IV Results We evaluate the performance of the algorithm using datasets recorded from vehicles with 12 cameras of varying fields of view (as shown in Fig. 1) and one 360-degree lidar placed on the roof of the vehicle. Camera intrinsic parameters are modeled using the f-theta model (see Appendix B). The eight non-fisheye cameras (30\\({}^{\\circ}\\), 70\\({}^{\\circ}\\), and 120\\({}^{\\circ}\\) FOVs) record 3848x2168 resolution images at 30 fps, and the four fisheye cameras (200\\({}^{\\circ}\\) FOV) record 1280x720 resolution images at 30 fps. Lidar spins are taken at 10 fps. Recordings are generally taken from daytime driving scenarios, and camera recordings are not synchronized with each other or with the lidar recordings. Note that we only use a fraction of each recording, employing an automatic selection algorithm that scans the egomotion trajectory, extracting around five 1-km segments with significant motion. This still results in anywhere from 200 thousand to 2 million SfM features being used in the final joint optimization problem. Several examples qualitatively illustrating the lidar-to-camera calibration accuracy achieved by SceneCalib are shown in Fig. 5. ### _Miscalibration Analysis_ To quantify calibration quality, we develop a metric that is based on Levinson _et al_'s [4] miscalibration detection algorithm. For a pair of frames from the \\(i\\)th and \\(j\\)th cameras taken at similar timestamps, we project the lidar map onto both camera frames and compute the photometric error between projections of the same lidar point. Performing this process for many noisy values of \\((T_{j}^{L})^{-1}\\), we count the number of samples with lower photometric error than the calibrated pose. We define the ratio of samples with lower error, averaged across many camera frame pairs, as the _miscalibration rate_. A perfectly calibrated, noise-free system should always have minimum photometric error when all deviations are zero, so this scenario is represented by a miscalibration rate of zero. Pose perturbations are performed separately for each axis; position component (\\(x\\), \\(y\\), \\(z\\)) perturbations are uniformly randomly sampled from \\([-2,2]\\) cm, and rotation component (roll, pitch, yaw) perturbations are uniformly randomly sampled from \\([-0.2,0.2]^{\\circ}\\). A compact sketch of this sampling procedure is given below.
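The following snippet is illustrative only: `photometric_error` stands in for a user-supplied function that projects the lidar map into a camera-frame pair and compares intensities (it is not part of any released code), and the pose is parameterized as a 6-vector [x, y, z, roll, pitch, yaw] purely for compactness.

```python
import numpy as np

def miscalibration_rates(photometric_error, calibrated_pose, n_samples=500, rng=None):
    """Fraction of single-axis random perturbations whose photometric error is
    lower than that of the calibrated pose (0 means the calibrated pose is a
    local minimum of the photometric consistency metric)."""
    rng = np.random.default_rng() if rng is None else rng
    base_err = photometric_error(calibrated_pose)
    # per-axis perturbation limits: +/-2 cm for x, y, z and +/-0.2 deg for roll, pitch, yaw
    limits = [0.02, 0.02, 0.02] + [np.radians(0.2)] * 3
    rates = {}
    for axis, lim in enumerate(limits):
        lower = 0
        for _ in range(n_samples):
            perturbed = np.array(calibrated_pose, dtype=float)
            perturbed[axis] += rng.uniform(-lim, lim)
            if photometric_error(perturbed) < base_err:
                lower += 1
        rates[axis] = lower / n_samples
    return rates
```

Averaging these per-axis rates over many camera frame pairs yields the per-axis miscalibration rates reported in the following subsections.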
Because the deviations are applied in the camera-centered coordinate system, they are reported in right-down-forward (RDF) coordinates. ### _Comparison to Static Calibration_ We can first compare the performance of SceneCalib to a static garage calibration process performed before a drive. The garage calibration consists of collecting views of a checkerboard target to solve for camera intrinsic parameters and camera-to-lidar extrinsic parameters [30]. The dataset used for comparison consists of 25 recordings from the same vehicle, ranging from 25 to 65 minutes in length and spanning a two-month period. For a fair comparison, SceneCalib is initialized using a result that is significantly offset from both the final result and the static calibration (e.g., by up to 0.5\\({}^{\\circ}\\) in orientation and 5 cm in position). As shown in Fig. 6, our automatic calibration algorithm has a similar or better miscalibration rate for all degrees of freedom when compared to static calibration. Since certain camera subsystems are used for different autonomous driving tasks, we divide the error based on the FOVs of the camera pairs. Note that wider FOV and lower resolution cameras have lower overall miscalibration rates since the metric is a measure of sensitivity and depends on the size of the interval the perturbations are sampled from. ### _Robustness Across Multiple Driving Scenarios_ Since static calibration is a time-consuming and manual process, it is hard to obtain a large dataset with a high level of scene diversity. Because of this, all 25 recordings in the previous section were taken from the same city. To determine whether our automatic calibration algorithm is robust across more diverse scenes, we compute miscalibration rates for only the automatic calibration results using a dataset of 55 recordings from several vehicles that are 30 to 70 minutes in length and span a 7-month period. Figure 5: Lidar point cloud projections (in green) using SceneCalib results exhibit precise alignment with camera image features. Examples are shown for 70\\({}^{\\circ}\\) (top) and 120\\({}^{\\circ}\\) (middle) FOV cameras. After calibration, backprojected image rays converge to a point on a planar surface in the lidar map (bottom). Figure 6: Comparison of miscalibration rates for static calibration and automatic calibration (SceneCalib) per axis, grouped by camera FOV. These datasets were recorded in multiple cities from multiple countries during daytime hours. Miscalibration rates for the more diverse dataset are comparable or only slightly worse in all camera categories and axes when compared to the results from the less diverse dataset (see Table 1). ### _Weather and Lighting Conditions_ While SceneCalib is robust to different scenes, some weather and lighting conditions are difficult to handle. In general, nighttime driving scenarios are unlikely to produce a good automatic calibration result due to the number of camera features produced by the SfM algorithm. While a daytime drive may produce hundreds of thousands of features, a nighttime drive may only produce hundreds or thousands. Other low-visibility conditions (such as heavy rain) are similarly difficult to deal with. Furthermore, recordings containing snow can be unsuitable for the lidar mapping step due to noisy lidar scans. We typically avoid directly calibrating such recordings. 
Instead, we use a result from a different recording from the same vehicle taken under more favorable conditions, using the assumption that the extrinsic and intrinsic parameters will not change significantly over short periods of time. ## V Conclusion We propose a fully automatic, targetless calibration algorithm for autonomous vehicles with multiple cameras of different characteristics and a lidar sensor. The algorithm calibrates camera-to-lidar extrinsic parameters and camera intrinsic parameters, and explicitly constrains camera-to-camera pose transformations while also constraining structure estimation with the high-quality 3D information provided by lidar. We have demonstrated that the algorithm is completely free of human intervention (and thus highly scalable), achieves calibration quality comparable to manual calibration, and is robust to various scenes. While low light and poor weather conditions are difficult to calibrate directly, a major area of future work will be to explore the feature detection module. Significant improvements to the robustness of the algorithm could potentially be made by having the ability to extract more image features under such conditions. ### _Track-to-Map Association Thresholds_ We construct two scores for the quality of the ray bundle association with a planar surface. Let \\(i,j\\) index features in the same track (which has \\(N\\) observations) and let \\(x_{i}\\) and \\(n_{i}\\) represent the mesh intersection point and mesh intersection normal, respectively. \\[s_{\\theta,i}=\\sum_{j\\neq i}\\exp\\left(-\\frac{1}{2}\\left(\\frac{\\arccos(n_{i}\\cdot n_{j})}{\\sigma_{\\theta}}\\right)^{2}\\right) \\tag{5}\\] \\[s_{\\theta}=\\frac{1}{N-1}\\max_{i}s_{\\theta,i} \\tag{6}\\] The first score, \\(s_{\\theta}\\), is larger if the angle between the normal vectors of the intersected planes is small. \\[i^{*}=\\operatorname*{argmax}_{i}s_{\\theta,i} \\tag{7}\\] \\[s_{d}=\\frac{1}{N-1}\\sum_{i\\neq i^{*}}\\exp\\left(-\\frac{1}{2}\\left(\\frac{n_{i}\\cdot\\left(x_{i}-x_{i^{*}}\\right)}{\\sigma_{d}}\\right)^{2}\\right) \\tag{8}\\] The second, \\(s_{d}\\), is larger if the distance between the intersection points on the plane is small. The parameters \\(\\sigma_{\\theta}\\) and \\(\\sigma_{d}\\) can be adjusted to allow for smaller or larger spreads in these statistics. ### _F-theta Model_ We use a _captured rays-based model_[31] for the camera projection with a high-order polynomial function of the ray angle to capture distortion [32]. Given a point \\(p=[p_{x},p_{y},p_{z}]\\) in the camera-centered world coordinate system, we compute the projected pixel \\(u\\) via: \\[\\hat{p}=[p_{x}/p_{z},p_{y}/p_{z}],\\ r=\\|\\hat{p}\\|,\\ \\theta=\\tan^{-1}r \\tag{9}\\] \\[f(\\theta)=\\sum_{i=1}^{5}k_{i}\\theta^{i} \\tag{10}\\] \\[u=\\Pi(p)=(f(\\theta)/r)\\hat{p}+u_{0} \\tag{11}\\] The parameters are the polynomial coefficients \\(k_{i}\\) and the optical center \\(u_{0}\\), and \\(\\theta\\) is the angle of the ray with the optical axis. Note that this model is not analytically invertible due to (10). Since we require an initial guess for the intrinsic parameters in our optimization scheme, we solve for an approximate inverse \\(f^{-1}(x)\\approx\\sum_{i=1}^{5}m_{i}x^{i}\\). Rather than optimizing \\(\\{k_{i}\\}\\) and \\(\\{m_{i}\\}\\), we introduce a scale factor \\(s_{f}\\): \\[f\\left(s_{f}\\theta\\right)=\\sum_{i=1}^{5}(s_{f}^{i}k_{i})\\theta^{i} \\tag{12}\\] and leave the polynomial coefficients fixed from the initial guess.
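As a small worked illustration of the projection in (9)-(11) and the scale factor in (12), the sketch below evaluates the f-theta model for a single point. It is a minimal reimplementation for exposition only, with made-up parameter values; the discussion of the scale factor continues after it.

```python
import numpy as np

def ftheta_project(p, k, u0, s_f=1.0):
    """Project a camera-centered 3D point with the f-theta model.

    p   : (3,) point [px, py, pz] with pz > 0
    k   : (5,) polynomial coefficients k_1..k_5 of eq. (10)
    u0  : (2,) optical center in pixels
    s_f : scale factor of eq. (12) applied to the ray angle
    """
    p = np.asarray(p, dtype=float)
    p_hat = p[:2] / p[2]                                        # eq. (9)
    r = np.linalg.norm(p_hat)
    theta = s_f * np.arctan(r)                                  # eq. (12): scaled ray angle
    f_theta = sum(k[i] * theta ** (i + 1) for i in range(5))    # eq. (10)
    if r < 1e-12:                                               # point on the optical axis
        return np.asarray(u0, dtype=float)
    return (f_theta / r) * p_hat + np.asarray(u0, dtype=float)  # eq. (11)

# example with illustrative coefficients (nearly linear, pinhole-like polynomial)
u = ftheta_project([1.0, 0.5, 10.0], k=[1000.0, 0.0, -30.0, 0.0, 0.0], u0=[1924.0, 1084.0])
```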
This factor can be applied analytically to the approximate inverse without introducing much additional error. In practice, the higher order coefficients tend to be several orders of magnitude smaller than the linear coefficient and contribute very little to the overall variation in the polynomial. Thus, in our formulation, intrinsic parameters are tuned by solving for \\(s_{f}\\) and \\(u_{0}\\). ## Acknowledgment The authors would like to thank Cheng-Chieh Yang for assistance with monocular SfM processing, Chen Chen for providing the static calibration dataset, and Yuchen Deng for providing lidar trajectory benchmarks. ## References * [1] L. Zhou, Z. Li, and M. Kaess, \"Automatic Extrinsic Calibration of a Camera and a 3D LiDAR Using Line and Plane Correspondences,\" in _2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_, Madrid, Oct. 2018, pp. 5562-5569. doi: 10.1109/IROS.2018.8593660. * [2] Z. Chai, Y. Sun, and Z. Xiong, \"A Novel Method for LiDAR Camera Calibration by Plane Fitting,\" in _2018 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM)_, Auckland, New Zealand, Jul. 2018, pp. 286-291. doi: 10.1109/AIM.2018.8452339. * [3] E. Kim and S.-Y. Park, \"Extrinsic Calibration between Camera and LiDAR Sensors by Matching Multiple 3D Planes,\" _Sensors_, vol. 20, no. 1, p. 52, Dec. 2019, doi: 10.3390/s20010052. * [4] P. Baker and Y. Alimonos, \"Complete calibration of a multi-camera network,\" in _Proceedings IEEE Workshop on Omnidirectional Vision (Cat. No.PR00704)_, Hilton Head Island, SC, USA, 2000, pp. 134-141. doi: 10.1109/OMVIS.2000.853820. * [5] J. Levinson and S. Thrun, \"Automatic Online Calibration of Cameras and Lasers,\" in _Robotics: Science and Systems IX_, Jun. 2013. doi: 10.15607/RSS.2013.IX.029. * [6] J. Kang and N. L. Doh, \"Automatic targetless camera-LIDAR calibration by aligning edge with Gaussian mixture model,\" _J. Field Robot._, vol. 37, no. 1, pp. 158-179, Jan. 2020, doi: 10.1002/rob.21893. * [7] T. Cui, S. Ji, J. Shan, J. Gong, and K. Liu, \"Line-Based Registration of Panoramic Images and LiDAR Point Clouds for Mobile Mapping,\" _Sensors_, vol. 17, no. 12, p. 70, Dec. 2016, doi: 10.3390/s17010070. * [8] T. Ma, Z. Liu, G. Yan, and Y. Li, \"CRLF: Automatic Calibration and Refinement based on Line Feature for LiDAR and Camera in Road Scenes.\" arXiv, Mar. 08, 2021. Accessed: Sep. 13, 2022. [Online]. Available: [http://arxiv.org/abs/2103.04558](http://arxiv.org/abs/2103.04558) * [9] M. A. Munoz-Banon, F. A. Candelas, and F. Torres, \"Targetless Camera-LiDAR Calibration in Unstructured Environments,\" _IEEE Access_, vol. 8, pp. 143692-143705, 2020, doi: 10.1109/ACCESS.2020.3014121. * [10] C. Yuan, X. Liu, X. Hong, and F. Zhang, \"Pixel-Level Extrinsic Self Calibration of High Resolution LiDAR and Camera in Targetless Environments,\" _IEEE Robot. Autom. Lett._, vol. 6, no. 4, pp. 7517-7524, Oct. 2021, doi: 10.1109/LRA.2021.3098923. * Perspective Camera Pair,\" in _2013 IEEE International Conference on Computer Vision Workshops_, Sydney, Australia, Dec. 2013, pp. 668-675. doi: 10.1109/ICCVW.2013.92. * [12] G. Pandey, J. McBride, S. Savarese, and R. Eustice, \"Automatic Targetless Extrinsic Calibration of a 3D Lidar and Camera by Maximizing Mutual Information,\" _Proc. AAAI Conf. Artif. Intell._, vol. 26, no. 1, pp. 2053-2059, Sep. 2021, doi: 10.1609/aaai.v26i1.8379. * [13] Z. Taylor and J. Nieto, \"Automatic Calibration of Lidar and Camera Images using Normalized Mutual Information,\" p. 8. * [14] C. Shi, K. 
Huang, Q. Yu, J. Xiao, H. Lu, and C. Xie, \"Extrinsic Calibration and Odometry for Camera-LiDAR Systems,\" _IEEE Access_, vol. 7, pp. 120106-120116, 2019, doi: 10.1109/ACCESS.2019.2937990. * [15] J. Jeong, Y. Cho, and A. Kim, \"The Road is Enough! Extrinsic Calibration of Non-overlapping Stereo Camera and LiDAR using Road Information,\" _IEEE Robot. Autom. Lett._, vol. 4, no. 3, pp. 2831-2838, Jul. 2019, doi: 10.1109/IRA.2019.2921648. * [16] K. Irie, M. Sugiyama, and M. Tomono, \"Target-less camera-LiDAR extrinsic calibration using a bagged dependence estimator,\" in _2016 IEEE International Conference on Automation Science and Engineering (CASE)_, Fort Worth, TX, USA, Aug. 2016, pp. 1340-1347. doi: 10.1109/COSE.2016.7743564. * [17] X. Lv, B. Wang, Z. Dou, D. Ye, and S. Wang, \"LCCNet: LiDAR and Camera Self-Calibration using Cost Volume Network,\" in _2021 IEEE/CIF Conference on Computer Vision and Pattern Recognition Workshops (CPFRW)_, Nashville, TN, USA, Jun. 2021, pp. 2888-2895. doi: 10.1109/CVPRW35098.2021.00324. * [18] S. Wu, A. Hadachi, D. Vivet, and Y. Prabhakar, \"This Is the Way: Sensors Auto-Calibration Approach Based on Deep Learning for Self-Driving Cars,\" _IEEE Sens. J._, vol. 21, no. 24, pp. 27779-27788, Dec. 2021, doi: 10.1109/JSEN.2021.3124788. * [19] B. Nagy, L. Kovacs, and C. Benedek, \"Online Targetless End-to-End Camera-LiDAR Self-Calibration,\" in _2019 16th International Conference on Machine Vision Applications (MVA)_, Tokyo, Japan, May 2019, pp. 1-6. doi: 10.23919/MVA.2019.8757887. * [20] B. Nagy, L. Kovacs, and C. Benedek, \"SFM and Semantic Information Based Online Targetless Camera-LiDAR Self-Calibration,\" in _2019 IEEE International Conference on Image Processing (ICIP)_, Taipei, Taiwan, Sep. 2019, pp. 1317-1321. doi: 10.1109/ICIP.2019.8804299. * [21] B.-H. Yoon, H.-W. Jeong, and K.-S. Choi, \"Targetless Multiple Camera-LiDAR Extrinsic Calibration using Object Pose Estimation,\" in _2021 IEEE International Conference on Robotics and Automation (ICRA)_, Xi'in, China, May 2021, pp. 13377-13383. doi: 10.1109/ICRA48506.2021.9560936. * [22] Z. Liu, H. Tang, S. Zhu, and S. Han, \"Seman: Annotation-Free Camera-LiDAR Calibration with Semantic Alignment Loss,\" in _2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_, Prague, Czech Republic, Sep. 2021, pp. 8845-8851. doi: 10.1109/IROS51168.2021.9635964. * [23] Z. Taylor and J. Nieto, \"Motion-Based Calibration of Multimodal Sensor Extrinsics and Timing Offset Estimation,\" _IEEE Trans. Robot._, vol. 32, no. 5, pp. 1215-1229, Oct. 2016, doi: 10.1109/TRO.2016.2596771. * [24] C. Park, P. Moghadam, S. Kim, S. Sridharan, and C. Fookes, \"Spatiotemporal Camera-LiDAR Calibration: A Targetless and Structuresless Approach,\" _IEEE Robot. Autom. Lett._, vol. 5, no. 2, pp. 1556-1563, Apr. 2020, doi: 10.1109/LRA.2020.2969164. * [25] H. Hu, F. Han, F. Bieder, J.-H. Pauls, and C. Stiller, \"TEScalib: Targetless Extrinsic Self-Calibration of LiDAR and Stereo Camera for Automated Driving Vehicles with Uncertainty Analysis.\" arXiv, Feb. 28, 2022. Accessed: Sep. 13, 2022. [Online]. Available: [http://arxiv.org/abs/2202.13847](http://arxiv.org/abs/2202.13847) * [26] R. Mur-Artal, J. M. M. Montiel, and J. D. Tardos, \"ORB-SLAM: A Versatile and Accurate Monocular SLAM System,\" _IEEE Trans. Robot._, vol. 31, no. 5, pp. 1147-1163, Oct. 2015, doi: 10.1109/TRO.2015.2463671. * [27] M. Kazhdan, M. Bolitho, and H. Hoppe, \"Poisson surface reconstruction,\" p. 10. * [28] J. Belley and C. 
Stachniss, \"Efficient Surfel-Based SLAM using 3D Laser Range Data in Urban Environments,\" in _Robotics: Science and Systems XII_, Jun. 2018. doi: 10.15607/RSS.2018.XIV.016. * [29] S. Agarwal, K. Mierle, and The Ceres Solver Team, \"Ceres Solver.\" Mar. 2022. [Online]. Available: [https://github.com/ceres-solver/ceres-solver](https://github.com/ceres-solver/ceres-solver) * [30] M. D. Wheeler and L. Yang, \"Lidar to camera calibration for generating high definition maps,\" US10531004B2, Jan. 07, 2020 Accessed: Sep. 13, 2022. [Online]. Available: [https://patents.google.com/content/US10531004B2/en](https://patents.google.com/content/US10531004B2/en) * [31] J. Courbon, Y. Mezour, L. Eckt, and P. Martinet, \"A generic fisheye camera model for robotic applications,\" in _2007 IEEE/RSJ International Conference on Intelligent Robots and Systems_, San Diego, CA, Oct. 2007, pp. 1683-1688. doi: 10.1109/IROS.2007.4399233. * [32] D. Scaramuzza, A. Martinelli, and R. Siegwart, \"A Flexible Technique for Accurate Omnidirectional Camera Calibration and Structure from Motion,\" in _Fourth IEEE International Conference on Computer Vision Systems (ICVS'06)_, New York, NY, USA, 2006, pp. 45-45. doi: 10.1109/ICVS.2006.3.
Accurate camera-to-lidar calibration is a requirement for sensor data fusion in many 3D perception tasks. In this paper, we present SceneCalib, a novel method for simultaneous self-calibration of extrinsic and intrinsic parameters in a system containing multiple cameras and a lidar sensor. Existing methods typically require specially designed calibration targets and human operators, or they only attempt to solve for a subset of calibration parameters. We resolve these issues with a fully automatic method that requires no explicit correspondences between camera images and lidar point clouds, allowing for robustness to many outdoor environments. Furthermore, the full system is jointly calibrated with explicit cross-camera constraints to ensure that camera-to-camera and camera-to-lidar extrinsic parameters are consistent.
Provide a brief summary of the text.
147
arxiv-format/2211_04163v1.md
# Designing robots with the context in mind - One design does not fit all Ela Liberman-Pincu (ORCID 0000-0002-9753-7714), Ben-Gurion University of the Negev, Beer-Sheva, Israel; Elmer D. van Grondelle, Delft University of Technology, Delft, The Netherlands; Tal Oron-Gilad (ORCID 0000-0002-9523-0161), Ben-Gurion University of the Negev, Beer-Sheva, Israel ## 1 Introduction ### Socially Assistive Robots Recent years have seen a growing need for Socially Assistive Robots (SARs) [1-3]. Examples of SARs exist in the para-medical field [4,5] and for domestic use, taking care of the elderly or people with disabilities [6,7], or helping with children [8,9]. Since SAR types have functional differences, we expect the user-robot interaction to vary by use context, functionality, user characteristics, and environmental conditions [10,11]. Yet, our market research revealed that SAR manufacturers often design and deploy the same robotic embodiment for diverse contexts (see section 1.3, Table 2). Most studies in the field of SARs' appearance evaluate users' perceptions of existing off-the-shelf SARs [12-15]. Few studies looked at isolated visual qualities using designated SARs [16]. The lack of design research, standards, or a consistent body of knowledge in this field forces designers to start from scratch when designing new robots [17,18]. Thus, the design of SARs requires a more scientific approach considering their evolving roles in future society. Technological products, even innovative and cutting-edge ones, often fail in the market when the design does not evoke the desired human cognitive response and action, does not suit the environmental conditions, or leads to unrealistic expectations [19-21]. When developing new SARs, the focus is mainly on guaranteeing functionality and safety. Aesthetics and robot look are part of the design process but not necessarily context-specific, i.e., one design fits all. Hekkert and van Dijk (2011) [22] suggest the designer should begin by defining a vision for the context and desired interaction of a new product to set the most appropriate visual qualities (VQs) and come up with a suitable solution for particular design problems. In this paper, we first classify SARs by their use contexts by outlining four contextual layers: the domain in which the SAR exists, its physical environment, intended users, and role. For example, a robot for the para-medical field, supporting non-professional older adults in their private homes with physical exercises. Then, we examine how potential users perceive SARs in context by delving into the robots' essential characteristics and desired VQs. We used an online questionnaire to collect participants' expectations of robot characteristics by contexts of use and their related VQs. We then analyze this data to evaluate users' perceptions of each robot's desired character and the factors affecting the participants' selection of VQs. Finally, we compared the findings with previous work to form design tools to support user- and interaction-centered designs for diverse tasks and use cases of SARs. ### Initial mapping of Visual Qualities Perceptions Previously, we evaluated the effect of three VQs for SARs (body structure, outline, and color) on users' perception of the SAR's characteristics [23]. We have empirical findings on how isolated VQs impact people's perception of a SAR's characteristics: friendly, childish, innovative, threatening, old-fashioned, massive, elegant, medical, and the robot's gender, as presented in Table 1.
For example, to achieve the perception of a friendly SAR, a designer should consider using A-shape or hourglass structure and avoid V-shape, choose light colors (e.g., a combination of white and blue), and avoid dark colors. \\begin{table} \\begin{tabular}{l|c|c c c c} \\hline \\hline \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{Friendly} & \\multicolumn{1}{c}{Childish} & \\multicolumn{1}{c}{Innovative} & \\multicolumn{1}{c}{Threatening Old-} & \\multicolumn{1}{c}{Massive} & \\multicolumn{1}{c}{Elegant} & \\multicolumn{1}{c}{Medical} & \\multicolumn{1}{c}{Robot} \\\\ & & & & fashioned & & sex \\\\ \\hline Structure & & & & & & \\\\ Color & & & & & & \\\\ Outline & & & & & & \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 1: VQs’ effect on self-designed SAR characteristics. Dark boxes represent significant effects (adapted from Liberman Pincu et al. [23]). These links between SAR characteristics and VQs provide designers with an initial mapping for selecting the VQs most suitable to the robot's role and its desired characteristics, increasing the possibility of aligning with user expectations, at least for the initial encounters with the SAR. ## 2 Deconstruction of Contexts Layers- Domain, Environment, Users, and Role. To further enhance the design guidelines to assist designers in the design process of a new SAR and to align these characteristics with the relationship models [24], we now evaluate user expectations in different contexts of use. We map the relevant and desired characteristics for different SARs by a deconstruction process parsing into four contextual layers: Domain, Environment, Users, and Role. The following sections details each layer. ### Domains Our literature survey and market research lead to seven popular domains for SARs: Healthcare (including Eldercare and Therapy), Educational, Authority (including Security), Companion, Home assistance, Business, and Entertainment. Table 2 provides examples for each domain. ### Physical Environments SARs are intended for varied environments. The basic level refers to the robot's intended physical location: indoor or outdoor. This classification affects many engineering decisions considering environmental conditions such as light, noise, humidity, dust, and surface conditions (floor, carpet, grass, etc.). 
The second level refers to privacy: Personal (i.e., home or private office); Semi-public, meaning there are different users who are all familiar with the robot (e.g., workplace, assisted living residence, etc.); or Public, meaning there are multiple users, some of whom are passersby interacting with the robot for the first time. Figure 1 illustrates the levels of physical environments. Figure 1: The two levels of physical environments (physical location and privacy). \\begin{table} \\begin{tabular}{l l l} \\hline \\hline Domain & Commercial examples & References \\\\ \\hline Healthcare & Temi (Robotemi), Pepper (Softbank), NAO (Softbank), & 2,4,5 \\\\ & Misty (Misty Robotics), QTrobot (LuxAI) & \\\\ Educational & Pepper (Softbank), NAO (Softbank), QTrobot (LuxAI), & 25-27 \\\\ & Buddy (Blue Frog Robotics) & \\\\ Authority & Knightscope (Knightscope), Cobalt (cobalt robotics) & 28-31 \\\\ Companion & Temi (Robotemi), Misty (Misty Robotics), Buddy (Blue Frog Robotics), Aido (Aido) & 32-34 \\\\ Home assistance & Misty (Misty Robotics), Aido (Aido) & 6,35,36 \\\\ Business & Temi (Robotemi), Pepper (Softbank), NAO (Softbank), & 37,38 \\\\ & Cobalt (cobalt robotics), Buddy (Blue Frog Robotics) & \\\\ Entertainment & Pepper (Softbank), NAO (Softbank), Buddy (Blue Frog Robotics), Aido (Aido) & 39,40 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 2: Common domains in the market and the literature ### Users Users can be classified by demographic information, like gender, age, or culture, or by their needs, abilities, and disabilities (cognitive and physical) [41, 42]. In addition, users can be professional (trained to work with the robot, e.g., a trained nurse working with a medical robot) or non-professional [43] (e.g., a hotel guest interacting with a receptionist robot), as well as random (occasional passersby) or familiar with the robot (regularly interacting in the workplace or elsewhere but not as part of their professional work, e.g., a security robot placed at a building entrance). Figure 2 illustrates a classification of users. Figure 2: Classification of users by their demographics, characteristics, and familiarity with the robot. ### Roles and tasks The human-robot relationship links to the robot's role and tasks, answering questions such as: Is this robot here to help me? In what way? Different human-robot relationship theories suggest classifying relationships by hierarchy [44]: Should I obey the robot? Who supervises whom? Who leads the interaction? Different role categorizations are found in the literature; Onnasch and Roesler (2021) [45], for example, classified eight abstract roles for various application domains: Information exchange, Precision (e.g., robots for micro-invasive surgery), Physical load reduction, Transport (transporting objects from one place to another), Manipulation (the robot physically modifies its environment), Cognitive stimulation, Emotional stimulation, and Physical stimulation. Abdi et al. [46] identified five roles of SARs in elder care: affective therapy, cognitive training, social facilitator, companionship, and physiological therapy. For our model, we used the eight roles based on Onnasch and Roesler (2021) [45], excluding _precision_ (which is less related to social relationships), which was replaced with _regulation_: Information exchange, Physical load reduction, Transport, Manipulation, Cognitive stimulation, Emotional stimulation, Physical stimulation, and Regulation.
Each role is classified into a three-level hierarchy of human-robot relationships (robot-led interaction, equal or human-led interaction). For example, in the _physical stimulation_ section, we can have a training robot at the _robot-led interaction_ level, a teammate at the _equal_ level, and a physical therapy robot at the _human-led interaction_ level. Fig 3 illustrates the hierarchy of relationships. ## 3 Evaluating Users' Expectations and design perceptions ### Aim and Scope Based on potential users' evaluations, this study aims to define the appropriate characteristics for robots in different contexts of use. The outcomes will be used together with previous findings [23, 47] to form tools and guidelines for designers and manufacturers. Figure 3: Eight SAR roles (adapted from Onnasch and Roesler (2021)) by three hierarchy levels of leadership (robot-led interaction, equal or human-led interaction). ### Four use cases To apply the deconstruction layers of design, we defined four SAR use cases that differ by their contextual layers: a service robot for an Assisted Living/retirement residence facility (ALR), a Medical Assistant Robot (MAR) for a hospital environment, a Covid-19 Officer Robot (COR), and a Personal Assistant Robot (PAR) for home/domestic use. The following paragraph details each case. Table 3 summarizes them. **A service robot for an Assisted Living/retirement residence facility (ALR)** aims to roam the lobby and be used by the facility residents to register for various classes and activities. In addition, it provides information and helps communicate (via video calls and chats) with staff members. **A Medical Assistant Robot (MAR) for a hospital environment** aims to assist the medical team, especially when social distancing is required. Through it, the medical team can communicate in video calls with isolated patients and bring equipment, food, and medicine into patients' rooms. **A COVID-19 Officer Robot (COR)** aims to ensure passersby comply with Covid-19 restrictions like social distancing or wearing a face mask. **A Personal Assistant Robot (PAR) for home/domestic use** seeks to assist users with daily tasks, recommend activities at home and outside, and remind them of their duties and appointments. The robot allows users to watch videos, listen to music, play, and have video chats with family and friends. \\begin{table} \\begin{tabular}{l l l l l l l} \\hline \\hline & Domain & Environment & Users & & Role \\\\ \\hline ALR & Business & An & Semi- & Older adults & Non- & Information \\\\ & & assisted & public & & professional & exchange \\\\ & & living & Indoor & & & Human-led \\\\ & & residence & & & & & interaction \\\\ & & facility & & & & & \\\\ MAR & Healthcare & Hospital & Public & Medical & Professional & Information \\\\ & & & & & & & exchange/ \\\\ & & & Indoor & Hospitalized & Non- & Personal & Transport \\\\ & & & & & & & \\\\ & & & & & & & \\\\ COR & Authority & Public & Public & Passersby & Non- & professional & \\\\ & & places & & & & & \\\\ & & & Indoor/ & & & & \\\\ & & outdoor & & & & & interaction \\\\ PAR & Home & Home & Personal & Diverse & Non- & Professional & Physical load \\\\ & assistance & & & Indoor & & & & \\\\ & & & & & & & \\\\ & & & & & & & \\\\ \\end{tabular} \\end{table} Table 3: Four SAR use cases ### Evaluation Method and Online Questionnaire Design Using Qualtrics, we designed an online questionnaire where participants were exposed to one of the four use cases. 
First, they were asked to define the robot's desired characteristics by marking relevant words out of a word bank. The word bank contained twelve words based on previous studies related to SARs' perception [48-50] and that were found relevant to our four use cases: innovative, inviting, cute, elegant, massive, friendly, authoritative, aggressive, reliable, professional, intelligent, and threatening. Following Benedek & Miner's product reaction cards (2002) [51], we followed a similar procedure to our previous studies; however, in this case, participants were not reacting to a design but an idea. In addition, they had the option of adding their own words. Following, they were asked to select three types of VQs: body structure, outline, and color scheme from a set of options, by the alternative that, in their opinion, best expresses the desired characteristics that they have chosen. Figure 4 illustrates the questionnaire design. The online questionnaire was distributed using social media and snowball distribution between November 2021 to March 2022 (via posts on Facebook and WhatsApp). In total, we collected data from 228 adult respondents. Table 4 Summarizes the respondents' demographics by use case. \\begin{table} \\begin{tabular}{l c c c c c} \\hline \\hline Case study & \\multicolumn{3}{c}{Gender} & Age & Total \\\\ & Other & Males & Females & & \\\\ \\hline ALR & 1 & 29 & 24 & M=38.7, SD=17.5 & 54 \\\\ MAR & 1 & 24 & 23 & M=35.3, SD=12.2 & 48 \\\\ COR & - & 30 & 25 & M=37.7, SD=15.5 & 55 \\\\ PAR & 1 & 39 & 31 & M=43.0, SD=18.0 & 71 \\\\ **Total** & **3** & **122** & **103** & **M=39.0, SD=16.4** & **228** \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 4: Summary of the respondents’ data for each case study Figure 4: questionnaire design. Results ### The Effect of Context on Users' Expectations We used Chi-square tests of independence to evaluate the effect of the context and the participants' demographic information on their selections of desired characteristics. Participants were grouped into three age groups: up to 29, ages 30-49, and 50 and above. Three participants preferred not to indicate gender; hence in our evaluations of gender effect, N=225 instead of N=228. The words _massive_, _aggressive_, and _threatening_ were selected by less than 2% of the participants and therefore were excluded from our analysis. Results confirm that users have different expectations regarding the robot's characteristics suitable for each context of use. Table 5 presents the top three words selected by participants for each context. Figure 5 illustrates the chosen words for each context in a radar chart. We have found statistically significant relations between the contexts and four describing words: _Inviting, Friendly, Elegant,_ and _Authoritative_. For example, 78% of the participants indicated that ALR should look inviting, much more than PAR (56%), MAR (54%), and COR (42%), \\(X^{2}\\) (3, \\(N=228\\)) = 14.8742, \\(p<.01\\). The word _Friendly_ is more likely to be ascribed to ALR (81%), MAR (81%), and PAR (75%) but less likely to be attributed to COR (55%), \\(X^{2}\\) (3, \\(N=228\\)) = 13.1663, \\(p<.01\\). Elegant is more suitable for describing a PAR than all three other contexts, \\(\\mathrm{X}^{2}\\) (3, \\(N=228\\)) = 11.5077, \\(p<.01\\). Finally, authoritative is significantly more suitable for describing a COR than all three other contexts, \\(\\mathrm{X}^{2}\\) (3, \\(N=228\\)) = 44.4546, \\(p<.01\\). 
In addition, we have found that the participants' demographic data (gender and age) affect their selections. Female participants were significantly more likely to select the words _Cute_ (\\(X^{2}\\) (1, \\(N=225\\)) = \\(6.68,p<.01\\)) and _Friendly_ (\\(X^{2}\\) (1, \\(N=225\\)) = 10.813, \\(p<.01\\)). Though a Chi-square test of independence showed that there were no significant associations between gender and the selection of the word _Innovative,_ male participants were more likely to select it (48%) than female participants (37%), (\\(X^{2}\\) (1, \\(N=225\\)) = 2.503, \\(p=.11\\). The words _Aggressive_ and _Threatening_ were selected only by male participants. \\begin{table} \\begin{tabular}{l l l l} \\hline \\hline Case study & Most selected word & \\(2^{\\mathrm{nd}}\\) word & \\(3^{\\mathrm{rd}}\\) word \\\\ \\hline ALR & Friendly (81\\%) & Inviting (78\\%) & Reliable (67\\%) \\\\ MAR & Friendly (81\\%) & Professional (67\\%) & Reliable (67\\%) \\\\ COR & Professional (67\\%) & Reliable (60\\%) & Authoritative (58\\%) \\\\ PAR & Friendly (75\\%) & Reliable (69\\%) & Professional (65\\%) \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 5: Top three words selected by participants for each context and their rate. User age category was found to affect the desire for Innovative robots significantly; younger participants (up to 29) selected this character more frequently (53%) than mid-age (ages 30-49) (42%) and older (50 and above) participants (26%), the relationship between these variables was significant, \\(X^{2}\\) (2, \\(N=228\\)) = 10.78, \\(p<.01\\). Furthermore, results indicate two worth noting yet, not statistically significant trends. Implying that age has a positive correlation with selecting the word _Cute_ and a negative correlation with choosing the word Professional (i.e., older participants tend to desire cuter, less professional-looking robots). Table 6 summarizes the factors affecting the participants' selection of words. ### Participants' selection of visual qualities After selecting the characteristics of the SAR, participants were asked to choose the most suitable visual qualities for the context of use they had. Some VQs were selected more frequently than others regardless of the use context or other factors. For example, most respondents (79%) preferred rounded edges over chamfered ones. Only 10% of the respondents chose the dark color scheme, and most participants (49%) preferred the white color scheme. The two most selected structures were the Hourglass (27%) and the A shape (26%). The context of use impacted just the body structure selection. For the ALR, respondents showed a higher preference for the A shape (37% compared to 26% in the overall data). PAR and COR increased the respondents' tendency to select the Hourglass structure (32% and 31%, respectively, compared to 27% in the overall data). \\begin{table} \\begin{tabular}{l|c|c|c|c|c} \\hline \\hline & Innovative & Inviting & Cute & Elegant & Friendly & Authoritative \\\\ \\hline Context & & & & & \\\\ Gender & & & & & \\\\ Age & & & & & \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 6: Factors affecting the participants’ selection of words. Figure 5: Users’ assigned characteristics by the context of use. Participants' gender was found to affect their selection of colors significantly, \\(X^{2}\\) (2, \\(N=225\\)) = 7.939, \\(p<.05\\); the male participants were more likely to select the white and blue combination (54%), while the female participants preferred the white option. 
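The chi-square tests of independence reported throughout this section can be approximately reproduced from the group sizes and selection percentages given above. The snippet below is an illustrative reconstruction, not the raw study data: the counts are rounded from the reported percentages, and SciPy's `chi2_contingency` is used for the word _Inviting_ across the four contexts.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Approximate counts of participants who did / did not select "Inviting",
# reconstructed from the reported percentages and group sizes
# (ALR: 78% of 54, MAR: 54% of 48, COR: 42% of 55, PAR: 56% of 71).
counts = np.array([
    [42, 26, 23, 40],   # selected "Inviting"  (ALR, MAR, COR, PAR)
    [12, 22, 32, 31],   # did not select it
])

chi2, p, dof, expected = chi2_contingency(counts, correction=False)
print(f"X^2({dof}, N={counts.sum()}) = {chi2:.2f}, p = {p:.4f}")
```

With these rounded counts the statistic comes out close to the X^2(3, N=228) = 14.87 value reported above; the remaining tests in this section follow the same pattern with different contingency tables.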
Participants' age did not affect their selections, although there were minor differences among the three groups. Participants' expectations (according to the selected words) significantly affected their selection of several VQs. Wanting to express _Inviting_ increased the participants' probability of selecting a rounded outline, \(X^{2}\) (1, \(N=228\)) = 7.946, \(p<.01\), and decreased the participants' probability of selecting the _Dark_ color scheme, \(X^{2}\) (2, \(N=228\)) = 5.537, \(p=.06\). To express _Cute_, participants selected the _White_ color, \(X^{2}\) (2, \(N=228\)) = 16.03, \(p<.01\), and were more likely to select the _A shape_ structure, \(X^{2}\) (4, \(N=228\)) = 13.23, \(p<.05\). Participants who wished to express _Elegant_ showed a tendency to select a chamfered outline significantly more often than the general population of the study (36% compared to 21% in the overall data), \(X^{2}\) (1, \(N=228\)) = 23.26, \(p<.01\). Wanting to express _Friendly_ increased the participants' probability of selecting the _Hourglass_ structure, \(X^{2}\) (4, \(N=228\)) = 9.5, \(p<.05\), the _Rounded_ outline, \(X^{2}\) (1, \(N=228\)) = 6.43, \(p<.05\), and the _White and blue_ color combination, \(X^{2}\) (2, \(N=228\)) = 8.15, \(p<.05\). Table 7 summarizes our findings. \begin{table} \begin{tabular}{l l l l} \hline & Structure & Outline & Color \\ & (Five levels) & (Two levels) & (Three levels) \\ \hline Overall & A shape (26\%) & Rounded (79\%) & Dark (10\%) \\ & Diamond (18\%) & Chamfered (21\%) & White (49\%) \\ & Hourglass (27\%) & & White and blue (41\%) \\ & Rectangle (10\%) & & \\ & V shape (19\%) & & \\ \hline Assisted living service (ALR) & A Shape (37\%) & Rounded (78\%) & White (46\%) \\ Personal assistant robot (PAR) & Hourglass (32\%) & Rounded (85\%) & White (46\%) \\ COVID-19 Officer Robot (COR) & Hourglass (31\%) & Rounded (75\%) & White (47\%) \\ Medical Assistant Robot (MAR) & - & Rounded (77\%) & White (56\%) \\ \hline Male & - & Rounded (79\%) & White and blue (54\%) \\ Female & - & Rounded (80\%) & White (50\%) \\ \hline Up to the age of 29 & Hourglass (31\%) & Rounded (80\%) & White and blue (54\%) \\ Ages 30-49 & - & Rounded (75\%) & White and blue (48\%) \\ 50 and above & A Shape (30\%) & Rounded (83\%) & White (53\%) \\ \hline Inviting & - & Rounded* & Not Dark \\ Cute & A shape* & - & White* \\ Elegant & - & Chamfered* & - \\ Friendly & Hourglass* & Rounded* & White \& blue* \\ \hline \end{tabular} \end{table} Table 7: Participants' selection of visual qualities. Gray boxes represent a significance level of p\(<\).05; black boxes represent a significance level of p\(<\).01. ## 5 Discussion and Future Work SARs are becoming more prevalent in everyday life [1-3], establishing different kinds of relationships [44]. The body of knowledge in human-robot interaction keeps growing to ensure that these robots follow humans' social norms and expectations [10,11,52]. However, knowledge is limited regarding the design research of SARs [17-18]. Most research in the field focuses on evaluating users' perceptions of existing off-the-shelf SARs [12-15]. In this work, we sought to define what these expectations are. Using an online questionnaire, we collected data from 228 respondents regarding their expectations from SARs in four use cases that differ in their four layers of context.
Results confirm that users have different expectations regarding the robot's characteristics that are suitable for each context. However, in most cases (excluding COR), the top selected word was _Friendly_ (see section 4.1 and Table 5). Further, when asked to select VQs that best express these expectations, we found that participants' demographic data significantly affected their selections (see section 4.2 and Table 7). The four final designs (formed by looking at most users' selections) are almost identical and differ only in structure; all designs are rounded-edged white robots. Figure 6 presents the four designed robots by use case. We then combined the findings of this study with our previous empirical findings [23] to set up design guidelines to assist designers in creating a new SAR according to its context. Figure 7 presents the desired characteristics in each context according to our recent findings; the table on the right presents design suggestions for each characteristic. Figure 6: Four options of robot design by use case (based on the majority of users' selections). See Table 7 for the full details. For example, in designing a new service robot for an assisted living residence facility (ALR), designers should aim for a friendly, inviting, and reliable look; hence, they may consider choosing a rounded, white and blue, A-shaped design. The results align with previous studies that found that design preferences are largely a matter of users' personal preferences. There is no consensus among users regarding the appropriate appearance for SARs. Hence, specifically for the case of a personal robot (PAR), the design should allow users to make adjustments using mass customization [53]. In addition, these results may indicate that the participatory design of SARs should be done carefully using two-way evaluations, allowing users to express themselves without relying on them to make design decisions. Participatory design outcomes should be assessed with other users using evaluation tools like Microsoft reaction cards [51]. This research, however, is subject to several limitations. First, participants were not asked to justify their selections; hence, we could not track their intentions. Second, the participants could only select three VQs (structure, outline, and color) from a closed set of options. Therefore, all outcomes share the same proportions and dimensions. The robot's height affects user perception [54]; in our previous study evaluating the effect of the COVID-19 officer robot's appearance, some participants mentioned it should be taller [28]. Our subsequent studies will explore stakeholders' perceptions as well as the effect of culture. Such findings will provide further support for the design process of new SARs depending on their context of use, their intended role, and their users, and will help form design guidelines for future SARs. ## 6 Acknowledgment This research was supported by the Ministry of Innovation, Science and Technology, Israel (grant 3-15625), and by Ben-Gurion University of the Negev through the Helmsley Charitable Trust, the Agricultural, Biological and Cognitive Robotics Initiative, the W. Gunther Plaut Chair in Manufacturing Engineering, and by the George Shrut Chair in Human Performance Management.
Figure 7: The desired characteristics in each context according to our recent findings; the table on the right presents design suggestions for each characteristic ## References * Attitudes and perceptions among older people, carers and care professionals in Ireland: A questionnaire study. Health and Social Care in the Community. [https://doi.org/10.1111/hsc.13327](https://doi.org/10.1111/hsc.13327) * [2] Chita-Tegmark, M., & Scheutz, M. (2021). Assistive robots for the social management of health: a framework for robot design and human-robot interaction research. International Journal of Social Robotics, 13(2), 197-217. * [3] Zachiotis, G. A., Andrikopoulos, G., Gornez, R., Nakamura, K., & Nikolakopoulos, G. (2018, December). A survey on the application trends of home service robotics. In 2018 IEEE international conference on Robotics and Biomimetics (ROBIO) (pp. 1999-2006). IEEE. * [4] Aymerich-Franch, L., & Ferrer, I. (2021). Socially assistive robots' deployment in healthcare settings: a global perspective. arXiv e-prints, arXiv-2110. * [5] Tavakoli, M., Carriere, J., & Torabi, A. (2020). Robotics, smart wearable technologies, and autonomous intelligent systems for healthcare during the COVID-19 pandemic: An analysis of the state of the art and future vision. Advanced Intelligent Systems, 2000071. * [6] Smarr, C. A., Fausset, C. B., & Rogers, W. A. (2011). Understanding the potential for robot assistance for older adults in the home environment. Georgia Institute of Technology. * [7] Papadopoulos, I., Kouloughioti, C., Lazzarino, R., & Ali, S. (2020). Enablers and barriers to the implementation of socially assistive humanoid robots in health and social care: a systematic review. BMJ open, 10(1), e033096. * [8] Guneysu, A., & Arnrich, B. (2017, August). Socially assistive child-robot interaction in physical exercise coaching. In 2017 26th IEEE international symposium on robot and human interactive communication (RO-MAN) (pp. 670-675). IEEE. * [9] Cagiltay, B., Ho, H. R., Michaelis, J. E., & Mutlu, B. (2020, June). Investigating family perceptions and design preferences for an in-home robot. In Proceedings of the interaction design and children conference (pp. 229-242). * [10] Caudwell, C., Lacey, C., & Sandoval, E. B. (2019, December). The (Ir) relevance of Robot Cuteness: An Exploratory Study of Emotionally Durable Robot Design. In Proceedings of the 31st Australian Conference on Human-Computer-Interaction (pp. 64-72). * [11] Onnasch, L., & Roesler, E. (2020). A Taxonomy to Structure and Analyze Human-Robot Interaction. International Journal of Social Robotics, 1-17. * [12] Lazar, A., Thompson, H. J., Piper, A. M., & Demiris, G. (2016, June). Rethinking the design of robotic pets for older adults. In Proceedings of the 2016 ACM Conference on Designing Interactive Systems (pp. 1034-1046). * [13] Wu, Y.H., Fassert, C. and Rigaud, A.S., 2012. Designing robots for the elderly: appearance issue and beyond. Archives of gerontology and geriatrics, 54(1), pp.121-126. * [14] von der Putten, A., & Kramer, N. (2012, March). A survey on robot appearances. In 2012 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 267-268). IEEE. * [15] Reeves, B., & Hancock, J. (2020). Social robots are like real people: First impressions, attributes, and stereotyping of social robots. Technology, Mind, and Behavior, 1(1). * [16] Bjorklund, L. (2018). Knock on Wood: Does Material Choice Change the Social Perception of Robots?. * [17] Sandoval, E. B., Brown, S., & Velonaki, M. 
(2018, December). How the inclusion of design principles contribute to the development of social robots. In Proceedings of the 30th Australian Conference on Computer-Human Interaction (pp. 535-538). * [18] Hoffman, G. (2019). Anki, Jibo, and Kuri: What We Can Learn from Social Robots That Didn't Make It. _IEEE Spectrum_. * [19] Fairbanks, R. J., & Wears, R. L. (2008). Hazards with medical devices: the role of design. Annals of emergency medicine, 52(5), 519-521. * [20] Bartneck C (2020). Why do all social robots fail in the market?. [Podcast]. [http://doi.org/10.17605/OSF.IO/7KFRZ](http://doi.org/10.17605/OSF.IO/7KFRZ) ISSN 2703-4054 * [21] Bhimasta, R. A., & Kuo, P. Y. (2019, September). What causes the adoption failure of service robots? A Case of Henn-na Hotel in Japan. In Adjunct proceedings of the 2019 ACM international joint conference on pervasive and ubiquitous computing and proceedings of the 2019 ACM international symposium on wearable computers (pp. 1107-1112). * [22] Hekkert, P., & Van Dijk, M. (2011). ViP-Vision in design: A guidebook for innovators. BIS publishers. * [23] Liberman-Pincu, E., Parmet, Y., & Oron-Gilad, T. (2022). Judging a socially assistive robot (SAR) by its cover; The effect of body structure, outline, and color on users' perception. arXiv preprint arXiv:2202.07614. * [24] Liberman-Pincu, E., van Grondelle, E.D., and Oron-Gilad. T. 2021. Designing robots with relationships in mind- Suggesting two models of human- socially assistive robot (SAR) relationship. In Proceedings of 2021 HRI '21 Companion, March 8-11, 2021, Boulder, CO, USA. ACM, New York., NY, USA, 5 pages. [https://doi.org/10.1145/3434074_3447125](https://doi.org/10.1145/3434074_3447125) * [25] Rosanda, V., & Istenic Starcic, A. (2019, September). The robot in the classroom: a review of a robot role. In International Symposium on Emerging Technologies for Education (pp. 347-357). Springer, Cham. * [26] Chin, K. Y., Hong, Z. W., & Chen, Y. L. (2014). Impact of using an educational robot-based learning system on students' motivation in elementary education. IEEE Transactions on learning technologies, 7(4), 333-345. * [27] Belpaeme, T., Kennedy, J., Ramachandran, A., Scassellati, B., & Tanaka, F. (2018). Social robots for education: A review. Science robotics, 3(21), eaat5954. * [28] Liberman-Pincu, E., David, A., Sarne-Fleischmann, V., Edan, Y., & Oron-Gilad, T. (2021). Comply with Me: Using Design Manipulations to Affect Human-Robot Interaction in a COVID-19 Officer Robot Use Case. Multimodal Technologies and Interaction, 5(11), 71. * [29] Espinas, M. F. C., Roguel, K. M. G., Salamat, M. A. A., & Reyes, S. S. G. Security Robots vs. Security Guards. * [30] Agrawal, S., & Williams, M. A. (2018, August). Would you obey an aggressive robot: A human-robot interaction field study. In 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (pp. 240-246). IEEE. * [31] Geiskkovitch, D. Y., Cormier, D., Seo, S. H., & Young, J. E. (2016). Please continue, we need more data: an exploration of obedience to robots. Journal of Human-Robot Interaction, 5(1), 82-99. * [32] Robinson, H., MacDonald, B., Kerse, N., & Broadbent, E. (2013). The psychosocial effects of a companion robot: a randomized controlled trial. Journal of the American Medical Directors Association, 14(9), 661-667. * [33] Borenstein, J., & Pearson, Y. (2013). Companion robots and the emotional development of children. Law, Innovation and Technology, 5(2), 172-189. 
* [34] Gasteiger, N., Loveys, K., Law, M., & Broadbent, E. (2021). Friends from the future: a scoping review of research into robots and computer agents to combat loneliness in older people. Clinical interventions in aging, 16, 941. * [35] Yamazaki, K., Ueda, R., Nozawa, S., Kojima, M., Okada, K., Matsumoto, K., & Inaba, M. (2012). Home-assistant robot for an aging society. Proceedings of the IEEE, 100(8), 2429-2441. * [36] Zachiotis, G. A., Andrikopoulos, G., Gornez, R., Nakamura, K., & Nikolakopoulos, G. (2018, December). A survey on the application trends of home service robotics. In 2018 IEEE international conference on Robotics and Biomimetics (ROBIO) (pp. 1999-2006). IEEE. * [37] Fuentes-Moraleda, L., Diaz-Perez, P., Orea-Giner, A., Munoz-Mazon, A., & Villace-Molinero, T. (2020). Interaction between hotel service robots and humans: A hotel-specific Service Robot Acceptance Model (sRAM). Tourism Management Perspectives, 36, 100751. * [38] Rosete, A., Soares, B., Salvadorinho, J., Reis, J., & Amorim, M. (2020, February). Service robots in the hospitality industry: An exploratory literature review. In International Conference on Exploring Services Science (pp. 174-186). Springer, Cham. * [39] Kwak, S. S., & Kim, M. S. (2005). USER PREFERENCES FOR PERSONALITIES OF ENTERTAINMENT ROBOTS ACCORDING TO THE USERS'PSYCHOLOGICAL TYPES. Bulletin of Japanese Society for the Science of Design, 52(4), 47-52. * [40] Bogue, R. (2022). The role of robots in entertainment. Industrial Robot: the international journal of robotics research and application. * [41] Flandorfer, P. (2012). Population ageing and socially assistive robots for elderly persons: the importance of sociodemographic factors for user acceptance. International Journal of Population Research, 2012. * [42] Cortellessa, G., Scopelliti, M., Tiberio, L., Svedberg, G. K., Loutfi, A., & Pecora, F. (2008, November). A Cross-Cultural Evaluation of Domestic Assistive Robots. In _AAAI fall symposium: AI in eldercare: new solutions to old problems_ (pp. 24-31). * [43] Raigoso, D., Cespedes, N., Cifuentes, C. A., Del-Ama, A. J., & Munera, M. (2021). A survey on socially assistive robotics: Clinicians' and patients' perception of a social robot within gait rehabilitation therapies. Brain sciences, 11(6), 738. * [44] Prescott, T. J., & Robillard, J. M. (2021). Are friends electric? The benefits and risks of human-robot relationships. Iscience, 24(1), 101993. * [45] Onnasch, L., & Roesler, E. (2021). A taxonomy to structure and analyze human-robot interaction. International Journal of Social Robotics, 13(4), 833-849. * [46] Abdi, J., Al-Hindawi, A., Ng, T., & Vizcaychipi, M. P. (2018). Scoping review on the use of socially assistive robot technology in elderly care. BMJ open, 8(2), e018815. * Evaluating the effect of Visual Qualities among Children. In proceedings of the 30th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) August 8 - 12, 2021 - Vancouver, BC, CA (Virtual Conference) * [48] Bartneck, C., Kulic, D., Croft, E., & Zoghbi, S. (2009). Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. International journal of social robotics, 1(1), 71-81. * [49] Carpinella, C. M., Wyman, A. B., Perez, M. A., & Stroessner, S. J. (2017, March). The Robotic Social Attributes Scale (RoSAS) Development and Validation. In Proceedings of the 2017 ACM/IEEE International Conference on human-robot interaction (pp. 254-262). 
* [50] Kalegina, A., Schroeder, G., Allchin, A., Berlin, K., & Cakmak, M. (2018, February). Characterizing the design space of rendered robot faces. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (pp. 96-104). * [51] Benedek, J., & Miner, T. (2002). Product reaction cards. Microsoft, July, 29. * [52] Saunderson, S., & Nejat, G. (2019). How robots influence humans: A survey of nonverbal communication in social human-robot interaction. International Journal of Social Robotics, 11(4), 575-608. * [53] Liberman-Pincu, E., & Oron-Gilad, T. (2022, March). Exploring the Effect of Mass Customization on User Acceptance of Socially Assistive Robots (SARs). In Proceedings of the 2022 ACM/IFEE International Conference on Human-Robot Interaction (pp. 880-884). * [54] Wu, Y.H., Fassert, C. and Rigaud, A.S., 2012. Designing robots for the elderly: appearance issue and beyond. Archives of gerontology and geriatrics, 54(1), pp.121-126.
Robots' visual qualities (VQs) impact people's perception of their characteristics and affect users' behaviors and attitudes toward the robot. Recent years point toward a growing need for Socially Assistive Robots (SARs) in various contexts and functions, interacting with various users. Since SAR types have functional differences, the user experience must vary by the context of use, functionality, user characteristics, and environmental conditions. Still, SAR manufacturers often design and deploy the same robotic embodiment for diverse contexts. We argue that the visual design of SARs requires a more scientific approach considering their multiple evolving roles in future society. In this work, we define four contextual layers: the domain in which the SAR exists, the physical environment, its intended users, and the robot's role. Via an online questionnaire, we collected potential users' expectations regarding the desired characteristics and visual qualities of four different SARs: a service robot for an assisted living/retirement residence facility, a medical assistant robot for a hospital environment, a COVID-19 officer robot, and a personal assistant robot for domestic use. Results indicated that users' expectations differ regarding the robot's desired characteristics and the anticipated visual qualities for each context and use case. Keywords:context-driven design, visual qualities, socially assistive robot.
Continuous Change Detection of Urban Lakes in Wuhan, China using Multi-Temporal Remote Sensing Images Wenyuan Zhang, Xiaohan Kong, Guoxin Tan, Songyin Zheng (National Research Center of Cultural Industries, Central China Normal University, Luoyu Road, Wuhan, China; [email protected], [email protected], [email protected], [email protected]) ## 1 Introduction Urban lakes are important freshwater resources for ecosystems; they provide precious water to residents, fish, and waterfowl, and regulate the urban environment, i.e., humidity, temperature, and flood storage (J. Zhu, Zhang, and Tong 2015; Snehal and Unnati 2012; W. Zhu, Jia, and Lv 2014; Taravat et al. 2016). However, with increasing human activity and rapid urbanization, many urban lakes in China have shrunk significantly or disappeared in recent years. Thus, there is an increasing need to understand the dynamic relationship between lake shrinking and urbanization, not only temporally but also spatially, for the improvement of urban environments. Remote sensing, as an advanced technology of Earth observation and an important tool for providing spatially consistent image information, has been widely used in various applications, such as land use or land cover change (Z. Zhu and Woodcock 2014; Demir, Bovolo, and Bruzzone 2013), urban sprawl (Bagan and Yamagata 2012), and hydrology (Dronova, Gong, and Wang 2011; W. Zhu, Jia, and Lv 2014; Rokni et al. 2014). In particular, since the Landsat archive was opened to the public by the U.S. Geological Survey (USGS) in 2008 (Woodcock 2008), many studies have focused on water change detection, lake monitoring, and land use classification using multi-temporal Landsat images (Rokni et al. 2014; Taravat et al. 2016; Z. Zhu and Woodcock 2014). For example, Michishita et al. (2012) examined two decades of urbanization in the Poyang Lake area in China using a time-series Landsat-5 TM dataset, and performed a quantification and visualization of the changes in time-series urban land cover fractions through spectral unmixing. Song et al. (2013) estimated the water storage changes in the lakes of the Tibetan Plateau by combining time-series water level and area data derived from optical satellite images over a long time scale, and analysed the changes therein. Zhu et al. (2014) monitored the fluctuation of Qinghai Lake by estimating the variations of water volume based on MODIS (Moderate Resolution Imaging Spectroradiometer) and Landsat TM/ETM+ images from 1999 to 2009. Zhu et al. (2015) quantitatively analysed the impacts of lakefront land use changes on lake area in Wuhan, based on two Landsat TM/ETM+ images taken in 1991 and 2005. However, most change detection algorithms used satellite images from only two dates (J. Zhu, Zhang, and Tong 2015), so qualitative analyses of the temporal effects of the phenomenon are limited.
Moreover, a single category or a single lake was often chosen for monitoring in many publications (Zheng et al. 2015; W. Zhu, Jia, and Lv 2014). In this paper, we have performed a case study of urban lake change detection in Wuhan, China, using multi-temporal Landsat images. The aim of this study is a quantitative analysis of urban lake shrinkage and of the factors driving it over the past few decades. This paper starts with a description of our study area, the collected remote sensing datasets, and the data pre-processing in Section 2. Section 3 describes our approaches to extract the surface water extent of urban lakes, as well as the land cover classification in the study area using multi-temporal images. A detailed change analysis and the impact of the factors behind lake shrinking, based on the experimental results, are discussed in Section 4. Section 5 summarizes and concludes the results of this study. ## 2 Study Area and Data ### Study Area Wuhan is located at 113°41' E to 115°05' E and 29°58' N to 31°22' N. As the capital of Hubei Province, Wuhan is the only sub-provincial city in Central China, with an area of 8,494 square kilometres. The world's third-longest river, the Yangtze River, and its largest branch, the Hanshui River, flow across the city and divide it into three parts, namely Wuchang, Hankou, and Hanyang (Figure 1). Wuhan is famous for its lake resources. There are more than 166 lakes in the city, so it is well known as the City of Hundreds of Lakes. Since the 1990s, Wuhan has witnessed rapid urban expansion. During the urbanization process, most urban areas were developed and expanded through the alteration of other land types, including lakes, vegetation, and agricultural fields. A statistical report on the lakes in Wuhan suggests that the number of urban lakes has decreased from 127 to 38, and 89 lakes have vanished completely during the past 60 years (Wang 2013). The downtown of Wuhan, covering Wuchang, Hankou, and Hanyang, was selected as the study area; its total area is 962.8227 square kilometers. Nowadays, only 26 major lakes lie in this region, including the famous East Lake. ### Data In order to interpret the lake surface extent and land-cover change for this study area, four multi-spectral Landsat images covering Wuhan in different periods were collected from the USGS website ([http://earthexplorer.usgs.gov/](http://earthexplorer.usgs.gov/)). These images were acquired in four separate years from 1991 to 2017 (path/row 123/39), and all of them were cloud free. The characteristics of these images are provided in Table 1. The first two images were captured by the Landsat-5 Thematic Mapper (TM) sensor with seven bands, and the third image, captured by the Landsat-7 Enhanced Thematic Mapper Plus (ETM+) sensor, included eight bands. The spatial resolution of the TM and ETM+ data was 30 m for bands 1-5 and 7, while TM band 6 and ETM+ band 6 (thermal infrared) were acquired at 120 m resolution, and ETM+ band 8 (panchromatic) at 15 m resolution. The last image was from the Landsat-8 Operational Land Imager (OLI) sensor with nine bands; the spatial resolution is 30 m, except for the panchromatic band (15 m). All images had been geo-referenced and projected to the same Universal Transverse Mercator (UTM) coordinate system (UTM zone 49 N, WGS-84 geodetic datum). ### Data Pre-processing A series of pre-processing procedures were performed on the acquired data.
Firstly, a local linear histogram matching technique was used to fill the gaps in the ETM+ image caused by the failure of the Landsat-7 scan-line corrector since May 31, 2003. The processed image was spatially continuous without any obvious striping patterns. Secondly, a polynomial correction method was applied to reduce or eliminate the geometric distortion of all images. Thirdly, radiometric calibration and atmospheric correction based on the FLAASH model were applied to all images. Fourthly, taking the last image as a reference, a histogram matching method was used to adjust the hue and contrast of the remaining images. Finally, all the corrected images were cropped to cover the whole downtown area of Wuhan using the same area of interest (AOI), and the image size was 1662 \(\times\) 1214 pixels. All the aforementioned processing was performed using the ENVI (The Environment for Visualizing Images) 5.1 software package. Figure 2 shows an example of these images with a specific band combination (near-infrared, red, and green bands from each Landsat image). ## 3 Methodology ### Workflow The methodology of this investigation is represented in Figure 3. \begin{table} \begin{tabular}{|c|c|c|c|} \hline Acquisition Date & Satellite & Sensor & Band Count \\ \hline 1991/10/23 & Landsat-5 & TM & 7 \\ \hline 2002/09/03 & Landsat-5 & TM & 7 \\ \hline 2011/11/23 & Landsat-7 & ETM+ & 8 \\ \hline 2017/10/30 & Landsat-8 & OLI & 9 \\ \hline \end{tabular} \end{table} Table 1: Characteristics of data sources Figure 1: Location and administrative division of Wuhan City. Figure 2: Multi-temporal Landsat images for the study area: (a) 1991, (b) 2002, (c) 2011, and (d) 2017. The workflow includes four major stages: MNDWI calculation, image segmentation, image classification, and change detection. First, the MNDWI index was calculated to obtain a gray-scale image for each dataset. Then, the OTSU method was used to segment the MNDWI image, and the water bodies could be extracted from the segmented image. Third, a classification algorithm such as SVM was adopted to classify the gray-scale image. Finally, a detailed change analysis was performed based on the post-classification results. ### Water Surface Extraction In recent years, various algorithms have been proposed to extract water information from remote sensing images acquired by different sensors. As is well known, the normalized difference water index (NDWI) proposed by McFeeters [11] is one of the most efficient approaches for water extraction from multiband images; it is calculated as follows: \[NDWI=\frac{GREEN-NIR}{GREEN+NIR} \tag{1}\] Where GREEN is a green band such as TM band 2, and NIR is a near infrared band such as TM band 4. However, the water information extracted using NDWI is often mixed with built-up land noise in urban districts, because many built-up land features in Landsat TM/ETM+ images also have positive values in the NDWI image. To overcome this limitation of NDWI, a modified NDWI (MNDWI) was introduced by [12], which is more suitable for extracting open water features from remote sensing images in which water regions are surrounded by built-up areas. The MNDWI is expressed as follows: \[MNDWI=\frac{GREEN-MIR}{GREEN+MIR} \tag{2}\] Where MIR is a middle infrared band such as TM band 5. For Landsat TM/ETM+ images, MNDWI can be formed as: \[MNDWI=\frac{B2-B5}{B2+B5} \tag{3}\] Where _B2_ and _B5_ are the pixel values of the second and fifth bands in the Landsat images, respectively.
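To make Equation (3) concrete, the sketch below computes the MNDWI for one scene with NumPy. The study itself used ENVI's band math module; reading the bands with rasterio and the file name are assumptions for illustration only.

```python
# Minimal sketch of the MNDWI in Equation (3): (Green - MIR) / (Green + MIR).
# The study used ENVI band math; rasterio and the file name are illustrative assumptions.
import numpy as np
import rasterio

with rasterio.open("landsat_tm_stack.tif") as src:   # hypothetical 7-band TM stack
    green = src.read(2).astype("float64")            # TM/ETM+ band 2 (green)
    mir = src.read(5).astype("float64")              # TM/ETM+ band 5 (middle infrared)

denom = green + mir
mndwi = np.where(denom != 0, (green - mir) / denom, 0.0)  # guard against division by zero
# MNDWI lies in [-1, 1]; water pixels tend toward positive values, built-up areas toward negative.
```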
As shown in Equations (1) and (2), _NIR_ in NDWI is replaced by _MIR_ because water bodies have lower reflectivity in the middle infrared band than constructions. Thus, MNDWI can effectively suppress and even remove built-up features as well as vegetation and soil features [10]. ### Image Segmentation Threshold selection is a key step in applying the MNDWI. Because of their high absorption in the MIR band and high reflectance in the Green band, water features usually have positive MNDWI values, while the MNDWI values of non-water features are usually negative. Thus, the threshold value for Xu's MNDWI was set to zero. However, threshold adjustment in individual scenes may achieve a more accurate delineation of water bodies [11, 12]. Therefore, dynamic threshold values are needed when different regions of remote sensing images are employed to extract water body information. In this study, the maximum between-class variance method (the OTSU method) was applied to each individual image [11], so as to identify the optimal threshold values for separating water bodies from the background features. According to Equation (2), the MNDWI pixel values range within [-1, 1]. Since the OTSU method can lose some of the overall information about the two groups of objects when the grayscale range is limited, an enhanced OTSU method using grayscale stretching was adopted to solve this problem. A grayscale stretching strategy was applied to increase the gray-level range, so as to enhance the gray differences between the two groups of objects. The stretching method in this study was defined as: \[y=\frac{x-x_{\text{min}}}{x_{\text{max}}-x_{\text{min}}}\times 255 \tag{4}\] After stretching, the MNDWI pixel values lie in [0, 255]. ## 4 Experiment and Analysis ### Lake Extraction According to Equation (2), the MNDWIs of the different Landsat images were calculated to produce water-body-enhanced images using the band math module in the ENVI software. Then, the MNDWI pixels were stretched to [0, 255], as shown in Figure 4. Figure 3: Schematic representation of the methodology In order to identify a suitable threshold for segmentation, histogram statistics of the MNDWI images were then computed using ENVI. Taking Figure 4 (c) as an example, the histogram shows a bimodal distribution (Figure 5). Since the OTSU algorithm was implemented in MATLAB in this study, the histogram data were exported to MATLAB to calculate the optimal threshold. The OTSU algorithm indicated that the best threshold for Figure 4 (c) was 95.6250; the corresponding normalized value was -0.3497. Spectral analysis of the pixels in the range [-0.3497, 0] showed that there is a large difference between the spectral curves of pure water pixels and other pixels; -0.3497 was thus an optimal threshold for separating water from other land cover features. Based on the optimal threshold, a binary image of the water body was obtained by threshold segmentation of each MNDWI image, as shown in Figure 6. Since the extracted water bodies contained the Yangtze River and other small rivers, an accurate vectorization was carried out to obtain the shapes of the urban lakes (Figure 7). Then, the area of the urban lakes can be easily calculated from these vector maps. The statistics of the urban lake areas and the percentage of urban lake area in downtown Wuhan are presented in Table 2.
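The stretching of Equation (4) and the subsequent OTSU threshold search can be sketched as follows. The study performed this step in MATLAB; the NumPy version below is an assumption about tooling, and it reuses the `mndwi` array from the earlier sketch.

```python
# Sketch of Equation (4) (stretching to [0, 255]) followed by an OTSU search for the
# threshold maximizing the between-class variance. The study used MATLAB; this NumPy
# re-implementation is an illustrative assumption.
import numpy as np

def stretch_to_255(x):
    return (x - x.min()) / (x.max() - x.min()) * 255.0

def otsu_threshold(gray, bins=256):
    hist, edges = np.histogram(gray, bins=bins, range=(0.0, 255.0))
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                      # cumulative class-0 probability
    w1 = 1.0 - w0
    cum_mean = np.cumsum(p * centers)
    mu0 = cum_mean / np.where(w0 > 0, w0, 1.0)
    mu1 = (cum_mean[-1] - cum_mean) / np.where(w1 > 0, w1, 1.0)
    sigma_b = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance per candidate threshold
    return centers[np.argmax(sigma_b)]

stretched = stretch_to_255(mndwi)          # mndwi from the previous sketch
t_255 = otsu_threshold(stretched)
# Map the threshold back to MNDWI units (inverse of Equation (4)).
t_mndwi = t_255 / 255.0 * (mndwi.max() - mndwi.min()) + mndwi.min()
water_mask = mndwi > t_mndwi               # binary water map, analogous to Figure 6
```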
The results show that the coverage of urban lakes continued to decrease over these years, and the proportion of urban lake area in the downtown dropped by 4.93% from 1991 to 2017. The total area of urban lakes was reduced by 47.37 km\({}^{2}\) during the past 27 years, which indicates that the urban lakes of Wuhan experienced significant shrinkage or disappearance in this period. The most intense decrease of urban lakes was detected between 1991 and 2002, during which the urban lakes lost 38.97 km\({}^{2}\) of surface area, a reduction of over 36.07% of their original surface area in comparison with the year 1991. From 2002 to 2011, the decrease of urban lakes cannot be ignored either, with 6.52 km\({}^{2}\) of lake area lost. In the last seven years, the destruction of urban lakes was much smaller than before. ### Image Classification In order to obtain detailed changes and the factors behind urban lake shrinking, the SVM (Support Vector Machine) classification algorithm was further applied to classify the MNDWI images into three classes: water body, built-up area, and vegetation. The classification results of these images are shown in Figure 8. \begin{table} \begin{tabular}{|l|c|c|} \hline Year & Urban lakes area (km\({}^{2}\)) & Percentage of lake area (\%) \\ \hline 1991 & 108.01 & 11.23 \\ \hline 2002 & 69.04 & 7.17 \\ \hline 2011 & 62.52 & 6.49 \\ \hline 2017 & 60.64 & 6.30 \\ \hline \end{tabular} \end{table} Table 2: Area statistics of Wuhan urban lakes over the years. Figure 4: MNDWI images: (a) 1991, (b) 2002, (c) 2011, and (d) 2017. Figure 5: 2011 MNDWI image histogram. Figure 6: Binary maps of different MNDWI images based on OTSU segmentation: (a) 1991, (b) 2002, (c) 2011, and (d) 2017. Figure 7: Vector maps of extracted urban lakes in different years: (a) 1991, (b) 2002, (c) 2011, and (d) 2017. The classification accuracy was also evaluated using a confusion matrix. The validation samples for accuracy assessment were carefully selected from each individual image by visual interpretation. The classification result of each separate image was quantitatively assessed through the overall accuracy and Kappa coefficient, as reported in Table 3. It can be seen that an average overall accuracy of 99.16% was achieved, and the mean Kappa coefficient was 0.9865. ### Change Detection In order to investigate what led to the intense shrinkage of urban lakes over the past few decades, a post-classification comparison between the four classified images was performed to produce a change detection analysis. Different "from-to" change maps were generated using the ArcGIS 10.2.2 software, as shown in Figure 9. The post-classification comparison showed that the lost lake surface area had been replaced by built-up areas and vegetation from 1991 to 2017. The changes of the land cover categories are summarized in Table 4. From the decrease of urban lakes in the three maps, it can be seen that the central area of Wuhan mainly experienced intense urbanization. During the period from 1991 to 2002, a large number of lakes were buried or shrank; 52.9893 km\({}^{2}\) of water were converted into built-up area and vegetation during this period. The city was still in a rapid construction stage from 2002 to 2011; urban lakes continued to be devoured by real estate developments and other disturbances, and the built-up area converted from water bodies increased by 56.80 km\({}^{2}\) between 1991 and 2011.
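The "from-to" statistics of Table 4 can be reproduced, in principle, by crossing two co-registered classification maps and converting pixel counts to areas using the 30 m cell size. The sketch below illustrates this; the label encoding and array names are assumptions, and the study itself performed this step in ArcGIS.

```python
# Sketch of a post-classification "from-to" comparison between two classified maps.
# Label encoding (0 = water, 1 = built-up, 2 = vegetation) and array names are
# illustrative assumptions; the study computed these maps in ArcGIS.
import numpy as np

CLASSES = ["water", "built-up", "vegetation"]
PIXEL_AREA_KM2 = (30.0 * 30.0) / 1e6        # Landsat 30 m cells expressed in km^2

def change_matrix(labels_t1, labels_t2, n_classes=len(CLASSES)):
    """Rows: class at the earlier date; columns: class at the later date; entries in km^2."""
    m = np.zeros((n_classes, n_classes))
    for i in range(n_classes):
        for j in range(n_classes):
            m[i, j] = np.count_nonzero((labels_t1 == i) & (labels_t2 == j))
    return m * PIXEL_AREA_KM2

# Hypothetical usage with two classified rasters of identical shape:
# m = change_matrix(class_1991, class_2017)
# print("water -> built-up (km^2):", m[0, 1])
# print("water -> vegetation (km^2):", m[0, 2])
```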
From 2011 to 2017, due to the improvement of the local government's lake protection policies, the area of urban lakes did not change much, and the area of vegetation around some lakes began to increase. This indicates that the urbanization process in downtown Wuhan began to slow down. To sum up, the rapid urbanization since 1991 has had a significant impact on the losses of urban lakes. ## 5 Conclusion Water extraction and change detection based on remotely sensed images have increasingly been recognized as among the most effective approaches for environmental monitoring and management. This study showed the advantages of using multi-temporal Landsat images to perform a historical change analysis of urban lakes with respect to specific areas of interest. The experimental results revealed that there were significant changes in the surface area of urban lakes from 1991 to 2017. A large number of urban lakes shrank or disappeared during this period and were replaced by buildings. The remarkable decrease of urban lakes is mirrored by the expansion of built-up areas. The legislation for the protection of urban water resources was not perfect before 2002; since then, the local government has taken stringent measures to strengthen the protection and management of urban lakes, such as "The Lake Protection Regulations of Wuhan", and a lake protection plan of "three lines and one way" was also published in December 2012. \begin{table} \begin{tabular}{|l|l|l|l|} \hline Period & Water (km\({}^{2}\)) & Water to Vegetation (km\({}^{2}\)) & Water to Built-up Area (km\({}^{2}\)) \\ \hline 1991-2002 & 193.4397 & 31.9833 & 21.0060 \\ \hline 1991-2011 & 166.3029 & 23.3226 & 56.8035 \\ \hline 1991-2017 & 168.1209 & 31.9275 & 46.3806 \\ \hline \end{tabular} \end{table} Table 4: Statistics of the change detection results in different periods Figure 8: The classification results of multi-temporal images: (a) 1991, (b) 2002, (c) 2011, and (d) 2017. \begin{table} \begin{tabular}{|l|l|l|} \hline Image & Overall accuracy (\%) & Kappa coefficient \\ \hline 1991 & 99.77 & 0.9963 \\ \hline 2002 & 99.81 & 0.9968 \\ \hline 2011 & 98.64 & 0.9791 \\ \hline 2017 & 98.42 & 0.9737 \\ \hline \end{tabular} \end{table} Table 3: The classification accuracy of each Landsat image. Figure 9: The post-classification change detection results in different periods: (a) 1991-2002, (b) 1991-2011, and (c) 1991-2017. In terms of the techniques used in the study, it was found that the modified NDWI and the OTSU algorithm were able to provide rapid and accurate extraction of water information from Landsat multi-spectral images. In addition, post-classification change detection based on the SVM algorithm produced highly accurate results, which benefited the detailed land cover change analysis. It can be stated that continuous change detection based on remote sensing techniques enables us to better understand the detailed spatial changes of urban lakes on the one hand, and the trends of urban growth on the other. ## References * Bagan et al. (2012) Bagan, Hasi, and Yoshiki Yamagata, 2012. Landsat Analysis of Urban Growth: How Tokyo Became the World's Largest Megacity during the Last 40 years. _Remote Sensing of Environment_, 127, pp. 210-222. [https://doi.org/10.1016/j.rse.2012.09.011](https://doi.org/10.1016/j.rse.2012.09.011). * Demir et al. (2013) Demir, Begim, Francesca Bovolo, and Lorenzo Bruzzone, 2013.
Updating Land-Cover Maps by Classification of Image Time Series: A Novel Change-Detection-Driven Transfer Learning Approach. _IEEE Transactions on Geoscience and Remote Sensing_, 51 (1), pp. 300-312. [https://doi.org/10.1109/TGRS.2012.2195727](https://doi.org/10.1109/TGRS.2012.2195727). * Dronova et al. (2011) Dronova, Iryna, Peng Gong, and Lin Wang, 2011. Object-Based Analysis and Change Detection of Major Wetland Cover Types and Their Classification Uncertainty during the Low Water Period at Poyang Lake, China. _Remote Sensing of Environment_, 115 (12), pp. 3220-3336. [https://doi.org/10.1016/j.rse.2011.07.006](https://doi.org/10.1016/j.rse.2011.07.006). * Ji et al. (2009) Ji, Lei, Li Zhang, and Bruce Wylie, 2009. Analysis of Dynamic Thresholds for the Normalized Difference Water Index. _Photogrammetric Engineering & Remote Sensing_, 75 (11), pp. 1307-1317. [https://doi.org/10.14358/PERS.75.11.1307](https://doi.org/10.14358/PERS.75.11.1307). * Li et al. (2013) Li, Wenbo, Zhiqiang Du, Feng Ling, Dongbo Zhou, Hailei Wang, Yuanmiao Gui, Bingyu Sun, and Xiaoming Zhang, 2013. A Comparison of Land Surface Water Mapping Using the Normalized Difference Water Index from TM, ETM+ and ALI. _Remote Sensing_, 5 (11), pp. 5530-5549. [https://doi.org/10.3390/rs5115530](https://doi.org/10.3390/rs5115530). * McFeeters (1996) McFeeters, S. K., 1996. The Use of the Normalized Difference Water Index (NDWI) in the Delineation of Open Water Features. _International Journal of Remote Sensing_, 17 (7), pp. 1425-1432. [https://doi.org/10.1080/01431169608948714](https://doi.org/10.1080/01431169608948714). * Michishita et al. (2012) Michishita, Ryo, Zhiben Jiang, and Bing Xu, 2012. Monitoring Two Decades of Urbanization in the Poyang Lake Area, China through Spectral Unmixing. _Remote Sensing of Environment_, 117, pp. 3-18. [https://doi.org/10.1016/j.rse.2011.06.021](https://doi.org/10.1016/j.rse.2011.06.021). * Otsu (1979) Otsu, Nobuyuki, 1979. A Threshold Selection Method from Gray Level Histogram. _IEEE Transactions on Systems Man & Cybernetics_, 9 (1), pp. 62-66. * Rokni et al. (2014) Rokni, Komeil, Anuar Ahmad, Ali Selamat, and Sharifeh Hazini, 2014. Water Feature Extraction and Change Detection Using Multitemporal Landsat Imagery. _Remote Sensing_, 6 (5), pp. 4173-4189. [https://doi.org/10.3390/rs6054173](https://doi.org/10.3390/rs6054173). * Snehal et al. (2012) Snehal, Patil, and Padalia Unnati, 2012. Challenges Faced and Solutions towards Conservation of Ecology of Urban Lakes. _International Journal of Scientific & Engineering Research_, 3 (10), pp. 1-14. * Song et al. (2013) Song, Chunqiao, Bo Huang, and Linghong Ke, 2013. Modeling and Analysis of Lake Water Storage Changes on the Tibetan Plateau Using Multi-Mission Satellite Data. _Remote Sensing of Environment_, 135, pp. 25-35. [https://doi.org/10.1016/j.rse.2013.03.013](https://doi.org/10.1016/j.rse.2013.03.013). * Taravat et al. (2016) Taravat, Alireza, Masih Rajaei, Iraj Emaddoin, Hamidera Hasheminejad, Rahman Mousavian, and Ehsan Biniya, 2016. A Spaceborne Multisensory, Multitemporal Approach to Monitor Water Level and Storage Variations of Lakes. _Water_, 8 (11), pp. 478-486. [https://doi.org/10.3390/w8110478](https://doi.org/10.3390/w8110478). * Wang (2013) Wang, Zhaolei, 2013. Study on Protection and Utilization of Wuhan City Lake since 1990s. _Environmental Science and Management_, 38, pp. 38-45. * Woodcock et al. (2008) Woodcock, Curtis, et al., 2008. Free Access to Landsat Imagery. _Science_, 320 (5879), pp. 1011-1012. 
[https://doi.org/10.1126/science.320.5879.1011a](https://doi.org/10.1126/science.320.5879.1011a). * Xu (2006) Xu, Hanqiu, 2006. Modification of Normalised Difference Water Index (NDWI) to Enhance Open Water Features in Remotely Sensed Imagery. _International Journal of Remote Sensing_, 27 (14), pp. 3025-3033. [https://doi.org/10.1080/01431160060589179](https://doi.org/10.1080/01431160060589179). * Zheng et al. (2015) Zheng, Zhubin, Yumnei Li, Yulong Guo, Yifan Xu, Ge Liu, and Chenggong Du, 2015. Landsat-based Long-Term Monitoring of Total Suspended Matter Concentration Pattern Change in the Wet Season for Dongting Lake, China. _Remote Sensing_, 7 (10), pp. 13975-13999. [https://doi.org/10.3390/rs71013975](https://doi.org/10.3390/rs71013975). * Zhu et al. (2015) Zhu, Jianfeng, Qiuwen Zhang, and Zhong Tong, 2015. Impact Analysis of Lakefront Land Use Changes on Lake Area in Wuhan, China. _Water_, 7 (9), pp. 4869-4886. [https://doi.org/10.3390/w7094869](https://doi.org/10.3390/w7094869). * Zhu et al. (2014) Zhu, Wenbin, Shaofeng Jia, and Aifeng Lv, 2014. Monitoring the Fluctuation of Lake Qinghai Using Multi-Source Remote Sensing Data. _Remote Sensing_, 6 (11), pp. 10457-10482. [https://doi.org/10.3390/rs61110457](https://doi.org/10.3390/rs61110457). * Zhu and Woodcock (2014) Zhu, Zhe, and Curtis E. Woodcock, 2014. Continuous Change Detection and Classification of Land Cover Using All Available Landsat Data. _Remote Sensing of Environment_, 144, pp. 152-171. [https://doi.org/10.1016/j.rse.2014.01.011](https://doi.org/10.1016/j.rse.2014.01.011).
Urban lakes are important natural, scenic and pattern attractions of the city, and they are potential development resources as well. However, lots of urban lakes in China have been shrunk significantly or disappeared due to rapid urbanization. In this study, four Landsat images were used to perform a case study for lake change detection in downtown Wuhan, China, which were acquired in 1991, 2002, 2011 and 2017, respectively. Modified NDWI (MNDWI) was adopted to extract water bodies of urban areas from all these images, and OTSU was used to optimize the threshold selection. Furthermore, the variation of lake shrinkage was analysed in detail according to SVM classification and post-classification comparison, and the coverage of urban lakes in central area of Wuhan has decreased by 47.37 km\\({}^{2}\\) between 1991 and 2017. The experimental results revealed that there were significant changes in the surface area of urban lakes over the 27 years, and it also indicated that rapid urbanization has a strong impact on the losses of urban water resources. + Footnote †: Corresponding author: Xiaohan Kong [email protected]
ProWis: A Visual Approach for Building, Managing, and Analyzing Weather Simulation Ensembles at Runtime Carolina Veiga Ferreira de Souza, Suzanna Maria Bonnet, Daniel de Oliveira, Marcio Cataldi, Fabio Miranda, and Marcos Lage. de Souza is with the Universidade Federal Fluminense and the University of Illinois. Bonnet is with the Universidade Federal do Rio de Janeiro. de Oliveira, Cataldi, and Lage are with the Universidade Federal Fluminense. Fabio Miranda is with the University of Illinois Chicago. ## 1 Introduction Weather conditions significantly impact agriculture, transportation, public safety (_e.g._, during extreme climate events), and many other critical areas, making forecasting indispensable for designing operational strategies, enabling weather-resilient services, helping decision-making, and defining public policies [16, 34]. Although accurate weather forecasting is crucial, it is still a challenging and active research topic. To perform weather predictions, climate specialists must consider an enormous amount of data (_e.g._, from satellites, weather stations), construct ensembles of numerical simulations, and rely on past experience. Ensembles of numerical weather simulations are computed using mathematical models that describe atmospheric behavior through equations based on physical laws [31]. These models depend on initial conditions, terrain characterization, parameterization of physical processes, and discretization strategies. Weather experts must go through dependent and complex steps to run a simulation. These steps and their data dependencies resemble large-scale scientific workflows [8]. First, meteorologists must define the simulation domain considering terrain characteristics, land-use information, and spatial discretization aspects. Then, they configure the time horizon and discretization to be used during the simulation. Based on the spatiotemporal setup, the user must set the initial and boundary atmospheric conditions. Finally, previous steps are considered to specify the dynamic/thermodynamic behaviors and the micro-scale processes. Usually, setting up these steps requires editing large text files to define several parameters. Also, to run each step, the weather expert must execute various commands in a terminal environment. Orchestrating the setup and running of these steps can become laborious and error-prone if manually performed. For example, the weather forecasting community has widely adopted an important class of models, known as limited-area models, since they allow running simulations at higher resolutions than global models [7]. However, configuring the simulation domain demands reconciling technical nuances such as the selection of projection methods, the verification of nested grid coherence, _etc._ Even though some open and commercial software provide tools [17] to facilitate the configuration and execution of a simulation, they are still complex to use, especially when the researcher's goal is to build an ensemble of simulations considering multiple configurations.
In this scenario, members of an ensemble may use shared configurations and steps, which opens opportunities for developing solutions that take better advantage of the available computational resources. However, running a single weather forecast is already time-consuming, let alone building and managing simulation ensembles. The ability to orchestrate multiple runs simultaneously, easily start new simulations, inspect partial forecast results at runtime, and cancel unpromising runs may enable faster and more precise analyses. Moreover, analyzing the output of weather simulations requires the inspection of multiple variables (rain volume, temperature, _etc._) over space and time. This task becomes more arduous when a simulation ensemble is considered, since ensembles allow the study of uncertainties in the outputs. To do so, it is necessary to compute statistics and probabilistic information of groups of ensemble members on demand, which is also challenging to do manually. Given the aforementioned challenges involved in building, managing, and analyzing ensembles of weather simulations, new approaches to make the work of weather forecasters less manual, time-consuming, and error-prone may significantly impact their workflow. Also, new approaches can potentially increase meteorologists' ability to interpret models, calculate risks, and identify relevant weather scenarios. In this work, we studied the analysis processes of weather experts and how to combine knowledge from meteorology, atmospheric modeling, visualization, and provenance to help make weather predictions more efficient and reliable. From that, we built ProWis, a visual analytics system to assist weather professionals in working with the Weather Research and Forecasting model (WRF) [13]. Our contributions are: * A collection of visual elements that allow specialists to easily configure and run WRF simulations by setting parameters such as time horizon, simulation domain, initial/boundary conditions, and micro-scale physical phenomena representation approaches. * The use of workflows to facilitate the construction of simulation ensembles. Such workflows are managed by a Workflow Management System (WMS), which captures and stores provenance data (_i.e._, the data derivation process [14]) to allow for integrated analysis and reproducibility of simulation results. * A set of interactive visualizations that enable the exploration of multiple atmospheric variables and the investigation of weather scenarios at runtime. The visualizations can adapt to represent single or multiple ensemble members. * Two case studies performed in collaboration with weather experts demonstrating the effectiveness of ProWis. In summary, this paper presents ProWis, a web-based system that allows the setup, execution, management, tracking, and exploration of simulation ensembles through a visual, interactive, and integrated evaluation of the multivariate and spatiotemporal output data at runtime. ## 1 Related work This work contributes to several aspects of the weather forecasting process: setup of WRF simulations, orchestration of runs, monitoring of simulations at runtime, and visualization of simulation ensembles. To the best of our knowledge, the primary tool to aid WRF simulations is the so-called WRF Domain Wizard [17], a graphical interface that assists in the definition of any WRF parameter and can run individual WRF simulations.
However, since it is a general-purpose tool, it is still a laborious and complex task to build and run multiple simulations (a common scenario in the daily duties of weather professionals) using the tool. For example, researchers working in specific regions, _e.g._, Brazil, rarely need to change the geographic projection method used by the model. More generally, it is reasonable to expect some model settings to be preserved throughout several runs, assuming default values. These usage features are not present in the WRF Domain Wizard, which requires the definition of all simulation parameters at every new simulation setup. In this sense, user-friendly WRF configuration tools that facilitate the study of scenarios and enable the management, monitoring, and analysis of multiple simulations at runtime can help experts save time and not expose the simulations to human-made errors. Previous commercial or academic tools do not support these features. Atmospheric sciences are closely related to visualization since the latter effectively bridges the gap between simulation data and knowledge about climate conditions. Rautenhaus _et al._[21] reported the need to bring together specific concepts of meteorology and state-of-the-art visualizations to enhance the analytical skills of weather professionals. However, one of the challenges visualization researchers face is that experts in this domain resist adopting new interactive visualizations. Visualization researchers must be aware of the demands and fears of domain experts so they can focus their efforts on increasing the acceptance of visualization tools that enable superior analysis capabilities. Most tools commonly used in meteorological data exploration are command-line-based pieces of software that implement functions for data import, statistical analysis, and image generation (_e.g._, Ferret [19], GrADS [4], GMT [25]). Recently, Nikfal [18] proposed a tool called PostWRT aimed at helping WRF users visually handle and explore outputs. Even though PostWRT facilitates the work of some experts, it can be considered a technical tool since it requires programming skills. **Ensemble visualization.** Several works contributed to the visualization of weather simulation ensembles. Potter _et al._[20] developed Ensemble-Vis, a visualization system for analyzing climate ensembles considering different initial conditions. It was one of the pioneers in showing the utility of interactive and linked structures for exploring the average behavior of multiple atmospheric fields in space and time. Sanyal _et al._[24] proposed a tool that exploits the advantages of visual structures such as glyphs and ribbons for uncertainty studies of ensembles. Cox _et al._[6] presented an alternative visualization technique to support the forecast of hurricanes. They primarily used isocontours, which appear and disappear dynamically as the risk varies. Waser _et al._[32] presented a data-flow-based approach for studying uncertainty and constructing simulation ensembles. Diehl _et al._[10] created an interactive visual dashboard to help experts that use WRF to study its outputs. It uses small maps strategically placed on timelines to provide an overview of model outputs and enable the identification of visual patterns. Rautenhaus _et al._[22] proposed an open-source application that provides statistical and probabilistic ensemble analysis based on 2D and 3D visualizations. Biswas _et al._[2] proposed an interactive visual system that allows the analysis of multiple ensembles. 
Wang _et al._[30] developed a visual strategy to explore correlations among simulation parameters. Watanabe _et al._[33] proposed an angle-based parallel coordinate graph for exploring large sets of simulations. Souza _et al._[9] developed a visual system that enables the analysis of large ensembles of extreme event simulations.

**Provenance-aided visualization.** Previous research achieved positive results by combining data visualization and provenance capabilities, including the visualization of weather and climate data [1, 35, 23]. Callahan _et al._[3] proposed VisTrails, a workflow management system that orchestrates the execution of multiple tasks and captures provenance data to support the exploration and comparison of data, simulations, visualizations, _etc._ Santos _et al._[23] used provenance features from VisTrails to support the visual exploration of climate data. Stitz _et al._[27] combined data visualization and provenance control to record interactions, restore previously accessed views, and find similar analyses. Gratzl _et al._[12] introduced a model that combines data exploration and presentation through visual stories generated by the user's analysis history and interactivity. Xu _et al._[36] created a system to assist team analysis of complex data exploration. Behrens _et al._[1] presented a system aimed at managing, creating, and maintaining sets of simulations of coupled models. Their work used provenance control to prevent repetitive steps throughout the analyses. Although some previous research related to our work exists, none addresses our main contribution: the proposal of a visual and user-friendly system designed to help weather experts set up, execute, manage, track, and visually explore simulation ensembles at runtime. As shown in the case studies (Section 7), ProWis allows professionals to perform detailed analyses that would be difficult to achieve without the tool.

## 3 Background

**Numerical models.** A Numerical Weather Model (NWM) can approximate the solution of complex mathematical equations that describe physical behaviors such as thermodynamic laws, Newton's second law, and the continuity equation, which can depict the atmosphere's state at a particular time given the initial and boundary conditions. To do so, the area of interest must be represented as a three-dimensional grid used to discretize the equations. The grid's resolution influences the quality of the solution, but using high resolutions may be computationally unfeasible if considering large areas. Limited-area NWMs (NWMs considering restricted geographic areas, small-scale phenomena, and short periods, such as days or hours) are widely adopted to overcome this limitation. The initial conditions needed to solve limited-area NWMs comprise samples of the atmospheric state, measured _in situ_ by meteorological instruments and extracted from satellite observations. Also, boundary conditions are obtained from global NWMs, _i.e._, NWMs that consider the entire globe as the simulation domain but computed using coarser grids. They are constantly updated by global organizations such as the National Centers for Environmental Prediction 1 and the European Centre for Medium-Range Weather Forecasts 2. When the spatial resolution used to generate the initial and boundary conditions is much coarser than the resolution of interest in the limited-area NWM, using nested domains to smooth the data is common.
Suppose the global NWM utilizes a resolution of 25 \\(km\\), and the expert wants to build a grid of 2 \\(km\\) in the limited-area NWM. In that case, it is possible to set up nested grids of 18 \\(km\\), 6 \\(km\\), and 2 \\(km\\) and solve the model at multiple resolutions, using the solution of the coarser resolutions as the initial/boundary condition of the finer ones.

Footnote 1: [https://www.ncdc.noaa.gov/](https://www.ncdc.noaa.gov/)

Footnote 2: [https://www.ecmwf.int/](https://www.ecmwf.int/)

**Weather Research and Forecasting workflow.** The WRF model [26] is a numerical weather model widely adopted for research and operational purposes by several areas that depend on the weather, such as atmospheric chemistry [13], hydrological modeling [11], and wildland fires [5]. WRF supports limited-area NWM simulations and provides a free and open-source implementation that can be integrated into other platforms. To run a WRF simulation, a well-defined workflow must be executed. This workflow, depicted in Figure 2, can be decomposed into two connected sub-workflows named WRF Preprocessing System (WPS) and Processing (PRC). Each sub-workflow is, in turn, composed of multiple steps and their data dependencies. Running the complete workflow requires editing several configuration files, downloading initial and boundary conditions (ICBC) data, and executing multiple WPS and PRC programs using the command line. Most of the WPS and PRC configuration files' content is organized by columns, each representing one nested domain. Using these files, users can configure different simulations on each nested domain. The WPS sub-workflow consists of three steps, each one associated with the execution of a program: (a) _Geogrid_ defines the domain(s) discretization and interpolates static data, such as topography and land use categories, over the domain; (b) _Ungrib_ processes the ICBC data; (c) _Metgrid_ receives the Geogrid/Ungrib outputs, and horizontally interpolates the meteorological information over the domain(s). The PRC sub-workflow is composed of two steps: (a) the _Real_ program receives the Metgrid's outputs and vertically interpolates meteorological fields, and (b) the _WRF_ implementation uses the output of the Real program to start a simulation. The simulation execution time depends on the number of domains, the spatiotemporal discretization, and the computational environment, among other variables. The final simulation output is a collection of files in the NetCDF [29] format. Each file corresponds to the simulation output obtained using one domain. Three NetCDF output files will be produced if three nested domains are defined. Each file stores an \\(n_{x}\\times n_{y}\\times n_{t}\\) tensor for each atmospheric field (such as temperature and precipitation) and vertical level if applicable, where \\(n_{x}\\times n_{y}\\) is the spatial grid and \\(n_{t}\\) the temporal discretization. The entries of the tensor are the simulated values in each grid cell.

**Parameterizations and ensembles.** Grid resolutions typically used in NWMs cannot capture micro-scale physical processes essential to increase weather simulation quality. For this reason, scientists have created physical parameterizations, _i.e._, statistical methods and algorithms aimed at mimicking these effects in NWMs. Since the parameterizations are approximations of real physical processes, several approaches are proposed in the literature to represent each physical process. The choice of parameterizations can drastically affect the simulation results.
Usually, the choice of parameterizations relies on the expert's experience. In fact, the selection of parameterizations, the spatiotemporal discretization, and the ICBC data generation methodology are examples of sources of uncertainty in an NWM. A common approach adopted by experts to reduce their impact is constructing simulation ensembles by fixing the geographical region and the temporal horizon but varying other configuration parameters, such as the spatiotemporal discretization, the ICBC data, and the physical parameterizations [21]. Although the outputs of a single simulation are deterministic, building an ensemble of simulations enables probabilistic analytical approaches. Also, building simulation ensembles enables the comparison of different results, which is essential in operational weather forecasting [9].

Figure 2: WPS and PRC sub-workflows. Geogrid, Ungrib, and Metgrid process terrain, ICBC, and meteorological data. Real and WRF consume the WPS output and perform the simulation.

## 4 Challenges

Working with numerical weather ensembles, from setting up a single run to analyzing multiple simulations in an integrated way, is challenging. With WRF, starting a run is accomplished by directly editing various text files that store several parameters, opening up many setup possibilities. Configuring a simulation run may become problematic even when most parameters are set to default values. First, to delimit nested simulation domains, the user must set non-trivial parameters such as the size, position, and number of cells in each grid used by the model. WRF may fail to find a valid atmospheric state solution if the parameters are not precisely defined. Usually, weather professionals set these parameters through a trial-and-error approach or using tools such as the WRF Domain Wizard. Although helpful, the WRF Domain Wizard can be difficult to use since it is designed to be a general-purpose WRF grid configuration tool. Second, several parameters must be defined multiple times (in different text files), which is tedious and error-prone. Although it is important to make WRF a general model, in most real-world scenarios the excess of degrees of freedom is under-utilized. A viable workaround to these difficulties is using scripts to automate the WRF setup phases: edit the configuration files, download ICBC data, and execute the WPS and PRC modules. Systematizing these processes also avoids manually running command-line tools, which may be challenging for weather experts. After starting a simulation run, experts must wait until its execution is complete to analyze the produced results. However, a single run can take hours or days to complete, depending on the simulation parameters and the available computational environment. Analyzing results _in situ_ could save time, especially for computing-intensive simulations. Nevertheless, it is not recommended to manually access intermediate simulation files during a run since it can interrupt the model and force it to start over. Also, a single run generates a large amount of multivariate spatiotemporal data outputs that experts usually analyze using standard tools such as GrADS [4], NCL [28], or Vapor [15]. These tools allow the visualization of atmospheric variables in two-dimensional maps saved as static images. It is essential to notice that the workflow associated with a simulation is both computing- and data-intensive.
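Such scripted post-processing is straightforward with the Python stack mentioned later in this paper. The minimal sketch below reads one hypothetical wrfout NetCDF file with the netCDF4 and wrf-python packages and derives the kind of per-step aggregates a dashboard could store; the file name, the field choices, and the hourly output interval are illustrative assumptions, not ProWis code.

```python
# Minimal sketch: extract atmospheric fields from a (hypothetical) WRF output
# file and compute simple aggregates, assuming netCDF4 and wrf-python are
# installed. The file path and variables below are placeholders.
import numpy as np
from netCDF4 import Dataset
from wrf import getvar, ALL_TIMES, to_np

# One NetCDF file is produced per nested domain, e.g. wrfout_d03_* for the
# finest grid; this path is illustrative.
ncfile = Dataset("wrfout_d03_2022-03-31_00:00:00")

# 2-m temperature for every output time step: shape (n_t, n_y, n_x).
t2 = to_np(getvar(ncfile, "T2", timeidx=ALL_TIMES))

# Total accumulated precipitation is conventionally RAINC + RAINNC
# (both are accumulated from the start of the simulation).
rain = to_np(getvar(ncfile, "RAINC", timeidx=ALL_TIMES)) + \
       to_np(getvar(ncfile, "RAINNC", timeidx=ALL_TIMES))

# Domain-wide statistics per time step, i.e. the kind of aggregate a
# dashboard could store instead of the full field.
t2_max_per_step = t2.max(axis=(1, 2))

# Rainfall accumulated over the last 24 steps, assuming hourly output.
rain_last_24h = rain[-1] - rain[-25] if rain.shape[0] > 24 else rain[-1]

print(t2_max_per_step.shape, float(rain_last_24h.max()))
```

Storing compact aggregates such as these, rather than the full fields, is what keeps later interactive queries cheap.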
Although some popular WRF analysis tools offer dynamic aggregation and interactive data exploration, they are limited to small-scale analyses. In this way, the user may still need to perform laborious tasks to identify patterns and extract information from simulation outputs, made worse when analyzing ensembles. It is common for users to run a series of simulations, build ensembles, and evaluate them. However, their construction must be careful since not all simulations have promising results, and bad runs may camouflage valuable information. To get around this issue, the user must run and evaluate one simulation at a time before adding them to an ensemble. This task becomes more arduous as the number of simulations increases. However, simulations commonly share parameters, and previous configurations may be used as a base for new ones. Also, waiting for multiple runs to finish and then analyzing the results may take a long time. During the analyses of the results, it is hard to identify patterns while being aware of parameter choices and the uncertainties they impose. In summary, creating and investigating simulation ensembles pose several challenges. Therefore, research aimed at facilitating these processes is relevant. We intend to simplify and optimize simulation ensembles' building, management, and analysis in runtime, employing provenance and visualization techniques. ## 5 Requirements In this work, we have engaged two weather experts with extensive forecasting experience. Through a series of meetings, we discussed with them the requirements a system should meet to make the use of the WRF models simpler and faster. Our main goal was to simplify their workflow, from building a single run to analyzing the results of simulation ensembles, without restricting their analysis capabilities. In this sense, we adopted some scope boundaries, which may be expanded in future work to adapt ProWis to other uses. We decided to consider limited-area simulation ensembles where members may differ by the chosen physical parameterizations (what is called physical ensembles) and/or by the source of ICBC data. These two setup possibilities are highly relevant since: (1) different parameterizations could lead to highly distinct predictions; (2) studying parameterizations is essential for operational weather forecasting since it requires a deep understanding of their impacts, especially in dealing with extreme weather events; (3) initial and boundary conditions that faithfully represent the atmosphere state are crucial. Different ICBC data are rarely identical, and small perturbations may drastically impact predictions. More precisely, ProWis must satisfy requirements: **[R1] Support simulation setup.** Enable easy definition of nested domains, start/end dates, ICBC, and parameterization schemes. Since many physical processes exist, our collaborator experts selected the most important for their work: cloud microphysics, cumulus, land surface, surface layer, and planetary boundary layer (PBL). **[R2] Automate simulation runs.** Manage inputs and outputs of the WRF workflow. The system should automatically start a simulation and allow users to stop or restart runs easily. In addition, if a distributed environment is available (_e.g._, a small cluster or a computing cloud), the workflow runs can be scheduled to different machines. 
**[R3] Manage provenance data.** Capture and store the data derivation path of each piece of data, as well as any other metadata associated with each workflow run (the user who started the run, the analysis project it belongs to, the execution time of each step of the workflow, the directory paths used to store the input and output files, _etc._). Such metadata allows users to track the history of each run.

**[R4] Optimize preprocessing.** Automatically decide when there is no need to execute a WRF step and use the output of previous runs (_i.e._, caching). We assume that the result of an activity of the workflow is stored correctly to be reused if and only if the activity consumes the same parameter values as the previously executed one. For example, suppose an expert wants to perform a new simulation using the same domain configuration as a previous run but considering a different period. In that case, they should not need to rerun the Geogrid step.

**[R5] Automate the extraction and storage of atmospheric fields.** Filter, organize, and store relevant atmospheric field data from the output files of an in-progress simulation without interrupting the run or damaging the files. The extracted values should be stored in the same database as provenance data, allowing for an integrated analysis. WRF provides hundreds of output variables. The important ones depend on each application. Our collaborators selected seven atmospheric fields related to the forecast of extreme rainfall events: precipitation, temperature 2 meters above the surface, divergence at the vertical level of 300 hPa (\\(\\sim\\) 10 km above sea level), upward vertical wind at 500 hPa (\\(\\sim\\) 5.5 km above sea level), convergence at 850 hPa (\\(\\sim\\) 1.5 km above sea level), k-index (an indicator of atmospheric instability), and relative humidity at 850 hPa.

**[R6] Support the visual analysis of ongoing simulations.** Allow the interactive visual investigation of the results of in-progress simulations. This requirement would allow the expert to detect patterns early and decide whether it is worth keeping or stopping a run.

**[R7] Support the creation of ensembles.** Allow users to seamlessly create as many simulations as necessary and organize them into ensembles according to their technical needs. For example, if several simulations use the same domain configuration but different ICBC sources, the user should be able to choose to create one ensemble with all runs or multiple ensembles, one for each ICBC source.

**[R8] Support the visual analysis of simulation ensembles.** Visually examine several atmospheric fields in space and time to identify regions and periods associated with relevant patterns and/or target weather scenarios.

## 6 ProWis system

ProWis is a web-based visualization system designed to meet the requirements defined by our collaborators. Using a visual and user-friendly approach, the system enables the creation and exploration of weather simulation ensembles based on the WRF model. The visual interface includes a set of interactive visual structures to configure (**R1**), run (**R2**, **R3**), and analyze (**R6**) individual simulations. Also, it provides visual elements to facilitate the construction (**R7**) and analysis of ensembles (**R8**). The system also has a backend containing a provenance and metadata database and a workflow management system.
The Server Core automatically extracts and stores atmospheric field data (**R5**) and manages metadata, input/output workflow files, and WRF runs (**R2**, **R3**). Using ProWis, the user can configure and investigate weather simulations by simply using a web browser, without complex command-line applications. Figure 3 shows the system's components.

Figure 3: Overview of the ProWis components.

### 6.1 Backend

The backend dynamically manages the WPS and PRC sub-workflows. It records, stores, and loads relevant atmospheric field results, and computes statistical and probabilistic aggregations. It contains the provenance and metadata database deployed on the column-oriented MonetDB database system. MonetDB was chosen because it performs well with the query types required by ProWis (_i.e._, it took on average less than 1s to handle queries from the interface in our case studies). It also offers advanced features such as clustering, data partitioning, and distributed query processing that can be explored. The backend also has an embedded workflow system (Apache Airflow) to manage the execution of the WRF workflow. The Server Core directly connects to the interface using WebSockets, _i.e._, it can notify the interface when new data is processed, updating the visualizations. The Server Core was built using Python, primarily relying on the libraries Flask-SocketIO, Pymonetdb, NetCDF4, and WRF-Python.

**Provenance and metadata database.** The provenance and metadata database organizes data in a structured and queryable way. It allows storing information regarding several projects, which can be associated with multiple simulations (**R3**). The parameters and input/output data are registered in each simulation. Similarly, the metadata associated with the execution of each workflow program (start/end time, errors, _etc._) is stored. This database is a rich source of information that can be used to optimize the setup of simulations or to provide analytical capabilities to the experts. Using a database has also made data loading faster since it can handle aggregation or filter queries directly. This strategy is more efficient than loading the entire simulation output to memory and extracting the relevant information afterwards.

**Automatic storage.** The Server Core module checks for new files produced by the WRF simulations every minute. When new output files are detected, it stores aggregated information, together with the files' content, on MonetDB (**R5**), enabling the analysis of simulation results at runtime (**R6**). The Server Core computes aggregations, _i.e._, spatiotemporal statistics (minimum, maximum, and average), considering the entire domain and predefined intervals (1 hour, 3 hours, 24 hours, and the complete simulation period). Data are stored (and can be retrieved) with information regarding the nested grid, atmospheric field, grid point, and time.

**Dynamic data aggregation.** ProWis handles dynamic aggregation queries. For example, assume that a user has created an ensemble with five runs and selected three atmospheric fields to analyze in the spatial dimension in a custom period. In this case, given an aggregation function (minimum, maximum, average, and probability), the Server Core computes, at each grid point, the aggregation of each field considering the five runs. If the user modifies the ensemble (_e.g._, adding a member), the backend dynamically updates the aggregations.
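As a rough illustration of this storage-plus-aggregation pattern, the sketch below creates a simplified per-grid-cell table in MonetDB with pymonetdb and answers one ensemble-maximum request. The schema, connection settings, and table/column names are illustrative assumptions, not ProWis' actual database design.

```python
# Sketch: store per-grid-cell field values in MonetDB and compute an on-demand
# ensemble aggregate with pymonetdb. Everything named here is illustrative.
import pymonetdb

conn = pymonetdb.connect(username="monetdb", password="monetdb",
                         hostname="localhost", database="prowis_demo")
cur = conn.cursor()

# One row per (run, field, grid point, time step). A real deployment would
# keep run/provenance metadata in separate, normalized tables.
cur.execute("""
    CREATE TABLE field_values (
        run_id     INTEGER,
        field_name VARCHAR(32),
        x          INTEGER,
        y          INTEGER,
        step_hour  INTEGER,
        val        DOUBLE
    )
""")
conn.commit()

def ensemble_max(run_ids, field_name, hour_from, hour_to):
    """Per-grid-point maximum of a field over selected runs and a time range,
    mimicking the dynamic aggregation described above. Values are inlined
    after basic sanitization; production code should use parameter binding."""
    ids = ",".join(str(int(r)) for r in run_ids)
    assert field_name.isidentifier()  # crude guard, sufficient for the sketch
    cur.execute(
        f"SELECT x, y, MAX(val) FROM field_values "
        f"WHERE run_id IN ({ids}) AND field_name = '{field_name}' "
        f"AND step_hour BETWEEN {int(hour_from)} AND {int(hour_to)} "
        f"GROUP BY x, y")
    return cur.fetchall()

rows = ensemble_max([1, 2, 3, 4, 5], "precipitation", 24, 36)
```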
**Run management.** We adopted the Apache Airflow workflow management system to orchestrate the multiple WRF runs. It allows users to model their workflows as directed acyclic graphs (DAGs), each gathering and organizing tasks to be executed sequentially and/or in parallel (**R2**, **R4**). Our collaborators' daily tasks involve the following: providing the WPS setup file, running Geogrid, downloading ICBC data, running Ungrib, running Metgrid, providing the PRC setup file, and running the Real and WRF programs. Considering all possible workflows that can be modeled based on the combinations of programs, we identified and modeled six DAGs, illustrated in Figure 4, that are available for execution within ProWis. The first five tasks in the figure are related to the WPS sub-workflow, while the last three are associated with PRC. ProWis automatically defines what DAG must be used based on the configurations of the current and previous runs: DAG1 is triggered to generate a new simulation from scratch. DAG2 is triggered when ICBC data is available, but the other tasks must be performed. DAG3 is executed when domain settings and intermediate files can be reused from a previous run. DAG4 works under the same assumptions as DAG3, but the ICBC data must be available. DAG5 can only be executed when Ungrib outputs can be reused from a previous run. Finally, DAG6 is used when no WPS program needs to be run, _e.g._, when the user has only changed the physical parameterizations of a previous run.

### 6.2 Visual exploration interface

ProWis' interface design and functionalities aim to meet weather experts' technical demands while adopting a user-friendly and visual approach. Most of its visual components rely on two-dimensional designs that professionals are familiar with, such as scatter plots, line graphs, and heatmaps. Nevertheless, we also incorporated visual representations that are less familiar to them, such as heat matrices and sunburst charts, since they provide expressive and concise visualizations of simulation outputs. As shown in Figure 3, the system interface comprises three interactive views: the Setup, the Simulation, and the Ensemble views. When a new analysis project is started, the user is directed to the Setup view to configure and run their first simulation. If a previously created project is loaded, the user can explore previous runs individually or as ensembles of simulations using the Simulation and Ensemble views.

#### 6.2.1 Setup view

The Setup view, illustrated in Figure 5 (a), was designed to facilitate the setup of a new WRF simulation from scratch (**R1**). It comprises a top menu, a domain map, a domain summary table, and a parameterization menu. Users can set the simulation domains' grid resolutions and central points using the top menu, define the start/end simulation time, and select the ICBC data source. The domain map and summary table allow the visual definition and verification of the domains' configuration. Finally, the parameterization menu allows the selection of the physical parameterizations considered in the simulation. The user must follow a simple and visual process to configure the simulation. Once the coarse grid resolution is set in the top menu (Figure 5 (b)), the user interactively defines a new domain by brushing on the map (Figure 5 (c)). The latitude and longitude of the grid's central point are linked to the top menu, allowing the user to fine-tune the domain location.
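To give a concrete flavor of the grid-compatibility rules the interface hides from the user (discussed in the next paragraph), the sketch below derives nest parameters in the spirit of namelist.wps conventions (parent_grid_ratio, i_parent_start, e_we, e_sn). The helper and its defaults are illustrative assumptions, not ProWis internals.

```python
# Sketch: derive compatible nested-domain parameters from a parent grid and a
# desired nest size, following common namelist.wps naming. Illustrative only.
def nest_parameters(parent_dx_m, ratio, approx_width_cells,
                    i_parent_start, j_parent_start):
    """Return nest spacing and dimensions compatible with the parent grid.

    WRF nesting conventionally requires nest dimensions of the form
    e_we = n * parent_grid_ratio + 1 (likewise for e_sn), so the brushed
    size is rounded to the nearest compatible value.
    """
    nest_dx_m = parent_dx_m / ratio
    n = max(1, round((approx_width_cells - 1) / ratio))
    e_we = e_sn = n * ratio + 1
    return {
        "parent_grid_ratio": ratio,
        "i_parent_start": i_parent_start,
        "j_parent_start": j_parent_start,
        "e_we": e_we,
        "e_sn": e_sn,
        "dx": nest_dx_m,
        "dy": nest_dx_m,
    }

# Example: an 18 km parent grid with a 3:1 nest roughly 100 cells wide.
print(nest_parameters(18000, 3, 100, i_parent_start=30, j_parent_start=25))
# -> e_we = e_sn = 100, dx = dy = 6000.0
```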
We notice that, by using the interface, users do not need to manually make the parameters of all nested grids compatible since the system automates these tasks. When a new domain is defined, an overview of its settings (the parent domain's identifier, resolution, latitude and longitude of the central point, and the number of points) is shown in the domain summary table (Figure 5 (d)). The user can delete a previously created domain through the summary table by clicking the trash icon. After completing the setup, the user can run the simulation through the top menu. As described in Section 6.1, the execution and management of the running simulation are automated by the backend (**R2**). After the run starts, the Simulation view is loaded.

Figure 4: ProWis’s DAGs and tasks.

#### 6.2.2 Simulation view

The Simulation view, shown in Figure 6 (a), provides individual simulation visual analysis capabilities. Its design went through several refinements based on our interactions with weather experts. Its final composition contains a top menu and three panels: the runs overview, the temporal, and the spatial analysis panels. The top menu allows the user to configure the currently selected run, the grid considered by the visualizations, the function (average, maximum value) used to compute atmospheric field aggregations over time, and up to three atmospheric variables of interest to be jointly analyzed in the spatial dimension. The runs overview panel was designed to enable the monitoring of active simulations and the creation of new simulations that share configuration parameters with previous runs. It is composed of a graph and a scatter plot. The graph represents the parent-child relationships among runs. Each node of the graph represents a different simulation run. The graph's root is constructed using the Setup view. The setup data of any node can be reused to create a child node representing a simulation that shares setup parameters with its parent (**R4**). For example, in Figure 6 (b), runs 2 and 3 were based on run 1. Similarly, run 3's parameters were the basis for the construction of run 5; run 2 was used as the starting point of the setup of run 4, _etc._ The idea is that the structure of the provenance database mirrors the structure of this graph. Each node of the graph supports mouse-click and mouse-over interactions. As shown in Figure 6 (b), when the mouse is over a node, its primary information (name, status, simulation start/end date, ICBC metadata, physical parameterizations, _etc._) is displayed. If a node is clicked, the user can choose to _Analyze_ the simulation in the Simulation view, even when it is incomplete (**R6**); create a _New child_ reusing its setup parameters (**R1**, **R4**); _Restart_ or _Abort_ a simulation that presented failures or has unpromising results (**R4**); and _Delete_ a simulation and its graph node (**R7**). It is also possible to _Add_ or _Remove_ the simulation from an ensemble of simulations (**R7**). The scatter plot, illustrated in Figure 6 (c) and (d), shows how similar the performed simulations are considering the predicted precipitation volume. Each circle in the chart represents a simulation; the closer the dots, the more similar their precipitation volumes. To construct the scatter plot, we computed feature vectors representing each simulation. The vectors contain the statistics of the predicted rainfall volumes (maximum, average, and standard deviation) considering time intervals of 3h, 24h, and the entire simulation period.
So, each simulation is represented by a 9-dimensional feature vector. Then, we applied the principal component analysis (PCA) method to project the feature vectors. We decided to use PCA since the experts were already familiar with the method, but any other projection technique could be used. This visualization helps the user relate the precipitation simulation output to its setup parameters. This relation is important since the user can make sense of the parameters' sensitivity and use this knowledge as a guide to create simulation ensembles (**R7**). The user can color the graph nodes and scatter plot dots by the run's status (_i.e._ success, running, or failed - Figure 6 (b)), the parameterizations (Figure 6 (c)), the ICBC data source (Figure 6 (d)), and the containing ensemble's id. The Simulation view implements the temporal analysis panel to enable the time-based analysis of simulations. The panel is composed of a collection of line charts (middle column in Figure 6 (a)) and a sunburst chart (bottom left chart in Figure 6 (a)). The line charts show the simulated value of different atmospheric fields at each time step. Weather experts widely use this type of visual representation, especially to study accumulated rainfall, because it allows them to quickly identify when rainfall events may happen and their magnitude. The sunburst chart is a collection of concentric circles representing different time aggregations and is colored based on the accumulated precipitation in each period. The outer cells represent one-hour intervals. The second group of cells represents three-hour intervals. Following this, the third layer represents 24-hour intervals, and the central cell represents the entire time horizon. Although weather experts are not familiar with this visualization, the extra information about accumulated values is fundamental to interpreting and identifying high-volume precipitation events. In Figure 7 (a), the cell identified as _72h (24h)_ represents the rainfall accumulated between the 48h and the 72h simulation steps, _i.e._ the rainfall accumulated in the 24 hours immediately before the 72h time step. In the example, the time horizon has 72 hours. Because of that, the sunburst has 72 one-hour cells in the first layer, eight 3-hour cells in the second layer, three 24-hour cells in the third layer, and one 72-hour cell in the center. The user can interact with the chart by clicking and brushing the cells. Doing so, the spatial rainfall distribution shown in the spatial analysis panel is updated to reflect the selected period (see Figure 7 (b)). Finally, the spatial analysis panel allows the analysis of the distribution of atmospheric fields. It contains three maps for the visualization of the distributions of different fields at a given instant (or the accumulated field distribution over a time interval, in the case of precipitation volumes). We use the marching squares algorithm to generate the contours, primarily because of its efficiency. The user can control the time reference used to build the maps using the top menu and, in the case of the rainfall field, using the sunburst. The active period is indicated in the line charts by a vertical line. In the case of accumulated rain volumes over a time range, a rectangle is used to indicate the selected time interval. If the user wants to investigate a specific domain point, they can click on a map to select it. In doing so, the temporal analysis panel is updated only to show data associated with the selection. 
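A minimal sketch of the projection behind the runs overview scatter plot described above: each run is reduced to a small rainfall feature vector and projected to 2-D with scikit-learn's PCA. The window handling, the use of domain-maximum accumulations, and the toy data are simplified assumptions rather than the exact statistics used by the system.

```python
# Sketch of the scatter-plot projection: summarise each run's rainfall by a
# 9-dimensional feature vector and project the vectors with PCA. Uses NumPy
# and scikit-learn; window handling and toy data are simplified assumptions.
import numpy as np
from sklearn.decomposition import PCA

def rain_features(step_rain, windows=(3, 24, None)):
    """step_rain: per-hour rainfall field, shape (n_t, n_y, n_x).
    For each window length, accumulate rainfall over consecutive windows and
    take max/mean/std of the per-window domain maxima."""
    feats = []
    n_t = step_rain.shape[0]
    for w in windows:
        w = n_t if w is None else w
        acc = np.array([step_rain[i:i + w].sum(axis=0).max()
                        for i in range(0, n_t, w)])
        feats.extend([acc.max(), acc.mean(), acc.std()])
    return np.array(feats)  # 3 windows x 3 statistics = 9 features

rng = np.random.default_rng(0)
runs = [rng.gamma(2.0, 1.5, size=(72, 40, 40)) for _ in range(6)]  # toy data
X = np.stack([rain_features(r) for r in runs])

coords = PCA(n_components=2).fit_transform(X)  # one 2-D point per run
print(coords.round(2))
```

As the text notes, any other projection technique could be substituted for PCA without changing the rest of the pipeline.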
#### 6.2.3 Ensemble view

The Ensemble view, illustrated in Figure 8 (a), was designed to enable the temporal and spatial visual analysis of dynamically constructed ensembles, regardless of the number of members (**R7**, **R8**). Similar to the Simulation view, its final design was defined in conjunction with experts. It contains a top menu, a temporal, and a spatial analysis panel. The Ensemble view uses the same spatial visualizations provided in the Simulation view. The only difference from the maps in the Simulation view is that they aggregate values of multiple ensemble members. Experts can explore worst and average scenarios by changing the aggregation function. Using similar metaphors facilitates the experts' interpretation of results and increases their engagement with the system. The temporal analysis panel comprises a collection of heat matrices, one for each atmospheric field. Each row (y-axis) of a matrix represents an ensemble member, while each column (x-axis) is associated with a time step. The color of each cell is mapped to the aggregated value of the simulated atmospheric field over the domain (or a grid point selected using the map). Using these matrices was important to enable the visualization of multiple ensemble members simultaneously while still encoding the global ensemble patterns. The time information used to build the maps is represented in the heat matrices, as illustrated in Figure 8 (b, c). In the example, the time step 24h (highlighted in the last two matrices) is used to construct the maps associated with the temperature at 2 meters and divergence at 300 hPa variables. Using the precipitation volume matrix, the user can brush an interval to study accumulated rainfall values (top of Figure 8 (b)). The precipitation map shows the maximum volumes observed between 24h and 36h. We observe that different time references can be used to analyze rain volumes and other variables. This feature is vital since atmospheric fields are interpreted to investigate future precipitation.

Figure 5: (a) The Setup view lets users configure WRF simulations interactively. (b–d) Interactions used to set up domains.

In addition to using the statistical aggregation functions, the Ensemble view also allows the computation and visualization of the probability of observing scenarios of interest. For example, the user can use the matrices to visualize the likelihood of observing temperatures higher than a certain threshold. To do so, the expert must define the target values for each variable that characterizes the scenario of interest. Each matrix cell displays whether such a scenario occurred in at least one domain point. When computing rainfall probabilities, the user must define a value threshold and a period over which the precipitation accumulation should be considered (for example, investigating the likelihood of 100 mm occurring in one hour). This information is essential since 100 mm of rain in one hour is considered extreme, while 100 mm of rain spread over 7 days is not. The user can also define the period when the event may happen. For example, the expert may be interested in verifying the probability of a condition (_e.g._, a rainfall volume of 100 mm/h) within a specific time window (_e.g._, the first 24 hours of simulation). This analysis may be used to trigger weather alerts.

## 7 Case studies

Two weather experts conducted two case studies to validate the system.
They used a desktop with a Ryzen 7 3700X 3.6GHZ, 16GB, and GeForce GT 210 1GB. The first study refers to an extreme rainfall event in Marica, a city in the state of Rio de Janeiro, Brazil. The second analyzes a rainfall event in Sao Paulo, Brazil. ### Extreme rainfall event in Marica (2022) ProWis was initially used to evaluate WRF forecasts of an extreme rainfall event that happened on April 1, 2022, in Marica (white pin in Figure 1 (d)). This event was associated with a cold front that moved through the southeastern region of Brazil between March 31 and April 2, 2022. The greatest rain volumes were observed in the western area of Marica from 10PM on March 31 to 3AM on April 1 and in the city's central region between 12AM on April 1 and 3AM on April 2 (GMT). The weather stations recorded accumulated rainfall volumes between 88mm and 260mm at these moments, accounting for the previous 24 hours. These volumes characterize an extreme storm and several people lost their homes. The weather experts started the investigation by creating a new project on ProWis. Then, using the Setup view, they built their first simulation, in which the main configuration parameters are illustrated in Figure 1 (a). Using the top menu, they entered the start (03/31/2022 00:00 GMT) and end dates (04/03/2022 00:00 GMT), the coarser domain's spatial resolution (18,000 meters) and selected GFS as the source of the ICBC data. Afterwards, they defined three nested domains using the domain map. The experts checked the domain summary table for each demarcated domain to ensure that the grid's resolution and the position of their central points were adequately defined. Eventually, they preferred to type the latitude and longitude of the central point using the top menu to adjust its location precisely. Then, they selected one parameterization for each physical process they wanted to consider in the simulation and clicked the _Run WRF_ button to start it. They were automatically directed to the Simulation view and observed that the runs overview graph was updated with a new root node. As the simulation evolved, the experts evaluated the results and decided to keep the simulation running. If the simulation results were not promising, ProWis could be used to stop the simulation to save computational resources. After analyzing the outputs using the temporal analysis panel (configured to use the maximum aggregation function), the experts identified that some atmospheric variables indicated an extreme event between time steps 18h and 36h. As red boxes highlight in Figure 1 (b), the divergence at 300 hPa, vertical upward wind at 500 hPa, convergence at 850 hPa, and the precipitation itself were very high, especially in the finer grid resolution. To deepen the analysis, the weather experts selected several different time intervals using the sunburst chart (see Figure 1 (c)) to inspect the accumulated rain spatial distribution in the spatial analysis panel. During this process, they observed that although the model predicted high rainfall volumes, they were concentrated in other Rio de Janeiro areas. In fact, they noticed that the amount of rain predicted for Marica was not compatible with the rain volumes observed in the city during the event (see Figure 1 (d)). By selecting points in Marica on the map, the line charts/sunburst were automatically updated to show the associated data. The experts confirmed that the model could not predict the event's magnitude in the most affected areas throughout the time horizon. 
Despite that, the interpretation of other variables still suggested the possibility of heavy rainfall in the city. Aiming to better reproduce the event, the weather experts derived child runs from the root node using the runs overview graph. The new runs reused the base configuration of the first simulation but considered different physical parameterizations. A total of eight runs were created. Figure 1 (e) shows the final graph colored by the cloud microphysics parameterization choice. By inspecting the overview scatter plot, no particular pattern was identified in the runs' precipitation predictions considering their parameterizations (see Figure 1 (f)). Furthermore, very few differences between the runs were identified by individually exploring each run using the Simulation view. As the next step, the experts decided to compose an ensemble. Using the Ensemble view to evaluate the probability of observing an extreme event in Marica, the following threshold values were defined: K index of 27\\({}^{\\circ}\\)C, precipitation of 40 mm accumulated in 1h, divergence at 300 hPa of 30\\(\\times\\)10\\({}^{-5}\\)/s, upward vertical wind of 5.0 m/s, convergence at 850 hPa of 30\\(\\times\\)10\\({}^{-5}\\)/s, relative humidity of 100%, temperature at 2 m from the surface of 30\\({}^{\\circ}\\)C. Observing the heat matrices, the experts identified a possibility of rainfall volume of 40 mm/h between 18h and 36h (see Figure 1 (g)). Observing the spatial analysis panel, they verified that although the possibility was relevant in some areas of the domain, it was very subtle in Marica (see Figure 1 (j)). Similarly, the simulated K index values were greater than 27\\({}^{\\circ}\\)C from 0h to 27h (see Figure 1 (h)), but these values were mainly observed outside Marica (see Figure 1 (k)). Finally, relative humidity achieved 100% in some regions at time step 13h (see Figure 1 (i)). However, although the experts observed high probabilities of reaching the target value in most areas of the domain, in Marica, these values could only be observed at time step 30h (see Figure 1 (h)). Although the simulation results could not indicate the possibility of an extreme event in Marica, the experts could easily run and analyze several simulation scenarios without worrying about setup, data management, and visualization technical details that would be required if the same experiment were performed without ProWis. False-negative weather predictions are why atmospheric modeling remains an active research field.

Figure 6: (a) Simulation view. (b) Example of the runs overview graph (middle) from the São Paulo case study (Section 7.2). It shows six runs, five completed and one in progress. By hovering over node 2, the run’s information is shown. The user can interact with a run by clicking on its node. (c) Scatter plot from the same case study colored according to the ICBC source. Runs with the same ICBC generated similar precipitation results; (d) Scatter plot of the Marica case study (Section 7.1) colored by the cumulus physical process and showing no precipitation pattern.

Figure 7: The map updates if a time interval is set in the sunburst chart. Conversely, the sunburst chart is recomputed if a grid point is clicked.

### Rainfall event in Sao Paulo (2018)

In 2018, a frontal system moved across Sao Paulo, causing rain. In this case study, the weather experts used ProWis to test the WRF model's sensitivity to different ICBC, grid resolutions, and physical parameterizations.
The meteorologists performed six runs from 06/01/2018 00:00 GMT to 06/05/2018 00:00 GMT, a 96-hour interval (see Figure 6 (a)). They set up two domains, the first with a grid of 18 km and a nested one of 6 km (the same procedure described in Section 7.1). The six runs used three different parameterization combinations for the cloud microphysics, cumulus, and PBL physical processes (_i.e._ the first/second, third/fourth, and fifth/sixth runs shared the same parameterizations). Also, the odd-numbered runs used ECMWF as the ICBC data source, while the even-numbered ones used GFS. By coloring the points of the runs overview scatter plot according to the ICBC source, the experts identified that considering the coarser grid, simulations that generated similar results used the same ICBC data (Figure 6 (c)). In other words, it was possible to observe two clusters, one for each ICBC data source. Similar patterns were observed in the finer grid (Figure 6 (d)), but interestingly, runs 2 and 4 were closer than in the coarser grid, which shows that the grid resolution significantly affects rainfall predictions. Moreover, after a closer look at the clusters, the experts noticed that runs 1 and 5 were the closest ones in both cases (Figure 6 (c) and (d)), indicating that they have similar precipitation outputs. This finding was surprising since these runs use different parameterizations for all physical processes. In contrast, runs 3 and 5 used the same parameterizations for PBL and cloud microphysics, and runs 1 and 3 used the same for cumulus. The experts considered this finding interesting because it shows that the results are sensitive not necessarily to the parameterizations individually but to their combination. Overall, the analyses of each run in the Simulation view showed that the model predicted an underwhelming event regarding rainfall volume and storm cloud formation. The experts created three ensembles to strengthen the investigation of the ICBC data influence on the results. Each ensemble was composed of exactly two members configured using the same parameterizations and different ICBC (_i.e._ ensemble 1 contained runs 1 and 2; ensemble 2, runs 2 and 4; and ensemble 3, runs 5 and 6). The three ensembles showed similar behaviors. In general, ensembles using GFS ICBC data had rainfall forecasts between 41h and 54h on the coast and in the southern part (Figure 9 (a)). In comparison, the ones using ECMWF data produced rainfall forecasts between 48h and 60h, mainly on the coast of Sao Paulo (Figure 9 (b)). Next, the meteorologists created an ensemble containing all runs. By analyzing the maximum precipitation values, it was possible to visualize (in the spatial analysis panel) a worst-case scenario of heavy rain between 41h to 60h. However, the experts did not consider it an extreme event since the maximum values were up to 70 mm in 20 hours. These predictions would not generate a severe event alert in an operational forecast scenario. Regarding the other variables, the simulations using the GFS data produced higher values for divergence at 300 hPa, vertical upward wind at 500 hPa, and relative humidity at 850 hPa than the ones using ECMWF data. The results were similar for the other atmospheric fields. Following the analysis of rainfall volumes, the predicted values would not justify the declaration of attention stage by the city authorities. 
In fact, high values of relative humidity (_e.g._, 100%) until time 39h and a temperature at 2 m of 40\\({}^{\\circ}\\)C at some points of the region indicated the possibility of rain, but not a storm. Even considering all variables and the ensemble with all runs, there was no relevant probability of extreme event occurrence. Overall, the experts found the strategies for building and analyzing ensembles provided by ProWis very helpful since they could easily compare multiple scenarios and avoid misinterpretations of the simulation results. Fig. 8: (a) Ensemble view. (b) Examples of precipitation, temperature at 2m, and divergence at 300 hPa heat matrices (from top to bottom). The interval of 25h-30h (precipitation matrix) and step 24h (other matrices) are selected. (c) Accumulated rainfall, temperature at 2 m, and divergence at 300 hPa spatial distributions. ## 8 Experts' feedback According to the experts, ProWis combines attributes that support their work in many ways. First, they said that it provides excellent assistance in setting up a simulation, especially regarding domain delimitation and grid construction, which are complex tasks even for experienced users. They also agreed that the approach used for defining physical parameterizations, setting the simulation time horizon, and downloading the ICBC data is more straightforward than editing the WPS and PRC configuration files. Although manually editing files is not intellectually complicated (except for the construction of the grids), it can become a tricky and error-prone task. Moreover, they reinforced that by automating the step-by-step execution of a run, the system saves experts' time. This is especially true when an error occurs during WPS or PRC execution since the system interface facilitates the identification of configuration errors. They said these capabilities alone make ProWis a huge improvement for their daily workflow. Another good experience reported was the possibility of visualizing the outputs of an ongoing simulation since it allows the evaluation of the results without waiting for hours (or even days) to complete a simulation. Usually, the experts avoid touching incomplete output files to prevent file corruption. When the inspection is required, experts must be extremely careful during the inspection of partial results; otherwise, they can invalidate runs that have already consumed computational resources for an extended period. In this sense, this capability of ProWis represents a substantial contribution in their opinion. They looked favorably at reusing files and data from previous runs in a controlled, safe, and interactive way. They considered that advantageous due to the time it potentially saves when managing related runs, _e.g._, those using the same domains, parameterizations, or ICBC data. They have approved the automatic organization of input/output files and data by users and projects so several experts can use the system simultaneously. In addition, since ProWis keeps the original WRF files, they pointed out that it is possible to use them for other purposes besides the system, not restricting the experts' work. Regarding the analyses, the experts agreed that the available visualizations and interactions greatly favor the rapid exploration of a simulation in space and time. ProWis' organization and quick response to user requests facilitate the cognitive processing of the simulation results. 
In their opinion, the interface groups familiar visual structures, such as line graphs and heat maps, making the system user-friendly. They considered the sunburst chart a novelty and took some time to understand its usefulness. After they became familiar with the visualization, they said it helped to visualize the accumulated rainfall at different intervals. Another positive point of feedback was related to the dynamic creation of ensembles. They said it enriched their ability to explore ensembles with members selected based on different criteria. They also said that the heat matrices were unfamiliar. However, they enjoyed the experience of visualizing ensembles as a whole. The heat matrices coupled with the maps helped evaluate and compare different runs. The experts also appreciated the identification of custom-defined scenarios provided by the visualizations. Finally, the experts questioned the system overhead, since WRF simulations are already costly. In fact, computing the simulations dominates the execution time (207 minutes for the first use case and 27 minutes for the second one). The computation by the Server Core is 3 to 5 times faster (72 minutes for the first use case and 6 minutes for the second one). Since both tasks run in parallel, ProWis adds no overhead regarding running time. The experts contributed suggestions for the improvement of ProWis. Currently, the system allows users to select a grid point on the map, and experts consider the feature essential. However, they would like to be able to brush custom areas of the map. Another suggestion is to allow the user to freely define the atmospheric fields of interest. They commented that some professionals are used to inspecting specific variables, and their unavailability may limit the use of the system. We report that this functionality is straightforward to implement. The selection of particular fields was based on our collaborators' needs and was implemented to reduce the scope of our prototype implementation. In addition, they said it would be even more interesting to import WRF runs that were neither configured nor executed using ProWis, _i.e._, use the system to explore runs manually created and previously executed by the experts. This would make the system appealing to a larger audience.

## 9 Conclusion and future work

ProWis was designed to facilitate the setup, execution, management, inspection, and analysis of WRF runs and dynamically create ensembles, considering different ICBC data, physical parameterizations, and domain configurations. The system was constructed as a client-server web application. The backend comprises a MonetDB database, the Apache Airflow workflow system, the WRF model, and a server core. The database stores metadata regarding users, projects, and weather simulations and controls data provenance to enable future queries. The workflow system optimizes the modeling process. The server core connects those modules, automatically extracts and stores relevant atmospheric fields in the database, and organizes input/output files. Also, it responds to the interface's requests related to single and ensembles of simulations, which usually involve dynamic data aggregations. The system interface consists of three main views: one for setting up a run, one for exploring a simulation, and one for exploring an ensemble. The Setup view, and the entire process behind it, allows the user to save time and effort during the setup and execution of a simulation.
This approach facilitates the development of studies in meteorology because it takes the focus away from the model execution, which is laborious in itself, and allows the user to devote time to the analysis of the generated results. The Simulation and Ensemble views provide visual structures that help manage the runs, inspect their outputs, and even perform similarity analysis to identify patterns. Both views offer visualizations that allow temporal and spatial aggregations using statistical and probabilistic metrics. With ProWis, two case studies were performed considering rainfall events caused by cold fronts. During their realization, it was possible to inspect multiple simulation outputs effortlessly, even when the model was running. The experts could use the visualizations to analyze spatiotemporal patterns and compare the results of several simulations. The WRF results did not indicate the possibility of extreme events in the areas of interest. The experts used the system to argue that these simulations provided false negative results. Their analyses reinforce the need for studies to improve atmospheric modeling. Given the results and experts' feedback, ProWis met its primary purpose: to aid weather analysis through data visualization and provenance. In future work, we plan to extend the system so it can be used to configure any WRF simulation. We also plan to propose other visualizations and interactions that may take the visual exploration of the simulation ensemble a step further. Future versions of ProWis can also provide a specialized workflow scheduler component to execute the workflows in parallel. This mechanism can benefit from heterogeneous environments to speed up workflow execution. Also, the provenance data can be used for recommending ProWis configurations for novice users based on the previous runs configured by weather experts. Fig. 9: São Paulo case study: The maximum accumulated rainfall was between 41h and 54h, according to the ensemble members constructed with the GFS data (a), and between 48h and 60h, according to the ensemble members who used the ECMWF data (b). ## Acknowledgments We would like to thank the reviewers for their constructive comments and feedback. This study was partly funded by CNPq (316963/2021-6), FAPERJ (E-26/202.915/2019, E-26/211.134/2019), CAPES (Finance Code 001), and the University of Illinois' Discovery Partners Institute. ## References * Behrens et al. (2019) H. W. Behrens, K. S. Candan, X. Chen, A. Gadkari, Y. Garg, M.-L. Li, X. Li, S. Liu, N. Martinez, J. Mo, E. Nester, S. Poccia, M. Ravindranath, and M. L. Sapino. Datastrom-FE: A data- and decision-flow and coordination engine for coupled simulation ensembles. In _Proceedings of the VLDB Endowment_, vol. 11, pp. 1906-1909, 2018. doi: 10.14778/3229863.326221 * Biswas et al. (2017) A. Biswas, G. Lin, X. Liu, and H.-W. Shen. Visualization of time-varying weather ensembles across multiple resolutions. _IEEE Transactions on Visualization and Computer Graphics_, 23(1):841-850, 2017. doi: 10.1109/TVCG.2016.2598869 * Callahan et al. (2006) S. P. Callahan, J. Freire, E. Santos, C. E. Scheidegger, C. T. Silva, and H. T. Vo. VisTrails: Visualization meets data management. In _Proceedings of the ACM SIGMOD International Conference on Management of Data_, pp. 745-747. Association for Computing Machinery, New York, NY, USA, 2006. doi: 10.1145/1142473.1142574 * Center for Ocean-Land-Atmosphere Studies. Grid analysis and display system (GrADS). 
[http://opengrads.org/](http://opengrads.org/), 2021 (accessed June 17, 2023). * 38, 2013. doi: 10.1175/JAMC -D1-2023.1 * Cox et al. (2013) J. Cox, D. House, and M. Lindell. Visualizing uncertainty in predicted hurricane tracks. _International Journal for Uncertainty Quantification_, 3(2):143-156, 2013. doi: 10.1615/Int.J.UncertaintyQuantification.2012003966 * Davies (2014) T. Davies. Lateral boundary conditions for limited area models. _Quarterly Journal of the Royal Meteorological Society_, 140(678):185-196, 2014. doi: 10.1002/qj.2127 * de Oliveira et al. (2019) D. C. de Oliveira, J. Liu, and E. Pacitti. Data-intensive workflow management: for clouds and data-intensive and scalable computing environments. _Synthesis Lectures on Data Management_, 14(4):1-179, 2019. doi: 10.2200/S00915ED1V01Y201904DTM060 1 * de Souza et al. (2022) C. V. F. de Souza, P. C. L. Barcellos, L. Crissaff, M. Cataldi, F. Miranda, and M. Lage. Visualizing simulation ensembles of extreme weather events. _Computers & Graphics_, 104:162-172, 2022. doi: 10.1016/j.cag.2022.01.007 * Diehl et al. (2015) A. Diehl, L. Pelorsos, C. Delrieux, C. Saulo, J. Ruiz, M. E. Groller, and S. Bruckner. Visual analysis of spatio-temporal data: Applications in weather forecasting. _Computer Graphics Forum_, 34(3):381-390, 2015. doi: 10.1111/cgf.12650 * Gochis et al. (2020) D. J. Gochis, M. Barlage, R. Cabell, M. Casali, A. Dugger, K. FitzGerald, M. McAllister, J. McCreight, A. RafieeNasab, L. Read, D. Y. K. Sampson, and Y. Zhang. The WRF-Hydro modeling system technical description, version 5.2.0. [https://doi.org/10.5281/zenodo.4479912](https://doi.org/10.5281/zenodo.4479912), 2020 (accessed June 17, 2023). * Gratzl et al. (2015) S. Gratzl, A. Lex, N. Gehlenborg, N. Cosgrove, and M. Streit. From visual exploration to storytelling and back again. _Computer Graphics Forum_, 35(3):491-500, 2016. doi: 10.1111/cgf.12925 * Grell et al. (2005) G. A. Grell, S. E. Peckham, R. Schmitz, S. A. McKeen, G. Frost, W. C. Skamarock, and B. Eder. Fully coupled \"online\" chemistry within the WRF model. _Atmospheric Environment_, 39(37):6957-6975, 2005. doi: 10.1016/j.amsco.2005.04.027 * Koop et al. (2018) D. Koop, M. Mattosos, and J. Freire. Provenance in workflows. In L. Liu and M. T. Ozsu, eds., _Encyclopedia of Database Systems_, pp. 2912-2916. Springer New York, 2018. * Li et al. (2019) S. Li, S. Jaroszynski, S. Pearse, L. Off, and J. Clyne. VAPOR: A visualization package tailored to analyze simulation data in earth system science. _Atmosphere_, 10(9), 2019. doi: 10.3390/atmos10090488 * Mizatori and Guha-Sapirir (2019) M. Mizatori and D. Guha-Sapir. Human cost of disasters 2000-2019. Technical report, United Nations Office for Disaster Risk Reduction, 2020. [https://www.undrr.org/publication/human-cost-disasters-2009-2019](https://www.undrr.org/publication/human-cost-disasters-2009-2019). * National Oceanic and Atmospheric Administration (2013) National Oceanic and Atmospheric Administration. WRF Domain Wizard. [https://esrl.noaa.gov/gsd/wrfportal/DomainWizard.html](https://esrl.noaa.gov/gsd/wrfportal/DomainWizard.html), 2013 (accessed June 17, 2023). * Nikfal (2023) A. Nikfal. PostWRF: Interactive tools for the visualization of the WRF and ERA5 model outputs. _Environmental Modelling & Software_, 160:105591, 2023. doi: 10.1016/j.envsoft.2022.105591 * NOAA Pacific Marine Environmental Laboratory (2012) NOAA Pacific Marine Environmental Laboratory. Ferret. 
Weather forecasting is essential for decision-making and is usually performed using numerical modeling. Numerical weather models, in turn, are complex tools that require specialized training and laborious setup and are challenging even for weather experts. Moreover, weather simulations are data-intensive computations and may take hours to days to complete. When the simulation is finished, the experts face challenges analyzing its outputs, a large mass of spatiotemporal and multivariate data. From the simulation setup to the analysis of results, working with weather simulations involves several manual and error-prone steps. The complexity of the problem increases exponentially when the experts must deal with ensembles of simulations, a frequent task in their daily duties. To tackle these challenges, we propose ProWis: an interactive and provenance-oriented system to help weather experts build, manage, and analyze simulation ensembles at runtime. Our system follows a human-in-the-loop approach to enable the exploration of multiple atmospheric variables and weather scenarios. ProWis was built in close collaboration with weather experts, and we demonstrate its effectiveness by presenting two case studies of rainfall events in Brazil. Weather visualization, Ensemble visualization, Provenance management, WRF visual setup
arxiv-format/2006_13431v3.md
# Multiscale Simulations of Complex Systems by Learning their Effective Dynamics Pantelis R. Vlachas Computational Science and Engineering Laboratory, ETH Zurich, CH-8092, Switzerland School of Engineering and Applied Sciences, 29 Oxford Street, Harvard University, Cambridge, MA 02138, USA Institute for Data, Systems, and Society, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, USA Georgios Arampatzis Computational Science and Engineering Laboratory, ETH Zurich, CH-8092, Switzerland School of Engineering and Applied Sciences, 29 Oxford Street, Harvard University, Cambridge, MA 02138, USA Caroline Uhler Institute for Data, Systems, and Society, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, USA Petros Koumoutsakos [email protected] Computational Science and Engineering Laboratory, ETH Zurich, CH-8092, Switzerland School of Engineering and Applied Sciences, 29 Oxford Street, Harvard University, Cambridge, MA 02138, USA Institute for Data, Systems, and Society, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, USA November 3, 2021 ###### multiscale modeling, complex systems, equation-free, autoencoders Some of the most important scientific advances and engineering designs are founded on the study of complex systems that exhibit dynamics spanning multiple spatiotemporal scales. Examples include protein dynamics [1], morphogenesis [2], brain dynamics [3], climate [4], ocean dynamics [5] and social systems [6]. Over the last fifty years, simulations have become a key component of these studies thanks to a confluence of advances in computing architectures, numerical methods and software. Large scale simulations have led to unprecedented insight, acting as in-silico microscopes [7] or telescopes to reveal the dynamics of galaxy formations [8]. At the same time these simulations have led to the understanding that resolving the full range of spatio-temporal scales in such complex systems will remain out of reach in the foreseeable future. In recent years there have been intense efforts to develop efficient simulations that exploit the multiscale character of the systems under investigation [9; 10; 11; 12]. Multiscale methods rely on judicious approximations of the interactions between processes occurring over different scales and a number of potent frameworks have been proposed including the Equation-Free framework (EFF) [13; 14; 10; 12], the Heterogeneous Multiscale Method (HMM) [15; 16; 11], and the FLow AVeraged integatoR (FLAVOR) [17]. In these algorithms the system dynamics are distinguished into fine and coarse scales or expensive and affordable simulations, respectively. Their success depends on the separation of scales that are inherent to the system dynamics and their capability to capture the transfer of information between scales. Effective applications of multiscale methodologies minimize the computational effort while maximizing the accuracy of the propagated dynamics. The EFF relies on few fine scale simulations that are used to acquire, through \"restricting\", information about the evolution of the coarse-grained quantities of interest. In turn various time stepping procedures are used to propagate the coarse-grained dynamics. The fine scale dynamics are obtained by judiciously \"lifting\" the coarse scales to return to the fine scale description of the system and repeat. 
When the EFF reproduces trajectories of the original system, the identified low order dynamics represent the intrinsic system dynamics, also called effective dynamics, inertial manifold [18; 19] or reaction coordinates in molecular kinetics. While it is undisputed that the EFF, HMM, FLAVOR and related frameworks have revolutionized the field of multiscale modeling and simulation, we identify two critical issues that presently limit their potential. First, the accuracy of propagating the coarse-grained/latent dynamics hinges on the employed time integrators. Second, the choice of information transfer, in particular from coarse to fine scale dynamics in 'lifting', greatly affects the forecasting capacity of the methods. In the present work these two critical issues are resolved through machine learning (ML) algorithms that (i) deploy recurrent neural networks (RNNs) with gating mechanisms to evolve the coarse-grained dynamics and (ii) employ advanced (convolutional, or probabilistic) autoencoders (AE) to transfer in a systematic, data driven manner, the information between coarse and fine scale descriptions. Over the last years, ML algorithms have exploited the ample availability of data, and powerful computing architectures, to provide us with remarkable successes across scientific disciplines [20; 21]. The particular elements of our algorithms have been employed in the modeling of dynamical systems. Autoencoders (AEs) have been used to identify a linear latent space based on the Koopman framework [22], model high-dimensional fluid flows [23; 24] or sample effectively the state space in the kinetics of proteins [25]. More recently AEs have been coupled with dynamic importance sampling [26] to accelerate multiscale simulations and investigate the interactions of RAS proteins with a plasma membrane. RNNs with gating mechanisms have been shown successful in a wide range of applications, from speech processing [27] to complex systems [28], but their effectiveness in a multiscale setting has yet to be investigated. AEs coupled with RNNs are used in [29; 30; 31] to model fluid flows. In [32], the authors build on the EFF framework, identify a PDE on a coarse representation by diffusion maps, Gaussian processes or neural networks, and utilize forward integration in the coarse representation. These previous works, however, fail to employ one or more of the following mechanisms in contrast to our framework: consider the coarse scale dynamics [23; 24], account their non-Markovian [26; 32] or non-linear nature [22], exploit a probabilistic generative mapping [23; 29; 30; 31] from the coarse to the fine scale, learn simultaneously the latent space and its dynamics in an end-to-end fashion and not sequentially [22; 23; 26; 29; 30; 31; 32], alternate between micro and macro dynamics [22; 23; 29; 30; 31; 32], and scale to high-dimensional systems [29; 30; 32]. Augmenting multiscale frameworks (including EFF, HMM, FLAVOR) with state of the art ML algorithms allows for evolving the coarse scale dynamics by taking into account their time history and by providing consistent lifting (decoding) and restriction (encoding) operators to transfer information between fine and coarse scales. We demonstrate that the proposed framework allows for simulations of complex multiscale systems that reduce the computational cost by orders of magnitude, to capture spatiotemporal scales that would be impossible to resolve with existing computing resources. 
## I Learning the effective dynamics (LED) We propose a framework for learning the effective dynamics (LED) of complex systems that allows for accurate prediction of the system evolution at a significantly reduced computational cost. In the following, the high-dimensional state of a dynamical system is given by \\(\\mathbf{s}_{t}\\in\\mathbb{R}^{d_{\\mathbf{s}}}\\), and the discrete time dynamics are given by \\[\\mathbf{s}_{t+\\Delta t}=\\mathbf{F}(\\mathbf{s}_{t}),\\] where \\(\\Delta t\\) is the sampling period and \\(\\mathbf{F}\\) may be non-linear, deterministic or stochastic. We assume that the state of the system at time \\(t\\) can be described by a vector \\(\\mathbf{z}_{t}\\in\\mathcal{Z}\\), where \\(\\mathcal{Z}\\subset\\mathbb{R}^{d_{\\mathbf{z}}}\\) is a low-dimensional manifold with \\(d_{\\mathbf{z}}\\ll d_{\\mathbf{s}}\\). In order to identify this manifold, an encoder \\(\\mathcal{E}^{\\mathbf{w}_{\\mathcal{E}}}:\\mathbb{R}^{d_{\\mathbf{s}}}\\rightarrow\\mathbb{R}^{d_{\\mathbf{z}}}\\) is utilized, where \\(\\mathbf{w}_{\\mathcal{E}}\\) are trainable parameters, transforming the high-dimensional state \\(\\mathbf{s}_{t}\\) to \\(\\mathbf{z}_{t}=\\mathcal{E}^{\\mathbf{w}_{\\mathcal{E}}}(\\mathbf{s}_{t})\\). In turn, a decoder maps this latent representation back to the high-dimensional state, i.e. \\(\\mathbf{\\tilde{s}}_{t}=\\mathcal{D}^{\\mathbf{w}_{\\mathcal{D}}}(\\mathbf{z}_{t})\\). For deterministic systems, the optimal parameters \\(\\{\\mathbf{w}_{\\mathcal{E}}^{\\star},\\mathbf{w}_{\\mathcal{D}}^{\\star}\\}\\) are identified by minimizing the mean squared reconstruction error (MSE), \\[\\mathbf{w}_{\\mathcal{E}}^{\\star},\\mathbf{w}_{\\mathcal{D}}^{\\star}=\\operatorname*{argmin}_{\\mathbf{w}_{\\mathcal{E}},\\mathbf{w}_{\\mathcal{D}}}\\Bigl{(}\\mathbf{s}_{t}-\\mathbf{\\tilde{s}}_{t}\\Bigr{)}^{2}=\\operatorname*{argmin}_{\\mathbf{w}_{\\mathcal{E}},\\mathbf{w}_{\\mathcal{D}}}\\Bigl{(}\\mathbf{s}_{t}-\\mathcal{D}^{\\mathbf{w}_{\\mathcal{D}}}\\bigl{(}\\mathcal{E}^{\\mathbf{w}_{\\mathcal{E}}}(\\mathbf{s}_{t})\\bigr{)}\\Bigr{)}^{2}.\\] Convolutional neural network [33] autoencoders (CNN-AE) that take advantage of the spatial structure of the data are embedded into LED. For stochastic systems, \\(\\mathcal{D}^{\\mathbf{w}_{\\mathcal{D}}}\\) is modeled with a Mixture Density (MD) decoder [34]. Further details on the implementation of the MD decoder are provided in the SI, Section 1E, along with the other components embedded in LED: AEs (SI, Section 1A), Variational AEs (SI, Section 1B), and CNNs (SI, Section 1C). We demonstrate the modularity of LED, as it can be coupled with a permutation invariant layer (see details in the SI, Section 1D), utilized later in the modeling of the dynamics of a large set of particles governed by the advection-diffusion equation (see details in the SI, Section 3A). As a non-linear propagator in the low order manifold (coarse scale), an RNN is employed, capturing non-Markovian memory effects by keeping an internal memory state.
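Before describing the latent propagator, the encoder-decoder pair introduced above can be made concrete with a minimal PyTorch sketch trained with the MSE reconstruction objective. The layer sizes, activations, and function names are illustrative assumptions and do not reproduce the architectures reported in the SI; the authors' reference implementation is the one linked in the Data and Code Availability section.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Minimal fully connected encoder E^{w_E}: R^{d_s} -> R^{d_z}
    and decoder D^{w_D}: R^{d_z} -> R^{d_s}."""
    def __init__(self, d_s: int, d_z: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(d_s, hidden), nn.Tanh(),
            nn.Linear(hidden, d_z),
        )
        self.decoder = nn.Sequential(
            nn.Linear(d_z, hidden), nn.Tanh(),
            nn.Linear(hidden, d_s),
        )

    def forward(self, s):
        z = self.encoder(s)        # latent state z_t
        s_tilde = self.decoder(z)  # reconstruction of s_t
        return s_tilde, z

def reconstruction_loss(model: Autoencoder, s_batch: torch.Tensor) -> torch.Tensor:
    """Mean squared reconstruction error over a batch of high-dimensional states."""
    s_tilde, _ = model(s_batch)
    return torch.mean((s_batch - s_tilde) ** 2)
```

The CNN-AE and mixture-density variants discussed in the SI replace the linear layers with convolutional layers or with a head that outputs distribution parameters, respectively.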
The RNN is learning a forecasting rule \\[\\mathbf{h}_{t}=\\mathcal{H}^{\\mathbf{w}_{\\mathcal{H}}}\\bigl{(}\\mathbf{z}_{t},\\mathbf{h}_{t-\\Delta t}\\bigr{)},\\quad\\mathbf{\\tilde{z}}_{t+\\Delta t}=\\mathcal{R}^{\\mathbf{w}_{\\mathcal{R}}}\\bigl{(}\\mathbf{h}_{t}\\bigr{)},\\] where \\(\\mathbf{h}_{t}\\in\\mathbb{R}^{d_{h}}\\) is an internal hidden memory state, \\(\\mathbf{\\tilde{z}}_{t+\\Delta t}\\) is a latent state prediction, \\(\\mathcal{H}^{\\mathbf{w}_{\\mathcal{H}}}\\) and \\(\\mathcal{R}^{\\mathbf{w}_{\\mathcal{R}}}\\) are the hidden-to-hidden and hidden-to-output mappings, and \\(\\mathbf{w}_{\\mathcal{H}}\\), \\(\\mathbf{w}_{\\mathcal{R}}\\) are the trainable parameters of the RNN. One possible implementation of \\(\\mathcal{H}^{\\mathbf{w}_{\\mathcal{H}}}\\) and \\(\\mathcal{R}^{\\mathbf{w}_{\\mathcal{R}}}\\) is the long short-term memory (LSTM) [35], presented in the SI, Section 1F. The role of the RNN is twofold. First, it updates its hidden memory state \\(\\mathbf{h}_{t}\\), given the current state provided at the input \\(\\mathbf{z}_{t}\\) and the hidden memory state at the previous time-step \\(\\mathbf{h}_{t-\\Delta t}\\), tracking the history of the low order state to model non-Markovian dynamics. Second, given the updated hidden state \\(\\mathbf{h}_{t}\\), the RNN forecasts the latent state at the next time-step(s) \\(\\mathbf{\\tilde{z}}_{t+\\Delta t}\\). The RNN is trained to minimize the forecasting loss \\(||\\mathbf{\\tilde{z}}_{t+\\Delta t}-\\mathbf{z}_{t+\\Delta t}||_{2}^{2}\\) by backpropagation through time [36]. The LSTM and the AE, jointly referred to as LED, are trained on data from simulations of the fully resolved (or microscale) dynamical system. The two networks can either be trained sequentially or together. In the first case, the AE is pretrained to minimize the reconstruction loss, and then the LSTM is trained to minimize the prediction loss on the latent space (AE-LSTM). In the second case, they are seen as one network trying to minimize the sum of reconstruction and prediction losses (AE-LSTM-end2end). For large, high-dimensional systems, the latter approach of end-to-end training is computationally expensive. After training, LED is employed to forecast the dynamics on unseen data, by propagating the low order latent state with the RNN and avoiding the computationally expensive simulation of the high-dimensional dynamics. We refer to this mode of propagation, iteratively propagating only the latent/macro dynamics, as Latent-LED. We note that, as non-Markovian models are not self-starting, an initial small warm-up period is required, feeding the LED with data from the micro dynamics. The LED framework allows for data-driven information transfer between coarse and fine scales through the AE. Moreover, it propagates the latent space dynamics without the need to upscale back to the high-dimensional state space at every time-step. As is the case for any approximate iterative integrator (here the RNN), the initial model errors will propagate. In order to mitigate potential instabilities, inspired by the Equation-Free framework [10], we propose the multiscale forecasting scheme in Figure 1, alternating between micro dynamics for \\(T_{\\mu}\\) and macro dynamics for \\(T_{m}\\). In this way, the approximation error can be reduced at the cost of the computational complexity associated with evolving the high-dimensional dynamics. We refer to this mode of propagation as Multiscale-LED, and to the ratio \\(\\rho=T_{m}/T_{\\mu}\\) as the multiscale ratio.
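The alternation between latent (macro) and equation-based (micro) propagation described above can be summarized by the following sketch. The LSTM propagator, the warm-up handling, and the `micro_solver` callable are schematic assumptions introduced here for illustration; they are not the authors' implementation.

```python
import torch
import torch.nn as nn

class LatentPropagator(nn.Module):
    """LSTM that advances the latent state z_t by one coarse time-step."""
    def __init__(self, d_z: int, d_h: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=d_z, hidden_size=d_h, batch_first=True)
        self.readout = nn.Linear(d_h, d_z)  # hidden-to-output mapping R^{w_R}

    def step(self, z_t, hc=None):
        out, hc = self.lstm(z_t.unsqueeze(1), hc)  # update hidden memory (h_t, c_t)
        return self.readout(out.squeeze(1)), hc

def multiscale_forecast(encoder, decoder, propagator, micro_solver,
                        s0, n_warm, n_macro, n_micro, n_total):
    """Alternate n_macro latent steps with n_micro equation-based steps (n_macro >= 1).
    micro_solver(s, n) is assumed to advance the full state s by n coarse steps."""
    s, hc = s0, None
    # Warm-up: evolve the equations and feed the encoded states to the LSTM.
    for _ in range(n_warm):
        _, hc = propagator.step(encoder(s), hc)
        s = micro_solver(s, 1)
    z, trajectory, step = encoder(s), [], 0
    while step < n_total:
        for _ in range(n_macro):               # (1) propagate the latent dynamics
            z, hc = propagator.step(z, hc)
            trajectory.append(decoder(z))      # lift for output; Latent-LED can defer this
            step += 1
        if n_micro > 0:                        # Multiscale-LED: correct with the micro solver
            s = micro_solver(decoder(z), n_micro)  # (2) lift, (3) evolve the equations
            z = encoder(s)
            trajectory.append(s)
            step += n_micro
    return trajectory
```

Setting `n_micro = 0` corresponds to Latent-LED; a nonzero `n_micro` yields Multiscale-LED, whose accuracy versus speed-up trade-off is controlled by the multiscale ratio.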
In Multiscale-LED, the interface with the high-dimensional state space is enabled only at the time-steps and scales of interest. This is in contrast to [37; 38], and is easily adaptable to the needs of particular applications thus augmenting the arsenal of models developed for multiscale problems. Training of LED models is performed with the Adam stochastic optimization method [39], and validation based early stopping is employed to avoid overfitting. All LED models are implemented in Pytorch, mapped to a single Nvidia Tesla P100 GPU and executed on the XC50 compute nodes of the Piz Daint supercomputer at the Swiss national supercomputing centre (CSCS). Figure 1: Multiscale-LED: Starting from an initial condition use the equations/first principles to evolve the high-dimensional dynamics for a short period \\(T_{warm}\\). During this warm-up period, the state \\(\\mathbf{s}_{t}\\) is passed through the encoder network. The outputs of the autoencoder are iteratively provided as inputs to the RNN, to warm-up its hidden state. Next, iteratively, **(1)** starting from the last latent state \\(\\mathbf{z}_{t}\\) the RNN propagates the latent dynamics for \\(T_{m}\\gg T_{warm}\\), **(2)** lift the latent dynamics at \\(t=T_{warm}+T_{m}\\) back to the high-dimensional state, **(3)** starting from this high-dimensional state as an initial condition, use the equations/first principles to evolve the dynamics for \\(T_{\\mu}\\ll T_{m}\\). ## II Results We demonstrate the application of LED in a number of benchmark problems and compare its performance with existing state of the art algorithms. In the SI Section 3D, we provide additional results on LED applied on alanine dipeptide in water. The stochastic dynamics of the molecular system are handled with an MD decoder, and an MD-LSTM in the latent space [40]. ### FitzHugh-Nagumo Model (FHN) LED is employed to capture the dynamics of the FitzHugh-Nagumo equations (FHN) [41; 42]. The FHN model describes the evolution of an activator \\(u(x,t)=\\rho^{ac}(x,t)\\) and an inhibitor density \\(v(x,t)=\\rho^{in}(x,t)\\) on the domain \\(x\\in[0,L]\\): \\[\\begin{split}\\frac{\\partial u}{\\partial t}&=D^{u}\\frac{\\partial^{2}u}{\\partial x^{2}}+u-u^{3}-v,\\\\ \\frac{\\partial v}{\\partial t}&=D^{v}\\frac{\\partial^{2}v}{\\partial x^{2}}+\\epsilon(u-\\alpha_{1}v-\\alpha_{0}).\\end{split} \\tag{1}\\] The system evolves periodically under two timescales, with the activator/inhibitor density acting as the "fast"/"slow" variable respectively. The bifurcation parameter \\(\\epsilon=0.006\\) controls the difference in the time-scales. We choose \\(D^{u}=1\\), \\(D^{v}=4\\), \\(L=20\\), \\(\\alpha_{0}=-0.03\\) and \\(\\alpha_{1}=2\\). Equation (1) is discretized with \\(N=101\\) grid points and solved using the Lattice Boltzmann (LB) method [43], with time-step \\(\\delta_{t}=0.005\\). To facilitate comparison with [32], we employ the LB method to gather data starting from 6 different initial conditions to obtain the mesoscopic solution considered here as the fine-grained solution. The data is sub-sampled, keeping every \\(200^{\\text{th}}\\) data point, i.e. the coarse time step is \\(\\Delta t=1\\). Three time series with 451 points are considered for training, two time series with 451 points for validation, and \\(10^{4}\\) data points from a different initial condition for testing.
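For intuition about the fine-scale data described above, a simple explicit finite-difference integration of equation (1) is sketched below. This is an illustrative stand-in only: it is not the Lattice Boltzmann discretization used in the paper, and the boundary conditions, time step, and initial condition are assumptions.

```python
import numpy as np

# Parameters quoted in the text: D^u = 1, D^v = 4, L = 20, eps = 0.006,
# alpha_0 = -0.03, alpha_1 = 2, and N = 101 grid points.
Du, Dv, L, eps, a0, a1, N = 1.0, 4.0, 20.0, 0.006, -0.03, 2.0, 101
dx = L / (N - 1)
dt = 1.0e-3  # explicit Euler step; must satisfy dt < dx**2 / (2 * Dv)

def laplacian(f):
    """Second-order Laplacian with zero-flux (Neumann) boundaries (an assumption)."""
    lap = np.empty_like(f)
    lap[1:-1] = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dx**2
    lap[0] = 2.0 * (f[1] - f[0]) / dx**2
    lap[-1] = 2.0 * (f[-2] - f[-1]) / dx**2
    return lap

def step(u, v):
    """One explicit Euler step for the activator u and the inhibitor v."""
    du = Du * laplacian(u) + u - u**3 - v
    dv = Dv * laplacian(v) + eps * (u - a1 * v - a0)
    return u + dt * du, v + dt * dv

x = np.linspace(0.0, L, N)
u, v = np.sin(2.0 * np.pi * x / L), np.zeros(N)  # illustrative initial condition
for _ in range(int(1.0 / dt)):                   # integrate up to t = 1
    u, v = step(u, v)
```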
For the identification of the latent space, we compare principal component analysis (PCA), diffusion maps, feed-forward AE, and CNN-AE, in terms of the mean squared error (MSE) of the reconstruction in the test data, plotted in Figure 2 A. The MSE plateaus after \\(d_{\\tt z}=2\\), and the AE and CNN-AE exhibit at least an order of magnitude lower MSE than PCA and DiffMaps. For this reason, we employ an AE with \\(d_{\\tt z}=2\\) for the LED. The hyper-parameters of the networks (reported in the SI, Table 3 along with training times) are tuned based on the MSE on the validation data. The architecture of the CNN is reported in Table 5, and depicted in Figure 10 of the SI. In Figure 2 B, we compare various propagators in forecasting the macro (latent) dynamics, starting from 32 different initial conditions in the test data, up to a horizon of \\(T_{f}=8000\\). We benchmark an AE-LSTM trained end-to-end (AE-LSTM-end2end), an AE-LSTM where the AE is pretrained (AE-LSTM), a multi-layered perceptron (AE-MLP), Reservoir Computers (AE-RC) [28; 44], and the SINDy algorithm (AE-SINDy) [45]. As a comparison metric, we consider the mean normalised absolute difference (MNAD), averaged over the 32 initial conditions. The definition of the MNAD is provided in SI, Section 2. The MNAD is computed on the inhibitor density, as the difference between the result of the LB simulation \\(v(x,t)\\), considered as groundtruth, and the model forecasts \\(\\hat{v}\\). The warm-up period for all propagators is set to \\(T_{warm}=60\\). The hyper-parameters of the networks (reported in Tables 4, 6, and 7 of the SI, along with the training times) are tuned based on the MNAD on the validation data. The LSTM-end2end and the RC show the lowest test error, while the variance of the RC is larger. In the following, we consider an LSTM-end2end propagator for the LED. LED is benchmarked against EFF variants [32] in the FHN equation in Figure 2 C. As a metric for the accuracy, the MNAD is considered, consistent with [32] to facilitate comparison. The EFF variants [32] are based on the identification of PDEs on the coarse level (CSPDE). LED is compared with CSPDEs in forecasting the dynamics of the FHN equation starting from an initial condition from the test data up to final time \\(T_{f}=451\\). CSPDE variants utilize Gaussian processes (GP) or neural networks (NN), features of the fine scale dynamics obtained through diffusion maps (F1 to F3), and forward integration to propagate the coarse representation in time. LED outperforms CSPDE variants by an order of magnitude. In Figure 2 F, the latent space of LED is plotted against the attractor of the data embedded in the latent space. Even for long time horizons (here \\(T_{f}=8000\\)), the LED forecasts stay on the periodic attractor. Latent-LED propagates the low order dynamics, and up-scales back to the inhibitor density, forecasting its evolution accurately, while being 60 times faster than the LB solver. This speed-up can be decisive in accelerating simulations and achieving much larger time horizons. In Multiscale-LED, the approximation error of LED decreases, at the cost of reduced speed-up. This interplay can be seen in Figure 2 D and E. Latent-LED (\\(T_{\\mu}=0\\)), and Multiscale-LED, alternating between macro-dynamics for \\(T_{m}=10\\) and high-dimensional dynamics for \\(T_{\\mu}\\), are employed to approximate the evolution and compare it against the LB solver in forecasting up to \\(T_{f}=8000\\) starting from 32 initial conditions as before.
For \\(T_{m}=T_{\\mu}=10\\) (\\(\\rho=1\\)), the MNAD is reduced from approximately 0.019 to approximately 0.003 compared to Latent-LED. The speed-up, however, is reduced from 60 to 2. By varying \\(T_{m}\\in\\{50,100,200,1000\\}\\), Multiscale-LED achieves a trade-off between speed-up and MNAD. A prediction of the Latent-LED in the inhibitor density is compared against the groundtruth in Figure 2 G, H, I. Additional results on the activator density are given in the SI, Section 3B. ### Kuramoto-Sivashinsky The Kuramoto-Sivashinsky (KS) equation [46; 47] is a prototypical fourth-order partial differential equation (PDE) that exhibits a very rich range of non-linear phenomena. In the case of high dissipation and small spatial extent \\(L\\) (domain size), the long-term dynamics of KS can be represented on a low dimensional inertial manifold [18; 19] that attracts all neighboring states at an exponential rate after a transient period. LED is employed to learn the low order manifold of the effective dynamics in KS. The one-dimensional KS equation is given by the PDE \\[\\frac{\\partial u}{\\partial t}=-\\nu\\frac{\\partial^{4}u}{\\partial x^{4}}-\\frac{\\partial^{2}u}{\\partial x^{2}}-u\\frac{\\partial u}{\\partial x}, \\tag{2}\\] on the domain \\(\\Omega=[0,L]\\) with periodic boundary conditions \\(u(0,t)=u(L,t)\\) and \\(\\nu=1\\). The special case \\(L=22\\), considered in this work, is studied extensively in [48], and exhibits a structurally stable chaotic attractor, i.e. an inertial manifold where the long-term dynamics lie. Equation (2) is discretized with a grid of size 64 points, and solved using the fourth-order method for stiff PDEs introduced in [49] with a time-step of \\(\\delta t=2.5\\cdot 10^{-3}\\) starting from a random initial condition. The data are subsampled to \\(\\Delta t=0.25\\) (coarse time-step of LED). \\(15\\cdot 10^{3}\\) samples are used for training and another \\(15\\cdot 10^{3}\\) for validation. For testing purposes, the process is repeated with a different random seed, generating another \\(15\\cdot 10^{3}\\) samples. For the identification of a reasonable latent space dimension, we compare PCA, AEs, and CNNs in terms of the reconstruction MSE in the test data as a function of \\(d_{\\mathbf{z}}\\), plotted in Figure 3 A. The MSE plateaus after \\(d_{\\mathbf{z}}=8\\), arguably indicating the dimensionality of the attractor, in agreement with previous studies [18; 48]; moreover, the CNN is superior to the AE, and both are orders of magnitude better than PCA. For this reason, we employ a CNN with \\(d_{\\mathbf{z}}=8\\) for the autoencoding part of LED. The hyper-parameters of the networks are tuned based on the MSE on the validation data, reported in SI, Tables 8 and 9, along with the network training times. The CNN architecture is provided in SI Table 10, and depicted in SI Figure 12. In Figure 3 B, we compare various propagators in predicting the macro dynamics of LED, starting from 100 test initial conditions, up to \\(T_{f}=800\\) (3200 time-steps). We employ a CNN-LSTM trained end-to-end (CNN-LSTM-end2end), a CNN-LSTM where the CNN is pretrained (CNN-LSTM), a multi-layered perceptron (CNN-MLP), Reservoir Computers (CNN-RC) [28; 44], and the SINDy algorithm (CNN-SINDy) [45]. As a comparison metric, we consider the MNAD, averaged over the 100 initial conditions. The warm-up period for all propagators is set to \\(T_{warm}=60\\). The hyper-parameters (reported in SI Tables 11, 12, and 13, along with the training times) are tuned based on the MNAD on the validation data.
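The KS training data described above are produced with the stiff fourth-order scheme of [49]. A compact NumPy adaptation of that ETDRK4 integrator is sketched below for \\(L=22\\) and \\(N=64\\); the initial condition and the number of steps are illustrative assumptions, and the trajectory is sub-sampled to the coarse time-step \\(\\Delta t=0.25\\) as in the text.

```python
import numpy as np

L, N, h = 22.0, 64, 2.5e-3                       # domain size, grid points, time-step
x = L * np.arange(N) / N
u = 0.1 * np.cos(2 * np.pi * x / L) * (1 + np.sin(2 * np.pi * x / L))  # illustrative IC
v = np.fft.fft(u)

k = (2.0 * np.pi / L) * np.fft.fftfreq(N, d=1.0 / N)   # wavenumbers
k[N // 2] = 0.0                                        # zero the Nyquist mode
Lin = k**2 - k**4                                      # linear operator in Fourier space
E, E2 = np.exp(h * Lin), np.exp(0.5 * h * Lin)

# ETDRK4 coefficients evaluated by contour integration (M points on a unit circle).
M = 16
r = np.exp(1j * np.pi * (np.arange(1, M + 1) - 0.5) / M)
LR = h * Lin[:, None] + r[None, :]
Q  = h * np.real(np.mean((np.exp(LR / 2) - 1) / LR, axis=1))
f1 = h * np.real(np.mean((-4 - LR + np.exp(LR) * (4 - 3 * LR + LR**2)) / LR**3, axis=1))
f2 = h * np.real(np.mean((2 + LR + np.exp(LR) * (-2 + LR)) / LR**3, axis=1))
f3 = h * np.real(np.mean((-4 - 3 * LR - LR**2 + np.exp(LR) * (4 - LR)) / LR**3, axis=1))

g = -0.5j * k
def nonlin(w):
    """Spectrum of the nonlinear term -u u_x for the field with spectrum w."""
    return g * np.fft.fft(np.real(np.fft.ifft(w)) ** 2)

snapshots = []
for n in range(40000):                       # 40000 * h = 100 time units (illustrative)
    Nv = nonlin(v)
    a = E2 * v + Q * Nv;  Na = nonlin(a)
    b = E2 * v + Q * Na;  Nb = nonlin(b)
    c = E2 * a + Q * (2 * Nb - Nv);  Nc = nonlin(c)
    v = E * v + Nv * f1 + 2 * (Na + Nb) * f2 + Nc * f3
    if n % 100 == 0:                         # sub-sample to the coarse step 0.25
        snapshots.append(np.real(np.fft.ifft(v)))
```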
While the MLP and RC propagators exhibit large errors, the LSTM, LSTM-end2end, and SINDy show comparable accuracy. In the following, we consider an LSTM propagator for the LED. Due to the chaoticity of the KS equation, iterative forecasting with LED is challenging, as initial errors propagate exponentially. In order to assess whether the iterative forecasting with LED leads to reasonable, physical predictions, we plot the density of values in the \\(\\mathbf{u}_{x}-\\mathbf{u}_{xx}\\) space in Figure 3 C. The data come from a single long trajectory of size \\(T_{f}=8000\\) (32000 time-steps). We observe that LED (Figure 3 D) is able to qualitatively reproduce the density of the simulation. In Figure 3 E and F, we plot the MNAD and the correlation between forecasts of LED and the reference with respect to the multiscale ratio \\(\\rho\\). In Figure 3 G, the speed-up of LED is plotted against \\(\\rho\\). Latent-LED is able to reproduce the long-term "climate dynamics" [28] and remain at the attractor, while being more than two orders of magnitude faster than the micro solver. As \\(\\rho\\) is increased, the error is reduced (correlation increased), at the cost of reduced speed-up. Finally, in Figure 3 H, we compare the performance of Latent-LED (CNN-LSTM) with previous studies [28; 44] that forecast directly on the high-dimensional space. Specifically, the Latent-LED matches the performance of an LSTM (no dimensionality reduction), but shows inferior short-term forecasting ability compared to an RC (no dimensionality reduction) forecasting on the high-dimensional space. This is expected as the RC and the LSTM have full information about the state. In turn, when the RC is employed on the latent space of LED as a macro-dynamics propagator, the error grows significantly and the performance is inferior to the CNN-LSTM case. A forecast of Latent-LED is provided in the SI, Figure 11. ### Viscous Flow Behind a Cylinder The flow behind a cylinder is a widely studied problem in fluids [50] that exhibits a rich range of dynamical phenomena, like the transition from laminar to turbulent flow at high Reynolds numbers, and is used as a benchmark for reduced order modeling (ROM) approaches. The flow behind a cylinder in the two dimensional space is simulated by solving the incompressible Navier-Stokes equations with Brinkman penalization to enforce the no-slip boundary conditions on the surface of the cylinder [51; 52]. More details on the simulation are provided in the SI Section 3D. We consider the application of LED to two Reynolds numbers \\(Re\\in\\{100,1000\\}\\). The definition of \\(Re\\) is provided in the SI Equation (24). The flow is simulated in a cluster with 12 CPU Cores, up to \\(T=200\\), after discarding the initial transient. 250 time-steps spaced \\(\\Delta t=0.2\\) apart in time (total time \\(T=50\\)) are used for training, 250 for validation, and the rest for testing purposes. The vortex shedding period is \\(T\\approx 2.86\\) for \\(Re=100\\), and \\(T\\approx 2.22\\) for \\(Re=1000\\). The state of LED is \\(\\mathbf{s}_{t}\\equiv\\{p,u_{x},u_{y},\\omega\\}\\in\\mathbb{R}^{4\\times 512\\times 1024}\\), where \\(\\omega\\) is the vorticity field. For the autoencoding part, LED employs CNNs that take advantage of the spatial correlations. The architecture of the CNN is given in Table 14 and depicted in Figure 13 in the SI.
The dimension of the latent space is tuned based on the performance on the validation dataset to \\(d_{\\boldsymbol{z}}=4\\) for \\(Re=100\\) and \\(d_{\\boldsymbol{z}}=10\\) for \\(Re=1000\\). The LSTM propagator of LED is benchmarked against SINDy and RC in predicting the dynamics, starting from 10 initial conditions randomly sampled from the test data for a prediction horizon of \\(T=20\\) (100 time-steps). The hyper-parameters (reported on SI Tables 15, 16, 17, along with the training times) are tuned based on the MNAD on the validation data. The logarithm of the MNAD is given in Figure 5 A for \\(Re=100\\) and E for \\(Re=1000\\). For the \\(Re=100\\) case, the LSTM exhibits lower MNAD and lower variance compared to RC and SINDy. For the challenging \\(Re=1000\\) scenario, LSTM and RC exhibit lower MNAD compared to SINDy, with the LSTM being more robust (lower variance). A prediction of the vorticity \\(\\omega\\) by Latent-LED at lead time \\(T=4\\) is given in Figure 4. LED captures the flow for both \\(Re\\in\\{100,1000\\}\\). The error concentrates mostly around the cylinder, rendering the accurate prediction of the drag coefficient challenging. In Figure 4 D and H, the latent space of Latent-LED is compared with the transformation of the data to the latent space. The predictions stay close to the attractor even for very large horizon (\\(T=20\\)). The Strouhal number St (defined in the SI Equation (23)) describes the periodic vortex shedding at the wake of the cylinder. By estimating the dominant frequency of the latent state using a Fourier analysis, we find that LED reproduces exactly the St of the system dynamics for both \\(Re\\in\\{100,1000\\}\\) cases. In the \\(Re=100\\) case, Latent-LED recovers a periodic non-linear mode in the latent space, and can forecast the dynamics accurately, as illustrated in Figure 4. In this case, approaches based on the Galerkin method or dynamic mode decomposition (DMD), construct ROM with six to eight degrees of freedom [53] that capture the most energetic spatiotemporal modes. In contrast, the latent space of LED in the \\(Re=100\\) case has a dimensionality of \\(d_{\\boldsymbol{z}}=4\\). In the challenging \\(Re=1000\\) scenario, LED with \\(d_{\\boldsymbol{z}}=10\\) can capture accurately the characteristic vortex street, and long-term dynamics. We note that, to the best of our knowledge, ROMs for flows past a cylinder have been so far limited to laminar periodic flows in the order of \\(Re=100\\) while this study advances the state of the art by one order of magnitude. Starting from 4 initial conditions randomly sampled from the test data, six LED variants ( Latent-LED, Multiscale-LED with \\(T_{\\mu}=0.4,T_{m}\\in\\{0.4,0.8,1.2,2,4\\}\\) for \\(Re=100\\), and Latent-LED, Multiscale-LED with \\(T_{\\mu}=1.6,T_{m}\\in\\{0.8,1.6,3.2,6.4,12.8\\}\\) for \\(Re=1000\\) ) are tested on predicting the dynamics of the flow up to \\(T_{f}=20\\), after \\(T_{warm}=2\\). The MNAD is plotted in Figure 5 B for \\(Re=100\\), and F for \\(Re=1000\\). The speed-up is plotted in Figure 5 D for \\(Re=100\\), and H for \\(Re=1000\\). The Latent-LED is two orders of magnitude faster than the flow solver, while exhibiting MNAD errors of 0.02 and 0.04 for \\(Re=100\\), and \\(Re=1000\\) respectively. By alternating between macro and micro, the error is reduced, at the cost of decreased speed-up. 
In Figure 5 C and G, the relative error on the drag coefficient \\(C_{d}\\) (defined in SI, Equation (28)) is plotted as a function of the multiscale ratio \\(\\rho\\). Latent-LED exhibits a relative error of 0.04 that is reduced to approximately 0.02for \\(\\rho=1\\). For \\(Re=1000\\), as we observe in Figure 4, the prediction error of LED concentrates around the cylinder which leads to an inaccurate computation of the drag. Even though Multiscale-LED is reducing this error, it still remains on the order of \\(0.15\\). ## III Discussion We have presented a novel framework for learning the effective dynamics (LED) and accelerate the simulations of multiscale (stochastic or deterministic) complex dynamical systems. Our work relies on augmenting the Equation-Free formalism with state of the art ML methods. The LED framework is tested on a number of benchmark problems. In systems where evolving the high-dimensional state dynamics is computationally expensive, LED accelerates the simulation by propagating on the latent space and upscaling to the high-dimensional states with the probabilistic, generative mixture density, or deterministic convolutional, decoder. This comes at the cost of training the networks, a process that is performed once, offline. The trained model can be used to forecast the dynamics starting from any arbitrary initial condition. The efficiency of LED was evaluated in forecasting the FitzHugh-Nagumo equation dynamics achieving an order of magnitude lower approximation error compared to other Equation-Free approaches while being two orders of magnitude faster than the Lattice Boltzmann solver. We demonstrated that the proposed framework identifies the effective dynamics of the Kuramoto-Sivashinsky equation with \\(L=22\\), capturing the long-term behavior (\"climate dynamics\"), achieving a speed-up of \\(S\\approx 100\\). Furthermore, LED captures accurately the long-term dynamics of a flow behind a cylinder in \\(Re=100\\) and \\(Re=1000\\), while being two orders of magnitude faster than a flow solver. In the SI, we demonstrate that LED can unravel and forecast the stochastic collective dynamics of \\(1000\\) particles following Brownian motion subject to advection and diffusion in the three dimensional space (SI, Section 3A). In our recent work [40] (briefly described in SI, Section 3E), we show that LED can be applied to learn the stochastic dynamics of molecular systems. We note that the present method is readily applicable to all problems where Equation-Free, HMM, and FLAVOR methodologies have been applied. In summary, LED identifies and propagates the effective dynamics of dynamical systems with multiple spatiotemporal scales providing significant computational savings. Moreover, LED provides a systematic way of trading between speed-up and accuracy for a multiscale system by switching between propagation of the latent dynamics, and evolution of the original equations, iteratively correcting the statistical error at the cost of reduced speed-up. The LED does not presently contain any mechanism to decide when to upscale the latent space dynamics. This is an active area of investigations. We do not expect LED to generalize to dynamical regions drastically different from those represented in the training data. Further research efforts will address this issue by adapting the training procedure. 
The present methodology can be deployed both in problems described by first principles as well as for problems where only data are available for either the macro or microscale descriptions of the system. LED creates unique algorithmic alloys between data driven and first principles models and opens new horizons for the accurate and efficient prediction of complex multiscale systems. ###### Acknowledgements. The authors would like to thank Nikolaos Kallikounis (ETH Zurich) for helpful discussions on the Lattice Boltzmann method, Pascal Weber and Michalis Chatzimanolakis (ETH Zurich) for help with the simulations of the flow behind a cylinder, and Yannis Kevrekidis (Johns Hopkins University),Kostas Spiliotis (University of Rostock), for providing code to reproduce data for the FHN equation. The authors acknowledge the support of the Swiss National Supercomputing Centre (CSCS) providing the necessary computational resources under Projects s929. ## Author contribution P.K. conceived the project; P.R.V., G.A., C.U., and P.K. designed and performed research; P.R.V., and G.A. contributed new analytic tools; P.R.V., G.A., and P.K. analyzed data; and P.R.V., G.A., and P.K. wrote the paper. ## Author Declaration The authors declare no conflict of interest. Figure 5: **A)** Comparison of different macrodynamics propagators (\\(\\blacklozenge\\) AE-LSTM; \\(\\blacklozenge\\) AE-RC; \\(\\blacklozenge\\) AE-SINDy) for \\(Re=100\\), and **E)** for \\(Re=1000\\). **B)** The MNAD and **C)** the relative error on the drag between predictions by LED and the reference data as a function of the multiscale ratio \\(\\rho\\), **F), G)** the same for \\(Re=1000\\). **D)** The speed-up of LED compared to the flow solver w.r.t. \\(\\rho\\) for \\(Re=100\\), and **H)** for \\(Re=1000\\). ## Data and Code Availability All code and data for the analysis associated with the current submission will become readily available upon publication in the following link: [https://github.com/pvlachas/LearningEffectiveDynamics](https://github.com/pvlachas/LearningEffectiveDynamics). ## References * [1]R. R. C. * [30] Maulik, R., Lusch, B. & Balaprakash, P. Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. _Physics of Fluids_**33**, 037106 (2021). * [31] Hasegawa, K., Fukami, K., Murata, T. & Fukagata, K. Machine-learning-based reduced-order modeling for unsteady flows around bluff bodies of various shapes. _Theoretical and Computational Fluid Dynamics_**34**, 367-383 (2020). * [32] Lee, S., Kooshkbaghi, M., Spiliotis, K., Siettos, C. I. & Kevrekidis, I. G. Coarse-scale pdes from fine-scale observations via machine learning. _Chaos: An Interdisciplinary Journal of Nonlinear Science_**30**, 013141 (2020). * [33] LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. _nature_**521**, 436-444 (2015). * [34] Bishop, C. M. Mixture density networks. _Technical Report NCRG/97/004, Neural Computing Research Group, Aston University_ (1994). * [35] Hochreiter, S. & Schmidhuber, J. Long short-term memory. _Neural Comput._**9**, 1735-1780 (1997). * 356 (1988). * [37] Hernandez, C. X., Wayment-Steele, H. K., Sultan, M. M., Husic, B. E. & Pande, V. S. Variational encoding of complex dynamics. _Physical Review E_**97**, 062412 (2018). * [38] Sultan, M. M., Wayment-Steele, H. K. & Pande, V. S. Transferable neural networks for enhanced sampling of protein dynamics. _Journal of chemical theory and computation_**14**, 1887-1894 (2018). * [39] Kingma, D. P. & Ba, J. 
Adam: A method for stochastic optimization. In _3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings_ (2015). * [40] Vlachas, P. R., Zavadlav, J., Praprotnik, M. & Koumoutsakos, P. Accelerated simulations of molecular systems through learning of their effective dynamics. _arXiv preprint arXiv:1312.6114_ (2021). * [41] FitzHugh, R. Impulses and physiological states in theoretical models of nerve membrane. _Biophysical journal_**1**, 445 (1961). * [42] Nagumo, J., Arimoto, S. & Yoshizawa, S. An active pulse transmission line simulating nerve axon. _Proceedings of the IRE_**50**, 2061-2070 (1962). * [43] Karlin, I. V., Ansumali, S., Frouzakis, C. E. & Chikatamarla, S. S. Elements of the lattice boltzmann method i: Linear advection equation. _Commun. Comput. Phys_**1**, 616-655 (2006). * [44] Pathak, J., Hunt, B., Girvan, M., Lu, Z. & Ott, E. Model-free prediction of large spatiotemporally chaotic systems from data: A reservoir computing approach. _Physical review letters_**120**, 024102 (2018). * [45] Brunton, S. L., Proctor, J. L. & Kutz, J. N. Discovering governing equations from data by sparse identification of non-linear dynamical systems. _Proceedings of the National Academy of Sciences_**113**, 3932-3937 (2016). * [46] Kuramoto, Y. Diffusion-Induced Chaos in Reaction Systems. _Progress of Theoretical Physics Supplement_**64**, 346-367 (1978). * [47] Sivashinsky, G. I. Nonlinear analysis of hydrodynamic instability in laminar flames -- I. Derivation of basic equations. _Acta Astronautica_**4**, 1177-1206 (1977). * [48] Cvitanovic, P., Davidchack, R. L. & Siminos, E. On the state space geometry of the kuramoto-sivashinsky flow in a periodic domain. _SIAM Journal on Applied Dynamical Systems_**9**, 1-33 (2010). * [49] Kassam, A. & Trefethen, L. Fourth-order time-stepping for stiff pdes. _SIAM Journal on Scientific Computing_**26**, 1214-1233 (2005). * [50] Zdravkovich, M. Flow around circular cylinders; vol. i fundamentals. _Journal of Fluid Mechanics_**350**, 377-378 (1997). * [51] Rossinelli, D. _et al._ Mrag-i2d: Multi-resolution adapted grids for remeshed vortex methods on multicore architectures. _Journal of Computational Physics_**288**, 1-18 (2015). * [52] Bost, C., Cottet, G.-H. & Maitre, E. Convergence analysis of a penalization method for the three-dimensional motion of a rigid body in an incompressible viscous fluid. _SIAM Journal on Numerical Analysis_**48**, 1313-1337 (2010). * [53] Taira, K. _et al._ Modal analysis of fluid flows: Applications and outlook. _AIAA journal_**58**, 998-1022 (2020). ## Supplementary Information I: Methods The framework to learn and propagate the effective dynamics (LED) of complex systems is composed of the models described in the following. ### Autoencoders (AE) Classical autoencoders are non-linear neural networks that map an input to a low dimensional latent space and then decode it to the original dimension at the output, trained to minimize the reconstruction loss \\(\\mathcal{L}=|\\mathbf{x}-\\mathbf{\\tilde{x}}|^{2}\\). They were proposed in as a non-linear alternative to Principal Component Analysis (PCA). An autoencoder is depicted in Figure 0(a). In this work, we employ feed-forward AEs to identify the coarse representation of the Fitz-Hugh Nagumo equation, and the Kuramoto-Sivashinsky equation. ### Variational Autoencoders (VAE) Research efforts on generative modeling led to the development of Variational Autoencoders (VAEs). 
The VAE similar to AE is composed by an encoder and a decoder. The encoder neural network, instead of mapping the input \\(\\mathbf{x}\\) deterministically to a reduced order latent space \\(\\mathbf{z}\\), produces a distribution \\(q(\\mathbf{z}|\\mathbf{x};\\mathbf{w}_{q})\\) over the latent representation \\(\\mathbf{z}\\), where \\(\\mathbf{w}_{q}\\) is the parametrization of the distribution given by the output of the encoder \\(\\mathbf{w}_{q}=\\mathcal{E}^{\\mathbf{w}\\varepsilon}(\\mathbf{x})\\). In most practical applications, the distribution \\(q(\\mathbf{z}|\\mathbf{x};\\mathbf{w}_{q})\\) is modeled as a factorized Gaussian, implying that \\(\\mathbf{w}_{q}\\) is composed of the mean, and the diagonal elements of the covariance matrix. The decoder maps a sampled latent representation to an output \\(\\mathbf{\\tilde{x}}=\\mathcal{D}^{\\mathbf{w}_{D}}(\\mathbf{z})\\). By sampling the latent distribution \\(q(\\mathbf{z}|\\mathbf{x};\\mathbf{w}_{q})\\), for a fixed input \\(\\mathbf{x}\\), the autoencoder can generate samples from the probability distribution over \\(\\mathbf{\\tilde{x}}\\) at the decoder output. The network is trained to maximize the log-likelihood of reproducing the input at the output, while minimizing the Kullback-Leibler divergence between the encoder distribution \\(q(\\mathbf{z}|\\mathbf{x};\\mathbf{w}_{q})\\) and a prior distribution, e.g. \\(\\mathcal{N}(0,\\mathbf{I})\\). VAEs are essentially regularizing the training of AE by adding the Gaussian noise in the latent representation. In this work, a Gaussian latent distribution with diagonal covariance matrix is considered, i.e., \\[q(\\mathbf{z}\\,|\\,\\mathbf{x};\\mathbf{\\mu}_{\\mathbf{z}},\\mathbf{\\sigma}_{\\mathbf{z}})=\\mathcal{N}\\big{(} \\mathbf{z}\\,|\\,\\mathbf{\\mu}_{\\mathbf{z}}(\\mathbf{x}),\\text{diag}(\\sigma_{\\mathbf{z}}(\\mathbf{x})) \\big{)}, \\tag{1}\\] where \\(\\mathbf{w}_{q}=(\\mathbf{\\mu}_{\\mathbf{z}},\\sigma_{\\mathbf{z}})\\) and the mean latent representation \\(\\mathbf{\\mu}_{\\mathbf{z}}\\) and the variance \\(\\sigma_{\\mathbf{z}}\\) vectors are the outputs of the encoder neural network \\(\\mathcal{E}^{\\mathbf{w}\\varepsilon}(\\mathbf{x})\\). The latent representation is then sampled from \\(\\mathbf{z}\\sim\\mathcal{N}(\\mathbf{\\mu}_{\\mathbf{z}},\\text{diag}(\\sigma_{\\mathbf{z}}))\\). The decoder receives as an input the sample, and outputs the reconstruction \\(\\mathbf{\\tilde{x}}\\). A VAE is depicted in Figure 0(b). A preliminary study, benchmarking VAEs against feedforward AEs (and Convolutional AE described later) in the FitzHugh-Nagumo equation, and the Kuramoto-Sivashinsky equation, showed no significant advantages for the cases considered in this work over feed-forward AEs. They are, however, part of the LED framework, and may be useful in other applications. Figure 1: **(a)** A schematic diagram of a classical Autoencoder (AE). A high-dimensional state \\(\\mathbf{x}\\) is mapped to a low dimensional feature space \\(\\mathbf{z}\\) by applying the encoder transformation through multiple fully connected layers. The low dimensional feature space \\(\\mathbf{z}\\) is expanded in the original space by the decoder. The autoencoder is trained with the loss \\(\\mathcal{L}=||\\mathbf{x}-\\mathbf{\\tilde{x}}||^{2}\\), so that the input can be reconstructed as faithfully as possible at the decoder output. **(b)** A schematic diagram of a Variational Autoencoder (VAE). 
Instead of modeling the latent space deterministically, the encoder outputs a mean latent representation \\(\\mathbf{\\mu}_{\\mathbf{z}}\\), along with the associated uncertainty \\(\\sigma_{\\mathbf{z}}\\). The latent space \\(\\mathbf{z}\\) is sampled from a normal distribution \\(\\mathbf{z}\\sim\\mathcal{N}(\\cdot|\\mathbf{\\mu}_{\\mathbf{z}},\\sigma_{\\mathbf{z}}\\mathbf{I})\\), with diagonal covariance matrix. ### Convolutional Neural Networks Convolutional neural networks (CNNs) are tailored to process image data with spatial correlations. Each layer of a CNN is processing a multidimensional input (with a channel axis, and some spatial axes) by applying a convolutional kernel or filter that slides along the input spatial axes. In other words, CNNs take into account of the structure in the data in their architecture, which is a form of a geometric prior. In this work, CNN layers are used in the Autoencoder, by introducing a bottleneck layer, reducing the dimensionality. Other dimensionality reduction techniques, like AEs, Principal Component Analysis (PCA), or Diffusion maps (DiffMaps), that are based on vectorization of input field data, do not take into account the structure of the data, i.e. when an input field is shifted by a pixel, the vectorized version will differ a lot, while the convoluted image will not. In this work, we employ Autoencoding CNNs (and compare them with feed-forward AEs) to identify the coarse representation of the FitzHugh-Nagumo equation, the Kuramoto-Sivashinsky equation, and the incompressible Navier-Stokes flow behind a cylinder at \\(Re\\in\\{100,1000\\}\\). ### Permutation Invariance Physical systems may satisfy specific properties like energy conservation, translation invariance, permutation invariance, etc. In order to build data-driven models that accurately reproduce the statistical behavior of such systems, these properties should be embedded in the model. In this section, the dynamics of particles of the same kind are modeled with a permutation invariance layer. This is useful in simulations of molecules, i.e. molecular dynamics, where the state of the system is described by a configuration of particles, and any permutation of these particles corresponds to the same configuration. Permutation invariance is handled here with a sum decomposition of a feature space. The exact procedure is depicted in Appendix I.4. Assume that the state of a dynamical system \\(\\mathbf{s}\\) is composed of \\(N\\) particles of the same kind, each one having specific properties or features with dimensionality \\(d_{\\mathbf{x}}\\), e.g. position, velocity, etc. The features of a single particle are given by the state \\(\\mathbf{x}\\in\\mathbb{R}^{d_{\\mathbf{x}}}\\) of the particle. Raw data is provided as an input to the network, i.e. the features of all particles, stacked together in a matrix \\(\\mathbf{s}\\in\\mathbb{R}^{N\\times d_{\\mathbf{x}}}\\). A permutation of two particles represents in essence the same configuration and should be mapped to the same latent representation. This is achieved with a permutation invariant layer that first applies a non-linear transformation \\(\\phi:\\mathbb{R}^{d_{\\mathbf{x}}}\\to\\mathbb{R}^{d_{p}}\\) mapping each particles' features to a high-dimensional latent representation of dimension \\(d_{p}\\). This mapping is applied to all particles independently leading to \\(N\\) such latent vectors. The mean of these vectors is taken to construct the representation of the configuration. 
The representation \\(\\frac{1}{N}\\sum_{i=1}^{N}\\phi(\\mathbf{x}^{i})\\) is finally fed to a final layer reducing the dimensionality to a low-order representation \\(\\mathbf{z}\\in\\mathbb{R}^{d_{\\mathbf{x}}}\\), with \\(d_{\\mathbf{z}}\\ll d_{p},N\\). This is achieved by the mapping \\(g:\\mathbb{R}^{d_{p}}\\to\\mathbb{R}^{d_{\\mathbf{z}}}\\). In this work, the permutation invariance layer is utilized in the modeling of the collective dynamics of a group of particles whose movement is governed by the advection-diffusion equation in the one and three dimensional space. Both mappings \\(g\\) and \\(\\phi\\) are implemented with neural networks, having 3 layers of 50 hidden units each, and \\(\\tanh\\) activations. ### Mixture Density Decoder Mixture density networks (MDNs) [34] are powerful neural networks that can model non-Gaussian, multi-modal data distributions. The outputs of MDNs are parameters of a mixture density model (mixture of probability density functions). The most generic choice of the mixture component distribution, is the Gaussian distribution. Gaussian MDNs are widely deployed in machine learning applications to model structured dynamic environments, i.e. (video) games. The effectiveness of MDNs, however, in modeling physical systems remains unexplored. In physical systems, the state may be bounded. In this case, the choice of a Gaussian MDN is problematic due to its unbounded support. To make matters worse, most applications of Gaussian MDNs when modeling random vectors do not consider the interdependence between the vector variables, i.e. the covariance matrix of the Gaussian mixture components is diagonal, in an attempt to reduce their computational complexity. Arguably in the applications where they were successful, modeling this interdependence was not imperative. In contrast, in physical systems the variables of a state might be very strongly dependent on each other. In order to cope with these problems, the following approach is considered: Firstly, an auxiliary vector variable is considered \\(\\mathbf{v}\\) along with its distribution \\(p(\\mathbf{v}|\\mathbf{z})\\). \\(\\mathbf{v}\\in\\mathbb{R}^{d_{\\mathbf{x}}}\\) has the same dimensionality \\(d_{\\mathbf{x}}\\) as the high-dimensional state (input/output of the autoencoder). The distribution is modeled as a mixture of \\(K\\)**multivariate** normal distributions \\[p(\\mathbf{v}|\\mathbf{z})=\\sum_{k=1}^{K}\\pi^{k}(\\mathbf{z})\\,\\mathcal{N}\\bigg{(}\\,\\mathbf{\\mu} _{\\mathbf{v}}^{k}(\\mathbf{z}),\\Sigma_{\\mathbf{v}}^{k}(\\mathbf{z})\\,\\bigg{)}, \\tag{2}\\]The multivariate normal distribution is parametrised in terms of a mean vector \\(\\mathbf{\\mu}_{\\mathbf{v}}^{k}\\), a positive definitive covariance matrix \\(\\Sigma_{\\mathbf{v}}^{k}\\), and the mixing coefficients \\(\\pi^{k}\\) which are functions of \\(\\mathbf{z}\\). The covariance matrix is parametrised by a lower-triangular matrix \\(L_{\\mathbf{v}}^{k}\\) with positive-valued diagonal entries, such that \\(\\Sigma_{\\mathbf{v}}^{k}=L_{\\mathbf{v}}^{k}L_{\\mathbf{v}}^{k\\,T}\\in\\mathbb{R}^{d_{\\mathbf{v}} \\times d_{\\mathbf{v}}}\\) (This triangular matrix can be recovered by Cholesky factorization of the positive definite \\(\\Sigma_{\\mathbf{v}}^{k}\\)). 
The functional forms of \\(\\pi^{k}(\\mathbf{z})\\in\\mathbb{R}\\), \\(\\mathbf{\\mu}_{\\mathbf{v}}(\\mathbf{z})\\in\\mathbb{R}^{d_{\\mathbf{v}}}\\), and the \\(n(n+1)/2\\) entries of \\(L_{\\mathbf{v}}^{k}\\) are neural networks, their values are given by the outputs of the decoder for all mixture components \\(k\\in\\{1,\\ldots,K\\}\\), i.e. \\(\\mathbf{w}_{\\mathcal{D}}=\\mathcal{D}^{\\mathbf{w}_{\\mathcal{D}}}(\\mathbf{z})=\\{\\pi^{k}, \\mathbf{\\mu}_{\\mathbf{v}}^{k},L_{\\mathbf{v}}^{k}\\}_{1,\\ldots,K}\\). The positivity of the diagonal elements of \\(L_{\\mathbf{v}}^{k}\\) is ensured by a **softplus** activation function \\[f(x)=\\ln(1+\\exp(x)) \\tag{3}\\] in the respective outputs of the decoder. The mixing coefficients satisfy \\(0\\leq\\pi^{k}<1\\) and \\(\\sum_{k=1}^{K}\\pi^{k}=1\\). To ensure these conditions, the respective outputs of the decoder are passed through a **softmax** activation \\[\\sigma(\\mathbf{x})_{i}=\\frac{e^{\\mathbf{x}_{i}}}{\\sum_{i}e^{\\mathbf{x}_{i}}}. \\tag{4}\\] The rest (non-diagonal elements and mean vector) of the decoder outputs have linear activations, so no restriction in their sign. In total, the decoder output is composed of \\(K(n-1)n/2+Kn\\) single valued outputs with linear activation for the non-diagonal elements of \\(L_{\\mathbf{v}}^{k}\\) and the mean vectors \\(\\mathbf{\\mu}_{\\mathbf{v}}^{k}\\), and \\(Kn\\) positive outputs with softplus activation for the diagonal of \\(L_{\\mathbf{v}}^{k}\\), and \\(K\\) outputs with softmax activation for the mixing coefficients. MD networks are employed in stochastic systems (e.g. molecular dynamics). In the following, we assume that the high-dimensional state of the molecular system is described by \\(\\mathbf{s}_{t}\\in\\mathbb{R}^{d_{\\mathbf{s}}}\\). The decoder \\(\\mathcal{D}^{\\mathbf{w}_{\\mathcal{D}}}\\) is modeled with an MD network. The MD approximates the probability distribution of the state \\(\\mathbf{\\tilde{s}}_{t}\\sim p(\\cdot;\\mathbf{w}_{\\text{MD}})\\), where \\(\\mathbf{w}_{\\text{MD}}=\\mathcal{D}^{\\mathbf{w}_{\\mathcal{D}}}(\\mathbf{z}_{t})\\) is the output of the decoder that parametrizes the distribution. The optimal parameters of the MD autoencoder are Figure 2: Illustration of the permutation invariant encoder. The input of the network is composed of \\(N\\) atomic states that are permutation invariant, e.g. positions \\(\\{\\mathbf{x}^{1},\\ldots,\\mathbf{x}^{N}\\}\\) of \\(N\\) particles in a particle simulation, each one with dimension \\(d_{\\mathbf{x}}\\), i.e. \\(\\mathbf{x}^{i}\\in\\mathbb{R}^{d_{\\mathbf{x}}}\\), \\(\\forall i\\in\\{1,\\ldots,N\\}\\). A transformation \\(\\phi(\\cdot):\\mathbb{R}^{d_{\\mathbf{x}}}\\to\\mathbb{R}^{d_{\\mathbf{p}}}\\) is applied to each atomic state separately, mapping to a high-dimensional latent feature space. The mean of these latent representations of the atomic states is computed, leading to a singe latent feature that is permutation invariant with respect to the input. The final layer of the encoder maps the high-dimensional feature to a low dimensional representation \\(\\mathbf{z}\\), which is again permutation invariant with respect to the input, representing the encoding of the global state. 
identified by maximizing the log-likelihood of the reconstruction, \\[\\mathbf{w}_{\\mathcal{E}}^{\\star},\\mathbf{w}_{\\mathcal{D}}^{\\star}= \\operatorname*{argmax}_{\\mathbf{w}_{\\mathcal{E}},\\mathbf{w}_{\\mathcal{D}}} \\text{ log }p\\big{(}\\mathbf{s}_{t};\\mathbf{w}_{\\text{MD}}\\big{)},\\] \\[\\text{where}\\quad\\mathbf{w}_{\\text{MD}}=\\mathcal{D}^{\\mathbf{w}_{\\mathcal{ D}}}(\\mathbf{z}_{t})=\\mathcal{D}^{\\mathbf{w}_{\\mathcal{D}}}\\big{(}\\mathcal{E}^{\\mathbf{w}_{ \\mathcal{E}}}(\\mathbf{s}_{t})\\big{)}.\\] ### Long Short-Term Memory Recurrent Neural Networks (LSTM-RNNs) On the low-order manifold (coarse, latent state), a Recurrent Neural Network (RNN) is utilized to capture the non-linear, non-Markovian dynamics. The forecasting rule of the RNN is given by \\[\\mathbf{h}_{t}=\\mathcal{H}^{\\mathbf{w}_{\\mathcal{H}}}\\big{(}\\mathbf{z}_{t}, \\mathbf{h}_{t-\\Delta t}\\big{)},\\quad\\mathbf{\\tilde{z}}_{t+\\Delta t}=\\mathcal{R}^{\\mathbf{ w}_{\\mathcal{R}}}\\big{(}\\mathbf{h}_{t}\\big{)}, \\tag{5}\\] where \\(\\mathbf{w}_{\\mathcal{H}}\\) and \\(\\mathbf{w}_{\\mathcal{R}}\\) are the trainable parameters of the network, \\(\\mathbf{h}_{t}\\in\\mathbb{R}^{d_{\\mathbf{h}}}\\) is an internal hidden memory state, and \\(\\mathbf{\\tilde{z}}_{t+\\Delta t}\\) is a prediction of the latent state. The RNN is trained to minimize the forecasting loss \\(||\\mathbf{\\tilde{z}}_{t+\\Delta t}-\\mathbf{z}_{t+\\Delta t}||_{2}^{2}\\), which can be written as \\[||\\mathbf{\\tilde{z}}_{t+\\Delta t}-\\mathbf{z}_{t+\\Delta t}||_{2}^{2}=|| \\mathcal{R}^{\\mathbf{w}_{\\mathcal{R}}}\\big{(}\\mathbf{h}_{t}\\big{)}-\\mathbf{z}_{t+\\Delta t} ||_{2}^{2}=||\\mathcal{R}^{\\mathbf{w}_{\\mathcal{R}}}\\big{(}\\mathcal{H}^{\\mathbf{w}_{ \\mathcal{H}}}\\big{(}\\mathbf{z}_{t},\\mathbf{h}_{t-\\Delta t}\\big{)}\\big{)}-\\mathbf{z}_{t+ \\Delta t}||_{2}^{2}. \\tag{6}\\] This leads to \\[\\mathbf{w}_{\\mathcal{H}}^{\\star},\\mathbf{w}_{\\mathcal{R}}^{\\star}= \\operatorname*{argmin}_{\\mathbf{w}_{\\mathcal{H}},\\mathbf{w}_{\\mathcal{R}}} ||\\mathcal{R}^{\\mathbf{w}_{\\mathcal{R}}}\\big{(}\\mathcal{H}^{\\mathbf{w}_{\\mathcal{H}}}\\big{(}\\mathbf{z}_{t},\\mathbf{h}_{t-\\Delta t}\\big{)}\\big{)}-\\mathbf{z}_{ t+\\Delta t}||_{2}^{2}. \\tag{7}\\] The RNNs are trained with backpropagation through time (BPTT) [36]. In this work, the hidden-to-hidden mapping \\(\\mathcal{H}^{\\mathbf{w}_{\\mathcal{H}}}\\) takes the form of the long short-term memory (LSTM) [35] cell, while the output mapping \\(\\mathcal{R}^{\\mathbf{w}_{\\mathcal{R}}}\\) is given by a linear transformation, i.e. \\[\\mathbf{\\tilde{z}}_{t+\\Delta t}=W_{\\mathbf{z},\\mathbf{h}}\\mathbf{h}_{t}, \\tag{8}\\] where \\(W_{\\mathbf{z},\\mathbf{h}}\\in\\mathbb{R}^{d_{\\mathbf{z}}\\times d_{\\mathbf{h}}}\\). As a consequence, the set of trainable weights of the hidden-to-output mapping is just one matrix, \\(\\mathbf{w}_{\\mathcal{R}}=W_{\\mathbf{z},\\mathbf{h}}\\in\\mathbb{R}^{d_{\\mathbf{z}}\\times d_{\\mathbf{ h}}}\\). The LSTM possesses two hidden states, a cell state \\(\\mathbf{c}\\) and an internal memory state \\(\\mathbf{h}\\).
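A minimal sketch of this latent propagation rule (Eqs. (5)-(8)), assuming PyTorch; the class name and the dimensions used are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

class LatentPropagator(nn.Module):
    """LSTM cell for the hidden-to-hidden mapping H (Eqs. (9)-(10)) and a
    linear readout W_{z,h} for the output mapping R of Eq. (8)."""
    def __init__(self, d_z=8, d_h=25):
        super().__init__()
        self.cell = nn.LSTMCell(d_z, d_h)
        self.readout = nn.Linear(d_h, d_z, bias=False)   # W_{z,h}

    def forward(self, z_seq):
        """z_seq: (T, d_z) latent trajectory; returns one-step-ahead predictions."""
        h = torch.zeros(1, self.cell.hidden_size)
        c = torch.zeros(1, self.cell.hidden_size)
        preds = []
        for z_t in z_seq:
            h, c = self.cell(z_t.unsqueeze(0), (h, c))   # hidden-to-hidden mapping
            preds.append(self.readout(h))                # \tilde{z}_{t+\Delta t}
        return torch.cat(preds, dim=0)

# training step minimizing the forecasting loss of Eq. (7):
# model = LatentPropagator()
# loss = ((model(z_seq[:-1]) - z_seq[1:]) ** 2).sum(dim=-1).mean()
```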
The hidden-to-hidden mapping \\[\\mathbf{h}_{t},\\mathbf{c}_{t}=\\mathcal{H}^{\\mathbf{w}_{\\mathcal{H}}}\\big{(}\\mathbf{z}_{t},\\mathbf{h}_{t-\\Delta t },\\mathbf{c}_{t-\\Delta t}\\big{)} \\tag{9}\\] takes the form \\[\\mathbf{g}_{t}^{f} =\\sigma_{f}\\big{(}W_{f}[\\mathbf{h}_{t-\\Delta t},\\mathbf{z}_{t}]+\\mathbf{b}_{f }\\big{)}\\qquad\\mathbf{g}_{t}^{i}=\\sigma_{i}\\big{(}W_{i}[\\mathbf{h}_{t-\\Delta t},\\mathbf{z}_ {t}]+\\mathbf{b}_{i}\\big{)} \\tag{10}\\] \\[\\tilde{\\mathbf{c}}_{t} =\\tanh\\big{(}W_{c}[\\mathbf{h}_{t-\\Delta t},\\mathbf{z}_{t}]+\\mathbf{b}_{c} \\big{)}\\qquad\\mathbf{c}_{t}=\\mathbf{g}_{t}^{f}\\odot\\mathbf{c}_{t-\\Delta t}+\\mathbf{g}_{t}^{i} \\odot\\tilde{\\mathbf{c}}_{t}\\] \\[\\mathbf{g}_{t}^{\\mathbf{z}} =\\sigma_{h}\\big{(}W_{h}[\\mathbf{h}_{t-\\Delta t},\\mathbf{z}_{t}]+\\mathbf{b}_{h }\\big{)}\\qquad\\mathbf{h}_{t}=\\mathbf{g}_{t}^{\\mathbf{z}}\\odot\\tanh(\\mathbf{c}_{t}),\\] where \\(\\mathbf{g}_{t}^{f},\\mathbf{g}_{t}^{i},\\mathbf{g}_{t}^{\\mathbf{z}}\\in\\mathbb{R}^{d_{\\mathbf{h}}}\\) are the gate vector signals (forget, input and output gates), \\(\\mathbf{z}_{t}\\in\\mathbb{R}^{d_{\\mathbf{z}}}\\) is the latent input at time \\(t\\), \\(\\mathbf{h}_{t}\\in\\mathbb{R}^{d_{\\mathbf{h}}}\\) is the hidden state, \\(\\mathbf{c}_{t}\\in\\mathbb{R}^{d_{\\mathbf{h}}}\\) is the cell state, while \\(W_{f}\\), \\(W_{i}\\), \\(W_{c},W_{h}\\in\\mathbb{R}^{d_{\\mathbf{h}}\\times(d_{\\mathbf{h}}+d_{\\mathbf{z}})}\\) are weight matrices and \\(\\mathbf{b}_{f},\\mathbf{b}_{i},\\mathbf{b}_{c},\\mathbf{b}_{h}\\in\\mathbb{R}^{d_{\\mathbf{h}}}\\) are biases. The symbol \\(\\odot\\) denotes the element-wise product. The activation functions \\(\\sigma_{f}\\), \\(\\sigma_{i}\\) and \\(\\sigma_{h}\\) are sigmoids. The dimension of the hidden state \\(d_{\\mathbf{h}}\\) (number of hidden units) controls the capacity of the cell to encode history information. The set of trainable parameters of the recurrent mapping \\(\\mathcal{H}^{\\mathbf{w}_{\\mathcal{H}}}\\) is thus given by \\[\\mathbf{w}_{\\mathcal{H}}=\\{\\mathbf{b}_{f},\\mathbf{b}_{i},\\mathbf{b}_{c},\\mathbf{b}_{h},W_{f}, W_{i},W_{c},W_{h}\\}. \\tag{11}\\] ## Supplementary Information II: Comparison Measures In this section, we elaborate on the metrics used to quantify the effectiveness of the proposed approach to capture the dynamics and the state statistics of the systems under study. The mean normalized absolute difference (MNAD) is used to quantify the prediction performance of a method in a deterministic system. This metric was selected to facilitate comparison of LED with equation-free variants [32]. The Wasserstein distance (WD) and the L1-Norm histogram distance (L1-NHD) are utilized to quantify the difference between distributions. These metrics are used in stochastic systems or in the comparison of state distributions. ### Mean normalised absolute difference (MNAD) Assume that a model is used to predict a spatiotemporal field \\(y(x,t)\\) at discrete spatial locations \\(x_{i}\\) and times \\(t_{j}\\). Predicted values from a model (neural network, etc.) are denoted with \\(\\tilde{y}\\), while the groundtruth (simulation of the equations with a solver based on first principles) is denoted with \\(y\\).
The normalized absolute difference (NAD) between the model output and the groundtruth is defined as \\[\\text{NAD}(t_{j})=\\frac{1}{N_{x}}\\sum_{i=1}^{N_{x}}\\frac{|y(x_{i},t_{j})- \\tilde{y}(x_{i},t_{j})|}{\\max_{i,j}(y(x_{i},t_{j}))-\\min_{i,j}(y(x_{i},t_{j}))}, \\tag{12}\\] where \\(N_{x}\\) is the dimensionality of the discretized state \\(x\\). The NAD depends on the time \\(t_{j}\\). The mean NAD (MNAD) is given by the mean over time of the NAD score, i.e. \\[\\text{MNAD}=\\frac{1}{N_{T}}\\sum_{j=1}^{N_{T}}\\text{NAD}(t_{j}), \\tag{13}\\] where \\(N_{T}\\) is the number of time-steps considered. The MNAD is used in the FitzHugh-Nagumo equation and the Kuramoto-Sivashinsky equation to quantify the prediction accuracy of LED and to benchmark against other methods (e.g. other propagators on the latent space) or against other equation-free variants. ### Pearson Correlation Coefficient Assume, as before, the spatiotemporal field \\(y(x,t)\\) at discrete spatial locations \\(x_{i}\\) and times \\(t_{j}\\). This can be vectorized as \\(y_{vec}=\\text{vec}(y(x,t))\\in\\mathbb{R}^{N_{x}\\cdot N_{T}\\times 1}\\). The same applies to the vectorized prediction \\(\\tilde{y}_{vec}=\\text{vec}(\\tilde{y}(x,t))\\in\\mathbb{R}^{N_{x}\\cdot N_{T} \\times 1}\\). We can compute the Pearson correlation coefficient, or simply correlation, as \\[\\text{Correlation}=\\frac{\\text{COV}\\left(y_{vec},\\tilde{y}_{vec}\\right)}{ \\sigma_{y_{vec}}\\sigma_{\\tilde{y}_{vec}}}, \\tag{14}\\] where COV is the covariance and \\(\\sigma\\) is the standard deviation. The correlation is used as a prediction performance metric in the Kuramoto-Sivashinsky equation. ### Wasserstein Distance The Wasserstein distance (WD) is a metric used to quantify the difference between the distribution functions of two random variables. It is defined as the integral of the absolute difference of the inverse Cumulative Distribution Functions (CDFs) of the random variables. Assuming two random variables \\(Z_{1}\\) and \\(Z_{2}\\), with CDFs given by \\(\\tau=F_{Z_{1}}(z)\\) and \\(F_{Z_{2}}(z)\\), with \\(\\tau\\in[0,1]\\), the Wasserstein metric is defined as \\[\\text{WD}(Z_{1},Z_{2})=\\int_{0}^{1}|F_{Z_{1}}^{-1}(\\tau)-F_{Z_{2}}^{-1}(\\tau)| \\,d\\tau. \\tag{15}\\] In high-dimensional problems, where the random variable is multivariate (a random vector), we report the mean WD of each variable after marginalization of all others. ### L1-Norm Histogram Distance In order to quantify the difference between the distributions of two multivariate random variables \\(Z_{1}\\) and \\(Z_{2}\\), we employ, in addition to the WD, the L1-Norm histogram distance. We measure this metric based on the L1 norm of the difference between the normalized histograms of the random variables computed on the same grid. The number of bins for the computation of the histograms is selected according to the Rice rule, given by \\(N_{bins}=\\left\\lceil\\,2\\sqrt[3]{n}\\,\\right\\rceil\\), where \\(n\\) is the number of observations in the sample \\(z\\). The WD and the L1-NHD are used to measure the difference between the spatial particle distributions in the Advection-Diffusion model. ## Supplementary Information III: Results ### LED for Advection-Diffusion Equation The LED method is applied to the simulation of the advection-diffusion equation. The microscale description of the Advection-Diffusion (AD) process is modeled with a system of \\(N=1000\\) particles on a bounded domain \\(\\Omega=[-L/2,L/2]^{d_{\\mathbf{x}}}\\).
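Before describing the particle model, a minimal sketch of the comparison measures defined in Supplementary Information II (NAD/MNAD, L1-NHD, WD), assuming NumPy and SciPy; whether the histograms are normalized as densities or probability masses is an implementation detail assumed here:

```python
import numpy as np
from scipy.stats import wasserstein_distance   # implements Eq. (15) for 1-D samples

def mnad(y_true, y_pred):
    """MNAD of Eqs. (12)-(13); both arrays have shape (N_T, N_x)."""
    span = y_true.max() - y_true.min()
    nad = np.mean(np.abs(y_true - y_pred), axis=1) / span   # NAD(t_j)
    return nad.mean()

def l1_nhd(z1, z2):
    """L1-norm histogram distance, with the Rice rule for the bin count."""
    n_bins = int(np.ceil(2 * np.cbrt(z1.size)))
    lo, hi = min(z1.min(), z2.min()), max(z1.max(), z2.max())
    h1, _ = np.histogram(z1, bins=n_bins, range=(lo, hi), density=True)
    h2, _ = np.histogram(z2, bins=n_bins, range=(lo, hi), density=True)
    return np.abs(h1 - h2).sum() * (hi - lo) / n_bins

# For random vectors, the text reports the mean wasserstein_distance over the
# marginals of each variable.
```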
The particle dynamics are modeled with the stochastic differential equation (SDE) \\[\\mathrm{d}\\mathbf{x}_{t}=\\mathbf{u}_{t}\\mathrm{d}t+\\sqrt{D}\\,\\mathrm{d}\\mathbf{W}_{t}, \\tag{16}\\] where \\(\\mathbf{x}_{t}\\in\\Omega\\) denotes the position of the particle at time \\(t\\), \\(D\\in\\mathbb{R}\\) is the diffusion coefficient, \\(\\mathrm{d}\\mathbf{W}_{t}\\in\\mathbb{R}^{d_{\\mathbf{x}}}\\) is a Wiener process, and \\(\\mathbf{u}_{t}=\\mathbf{A}\\cos(\\mathbf{\\omega}t)\\in\\mathbb{R}^{d_{\\mathbf{x}}}\\) is a cosine advection (drift) term. In the following, the three-dimensional space \\(d_{\\mathbf{x}}=3\\) is considered, with \\(D=0.1\\), \\(\\mathbf{A}=[1,1.74,0.0]^{T}\\), \\(\\mathbf{\\omega}=[0.2,1.0,0.5]^{T}\\), and a domain size of \\(L=1\\). The Péclet number quantifies the rate of advection relative to the rate of diffusion, i.e. \\(Pe=\\frac{LU}{D}\\). In this work, \\(L=1\\) and \\(U=|\\mathbf{A}|_{2}\\approx 2\\) suggest a Péclet number of \\(Pe=20\\). Equation (16) is solved with explicit Euler integration with \\(\\Delta t=10^{-2}\\), initial conditions \\(\\mathbf{x}_{0}=\\mathbf{0}\\), and reflective boundary conditions ensuring that \\(\\mathbf{x}_{t}\\in\\Omega,\\forall t\\). The positions of the particles are saved at a coarser time-step \\(\\Delta t=1\\). Three datasets are generated by starting from randomly selected initial conditions. The training and validation datasets consist of 500 samples each, and the test dataset consists of 4000 samples. The full state of the system is high-dimensional, i.e. \\(\\mathbf{s}_{t}=[\\mathbf{x}_{t}^{1};\\dots;\\mathbf{x}_{t}^{N}]^{T}\\in\\mathbb{R}^{N\\times 3}\\). The particles concentrate on a few meta-stable states and transition between them, suggesting that the collective dynamics can be captured by a few latent variables. It is not straightforward, however, to determine a-priori the number of these states and the patterns of collective motion. LED unravels this information and provides a computationally efficient multiscale model to approximate the system dynamics. An AE with a permutation invariant input layer with a latent dimension \\(d_{\\mathbf{z}}\\), an MD decoder, and a stateful LSTM-RNN are employed to learn and forecast the dynamics on the low-dimensional manifold. In this case, the latent dimension of LED is tuned to \\(d_{\\mathbf{z}}=8\\) based on the log-likelihood on the validation data. Reducing the latent dimension further caused a decrease in the validation log-likelihood, while increasing the latent dimension did not lead to any significant improvement. Regarding the rest of the LED hyper-parameters, the \\(\\phi\\) function consists of \\(3\\times 50\\) layers and tanh activation, the permutation invariant space has dimension \\(M=100\\) with a mean feature function, and the mapping \\(g\\) consists of a network with \\(3\\times 50\\) layers and tanh activation, reducing the dimensionality to the desired latent state of dimension \\(d_{\\mathbf{z}}=8\\). The decoder is composed of \\(3\\times 50\\) layers and a mixture density output layer with 25 hidden units and 5 kernels, outputting the parameters for the mixture coefficients, the means, and the covariance matrices of the 5 kernels. The RNN propagating the dynamics in the latent space is composed of one stateful LSTM layer with 25 nodes and was trained with BPTT with a sequence length of 100. Figure 4: **A)** The L1-Norm Histogram distance averaged over time and initial conditions.
The self-similar error is plotted for reference, as errors below this level are statistically insignificant. **B)** The evolution of the L1-Norm Histogram distance in time averaged over initial conditions. **C)** The Wasserstein distance averaged over time and initial conditions. **D)** The evolution of the Wasserstein distance in time averaged over initial conditions. **E)** The speed-up of LED compared to the micro-scale solver is plotted w.r.t. \\(\\rho\\). After training the RNN, the efficiency of LED in forecasting the dynamics is tested on 30 trajectories starting from different initial conditions randomly sampled from the testing data. The final prediction horizon is set to \\(T_{f}=2000\\). The particle spatial distribution predicted by LED is compared against the groundtruth in terms of the L1-Norm Histogram distance (L1-NHD) and the Wasserstein distance (WD). The results are shown in Figure 4. Three LED variants are considered. The first variant does not evolve the dynamics on the particle level (Latent-LED, \\(T_{\\mu}=0\\)); its error increases with time, and it exhibits the highest errors on average. The second and third variants (Multiscale-LED) evolve the low-order manifold dynamics (coarse scale) for \\(T_{m}\\) time units and the particle dynamics (fine scale) for \\(T_{\\mu}=5\\) to correct iteratively for the statistical error. This effect is due to the explicit dependence of the coarse system dynamics on time, as the \\(\\cos(\\mathbf{\\omega}t)\\) advection term dominates. Two values for \\(T_{m}\\) are considered: \\(T_{m}=50\\), leading to a relative ratio of coarse to fine simulation time of \\(\\rho=T_{m}/T_{\\mu}=10\\), and \\(T_{m}=100\\), leading to \\(\\rho=20\\). This incurs additional computational cost induced by the evolution of the high-dimensional state. The warm-up time is \\(T_{warm}=100\\) for all variants. As the multiscale ratio \\(\\rho=T_{m}/T_{\\mu}\\) is increased, spending more time in the latent propagation, the errors gradually increase. The propagation in the low-dimensional latent space is far less computationally expensive compared to the evolution of the high-dimensional dynamics. As \\(\\rho\\) is increased, greater computational savings are achieved, albeit at the cost of higher approximation error, as depicted in Figure 4. LED is able to generalize to different numbers of particles, as demonstrated in Section III.1.2. The effectiveness of LED depending on the diffusion coefficient \\(D\\) is shown in Figure 5. LED exhibits consistently lower error as the Péclet number decreases. At lower Péclet numbers, diffusion becomes dominant. Since the diffusion is isotropic, it quickly brings the system to a mean solution. An example of the evolution of the latent state, the errors on the first two moments, and the L1-NHD between the groundtruth and the predicted spatial distribution of particles in an iterative prediction on the test data is shown in Figure 6a. The initial warm-up period of LED is set to \\(T_{warm}=100\\). LED captures the variance of the particle positions, but due to the iterative error propagation the error on the distribution (L1-NHD and mean position) increases with time. In Figure 7, the latent space of LED is clustered to identify frequently visited metastable states that can be mapped back to their respective particle configurations using the decoder. Figure 5: Analysis of the performance of LED for different Péclet numbers \\(Pe\\in\\{20,40,200,400\\}\\).
Three LED variants are considered: Latent-LED (\\(T_{\\mu}=0\\)) and two variants of Multiscale-LED with \\(T_{\\mu}=5\\) and \\(T_{m}\\in\\{50,100\\}\\). The warm-up time is \\(T_{warm}=100\\) for all variants. **A)** The Wasserstein distance and **B)** L1-Norm Histogram distance between the particle spatial distributions averaged over time and initial conditions, plotted with respect to the multiscale ratio \\(\\rho\\). The methods consistently exhibit lower error as the Péclet number decreases. **C)** The speed-up is plotted w.r.t. \\(\\rho\\). Figure 6: **(a)** LED applied on the 3-dimensional Advection-Diffusion equation, iteratively forecasting the evolution of the particles starting from an initial condition in the test data. The initial warm-up period of LED is set to \\(T_{warm}=100\\). An AE with a permutation invariant input layer and a latent dimension of \\(d_{\\mathbf{z}}=8\\) is utilized to coarse-grain the high-dimensional dynamics. The decoder of LED maps from the latent space to the particle configuration using an MD decoder. We plot the evolution of the latent state in time, along with the L1-NHD between the predicted and groundtruth particle distributions and the absolute error on the mean and the standard deviation of the particle distributions. LED can forecast the evolution of the particle positions with low error, even though the total dimensionality of the original state describing the configuration of the \\(N=1000\\) particles of the system is \\(\\mathbf{s}_{t}\\in\\mathbb{R}^{1000\\times 3}\\). The network learned a \\(d_{\\mathbf{z}}=8\\)-dimensional coarse-grained representation of this configuration. However, due to the iterative prediction with LED, the error on the predicted distribution of particles increases with time. **(b)** Multiscale propagation in LED. To alleviate the iterative error propagation, the multiscale propagation is utilized with \\(T_{m}=100\\), \\(T_{\\mu}=5\\), \\(\\rho=20\\). Due to the iterative transition between propagation in the latent space \\(\\mathbf{z}_{t}\\) of LED for \\(T_{m}\\) and evolution of the micro-scale particle dynamics for \\(T_{\\mu}\\), the effect of iterative statistical error propagation is alleviated. Figure 7: **A)** Evolution of the second PCA mode of the latent state \\(\\mathbf{z}_{t}\\in\\mathbb{R}^{d_{\\mathbf{z}}=8}\\) against the first mode. Higher color intensity denotes higher density. Six high-density regions are identified. Spectral clustering on the PCA modes of the latent dynamics reveals the clusters. The six cluster centers are marked, while color illustrates the cluster membership. The LED probabilistic decoder is employed to map each cluster center to a realization of a high-dimensional simulation state. LED effectively unravels six metastable states of the Advection-Diffusion equation, along with the transitions between them, representing the low-order effective dynamics. **B)** Evolution of the third PCA mode against the second one, colored according to cluster assignment. **C)** Density of the particle positions from simulation plotted against the distribution of the positions predicted by LED. We note the good agreement between the two distributions. Hyper-parameter Tuning The hyper-parameters of LED are given in Table 1 for the autoencoder and in Table 2 for the RNN.
\\begin{table} \\begin{tabular}{|c|c|} \\hline Hyper-parameter & Values \\\\ \\hline \\hline Number of AE layers & \\{3\\} \\\\ Size of AE layers & \\{50\\} \\\\ Activation of AE layers & \\(\\tanh(\\cdot)\\) \\\\ Latent dimension & \\(\\{1,2,3,4,5,6,7,8,9,10,12,16,18,22,24,28,32,64\\}\\) \\\\ Residual connections & False \\\\ Variational & True/False \\\\ Permutation Invariant Layer \\(d_{p}\\) & \\(\\{200,1001\\}\\) \\\\ Number of MD kernels \\(K\\) & \\{5\\} \\\\ Hidden units of MD decoder & \\{50\\} \\\\ Input/Output data scaling & Min-Max in \\([0,1]\\) \\\\ Noise level in the data & \\(\\{0,1,10\\}\\) (\\%e) \\\\ Weight decay rate & \\(\\{0.0,0.00001\\}\\) \\\\ Batch size & 32 \\\\ Initial learning rate & \\(0.001\\) \\\\ \\hline \\end{tabular} \\end{table} Table 1: Autoencoder hyper-parameters for Advection-Diffusion in 3-D (\\(d_{\\mathbf{x}}=3\\)) \\begin{table} \\begin{tabular}{|c|c|} \\hline Hyper-parameter & Values \\\\ \\hline \\hline Number of AE layers & \\{3\\} \\\\ Size of AE layers & \\{50\\} \\\\ Activation of AE layers & \\(\\tanh(\\cdot)\\) \\\\ Latent dimension & \\{8\\} \\\\ Residual connections & False \\\\ Variational & False \\\\ Permutation Invariant Layer \\(d_{p}\\) & \\{200\\} \\\\ Number of MD kernels \\(K\\) & \\{5\\} \\\\ Hidden units of MD decoder & \\{50\\} \\\\ Input/Output data scaling & Min-Max in \\([0,1]\\) \\\\ Noise level in the data & \\{0\\} \\\\ Weight decay rate & \\{0.0\\} \\\\ Batch size & 32 \\\\ Initial learning rate & \\(0.001\\) \\\\ BBTT Sequence length & \\{100\\} \\\\ RNN cell type & \\{lstm,gru\\} \\\\ Number of RNN layers & \\{1\\} \\\\ Size of RNN layers & \\{25\\} \\\\ Activation of RNN Cell & \\(\\tanh(\\cdot)\\) \\\\ \\hline \\end{tabular} \\end{table} Table 2: LED-RNN hyper-parameters for Advection-Diffusion in 3-D (\\(d_{\\mathbf{x}}=3\\))Generalization to Different Number of Particles In this section, we provide additional results on the **generalization** of LED for a different **number of particles** in the simulation. Due to the permutation invariant encoder, coarse-graining the high-dimensional input of LED, the network is expected to be able to generalize to a different number of particles, since the identified coarse representation should rely on global statistical quantities, and not depend on individual positions. LED trained in configurations of \\(N=1000\\) particles is utilized to forecast the evolution of \\(N=400\\) particles evolving according to the Advection-Diffusion equation. The propagation of the errors is plotted in Figure 8. The initial warm-up period of LED is set to \\(T_{warm}=100\\) for all variants. We observe an excellent generalization ability of the network. Figure 8: LED trained on particle configurations with \\(N=1000\\) number of particles, learned an \\(d_{\\mathsf{z}}=8\\) dimensional coarse-grained representation of this configuration. We utilize two models with \\(T_{\\mu}=0\\) (iterative latent propagation) and \\(T_{\\mu}=100,\\rho=20\\) (multiscale forecasting) to forecast the evolution of a particle configuration composed of \\(N=400\\) particles to test the generalization ability of the model. The initial warm-up period is set to \\(T_{warm}=100\\). We plot the latent space, the L1-NHD between the densities of the particle positions, and the error on the first two moments, for both variants of LED. We observe that the LED is able to successfully generalize to the case of \\(N=400\\) particles. 
### FitzHugh-Nagumo Model (FHN) The hyper-parameters of the Autoencoder are reported in Appendix III.2.1. Input and output are scaled to \\([0,1]\\) and an output activation function of the form \\(1+0.5\\tanh(\\cdot)\\) is used to ensure that the data at the output lie in this range. The architecture of the CNN we employed is given in Figure 10. In this case, the inhibitor and activator density are treated as two channels of the CNN. Figure 9: **A)** Comparison of Latent-LED (\\(d_{\\mathbf{z}}=2\\)) with equation-free variants from [32] in forecasting the dynamics of FHN starting from one initial condition from the testing dataset. The density and mean MNAD error (averaged over 32 initial conditions) between the predicted and ground-truth evolution of the activator density is plotted. **B)** Comparison of different macrodynamics propagators (\\(\\clubsuit\\) AE-LSTMed2end; \\(\\clubsuit\\) AE-LSTM; \\(\\clubsuit\\) AE-MLP; \\(\\clubsuit\\) AE-RC; \\(\\clubsuit\\) AE-SINDy) in iterative latent forecasting. The MNAD error on the activator density is plotted. **C)** The activator mean MNAD error (averaged over 32 initial conditions) in multiscale forecasting with LED (AE-LSTMed2end with \\(d_{\\mathbf{z}}=2\\)) and its density plotted as a function of the macro-to-micro ratio \\(\\rho=T_{m}/T_{\\mu}\\). **D)** The speed-up of LED (AE-LSTMed2end with \\(d_{\\mathbf{z}}=2\\)) compared to the LB solver plotted w.r.t. \\(\\rho\\). The results for Latent-LED (\\(T_{\\mu}=0\\)) are denoted with the label "Latent". As \\(T_{m}\\) is increased (increasing \\(\\rho\\)), the speed-up is increased, albeit at the cost of an increasing MNAD error. **E)** The evolution of the activator density in time starting from an initial condition in the test data, along with **F)** the prediction of Latent-LED and **G)** the absolute difference. Hyper-parameters and Training Time The hyper-parameter tuning of the autoencoder of LED and training times are reported in Table 3. PCA and Diffusion maps have very short fitting (training) times of approximately one minute. The layers of the CNN autoencoder employed in the FHN and its training times are given in Table 5. The architecture of the CNN autoencoder employed in the FHN is depicted in Figure 10. The hyper-parameters for the LSTM and its training times are given in Table 4. For the MLP, a three-layered network with CELU activations is employed. Training time for the MLP is 100 minutes. The hyper-parameters and training times for the RC are given in Table 6. The hyper-parameters and training times for SINDy are given in Table 7. In all cases, the parameters of the best performing model on the validation data are denoted in red.
\\begin{table} \\begin{tabular}{|c|c|} \\hline Hyper-parameter & Values \\\\ \\hline \\hline end2end training & True / False \\\\ Number of AE layers & \\{3\\} \\\\ Size of AE layers & \\{100\\} \\\\ Activation of AE layers & \\(\\text{celu}(\\cdot)\\) \\\\ Latent dimension & 2 \\\\ Input/Output data scaling & \\([0,1]\\) \\\\ Output activation & \\(1+0.5\\tanh(\\cdot)\\) \\\\ Weight decay rate & 0.0 \\\\ Batch size & 32 \\\\ Initial learning rate & 0.001 \\\\ BPTT Sequence length & \\(\\{20,40,60\\}\\) \\\\ Output forecasting loss & True/False \\\\ RNN cell type & lstm \\\\ Number of RNN layers & 1 \\\\ Size of RNN layers & \\(\\{16,32,64\\}\\) \\\\ Activation of RNN Cell & \\(\\tanh(\\cdot)\\) \\\\ Output activation of RNN Cell & \\(1+0.5\\tanh(\\cdot)\\) \\\\ \\hline \\end{tabular} \\begin{tabular}{|c||c|c|c|} \\hline Training times [minutes] & Min & Mean & Max \\\\ \\hline \\hline end2end training & 2.2 & 2.5 & 2.8 \\\\ only the RNN (sequential) & 0.9 & 1.2 & 1.6 \\\\ \\hline \\end{tabular} \\end{table} TABLE IV: LED-RNN hyper-parameters and training times for FHN \\begin{table} \\begin{tabular}{|c|c|} \\hline Layer & ENCODER \\\\ \\hline (0) & ConstantPad1d(padding=(13, 14), value=0.0) \\\\ (1) & ConstantPad1d(padding=(2, 2), value=0.0) \\\\ (2) & Conv1d(2, 8, kernel\\_size=(5,), stride=(1,)) \\\\ (3) & AvgPool1d(kernel\\_size=(2,), stride=(2,), padding=(0,)) \\\\ (4) & CELU(alpha=1.0) \\\\ (5) & ConstantPad1d(padding=(2, 2), value=0.0) \\\\ (6) & Conv1d(8, 16, kernel\\_size=(5,), stride=(1,)) \\\\ (7) & AvgPool1d(kernel\\_size=(2,), stride=(2,), padding=(0,)) \\\\ (8) & CELU(alpha=1.0) \\\\ (9) & ConstantPad1d(padding=(2, 2), value=0.0) \\\\ (10) & Conv1d(16, 32, kernel\\_size=(5,), stride=(1,)) \\\\ (11) & AvgPool1d(kernel\\_size=(2,), stride=(2,), padding=(0,)) \\\\ (12) & CELU(alpha=1.0) \\\\ (13) & ConstantPad1d(padding=(2, 2), value=0.0) \\\\ (14) & Conv1d(32, 4, kernel\\_size=(5,), stride=(1,)) \\\\ (15) & AvgPool1d(kernel\\_size=(2,), stride=(2,), padding=(0,)) \\\\ (16) & Flatten(start\\_dim=-2, end\\_dim=-1) \\\\ (17) & Linear(in\\_features=32, out\\_features=\\(d_{\\mathbf{z}}\\), bias=True) \\\\ (18) & CELU(alpha=1.0) \\\\ & \\(\\mathbf{z}\\in\\mathbb{R}^{d_{\\mathbf{z}}}\\) \\\\ \\hline \\hline Layer & DECODER \\\\ \\hline (1) & Linear(in\\_features=\\(d_{\\mathbf{z}}\\), out\\_features=32, bias=True) \\\\ (2) & CELU(alpha=1.0) \\\\ (3) & Upsample(scale\\_factor=2.0, mode=linear) \\\\ (4) & ConvTranspose1d(4, 32, kernel\\_size=(5,), stride=(1,), padding=(2,)) \\\\ (5) & CELU(alpha=1.0) \\\\ (6) & Upsample(scale\\_factor=2.0, mode=linear) \\\\ (7) & ConvTranspose1d(32, 16, kernel\\_size=(5,), stride=(1,), padding=(2,)) \\\\ (8) & CELU(alpha=1.0) \\\\ (9) & Upsample(scale\\_factor=2.0, mode=linear) \\\\ (10) & ConvTranspose1d(16, 8, kernel\\_size=(5,), stride=(1,), padding=(2,)) \\\\ (11) & CELU(alpha=1.0) \\\\ (12) & Upsample(scale\\_factor=2.0, mode=linear) \\\\ (13) & ConvTranspose1d(8, 2, kernel\\_size=(5,), stride=(1,), padding=(2,)) \\\\ (14) & 1 + 0.5 Tanh() \\\\ (15) & Unpad() \\\\ \\hline \\hline Latent dimension \\(d_{\\mathbf{z}}\\) & \\(\\{1,2,3,4,5,6,7,8,9,10,11,12,16,20,24,28,32,36,40,64\\}\\) \\\\ \\hline \\end{tabular} \\end{table} Table 5: CNN Autoencoder and training times for FHN \\begin{table} \\begin{tabular}{|c|c|} \\hline Hyper-parameter tuning & Values \\\\ \\hline \\hline Solver & Pseudoinverse \\\\ Size & 1000 \\\\ Degree & 10 \\\\ Radius & 0.99 \\\\ Input scaling \\(\\sigma\\) & \\(\\{0.5,1,2\\}\\) \\\\ Dynamics length & 100 \\\\ Regularization 
\\(\\eta\\) & \\(\\{0.0,0.001,0.0001,0.00001\\}\\) \\\\ Noise level per mill & \\(\\{10,20,30,40,100\\}\\) \\\\ \\hline \\end{tabular} \\begin{tabular}{|c|c|c|} \\hline \\multicolumn{3}{|c|}{Training times [minutes]} \\\\ \\hline Min & Mean & Max \\\\ \\hline \\hline 0.15 & 0.18 & 0.19 \\\\ \\hline \\end{tabular} \\end{table} Table 7: SINDy hyper-parameters and training times (in CNN-SINDy) for FHN \\begin{table} \\begin{tabular}{|c|c|} \\hline Hyper-parameter tuning & Values \\\\ \\hline \\hline Degree & \\(\\{1,2,3\\}\\) \\\\ Threshold & \\(\\{0.001,0.0001,0.00001\\}\\) \\\\ Library & Polynomials \\\\ \\hline \\end{tabular} \\begin{tabular}{|c|c|c|} \\hline Min & Mean & Max \\\\ \\hline \\hline 0.14 & 0.23 & 0.32 \\\\ \\hline \\end{tabular} \\end{table} Table 6: Reservoir Computer hyper-parameters and training times (in CNN-RC) for FHN ### Kuramoto-Sivashinsky A KS trajectory is plotted in Figure 11, along with the latent space evolution of Latent-LED and the predicted trajectory. We observe that the long-term climate is reproduced, although the LED is propagating an 8-dimensional latent state. #### a.3.1 Hyper-parameters and Training Times The hyper-parameter tunings and training times for the AE and CNN are given in Table 8 and Table 9 respectively. The architecture of the CNN autoencoder employed in KS is given in Table 10 along with the training times, and depicted in Figure 12. PCA fitting time is approximately one minute. The hyper-parameters and training times of the LSTM-RNN of LED are given in Table 11. The hyper-parameters and training times of the RC are given in Table 12. The hyper-parameters and training times of SINDy are given in Table 13. \\begin{table} \\begin{tabular}{|c|c|} \\hline Hyper-parameter & Values \\\\ \\hline \\hline Convolutional & True \\\\ Kernels & Encoder: \\(5-5-5-5\\), Decoder: \\(5-5-5-5\\) \\\\ Channels & \\(1-16-32-64-8-\\mathbf{d}_{z}-8-64-32-16-1\\) \\\\ Batch normalization & True / False \\\\ Transpose convolution & True / False \\\\ Pooling & Average \\\\ Activation & \\(\\text{celu}(\\cdot)\\) \\\\ Latent dimension & \\(\\{1,2,3,4,5,6,7,8,9,10,11,12,16,20,24,28,32,36,40,64\\}\\) \\\\ Input/Output data scaling & \\([0,1]\\) \\\\ Output activation & \\(1+0.5\\tanh(\\cdot)\\) \\\\ Weight decay rate & \\(0.0\\) \\\\ Batch size & \\(32\\) \\\\ Initial learning rate & \\(0.001\\) \\\\ \\hline \\end{tabular} \\begin{tabular}{|c|c|c|} \\hline \\multicolumn{3}{|c|}{Training times [minutes]} \\\\ \\hline Min & Mean & Max \\\\ \\hline \\hline 236 & 311 & 476 \\\\ \\hline \\end{tabular} \\end{table} Table 9: CNN hyper-parameters for KS Figure 12: The architecture of the CNN employed in KS. 
\\begin{table} \\begin{tabular}{|c|c|} \\hline Layer & ENCODER \\\\ \\hline (1) & ConstantPad1d(padding=(2, 2), value=0.0) \\\\ (2) & Conv1d(1, 16, kernel\\_size=(5,), stride=(1,)) \\\\ (3) & AvgPool1d(kernel\\_size=(2,), stride=(2,), padding=(0,)) \\\\ (4) & CELU(alpha=1.0) \\\\ (5) & ConstantPad1d(padding=(2, 2), value=0.0) \\\\ (6) & Conv1d(16, 32, kernel\\_size=(5,), stride=(1,)) \\\\ (7) & AvgPool1d(kernel\\_size=(2,), stride=(2,), padding=(0,)) \\\\ (8) & CELU(alpha=1.0) \\\\ (9) & ConstantPad1d(padding=(2, 2), value=0.0) \\\\ (10) & Conv1d(32, 64, kernel\\_size=(5,), stride=(1,)) \\\\ (11) & AvgPool1d(kernel\\_size=(2,), stride=(2,), padding=(0,)) \\\\ (12) & CELU(alpha=1.0) \\\\ (13) & ConstantPad1d(padding=(2, 2), value=0.0) \\\\ (14) & Conv1d(64, 8, kernel\\_size=(5,), stride=(1,)) \\\\ (15) & AvgPool1d(kernel\\_size=(2,), stride=(2,), padding=(0,)) \\\\ (16) & CELU(alpha=1.0) \\\\ (17) & Flatten( start\\_dim =-2, end\\_dim = -1) \\\\ (18) & Linear(in\\_features =32, out\\_features =8, bias=True) \\\\ (19) & CELU(alpha=1.0) \\\\ & \\(\\mathbf{z}\\in\\mathbb{R}^{8}\\) \\\\ \\hline \\hline Layer & DECODER \\\\ \\hline (1) & Linear(in\\_features=8, out\\_features=32, bias=True) \\\\ (2) & CELU(alpha=1.0) \\\\ (3) & Upsample(scale\\_factor=2.0, mode=linear) \\\\ (4) & Conv1d(8, 64, kernel\\_size=(5,), stride=(1,), padding=(2,)) \\\\ (5) & CELU(alpha=1.0) \\\\ (6) & Upsample(scale\\_factor=2.0, mode=linear) \\\\ (7) & Conv1d(64, 32, kernel\\_size=(5,), stride=(1,), padding=(2,)) \\\\ (8) & CELU(alpha=1.0) \\\\ (9) & Upsample(scale\\_factor=2.0, mode=linear) \\\\ (10) & Conv1d(32, 16, kernel\\_size=(5,), stride=(1,), padding=(2,)) \\\\ (11) & CELU(alpha=1.0) \\\\ (12) & Upsample(scale\\_factor=2.0, mode=linear) \\\\ (13) & Conv1d(16, 1, kernel\\_size=(5,), stride=(1,), padding=(2,)) \\\\ (14) & 1 + 0.5 Tanh() \\\\ \\hline \\end{tabular} \\end{table} Table 10: CNN Autoencoder for KS \\begin{table} \\begin{tabular}{|c|c|} \\hline Hyper-parameter tuning & Values \\\\ \\hline \\hline Solver & Pseudoinverse \\\\ Size & 1000 \\\\ Degree & 10 \\\\ Radius & 0.99 \\\\ Input scaling \\(\\sigma\\) & \\(\\{0.5,1,2\\}\\) \\\\ Dynamics length & 100 \\\\ Regularization \\(\\eta\\) & \\(\\{0.0,0.001,0.0001,0.00001\\}\\) \\\\ Noise level per mill & \\(\\{10,20,30,40,100\\}\\) \\\\ \\hline \\end{tabular} \\end{table} Table 11: LED (LSTM-RNN) hyper-parameters and training times for KS \\begin{table} \\begin{tabular}{|c|c|} \\hline Hyper-parameter & Values \\\\ \\hline \\hline end2end training & False / True \\\\ Convolutional AE (CNN) & True \\\\ Kernels & Encoder: \\(5-5-5-5\\), Decoder: \\(5-5-5-5\\) \\\\ Channels & \\(1-16-32-64-8-\\mathbf{d}_{z}-8-64-32-16-1\\) \\\\ Batch normalization & False \\\\ Transpose convolution & False \\\\ Pooling & Average \\\\ Activation & \\(\\text{celu}(\\cdot)\\) \\\\ Latent dimension & 8 \\\\ Input/Output data scaling & \\([0,1]\\) \\\\ Output activation & \\(1+0.5\\tanh(\\cdot)\\) \\\\ Weight decay rate & 0.0 \\\\ Batch size & 32 \\\\ Initial learning rate & 0.001 \\\\ BPTT Sequence length & \\(\\{25,50,100\\}\\) \\\\ Output forecasting loss & True/False \\\\ RNN cell type & lstm \\\\ Number of RNN layers & 1 \\\\ Size of RNN layers & \\(\\{64,128,256,512\\}\\) \\\\ Activation of RNN Cell & \\(\\tanh(\\cdot)\\) \\\\ Output activation of RNN Cell & \\(1+0.5\\tanh(\\cdot)\\) \\\\ \\hline \\end{tabular} \\begin{tabular}{|c||c|c|c|} \\hline Training times [minutes] & Min & Mean & Max \\\\ \\hline \\hline end2end training & 476 & 978 & 1140 \\\\ only the RNN (sequential) 
& 960 & 1100 & 1140 \\\\ \\hline \\end{tabular} \\end{table} Table 12: Reservoir Computer hyper-parameters and training times (in CNN-RC) for KS \\begin{table} \\begin{tabular}{|c|c|c|} \\hline Hyper-parameter tuning & Values \\\\ \\hline \\hline Solver & Pseudoinverse \\\\ Size & 1000 \\\\ Degree & 10 \\\\ Radius & 0.99 \\\\ Input scaling \\(\\sigma\\) & \\(\\{0.5,1,2\\}\\) \\\\ Dynamics length & 100 \\\\ Regularization \\(\\eta\\) & \\(\\{0.0,0.001,0.0001,0.00001\\}\\) \\\\ Noise level per mill & \\(\\{10,20,30,40,100\\}\\) \\\\ \\hline \\end{tabular} \\begin{tabular}{|c|c|c|} \\hline Min & Mean & Max \\\\ \\hline \\hline 0.25 & 0.35 & 0.38 \\\\ \\hline \\end{tabular} \\end{table} Table 12: Reservoir Computer hyper-parameters and training times (in CNN-RC) for KS \\begin{table} \\begin{tabular}{|c|c|} \\hline Hyper-parameter tuning & Values \\\\ \\hline \\hline Library & Polynomials \\\\ Degree & \\(\\{1,2,3\\}\\) \\\\ Threshold & \\(\\{0.001,0.0001,0.00001\\}\\) \\\\ \\hline \\end{tabular} \\end{table} Table 13: SINDy hyper-parameters and training times (in CNN-SINDy) for KS ### Viscous Flow Behind a Cylinder The flow behind a cylinder in two-dimensional space is simulated by solving the incompressible Navier-Stokes equations with Brinkman penalization to enforce the no-slip boundary conditions on the surface of the cylinder [51, 52], i.e. \\[\\frac{\\partial\\mathbf{u}}{\\partial t}+(\\mathbf{u}\\cdot\\nabla) \\mathbf{u} =-\\frac{\\nabla p}{\\rho}+\\nu\\Delta\\mathbf{u}+\\lambda\\chi^{(s)}( \\mathbf{u}^{(s)}-\\mathbf{u})\\,, \\tag{17}\\] \\[\\nabla\\cdot\\mathbf{u} =0\\,,\\] where \\(\\mathbf{u}=[u_{x},u_{y}]^{T}\\in\\mathbb{R}^{2}\\) is the velocity, \\(p\\in\\mathbb{R}\\) is the pressure field, \\(\\rho\\) is the density, \\(\\nu\\) is the kinematic viscosity, and \\(\\lambda\\) is the penalization coefficient. The velocity-field \\(\\mathbf{u}^{(s)}\\in\\mathbb{R}^{2}\\) describes the translation of the cylinder. The numerical method of the flow solver is finite differences, with the incompressibility enforced through pressure projection. The computational domain is \\(\\Omega=[0,1]\\times[0,0.5]\\), the cylinder is positioned at \\((0.2,0.5)\\in\\Omega\\), with diameter \\(D=0.075\\). The cylinder is described by the characteristic function \\(\\chi^{(s)}\\), that is \\(\\chi^{(s)}=1\\) inside the cylinder \\(\\Omega^{(s)}\\) and \\(\\chi^{(s)}=0\\) in \\(\\Omega\\setminus\\Omega^{(s)}\\). We consider the application of LED to two Reynolds numbers, \\(Re=100\\) and \\(Re=1000\\), by setting \\(\\nu=0.0001125\\) and \\(\\nu=0.00001125\\) respectively. The Strouhal number (defined in the SI Equation 26) is \\(\\mathrm{St}=0.175\\) and \\(\\mathrm{St}=0.225\\) for \\(Re=100\\) and \\(Re=1000\\), respectively. For both cases, the domain is discretized using \\(1024\\times 512\\) grid-points and the time-step \\(\\delta t\\) is adapted to ensure that the CFL number is fixed at \\(0.5\\). More details on the domain size and simulation are provided in SI Section 3 D. Equation 17 is solved for the velocity \\(\\mathbf{u}\\in\\mathbb{R}^{2}\\) and pressure field \\(p\\in\\mathbb{R}\\) using the pressure projection method. First, we perform advection and diffusion of the flow field in the whole domain \\[\\mathbf{u}^{*}=\\mathbf{u}^{t}+\\delta t\\left(\\nu\\Delta\\mathbf{u}^{t}-(\\mathbf{ u}^{t}\\cdot\\nabla)\\mathbf{u}^{t}\\right)\\,. \\tag{18}\\] The continuity equation requires the field to be divergence-free.
This condition is imposed with the pressure projection \\[\\mathbf{u}^{**}=\\mathbf{u}^{*}-\\delta t\\frac{\\nabla p^{t+1}}{\\rho}\\,. \\tag{19}\\] The pressure field used here is obtained by solving the Poisson equation emerging from the divergence of Equation (19), i.e. \\[\\Delta p^{t+1}=\\frac{\\rho}{\\delta t}\\nabla\\cdot\\mathbf{u}^{*}\\,. \\tag{20}\\] Note that adding Equation (19) and Equation (18) yields the original Equation (17) without the penalization term for Euler timestepping. The time-step is completed by applying the penalization force using \\(\\delta t\\lambda=1\\), \\[\\mathbf{u}^{t+1}=\\mathbf{u}^{**}+\\chi^{(s),t+1}(\\mathbf{u}^{(s),t+1}-\\mathbf{ u}^{**})\\,. \\tag{21}\\] We remark that the penalization force acts as a Lagrange multiplier enforcing the translation motion of the cylinder on the fluid. The temporally discrete equations described above are solved on a grid with spacing \\(\\Delta x\\) using second-order central finite differences for diffusion terms, and a third-order upwind scheme for advection terms. For the simulated impulsively started cylinder, the Reynolds number for a cylinder with diameter \\(D\\) moving with velocity \\(v\\) in a fluid with kinematic viscosity \\(\\nu\\) is defined as \\[Re=\\frac{Dv}{\\nu}. \\tag{22}\\] In the present simulations the cylinder moves with constant velocity \\(v=0.15\\) in the \\(-x\\)-direction. The computational domain is chosen to be \\(\\Omega=[0,1]\\times[0,0.5]\\) and moves with the center of mass of the cylinder with diameter \\(D=0.075\\), which is fixed at \\((0.2,0.5)\\in\\Omega\\). Here we present results for simulations at \\(Re=100\\) and \\(Re=1000\\) by setting the kinematic viscosity to \\(\\nu=0.0001125\\) and \\(\\nu=0.00001125\\), respectively. For both cases, the domain is discretized using \\(1024\\times 512\\) gridpoints and the time-step \\(\\delta t\\) is adapted to ensure that the CFL-number is fixed at \\(0.5\\). The Strouhal number \\(\\mathrm{St}\\) describes the periodic vortex shedding in the wake of the cylinder. It is defined as \\[St=\\frac{Df}{v}, \\tag{23}\\] where \\(f\\) is the frequency of vortex shedding. In our case, \\(\\mathrm{St}=0.175\\) for \\(Re=100\\), and \\(\\mathrm{St}=0.225\\) for \\(Re=1000\\). The state of the simulation is described by the velocity \\(\\mathbf{u}\\in\\mathbb{R}^{2}\\) and the pressure \\(p\\in\\mathbb{R}\\) at each grid point. The drag coefficient \\(C_{d}\\) around the cylinder for the viscosity \\(\\mu\\) and pressure \\(p\\) is calculated as \\[\\mathbf{F}_{\\mu} =\\oiint\\mu(\\nabla\\mathbf{u}+\\nabla\\mathbf{u}^{\\intercal})\\cdot \\mathbf{n}\\,dS, \\tag{24}\\] \\[\\mathbf{F}_{p} =\\oiint-p\\mathbf{n}\\,dS,\\] (25) \\[C_{d,\\mu} =\\frac{2\\cdot\\mathbf{F}_{\\mu}\\cdot\\mathbf{u}_{\\infty}}{\\varrho \\cdot\\|\\mathbf{u}_{\\infty}\\|^{3}\\cdot D},\\] (26) \\[C_{d,p} =\\frac{2\\cdot\\mathbf{F}_{p}\\cdot\\mathbf{u}_{\\infty}}{\\varrho \\cdot\\|\\mathbf{u}_{\\infty}\\|^{3}\\cdot D},\\] (27) \\[C_{d} =C_{d,\\mu}+C_{d,p}, \\tag{28}\\] where \\(\\mathbf{u}_{\\infty}=(1,0)^{\\intercal}\\) is the free-stream velocity and \\(\\mathbf{n}\\) is the outward normal of the cylinder perimeter. The state of LED at every time-step is composed of four fields: the two components of the velocity field \\(u_{x}\\) and \\(u_{y}\\), the scalar pressure \\(p\\) at each grid point, and the vorticity field \\(\\omega\\) computed a posteriori from the velocity field, i.e. \\(\\mathbf{s}_{t}=\\{u_{x},u_{y},p,\\omega\\}\\in\\mathbb{R}^{4\\times 512\\times 1024}\\).
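A minimal sketch of one projection–penalization time step (Eqs. (18)–(21)) follows. For brevity it assumes periodic boundaries and an FFT-based Poisson solver, whereas the solver described above uses finite differences with upwind advection and non-periodic boundary conditions; all names and the grid handling are illustrative assumptions.

```python
import numpy as np

def projection_step(u, v, p, chi, us, vs, dt, dx, nu, rho):
    """One advection-diffusion / pressure-projection / penalization step."""
    def ddx(f):   # central differences, periodic for brevity
        return (np.roll(f, -1, 1) - np.roll(f, 1, 1)) / (2 * dx)
    def ddy(f):
        return (np.roll(f, -1, 0) - np.roll(f, 1, 0)) / (2 * dx)
    def lap(f):
        return (np.roll(f, -1, 0) + np.roll(f, 1, 0) +
                np.roll(f, -1, 1) + np.roll(f, 1, 1) - 4 * f) / dx**2

    # Eq. (18): explicit advection and diffusion
    u_star = u + dt * (nu * lap(u) - (u * ddx(u) + v * ddy(u)))
    v_star = v + dt * (nu * lap(v) - (u * ddx(v) + v * ddy(v)))

    # Eq. (20): Poisson equation for the new pressure (FFT solve)
    rhs = rho / dt * (ddx(u_star) + ddy(v_star))
    kx = 2 * np.pi * np.fft.fftfreq(u.shape[1], d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(u.shape[0], d=dx)
    KX, KY = np.meshgrid(kx, ky)
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                                   # avoid division by zero
    p_new = np.real(np.fft.ifft2(np.fft.fft2(rhs) / (-k2)))

    # Eq. (19): projection onto a divergence-free field
    u_pp = u_star - dt * ddx(p_new) / rho
    v_pp = v_star - dt * ddy(p_new) / rho

    # Eq. (21): Brinkman penalization inside the cylinder (dt * lambda = 1)
    u_new = u_pp + chi * (us - u_pp)
    v_new = v_pp + chi * (vs - v_pp)
    return u_new, v_new, p_new
```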
The simulation state \\(\\mathbf{s}_{t}\\) is saved at a coarse time resolution \\(\\Delta t=0.2\\) for a total of 1000 coarse time-steps. There are 512 grid points along the length of the channel and 1024 grid points along the width of the channel. After discarding the initial transients, 250 time-steps are used for training (equivalent to \\(T=50\\) time units), the next 250 for validation (equivalent to \\(T=50\\) time units), and the next 500 for testing (equivalent to \\(T=100\\) time units). LED employs a Convolutional neural network (CNN) to identify a low-dimensional latent space \\(\\mathbf{z}\\in\\mathbb{R}^{4}\\) in the \\(Re=100\\) scenario, and \\(\\mathbf{z}\\in\\mathbb{R}^{10}\\) in the \\(Re=1000\\) scenario. The CNN architecture is depicted in Figure 13, and the layers are given in Table 14. We experimented with various activation functions, addition of batch-normalization layers, addition of transpose convolutional layers in the decoding part, different kernel sizes, and optimizers. The data are scaled to \\([0,1]\\). The output activation function of the CNN autoencoder is set to \\(0.5+0.5\\tanh(\\cdot)\\), whose image range matches the data range. The hyper-parameter tuning and training times for the LSTM-RNN of LED are given in Table 15. The hyper-parameters and training times for the RC are given in Table 16. The hyper-parameters and training times for SINDy are given in Table 17. Figure 13: The architecture of the CNN employed in the flow behind a cylinder example. \\begin{table} \\begin{tabular}{|c|c|c|} \\hline Layer & ENCODER & \\\\ \\hline (0) & interpolationLayer() \\\\ \\hline (1) & ZeroPad2d(padding=(6, 6, 6, 6), value=0.0) \\\\ \\hline (2) & Conv2d(4, 20, kernel_size=(13, 13), stride=(1, 1)) \\\\ \\hline (3) & BatchNorm2d(20, eps=1e-05, momentum=0.1, affine=0, track_running_stats=True) \\\\ \\hline (4) & AvgPool2d(kernel_size=2, stride=2, padding=0) \\\\ \\hline (5) & CELU(alpha=1.0) \\\\ \\hline (6) & ZeroPad2d(padding=(6, 6, 6, 6), value=0.0) \\\\ \\hline (7) & Conv2d(20, 20, kernel_size=(13, 13), stride=(1, 1)) \\\\ \\hline (8) & BatchNorm2d(20, eps=1e-05, momentum=0.1, affine=0, track_running_stats=True) \\\\ \\hline (9) & AvgPool2d(kernel_size=2, stride=2, padding=0) \\\\ \\hline (10) & CELU(alpha=1.0) \\\\ \\hline (11) & ZeroPad2d(padding=(6, 6, 6, 6), value=0.0) \\\\ \\hline (12) & Conv2d(20, 20, kernel_size=(13, 13), stride=(1, 1)) \\\\ \\hline (13) & BatchNorm2d(20, eps=1e-05, momentum=0.1, affine=0, track_running_stats=True) \\\\ \\hline (14) & AvgPool2d(kernel_size=2, stride=2, padding=0) \\\\ \\hline (15) & CELU(alpha=1.0) \\\\ \\hline (16) & ZeroPad2d(padding=(6, 6, 6, 6), value=0.0) \\\\ \\hline (17) & Conv2d(20, 20, kernel_size=(13, 13), stride=(1, 1)) \\\\ \\hline (18) & BatchNorm2d(20, eps=1e-05, momentum=0.1, affine=0, track_running_stats=True) \\\\ \\hline (19) & AvgPool2d(kernel_size=2, stride=2, padding=0) \\\\ \\hline (20) & CELU(alpha=1.0) \\\\ \\hline (21) & ZeroPad2d(padding=(6, 6, 6, 6), value=0.0) \\\\ \\hline (22) & Conv2d(20, 20, kernel_size=(13, 13), stride=(1, 1)) \\\\ \\hline (23) & BatchNorm2d(20, eps=1e-05, momentum=0.1, affine=0, track_running_stats=True) \\\\ \\hline (24) & AvgPool2d(kernel_size=2, stride=2, padding=0) \\\\ \\hline (25) & CELU(alpha=1.0) \\\\ \\hline (26) & ZeroPad2d(padding=(6, 6, 6, 6), value=0.0) \\\\ \\hline (27) & Conv2d(20, 2, kernel_size=(13, 13), stride=(1, 1)) \\\\
\\hline (28) & AvgPool2d(kernel_size=2, stride=2, padding=0) \\\\ \\hline (29) & CELU(alpha=1.0) \\\\ \\hline (30) & Flatten(start_dim=-3, end_dim=-1) \\\\ \\hline (31) & Linear(in_features=64, out_features=\\(d_{\\mathbf{z}}\\), bias=True) \\\\ \\hline (32) & CELU(alpha=1.0) \\\\ \\hline \\(\\mathbf{z}\\in\\mathbb{R}^{d_{\\mathbf{z}}}\\) & \\\\ \\hline \\hline Layer & DECODER \\\\ \\hline (0) & Linear(in_features=\\(d_{\\mathbf{z}}\\), out_features=64, bias=True) \\\\ \\hline (1) & CELU(alpha=1.0) \\\\ \\hline (2) & ViewModule() \\\\ \\hline (3) & ConvTranspose2d(2, 20, kernel_size=(13, 13), stride=(2, 2), padding=(6, 6), output_padding=(1, 1)) \\\\ \\hline (4) & BatchNorm2d(20, eps=1e-05, momentum=0.1, affine=0, track_running_stats=False) \\\\ \\hline (5) & CELU(alpha=1.0) \\\\ \\hline (6) & ConvTranspose2d(20, 20, kernel_size=(13, 13), stride=(2, 2), padding=(6, 6), output_padding=(1, 1)) \\\\ \\hline (7) & BatchNorm2d(20, eps=1e-05, momentum=0.1, affine=0, track_running_stats=False) \\\\ \\hline (8) & CELU(alpha=1.0) \\\\ \\hline (9) & ConvTranspose2d(20, 20, kernel_size=(13, 13), stride=(2, 2), padding=(6, 6), output_padding=(1, 1)) \\\\ \\hline (10) & BatchNorm2d(20, eps=1e-05, momentum=0.1, affine=0, track_running_stats=False) \\\\ \\hline (11) & CELU(alpha=1.0) \\\\ \\hline (12) & ConvTranspose2d(20, 20, kernel_size=(13, 13), stride=(2, 2), padding=(6, 6), output_padding=(1, 1)) \\\\ \\hline (13) & BatchNorm2d(20, eps=1e-05, momentum=0.1, affine=0, track_running_stats=False) \\\\ \\hline (14) & CELU(alpha=1.0) \\\\ \\hline (15) & ConvTranspose2d(20, 20, kernel_size=(13, 13), stride=(2, 2), padding=(6, 6), output_padding=(1, 1)) \\\\ \\hline (16) & BatchNorm2d(20, eps=1e-05, momentum=0.1, affine=0, track_running_stats=False) \\\\ \\hline (17) & CELU(alpha=1.0) \\\\ \\hline (18) & ConvTranspose2d(20, 4, kernel_size=(13, 13), stride=(2, 2), padding=(6, 6), output_padding=(1, 1)) \\\\ \\hline (19) & interpolationLayer() \\\\ \\hline (20) & \\(1+0.5\\) Tanh() \\\\ \\hline \\hline Latent dimension \\(d_{\\mathbf{z}}\\) & \\multicolumn{2}{c|}{\\(\\{1,2,3,4,5,6,7,8,9,10,11,12,16\\}\\)} \\\\ \\hline \\end{tabular} \\end{table} Table 14: CNN Autoencoder of LED for the flow behind a cylinder at \\(Re\\in\\{100,1000\\}\\) \\begin{table} \\begin{tabular}{|c|c|c|} \\hline Hyper-parameter tuning & Values for \\(Re=100\\) & Values for \\(Re=1000\\) \\\\ \\hline \\hline Solver & Pseudoinverse & Pseudoinverse \\\\ Size & 200 & 200 \\\\ Degree & 10 & 10 \\\\ Radius & 0.99 & 0.99 \\\\ Input scaling \\(\\sigma\\) & \\(\\{0.5,1,2\\}\\) & \\(\\{0.5,1,2\\}\\) \\\\ Dynamics length & 100 & 100 \\\\ Regularization \\(\\eta\\) & \\(\\{0.0,0.001,0.0001,0.00001\\}\\) & \\(\\{0.0,0.001,0.0001,0.00001\\}\\) \\\\ Noise level per mill & \\(\\{10\\}\\) & \\(\\{10\\}\\) \\\\ \\hline \\end{tabular} \\end{table} Table 16: Reservoir Computer hyper-parameters (in CNN-RC) for the flow behind a cylinder at \\(Re\\in\\{100,1000\\}\\)
\\hline \\hline Library & Polynomials & Polynomials \\\\ Degree & \\(\\{1,2,3\\}\\) & \\(\\{1,2,3\\}\\) \\\\ Threshold & \\(\\{0.001,0.0001,0.00001\\}\\) & \\(\\{0.001,0.0001,0.00001\\}\\) \\\\ \\hline \\end{tabular} \\begin{tabular}{|c|c|c|} \\hline Hyper-parameter tuning & Values for \\(Re=100\\) & Values for \\(Re=1000\\) \\\\ \\hline \\hline Library & Polynomials & Polynomials \\\\ Degree & \\(\\{1,2,3\\}\\) & \\(\\{1,2,3\\}\\) \\\\ Threshold & \\(\\{0.001,0.0001,0.00001\\}\\) & \\(\\{0.001,0.0001,0.00001\\}\\) \\\\ \\hline \\end{tabular} \\begin{tabular}{|c|c|c|} \\hline Hyper-parameter tuning & Values for \\(Re=100\\) & Values for \\(Re=1000\\) \\\\ \\hline \\hline Library & Polynomials & Polynomials \\\\ Degree & \\(\\{1,2,3\\}\\) & \\(\\{1,2,3\\}\\) \\\\ Threshold & \\(\\{0.001,0.0001,0.00001\\}\\) & \\(\\{0.001,0.0001,0.00001\\}\\) \\\\ \\hline \\end{tabular} \\begin{tabular}{|c|c|c|} \\hline Hyper-parameter tuning & Values for \\(Re=100\\) & Values for \\(Re=1000\\) \\\\ \\hline \\hline Library & Polynomials & Polynomials \\\\ Degree & \\(\\{1,2,3\\}\\) & \\(\\{1,2,3\\}\\) \\\\ Threshold & \\(\\{0.001,0.0001,0.00001\\}\\) & \\(\\{0.001,0.0001,0.00001\\}\\) \\\\ \\hline \\end{tabular} \\begin{tabular}{|c|c|c|} \\hline Hyper-parameter tuning & Values for \\(Re=100\\) & Values for \\(Re=1000\\) \\\\ \\hline \\hline Library & Polynomials & Polynomials \\\\ Degree & \\(\\{1,2,3\\}\\) & \\(\\{1,2,3\\}\\) \\\\ Threshold & \\(\\{0.001,0.0001,0.00001\\}\\) & \\(\\{0.001,0.0001,0.00001\\}\\) \\\\ \\hline \\end{tabular} \\begin{tabular}{|c|c|c|} \\hline Hyper-parameter tuning & Values for \\(Re=100\\) & Values for \\(Re=1000\\) \\\\ \\hline \\hline Library & Polynomials & Polynomials \\\\ Degree & \\(\\{1,2,3\\}\\) & \\(\\{1,2,3\\}\\) \\\\ Threshold & \\(\\{0.001,0.0001,0.00001\\}\\) & \\(\\{0.001,0.0001,0.00001\\}\\) \\\\ \\hline \\end{tabular} \\begin{tabular}{|c|c|c|} \\hline Hyper-parameter tuning & Values for \\(Re=100\\) & Values for \\(Re=1000\\) \\\\ \\hline \\hline Library & Polynomials & Polynomials \\\\ Degree & \\(\\{1,2,3\\}\\) & \\(\\{1,2,3\\}\\) \\\\ Threshold & \\(\\{0.001,0.0001,0.00001\\}\\) & \\(\\{0.001,0.0001,0.00001\\}\\) \\\\ \\hline \\end{tabular} \\begin{tabular}{|c|c|c|} \\hline Hyper-parameter tuning & Values for \\(Re=100\\) & Values for \\(Re=1000\\) \\\\ \\hline \\hline Library & Polynomials & Polynomials \\\\ Degree & \\(\\{1,2,3\\}\\) & \\(\\{1,2,3\\}\\) \\\\ Threshold & \\(\\{0.001,0.0001,0.00001\\}\\) & \\(\\{0.001,0.0001,0.0001,0.00001\\}\\) \\\\ \\hline \\end{tabular} \\begin{tabular}{|c|c|c|} \\hline Hyper-parameter tuning & Values for \\(Re=100\\) & Values for \\(Re=1000\\) \\\\ \\hline \\hline Library & Polynomials & Polynomials \\\\ Degree & \\(\\{1,2,3\\}\\) & \\(\\{1,2,3\\}\\) \\\\ Threshold & \\(\\{0.001,0.0001,0.00001\\}\\) & \\(\\{0.001,0.0001,0.00001\\}\\) \\\\ \\hline \\end{tabular} \\begin{tabular}{|c|c|c|} \\hline Hyper-parameter tuning & Values for \\(Re=100\\) & Values for \\(Re=1000\\) \\\\ \\hline \\hline Library & Polynomials & Polynomials \\\\ Degree & \\(\\{1,2,3\\}\\) & \\(\\{1,2,3\\}\\) \\\\ Threshold & \\(\\{0.001,0.0001,0.00001\\}\\) & \\(\\{0.001,0.0001,0.0001,0.00001\\}\\) \\\\ Threshold & \\(\\{0.001,0.0001,0.00001\\}\\) & \\(\\{0.0001 ### Alanine Dipeptide Dynamics The efficiency of LED in capturing complex molecular dynamics is demonstrated in the dynamics of a molecule of alanine dipeptide in water, a benchmark for enhanced sampling methods. 
The molecule is simulated with molecular dynamics with a time-step \\(\\delta t=1\\)fs, to generate data of total length 38.4ns for training, 38.4ns for validation, and 100ns for testing. The time-step of LED is set to \\(\\Delta t=0.1\\)ps. In this case, LED utilizes an MD decoder and an MD-LSTM in the latent space to model the stochastic, non-Markovian latent dynamics. The latent space dimension is set to \\(d_{\\mathbf{z}}=1\\). As shown in Figure 14, LED identifies a meaningful one-dimensional latent space and reproduces the statistics of the system. In our recent work [40], we demonstrate that LED also captures the time-scales between the metastable states and samples realistic protein configurations while being three orders of magnitude faster than the molecular dynamics solver. Figure 14: **A)** Ramachandran plot of the alanine dipeptide data, i.e. state density on the space spanned by two backbone dihedral angles \\((\\phi,\\psi)\\). **B)** Ramachandran plot of the state evolution predicted by LED with \\(T_{\\mu}=0\\) and \\(d_{\\mathbf{z}}=1\\). LED captures the three most visited metastable states \\(\\{C_{5},P_{II},\\alpha_{R}\\}\\). **C)** Projection of the state evolution data to the free energy on the one-dimensional latent space unraveled by LED, i.e. \\(F/k_{B}T=-\\log\\,p(z_{t})\\). Low-energy (high-probability) regions on the latent space are mapped to known metastable state configurations of alanine dipeptide.
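The free-energy projection in panel C can be estimated directly from the latent trajectory. A minimal sketch assuming NumPy; the bin count is an illustrative choice:

```python
import numpy as np

def free_energy_1d(z, n_bins=100):
    """Free-energy profile F/(k_B T) = -log p(z) over a one-dimensional latent
    space, estimated from a histogram of the latent trajectory z."""
    p, edges = np.histogram(z, bins=n_bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mask = p > 0                       # only bins that were visited
    return centers[mask], -np.log(p[mask])
```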
Predictive simulations of complex systems are essential for applications ranging from weather forecasting to drug design. The veracity of these predictions hinges on their capacity to capture the effective system dynamics. Massively parallel simulations predict the system dynamics by resolving all spatiotemporal scales, often at a cost that prevents experimentation, while their findings may not allow for generalisation. On the other hand, reduced-order models are fast but limited by the frequently adopted linearization of the system dynamics and/or the utilization of heuristic closures. Here we present a novel systematic framework that bridges large-scale simulations and reduced-order models to Learn the Effective Dynamics (LED) of diverse complex systems. The framework forms algorithmic alloys between non-linear machine learning algorithms and the Equation-Free approach for modeling complex systems. LED deploys autoencoders to formulate a mapping between fine- and coarse-grained representations and evolves the latent space dynamics using recurrent neural networks. The algorithm is validated on benchmark problems, and we find that it outperforms state-of-the-art reduced-order models in terms of predictability and large-scale simulations in terms of cost. LED is applicable to systems ranging from chemistry to fluid mechanics and reduces the computational effort by up to two orders of magnitude while maintaining the prediction accuracy of the full system dynamics. We argue that LED provides a novel, potent modality for the accurate prediction of complex systems.
A CNN With Multiscale Convolution for Hyperspectral Image Classification using Target-Pixel-Orientation scheme Jayasree Saha, Yuvraj Khanna, and Jayanta Mukherjee, Computer Science and Engineering, Indian Institute of Technology, Kharagpur, West Bengal, India ## I Introduction Hyperspectral image (HSI) classification has received considerable attention in recent years for a variety of applications using neural-network-based techniques. Hyperspectral imagery has several hundred contiguous narrow spectral bands from the visible to the infrared frequency in the entire electromagnetic spectrum. However, processing such a large number of spectral dimensions suffers from the curse of dimensionality. Also, a few properties of the dataset bring challenges in the classification of HSIs, such as 1) limited training examples and 2) large spatial variability of spectral signatures. In general, contiguous spectral bands may contain some redundant information, which leads to the Hughes phenomenon [1]. It causes a drop in classification accuracy when there is an imbalance between the high number of spectral channels and scarce training examples. Conventionally, dimension-reduction techniques are used to extract suitable spectral features. For instance, Independent Component Discriminant Analysis (ICDA) [2] has been used to find statistically independent components. It uses ICA with the assumption that at most one component has a Gaussian distribution. ICA uses higher-order statistics to compute uncorrelated components, compared to PCA [3], which uses the covariance matrix. A few other non-linear techniques, e.g., quadratic discriminant analysis [4] and kernel-based methods [5], are also employed to handle non-linearity in HSIs. However, extracted features in the reduced dimensional space may not be optimal for classification. The HSI classification task is further complicated by the following facts: i) the spectral signatures of objects belonging to the same class may be different, and ii) the spectral signatures of objects belonging to different classes may be the same. Therefore, spectral components alone may not be sufficient to provide discriminating features for classification. Recent studies show that incorporating spatial context along with spectral components improves the classification task considerably. There are two ways of exploiting spatial and spectral information for classification. In the first approach, spatial and spectral information are processed separately and then combined at the decision level [32, 19]. The second strategy uses a joint representation of spectral-spatial features [31, 33, 30, 34]. In this paper, a novel joint representation of spectral-spatial features has been proposed. Our technique surpasses the classification accuracy of state-of-the-art techniques. In the literature, 1-D [30], 2-D [31], and 3-D [20] CNN-based architectures are well known for _HSI_ classification. Also, hybridisation of different CNN-type architectures is employed [21]. 1D-CNNs use a pixel-pair strategy [30], which combines a set of pairs of training samples. Each pair is constructed using all permutations of training pixels under consideration. This set not only reveals the neighborhood of the observed pixel but also increases the number of training samples. Yet, it cannot use the full power of spatial information in hyperspectral image classification; it completely ignores the neighborhood profile. In general, 2-D and 3-D CNN based approaches are more suitable in such a scenario.
However, there are many other architectures, e.g., _Deep Belief Networks_ [22, 23, 24, 25] and autoencoders [26, 27, 28, 29], which provide efficient solutions to the hyperspectral image classification problem. Multi-scale convolution is widely used in the literature for exploring complex spatial context. In most cases, the outputs of different kernels are either concatenated [42, 43] or summed [31] together and further processed for feature extraction and classification. However, there is no study on whether concatenating a large number of filter banks of different scales results in optimal classification. In the present context, we are more interested in scrutinizing various CNN architectures for the current problem. In general, a few core components are available for building any CNN architecture, for example, convolution, pooling, batch-normalization [40], and activation layers. In practice, the convolution mechanism can be used in various ways; a few popular variants are point-wise convolution, group convolution, and depth-wise separable convolution [39]. Similarly, there are variations of the pooling mechanism, such as adaptive pooling [41]. Recently, many mid-level components have been developed, e.g., the inception module, which integrates the outputs of multi-scale convolutions. Mid-level components are combined sequentially to build large networks such as VGG [36] and GoogLeNet [37]. Additionally, skip connections [38] have proven to be a successful way of training very deep networks while dealing with the vanishing gradient problem. Hyperspectral image classification remains an interesting and challenging problem in which the effectiveness of the various core components of a CNN, and their arrangement for solving the classification problem, still needs to be studied. To summarize, our work is motivated by the following observations. 1. Incorporating spatial and spectral features may achieve better classification accuracy for hyperspectral images. Hence, a strategy is needed to combine spatial and spectral information. 2. Due to the complex reflectance properties of HSIs, pixels of the same class may have different spectral signatures and pixels of different classes may have similar spectral signatures. The spatial neighborhood plays a key role in improving classification accuracy in such scenarios. However, the neighborhood of a pixel at a class boundary appears different from that of a non-boundary pixel: the neighborhood of a non-boundary pixel mostly contains pixels from the same class, whereas the neighborhood of a boundary pixel contains pixels of different classes. Our strategy aims to produce similar neighborhoods for boundary and non-boundary pixels that belong to the same class. 3. The large amount of spectral information brings redundancy, and taking all spectral bands together decreases classification performance. Hence, band reduction is required in hyperspectral image classification. 4. A suitable network architecture is needed to handle the problems stated above such that the network can be trained in an end-to-end manner. In this paper, we present a _CNN_ architecture that performs three major tasks in a processing pipeline: 1) band reduction, 2) feature extraction, and 3) classification. The first block of processing uses point-wise 3-D convolution layers. For feature extraction, we use a multi-scale convolution module. In our work, we propose two architectures for feature extraction, which eventually lead to two different _CNN_ architectures.
In the first architecture, we use an inception module with four parallel convolution structures for feature extraction. In the second, we use similar multi-scale convolutions in an inception-like structure but with a different arrangement; this second architecture extracts finer contextual information than the first one. We feed the extracted features to a fully connected layer to form the high-level representation. We train our network in an end-to-end manner by minimizing the cross-entropy loss using the Adam optimizer. Our proposed architecture gives state-of-the-art performance without any data augmentation on three benchmark HSI classification datasets. Besides this new architecture, we propose a way to incorporate spatial information along with the spectral information. It not only covers the neighborhood profile for a given window, but also observes the change of neighborhood obtained by shifting the current window. We observe that this process is more beneficial at boundary locations than a single-window neighborhood system. The contributions of this paper can be summarized as follows: 1. A novel technique to obtain a joint representation of spatial and spectral features is proposed. The design aims at improving classification accuracy at the boundary locations of each class. 2. A novel end-to-end, shallow and wide neural network is proposed, which is a hybridization of a 3-D CNN with a 2-D CNN. This hybrid structure performs band reduction followed by discriminative feature extraction. In addition, we show two different arrangements of similar multi-scale convolutional layers to extract distinctive features. Section II-A gives a detailed description of the proposed classification framework, including the technique for incorporating spatial information. Performance and comparisons among the proposed networks and current state-of-the-art methods are presented in Section III. The paper is concluded in Section IV. ## II Proposed classification framework The proposed classification framework, shown in Fig 1, mainly consists of three tasks: i) organizing a target-pixel-orientation model using available training samples, ii) constructing a CNN architecture to extract uncorrelated spectral information, and iii) learning spatial correlation with neighboring pixels. Fig. 1: Flowchart of the proposed classification framework. ### _Target-Pixel-Orientation Model for Training Samples_ Consider a hyperspectral data set \(H\) with \(d\) spectral bands. We have \(N\) labeled samples denoted as \(P=\{p_{i}\}_{i=1}^{N}\) in an \(R^{d\times 1}\) feature space, with class labels \(y_{i}\in\{1,2,\cdots,C\}\), where \(C\) is the number of classes. Let \(n_{c}\) be the number of available labeled samples in the \(c^{th}\) class, so that \(\sum_{c=1}^{C}n_{c}=N\). We propose a Target-Pixel-Orientation (TPO) scheme. In this scheme, we consider a \(k\times k\) window whose center pixel is the target pixel. We select eight neighbors of the target pixel by shifting the window in eight different directions in a clockwise manner. Fig 2 shows an example of how we prepare the eight neighbors of a target pixel using a \(3\times 3\) window. We mark the target pixel in blue; it is surrounded by a \(3\times 3\) neighboring window shown with a red border. The first sub-image in Fig 2 depicts the \(3\times 3\) window when the target pixel occupies the center position of that window. The other eight sub-images are the neighbors of the first sub-image, numbered 1 to 8.
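To make the window-shifting step concrete, the following is a minimal NumPy sketch of how the nine TPO views of a target pixel could be assembled from a hyperspectral cube. The function name, the reflect-padding at image borders, and the exact clockwise ordering of the shifts are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def extract_tpo_views(cube, row, col, k=3):
    """Assemble the nine TPO views of a target pixel.

    cube : (d, H, W) hyperspectral image with d spectral bands.
    row, col : coordinates of the target pixel.
    k : spatial window size (odd).
    Returns an array of shape (9, d, k, k): the centred window followed by
    the eight windows obtained by shifting it one pixel in each direction
    (clockwise, starting from the top-left neighbour).
    """
    d, H, W = cube.shape
    half = k // 2
    pad = half + 1  # reflect-pad so windows near the image border stay valid
    padded = np.pad(cube, ((0, 0), (pad, pad), (pad, pad)), mode="reflect")
    r, c = row + pad, col + pad

    # Offsets of the window centre: (0, 0) plus the eight one-pixel shifts.
    offsets = [(0, 0), (-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    views = np.stack([
        padded[:, r + dr - half: r + dr + half + 1,
                  c + dc - half: c + dc + half + 1]
        for dr, dc in offsets
    ])  # shape: (9, d, k, k)
    return views

# Toy example: one pixel of a 103-band cube with a 5x5 window.
cube = np.random.rand(103, 64, 64).astype(np.float32)
sample = extract_tpo_views(cube, row=20, col=30, k=5)
print(sample.shape)  # (9, 103, 5, 5)
```

Stacking such arrays over all labeled pixels yields the \(9\times d\times k\times k\) samples described next.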
We consider each of the nine windows as a single integrated view of the target pixel. For simplicity of illustration, we have shown the _TPO_ view with one spectral channel; in our proposed system, we consider \(d\) spectral channels. Therefore, the input to the model is a 4-dimensional tensor. We perform the following operation to form the input for our models: \[\mathbb{S}(\mathbb{S}(s_{ji})\forall j\in d)\forall i\in V \tag{1}\] where \(\mathbb{S}\) is a function responsible for stacking channels, \(s_{ji}\) represents the \(k\times k\) patch of the \(j^{th}\) spectral channel in the \(i^{th}\) view, and \(V\) represents the nine views in the _TPO_ scheme. We thus convert the labeled samples \(P\) to \(I=\{i_{1},i_{2},\cdots,i_{N}\}\) such that each \(i_{x}\) has dimension \(9\times d\times k\times k\). #### II-A1 Advantage of _TPO_ for class boundaries We observe that the \(k\times k\) patch of a pixel in the boundary region of a class appears very different from the patches of pixels in non-boundary areas. In general, non-boundary pixels are surrounded by pixels belonging to the same class, and all pixels in each view of the _TPO_ then carry the same class information. However, the neighborhood of a boundary pixel is contaminated with information from more than one class. In such a scenario, _TPO_ provides nine 2-D neighborhoods for the target pixel, so a few neighborhoods of a boundary pixel are similar to those of pixels in non-boundary regions. The intuition is that such similar neighborhoods may form a single cluster. We illustrate this with a two-class situation in Fig 3. The patch of a target pixel near the boundary contains only pixels of the same class (blue), whereas the patch of a target pixel at the border includes pixels of two classes (blue and red). If we consider only the single patch surrounding that target pixel, we may fail to classify border pixels. In this scenario, _TPO_ brings different views of patches for a single target pixel at the boundary. We show the _TPO_ of a target pixel at the border and near the border in Fig 4 and Fig 5, respectively. In the given situation, for the border pixel there is at least one view in which every pixel belongs to the blue class, and there are other views that are similar to the views of the pixel near the boundary. ### _Network Architecture_ The framework of the HSI classification is shown in Fig 1. It consists of three main blocks, namely, band reduction, feature extraction, and classification. _TPO_ extracts samples from the given dataset as described in Section II-A. The label of each sample is that of the pixel located at the center of the first view among the nine views (discussed in Sec II-A). #### II-B1 Band-Reduction This block contains three consecutive "BasicConv3d" layers. The designed "BasicConv3d" layer applies a 3-D batch-normalization layer and a rectified linear unit (ReLU) layer sequentially after a 3-D point-wise convolution layer. The parameters of the 3-D convolution layer are the input channels, output channels, and the kernel size. In our experiments we have kept input channels \(=9\) and output channels \(=9\). We have empirically adjusted the kernel sizes of the three "BasicConv3d" layers, which are of the form (X,1,1); hence, we use the notation X=p, X=q, and X=r when defining the kernel sizes in Fig 6. The aim of choosing such kernel dimensions is not to change the spatial size but to reduce the number of bands.
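A minimal PyTorch sketch of such a band-reduction block is given below. It assumes three point-wise 3-D convolutions with kernel sizes (p,1,1), (q,1,1), and (r,1,1) and nine input/output channels, using for illustration the kernel sizes worked out for the 103-band example in the next paragraph; the class names and the final view-stacking reshape are assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class BasicConv3d(nn.Module):
    """3-D point-wise convolution -> 3-D batch norm -> ReLU."""
    def __init__(self, in_ch, out_ch, kernel):
        super().__init__()
        self.conv = nn.Conv3d(in_ch, out_ch, kernel_size=kernel, bias=False)
        self.bn = nn.BatchNorm3d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))

class BandReduction(nn.Module):
    """Three consecutive BasicConv3d layers with kernels (p,1,1), (q,1,1), (r,1,1).

    Each kernel slides only along the spectral axis, so the spatial size is
    unchanged while the number of bands shrinks as floor((W - K + 2P)/S) + 1
    with P = 0 and S = 1.
    """
    def __init__(self, views=9, p=8, q=16, r=32):
        super().__init__()
        self.block = nn.Sequential(
            BasicConv3d(views, views, (p, 1, 1)),  # e.g. 103 -> 96 bands
            BasicConv3d(views, views, (q, 1, 1)),  # e.g.  96 -> 81 bands
            BasicConv3d(views, views, (r, 1, 1)),  # e.g.  81 -> 50 bands
        )

    def forward(self, x):                 # x: (batch, 9, d, k, k)
        x = self.block(x)                 # (batch, 9, d', k, k)
        b, v, d, h, w = x.shape
        return x.reshape(b, v * d, h, w)  # stack views: (batch, 9*d', k, k)

x = torch.randn(4, 9, 103, 5, 5)          # a small batch of TPO samples
print(BandReduction()(x).shape)           # torch.Size([4, 450, 5, 5])
```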
The dimension of the input to this layer is \(v\times d\times k\times k\), where \(v\) represents the number of views in the _TPO_ scheme, \(d\) represents the spectral dimensionality, and \(k\) is the spatial size. We consider \(v=9\), \(d=103\), and \(k=5\) to illustrate the network description. The first 3-D convolutional layer (C1) filters the input of dimension \(9\times 103\times 5\times 5\) with a kernel of size \(8\times 1\times 1\), producing a \(9\times 96\times 5\times 5\) feature map. In this layer the number of spectral channels is reduced from 103 to 96. As we use point-wise 3-D convolution, there is no change in the spatial size of the sample, but the size of the spectral channel changes based on the value of p, which is 8 in this example. The size of the spectral channel in the convolved sample can be computed using the following equation: \[\left\lfloor\frac{W-K+2P}{S}\right\rfloor+1, \tag{2}\] where \(W\) represents the size of the spectral channel, which is \(103\) in this case, and \(K\), \(P\), and \(S\) represent the kernel size, padding, and stride. For the above example, \(K=8\), \(P=0\), and \(S=1\) hold; therefore, we obtain 96 channels in the convolved sample. The second layer (C2) combines the features obtained from the C1 layer with nine \(16\times 1\times 1\) kernels, resulting in a \(9\times 81\times 5\times 5\) feature map. The third layer (C3) combines the features obtained from the C2 layer with nine \(32\times 1\times 1\) kernels, resulting in a \(9\times 50\times 5\times 5\) feature map. At this point the number of bands has been reduced from 103 to 50. We then reshape the data into 3 dimensions by stacking the nine views of the 50 spectral channels, leading to a \(450\times 5\times 5\)-sized sample. We feed the reshaped output of the band-reduction block to the feature extraction layer. Fig. 2: Example of Target-Patch-Orientation Model #### II-B2 Feature Extraction We take a tiny patch as an input sample. Our assumption is that a shallow but wide network, i.e., a "multi-scale filter bank", extracts more appropriate features from small patches. Hence, we consider an inception-like module for feature extraction. We use the inception module in two different ways, forming two separate networks. Fig 7 and Fig 8 depict the feature extraction modules of _MS-CNN1_ and _MS-CNN2_, respectively. Each "BasicConv2d" layer in the figures applies a 2-D batch-normalization layer and a rectified linear unit (ReLU) sequentially after a 2-D convolution layer. The parameters of the 2-D convolution layer are the input channels, output channels, and the kernel size. Fig. 4: Example of Target-Patch-Orientation of a target pixel lying at the boundary of a class. Fig. 5: Example of Target-Patch-Orientation of a target pixel lying near the boundary of a class. Fig. 6: Diagram of the Band Reduction layer in the proposed network. Fig. 7: Diagram of the Feature Extraction layer (MS-CNN1) in the proposed network. Each rectangular block of "BasicConv2d" in the diagram contains the parameters of the 2-D convolution layer.
We denote such a block by \(\mathbb{C}_{k_{1}\times k_{1}\times B}\), where \(k_{1}\) refers to the kernel size of the convolution layer and \(B\) is the number of input channels. Similarly, each block of "AvgPool2d" in the diagram depicts the kernel size and the stride value of the average pooling layer; we denote this by \(\mathbb{P}_{k_{2}\times k_{2}}\), where \(k_{2}\) refers to the kernel size of the pooling layer. _MS-CNN1_ uses a multi-scale filter bank that locally convolves the input sample with four parallel blocks having different filter sizes in their convolution layers. Each parallel block consists of one or more "BasicConv2d" layers and pooling layers. _MS-CNN1_ has the following details: \(\mathbb{C}_{1\times 1\times B}\); \(\mathbb{C}_{1\times 1\times B}\) followed by \(\mathbb{C}_{3\times 3\times B}\) followed by \(\mathbb{C}_{3\times 3\times B}\); \(\mathbb{C}_{1\times 1\times B}\) followed by \(\mathbb{C}_{5\times 5\times B}\); and \(\mathbb{P}_{3\times 3}\) followed by \(\mathbb{C}_{1\times 1\times B}\). The \(3\times 3\) and \(5\times 5\) filters are used to exploit local spatial correlations of the input sample, while the \(1\times 1\) filters address correlations among the nine views and their respective spectral information. The outputs of the _MS-CNN1_ feature extraction layer are combined at a concatenation layer to form a joint view-spatio-spectral feature map, which is used as input to the subsequent layers. Since the sizes of the feature maps from the four convolutional branches differ from each other, we pad the input feature with zeros so that the output feature maps of the parallel blocks have matching sizes. For example, we pad the input with 0, 1, and 2 zeros for the \(1\times 1\), \(3\times 3\), and \(5\times 5\) filters, respectively. In _MS-CNN1_, we use an adaptive average pooling [41] layer after the concatenation layer. In _MS-CNN2_, however, we split the inception architecture of _MS-CNN1_ into three small inception layers, each having two parallel convolutional branches. Each concatenation layer is followed by an adaptive average pooling layer, and finally we concatenate all the pooled information. (A minimal sketch of the _MS-CNN1_ multi-scale filter bank is given after the description of the learning procedure below.) #### II-B3 Classification The outputs of the feature extraction block are flattened and fed to a fully connected layer whose number of output channels is the number of classes. The fully connected layer is followed by a 1-D batch-normalization layer and a softmax activation function. In general, the classification layer can be defined as \[p=softmax(BN(Wa+b)) \tag{3}\] where \(a\) is the input of the fully connected layer, and \(W\) and \(b\) are the weights and bias of the fully connected layer, respectively. BN(\(\cdot\)) is the 1-D batch-normalization layer, and \(p\) is the \(C\)-dimensional vector representing the probability that a sample belongs to the \(c^{th}\) class. ### _Learning the Proposed Network_ We train the proposed networks by minimizing the cross-entropy loss function. Let \(Y=\{y_{i}\}_{i=1}^{b}\) represent the ground truth for the training samples present in a batch of size \(b\), and let \(P=\{p_{ic}\}_{i=1}^{b}\) denote the conditional probability distribution of the model, i.e., the model predicts that the \(i^{th}\) training sample belongs to the \(c^{th}\) class with probability \(p_{ic}\). The cross-entropy loss function \(\mathbb{L}_{cross-entropy}\) is given by \[\mathbb{L}_{cross-entropy}=-\sum_{i=1}^{b}\sum_{c=1}^{C}y_{i}[c]\log p_{ic} \tag{4}\] In our dataset, the ground truth is represented as a one-hot encoded vector: each \(y_{i}\) is a \(C\)-dimensional vector, where \(C\) represents the number of classes. If the class label of the \(i^{th}\) sample is \(c\), then \[\begin{cases}y_{i}[j]=1&\text{if }j=c\\ y_{i}[j]=0&\text{otherwise}\end{cases}\]
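As a concrete illustration of the four parallel branches of the _MS-CNN1_ filter bank described above, the following is a minimal PyTorch sketch. The class names, the number of output channels per branch, and the adaptive-pooling output size are illustrative assumptions and not the authors' exact configuration; padding is applied inside each convolution, which is equivalent to zero-padding the input as described in the text.

```python
import torch
import torch.nn as nn

class BasicConv2d(nn.Module):
    """2-D convolution -> 2-D batch norm -> ReLU."""
    def __init__(self, in_ch, out_ch, kernel, padding=0):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel, padding=padding, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))

class MultiScaleFilterBank(nn.Module):
    """Inception-like module of MS-CNN1 with four parallel branches.

    Padding of 0, 1, 2 pixels for the 1x1, 3x3, 5x5 kernels keeps the
    spatial size of every branch identical so the outputs can be concatenated.
    """
    def __init__(self, in_ch=450, branch_ch=64, pool_out=1):
        super().__init__()
        self.b1 = BasicConv2d(in_ch, branch_ch, 1)                        # C_1x1
        self.b2 = nn.Sequential(BasicConv2d(in_ch, branch_ch, 1),
                                BasicConv2d(branch_ch, branch_ch, 3, 1),
                                BasicConv2d(branch_ch, branch_ch, 3, 1))  # C_1x1 -> C_3x3 -> C_3x3
        self.b3 = nn.Sequential(BasicConv2d(in_ch, branch_ch, 1),
                                BasicConv2d(branch_ch, branch_ch, 5, 2))  # C_1x1 -> C_5x5
        self.b4 = nn.Sequential(nn.AvgPool2d(3, stride=1, padding=1),
                                BasicConv2d(in_ch, branch_ch, 1))         # P_3x3 -> C_1x1
        self.pool = nn.AdaptiveAvgPool2d(pool_out)                        # adaptive average pooling [41]

    def forward(self, x):        # x: (batch, 450, k, k) from the band-reduction block
        out = torch.cat([self.b1(x), self.b2(x), self.b3(x), self.b4(x)], dim=1)
        return self.pool(out)    # (batch, 4*branch_ch, pool_out, pool_out)

feats = MultiScaleFilterBank()(torch.randn(4, 450, 5, 5))
print(feats.shape)               # torch.Size([4, 256, 1, 1])
```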
Fig. 8: Diagram of the Feature Extraction layer (MS-CNN2) in the proposed network. ### _Comparison with Other Methods_ The key features of our proposed methods are 1) the use of both spectral and spatial features, 2) band reduction using several consecutive 3-D CNNs, and 3) feature extraction with a multi-scale convolutional network. We have chosen six state-of-the-art methods, namely: 1) _CNN-PPF_ [30], 2) _DR-CNN_ [31], 3) _2S-Fusion_ [32], 4) _BASS_ [34], 5) _DPP-ML_ [33], and 6) _S-CNN+SVM_ [35]. Every compared method exploits both spectral and spatial features. _CNN-PPF_ uses a pixel-pair strategy to increase the number of training samples and feeds them into a deep network with 1-D convolutional layers. _DR-CNN_ exploits diverse region-based 2-D patches from the image to produce more discriminative features. In contrast, _2S-Fusion_ processes spatial and spectral information separately and fuses them using adaptive class-specific weights. _BASSNET_ extracts band-specific spatial-spectral features. In _DPP-ML_, convolutional neural networks with multi-scale convolution are used to extract deep multi-scale features from the HSI. SVM-based methods are common in traditional hyperspectral image classification; in _S-CNN+SVM_, a Siamese convolutional neural network extracts spectral-spatial features of the HSI and feeds them to an _SVM_ classifier. In general, the performance of deep-learning-based algorithms surpasses that of traditional techniques (e.g., _k-NN_, _SVM_, _ELM_). We have compared the performance of the proposed techniques with the best results reported for each of these state-of-the-art techniques. For _S-CNN+SVM_ and _2S-Fusion_, performance on the _Salinas_ dataset is not reported. To maintain consistency in the results, we ran our algorithm with the classes and the number of samples per class used in _2S-Fusion_, _DR-CNN_, and _DPP-ML_ for Indian Pines. ### _Results and Discussion_ The performance of the proposed _MS-CNN1_ and _MS-CNN2_ on test samples is compared with the aforementioned deep-learning-based classifiers in Tables II to V. We set the size of the spatial window (\(k\)) for generating a patch to \(7\) when evaluating the performance of our algorithms. For _Indian Pines_, _DR-CNN_ and _DPP-ML_ used 8 classes, _2S-Fusion_ used 16 classes, and the other compared methods used 9 classes in their experiments. To compare our method, we chose the same number of classes and training samples per class as mentioned in the respective papers. Table III and Table IV together indicate that MS-CNN1 and MS-CNN2 provide comparable results and surpass the other methods. Moreover, _MS-CNN2_ produces better results than _MS-CNN1_ on the _University of Pavia_ and _Salinas_ datasets, as shown in Table II and Table V, respectively. The results signify that the arrangement of multi-scale convolutions in _MS-CNN2_ is able to extract more useful features for classification than that of _MS-CNN1_. We show thematic maps generated from the classification of the three _HSI_ scenes using our networks in Figure 13. In order to check the consistency of our networks, we repeat the experiments 10 times with different training sets. Table VI shows the mean and standard deviation of OA over these 10 experiments for each data set.
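For reference, the overall accuracy (OA), average accuracy (AA), and kappa coefficient [18] reported in these tables can be computed from a confusion matrix as in the following generic sketch; this is a standard formulation, not the authors' evaluation code.

```python
import numpy as np

def classification_metrics(conf):
    """OA, AA and Cohen's kappa from a CxC confusion matrix.

    conf[i, j] = number of samples of true class i predicted as class j.
    """
    conf = np.asarray(conf, dtype=np.float64)
    total = conf.sum()
    oa = np.trace(conf) / total                                    # overall accuracy
    per_class = np.diag(conf) / conf.sum(axis=1)                   # recall of each class
    aa = per_class.mean()                                          # average accuracy
    pe = (conf.sum(axis=0) * conf.sum(axis=1)).sum() / total ** 2  # chance agreement
    kappa = (oa - pe) / (1.0 - pe)                                 # Cohen's kappa [18]
    return 100 * oa, 100 * aa, kappa

# Toy 3-class confusion matrix.
conf = np.array([[96, 2, 2],
                 [ 1, 97, 2],
                 [ 0, 3, 97]])
print(classification_metrics(conf))
```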
#### Iv-C1 Description of metrics for quantitative analysis We propose a few metrics to analyze the impact of the TPO scheme on the models. \(\tau\) in Equation (9) gives the percentage of misclassified pixels in the boundary (B) or non-boundary (NB) regions. \[\tau=\frac{\sum_{c_{i}\in C}\text{\#samples misclassified in B (NB)}}{\sum_{c_{i}\in C}\text{\#samples in B (NB)}}\times 100 \tag{9}\] We propose \(\nu(a\to b)\) in Equation (10) to understand the impact of including the TPO scheme in the proposed models. \(tc(\cdot)\) identifies truly classified pixels and \(mc(\cdot)\) identifies misclassified pixels, with \(a,b\in\{Yes,No\}\). The condition (TPO \(=a\)) indicates whether the TPO scheme is considered. \[\nu(a\to b)=\frac{\sum_{c_{i}\in C}\text{\#samples in B (NB)}\ [mc(TPO=a)\text{ and }tc(TPO=b)]}{\sum_{c_{i}\in C}\text{\#samples in B (NB)}}\times 100 \tag{10}\] (An illustrative sketch of how \(\tau\) and \(\nu\) can be computed is given after the following table and figures.) \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline \multirow{2}{*}{Class} & Training & CNN & \multirow{2}{*}{BASS} & S-CNN & 2S & DR & DPP & MS- & MS- \\ & samples & -PPF & & +SVM & -Fusion & -CNN & -ML & CNN1 & CNN2 \\ \hline Asphalt & 200 & 97.42 & 97.71 & **100** & 97.47 & 98.43 & 99.38 & 99.78 & **100** \\ Meadows & 200 & 95.76 & 97.93 & 98.12 & 99.92 & 99.45 & 99.59 & 99.88 & **99.99** \\ Gravel & 200 & 94.05 & 94.95 & 99.12 & 83.80 & 99.14 & 97.33 & 99.21 & **100** \\ Trees & 200 & 97.52 & 97.80 & 99.40 & 98.98 & 99.50 & 99.31 & 99.41 & **99.93** \\ Painted metal sheets & 200 & **100** & **100** & 99.18 & **100** & **100** & **100** & **100** \\ Bare Soil & 200 & 99.13 & 96.60 & 99.10 & 97.75 & **100** & 99.99 & 99.75 & **100** \\ Bitumen & 200 & 96.19 & 98.14 & 98.50 & 77.44 & 99.70 & 99.85 & **100** & **100** \\ Self-Blocking Bricks & 200 & 93.62 & 95.46 & 99.91 & 96.65 & 99.55 & 99.02 & 99.77 & **100** \\ Shadows & 200 & 99.60 & **100** & **100** & 99.65 & **100** & **100** & **100** & **100** \\ \hline OA & 96.48 & 99.68 & 97.50 & 99.56 & 99.46 & 99.72 & 99.78 & **99.99** \\ \hline \hline \end{tabular} \end{table} TABLE II: Class-specific Accuracy(%) and OA of comparable techniques for the University of Pavia dataset Fig. 12: Variation of test accuracy on the three HSI datasets (a)-(b) with varying patch size, (c)-(d) with reduced number of spectral channels. Fig. 13: Thematic maps resulting from classification using \(7\times 7\)-patch by MS-CNN1 and MS-CNN2 respectively for (a)-(b) University of Pavia dataset, (c)-(d) Salinas dataset, and (e)-(f) Indian Pines dataset. Color codes are the same as in the ground truths.
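The boundary/non-boundary metrics of Equations (9) and (10) can be computed from label maps and a boundary mask as in the following simplified sketch. The per-class summation of the equations is collapsed into a single aggregate here, and the variable names are illustrative.

```python
import numpy as np

def tau(y_true, y_pred, region_mask):
    """Eq. (9): % of misclassified pixels inside a region (boundary or non-boundary)."""
    region = region_mask.astype(bool)
    mis = (y_pred[region] != y_true[region]).sum()
    return 100.0 * mis / region.sum()

def nu(y_true, pred_a, pred_b, region_mask):
    """Eq. (10): % of region pixels misclassified under setting a but correct under b.

    pred_a / pred_b are predictions of the same model without / with the TPO
    scheme (or any two settings being compared).
    """
    region = region_mask.astype(bool)
    flipped = (pred_a[region] != y_true[region]) & (pred_b[region] == y_true[region])
    return 100.0 * flipped.sum() / region.sum()

# Toy example with a 1-D strip of labelled pixels.
y_true   = np.array([0, 0, 1, 1, 2, 2])
pred_no  = np.array([0, 1, 1, 2, 2, 2])   # without TPO
pred_yes = np.array([0, 0, 1, 1, 2, 2])   # with TPO
boundary = np.array([0, 1, 1, 1, 1, 0])   # pixels near a class boundary
print(tau(y_true, pred_no, boundary), nu(y_true, pred_no, pred_yes, boundary))
```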
\\begin{table} \\begin{tabular}{c c c c c c} \\hline \\hline \\multicolumn{5}{c}{Number of training samples per class=200} \\\\ \\hline \\multirow{2}{*}{Class} & CNN- & \\multirow{2}{*}{BASS} & S-CNN & MS- & MS- \\\\ & PPF & & & +SVM & CNN1 & CNN2 \\\\ \\hline Com-mini & 92.99 & 96.09 & 98.25 & **100** & **100** \\\\ Corr-minitil & 96.66 & 98.25 & 99.64 & **99.92** & 99.75 \\\\ Grass-pasture & 95.58 & **100** & 97.10 & **100** & 99.68 \\\\ Grass-tree & **100** & 99.24 & 99.86 & 99.73 & 99.82 \\\\ Hay-windword & **100** & **100** & **100** & **100** & **100** \\\\ Soybean-notil & 96.24 & 94.82 & 98.87 & **100** & **100** \\\\ Soybean-mini & 87.80 & 94.41 & 98.57 & 99.74 & **100** \\\\ Soybean-clean & 98.98 & 97.46 & **100** & **100** & **100** \\\\ Woods & 99.81 & 99.90 & **100** & **100** & 99.72 \\\\ \\hline OA & 94.34 & 96.77 & 99.04 & **99.89** & 99.84 \\\\ \\hline \\hline \\end{tabular} \\end{table} TABLE III: Class-specific Accuracy(%) and OA of comparable techniques for the Indian Pines dataset. Training and testing is restricted to 9 classes. \\begin{table} \\begin{tabular}{c c c c c c c c c} \\hline \\hline \\multirow{2}{*}{Class} & \\multicolumn{2}{c}{Training} & \\multicolumn{2}{c}{CNN} & \\multicolumn{2}{c}{DR} & \\multicolumn{2}{c}{DPP} & MS- & MS- \\\\ & samples & -PFF & \\multicolumn{2}{c}{BASS} & \\multicolumn{2}{c}{-CNN} & \\multicolumn{2}{c}{-ML} & \\multicolumn{2}{c}{CNN1} & \\multicolumn{2}{c}{CNN2} \\\\ \\hline Brocoli-green-weeds-1 & 200 & **100** & **100** & **100** & **100** & **100** & **100** & **100** \\\\ Brocoli-green-weeds-2 & 200 & 99.88 & 99.97 & 100 & **100** & **100** & **100** & **100** \\\\ Fallow & 200 & 99.60 & **100** & 99.98 & **100** & **100** & 99.72 \\\\ Fallow-rough-plow & 200 & 99.49 & 99.66 & 99.89 & 99.25 & **100** & **100** \\\\ Fallow-smooth & 200 & 98.34 & 99.59 & 99.83 & 99.44 & 99.84 & **99.88** \\\\ Stubble & 200 & 99.97 & 100 & **100** & **100** & **100** & **100** \\\\ Celery & 200 & **100** & 99.91 & 99.96 & 99.87 & **100** & **100** \\\\ Grapes-untrained & 200 & 88.68 & 90.11 & 94.14 & 95.36 & 94.30 & **98.17** \\\\ Soil-vinyard-develop & 200 & 98.33 & 99.73 & 99.99 & **100** & 99.75 & **100** \\\\ Corn-senesed-green-weeds & 200 & 98.60 & 97.46 & 99.20 & 98.85 & 94.02 & 99.35 \\\\ Lettuce-romaine-4wk & 200 & 99.54 & 99.08 & 99.99 & 99.77 & **100** & **100** \\\\ Lettuce-romaine-5wk & 200 & **100** & **100** & **100** & **100** & **100** & **100** \\\\ Lettuce-romaine-6wk & 200 & 99.44 & 99.44 & **100** & 99.86 & **100** & **100** \\\\ Lettuce-romaine-7wk & 200 & 98.96 & **100** & **100** & 99.77 & **100** & **100** \\\\ Vinyard-untrained & 200 & 83.53 & 83.94 & **95.52** & 90.50 & 95.08 & 94.03 \\\\ Vinyard-vertical-trellis & 200 & 99.31 & 99.38 & 99.72 & 98.94 & **100** & **100** \\\\ \\hline OA & 94.80 & 95.36 & 98.33 & 97.51 & 97.98 & **98.72** \\\\ \\hline \\hline \\end{tabular} \\end{table} TABLE VI: Mean and standard deviation of 10 Independent Experiments \\begin{table} \\begin{tabular}{c c c c c c c c c} \\hline \\hline \\multirow{2}{*}{Class} & \\multicolumn{2}{c}{Training} & \\multicolumn{2}{c}{CNN} & \\multicolumn{2}{c}{DR} & \\multicolumn{2}{c}{DPP} & MS- & \\multicolumn{2}{c}{MS-} \\\\ & samples & -CNN & -ML & CNN1 & CNN2 & samples & Fusion & CNN1 & CNN2 \\\\ \\hline Alfalfa & - & - & - & - & - & 33 & **100** & **100** & **100** \\\\ Corn-notill & 200 & 98.20 & 99.03 & **100** & **100** & 861 & 95.35 & **100** & **100** \\\\ Corn-mintill & 200 & **99.79** & 99.74 & 99.51 & 99.67 & 501 & 98.75 & **100** & **100** \\\\ Corn 
& - & - & - & - & - & 141 & **100** & **100** & **100** \\\\ Grass-pasture & 200 & **100** & **100** & **100** & **100** & 299 & **100** & **100** \\\\ Grass-trees & - & - & - & - & - & 449 & 99.32 & **100** & **100** \\\\ Grass-pasture-mowed & - & - & - & - & - & 9 & **100** & **100** & **100** \\\\ Hay-windowed & 200 & **100** & **100** & 98.84 & 98.85 & 294 & **100** & **100** & **100** \\\\ Oats & - & - & - & - & - & 12 & **100** & **100** & **100** \\\\ Soybean-notill & 200 & 99.78 & 99.61 & **100** & **100** & 580 & **100** & **100** & **100** \\\\ Soybean-mintill & 200 & 96.69 & 97.80 & **100** & 99.91 & 1480 & 98.03 & **100** & **100** \\\\ Soybean-clean & 200 & 99.86 & **100** & **100** & **100** & 369 & **100** & **100** & **100** \\\\ Wheat & - & - & - & - & - & 127 & 97.87 & **100** & **100** \\\\ Woods & 200 & 99.99 & **100** & **100** & **100** & 777 & 99.62 & **100** & **100** \\\\ Buildings-Grass-Trees-Drives & - & - & - & - & - & 228 & 98.53 & **100** & **100** \\\\ Stone-Steel-Towers & - & - & - & - & - & 57 & **100** & **100** & **100** \\\\ \\hline OA & & 98.54 & 99.08 & 99.54 & **99.55** & & 98.65 & **100** & **100** \\\\ \\hline \\hline \\end{tabular} \\end{table} TABLE IV: Class-specific Accuracy(%) and OA of comparable techniques for the Indian Pines dataset. DR-CNN and DPP-ML consider 8 classes and 2S-Fusion includes 16 classes for the experiment. \\begin{table} \\begin{tabular}{c c c c c c c} \\hline \\hline \\multirow{2}{*}{\\#samples} & \\multicolumn{2}{c}{150} & \\multicolumn{2}{c}{100} & \\multicolumn{2}{c}{50} \\\\ \\hline MS-CNN & 1 & 2 & 1 & 2 & 1 & 2 \\\\ \\hline OA & 99.58 & 99.93 & 99.16 & 99.75 & 98.25 & 98.98 \\\\ std. dev. & (0.22) & (0.05) & (0.62) & (0.13) & (1.15) & (0.62) \\\\ AA & 99.76 & 99.92 & 98.88 & 99.67 & 97.68 & 98.65 \\\\ std. dev. & (0.08) & (0.06) & (0.2) & (0.17) & (1.50) & (0.82) \\\\ \\(\\kappa\\) & 99.44 & 99.91 & 99.45 & 99.82notes deep model whose values can be \\(a\\) or \\(b\\). \\(a,b\\in\\{\\text{MS-CNN1},\\text{MS-CNN2}\\}\\) \\[\\mu(a\\to b)=\\frac{\\sum_{z_{i}\\in C}\\text{\\#samples in B (NB\\}\\ [mc(\\mathcal{M}=a)\\text{ and }tc(\\mathcal{M}=b)\\ ]}{\\sum_{z_{i}\\in C}\\text{\\#samples in B (NB)}}\\times 100 \\tag{11}\\] #### Iii-B2 Impact of TPO scheme on each model Table XII clearly shows that the misclassification rate reduces along the boundary and non-boundary regions with the inclusion of the TPO scheme. A critical challenge in hyper-spectral imaging is that some pixels which belong to the same land cover class may have different spectral signatures due to complex light scattering mechanism. Therefore, an approach that is capable of making generalized features for the case, as mentioned above, can offer better classification accuracies. We discuss the impact of the TPO approach on the above issue in Salinas dataset. We found that the class='vinyl-untrained' has highest percentage of misclassification by every model irrespective of TPO scheme. Also, it is identified that most of them are misclassified as 'grapes-untrained'. For 'vinyl-untrained' and 'grapes-untrained' in _Salinas_, we calculate the average of reflectance of selected samples for each spectral band belonging to the respective class and name it Average Reflectance Spectrum (ARS). In Figure 14, we have shown ARS for 'vinyl-untrained' and 'grapes-untrained' in _Salinas_ dataset. As shown in this figure, two of them have very similar reflectance spectrum. 
In Figure 15, we have selected samples for computing ARS on the basis of three conditions: i) samples that are correctly predicted by each model with _TPO_ and non-TPO scheme, ii) samples are correctly predicted by each model with the _TPO_ scheme but misclassified by each model with non-_TPO_ scheme, and iii) samples that are misclassified by each model with _TPO_ and non-TPO scheme. The first condition incorporates most of the samples of the respective class. Figure 15 clearly reveals that the class has different reflectance spectrum within itself. Additionally, it shows that the _TPO_ scheme is able to predict correctly a fraction of class members whose reflectance spectrum is different from the majority of the class members. Equation (12) quantitatively analyze the improvement in classification of pixels due to incorporation of the _TPO_ approach. We have tabulated values of Equation (12) in Table XI. It clearly states that _TPO_-scheme has positive effect on the class which has different spectral signature. \\[\\alpha(X|Y)=\\frac \\begin{table} \\begin{tabular}{c c c c c} \\hline \\hline TPO & metric & U.P & LP & S \\\\ \\hline \\multicolumn{5}{c}{_MS-CNN1_} \\\\ \\hline \\multirow{3}{*}{No} & _OA_ & 92.69 & 81.26 & 93.47 \\\\ & _AA_ & 92.32 & 88.40 & 97.14 \\\\ & _\\(\\kappa\\)_ & 90.24 & 77.82 & 92.69 \\\\ \\hline \\multirow{3}{*}{Yes} & _OA_ & **98.78** & **94.44** & **94.84** \\\\ & _AA_ & **98.60** & **96.92** & **97.73** \\\\ & _\\(\\kappa\\)_ & **98.36** & **93.36** & **94.22** \\\\ \\hline \\multicolumn{5}{c}{_MS-CNN2_} \\\\ \\hline \\multirow{3}{*}{No} & _OA_ & 89.06 & 82.27 & 93.51 \\\\ & _AA_ & 91.14 & 88.73 & 97.26 \\\\ & _\\(\\kappa\\)_ & 85.50 & 78.97 & 92.73 \\\\ \\hline \\multirow{3}{*}{Yes} & _OA_ & **99.05** & **97.32** & **95.59** \\\\ & _AA_ & **98.94** & **98.40** & **98.35** \\\\ \\cline{1-1} & _\\(\\kappa\\)_ & **98.72** & **96.79** & **95.07** \\\\ \\hline \\hline \\end{tabular} \\end{table} Table X: Impact of TPO scheme (metric \\(\ u(a\\to b)\\)) for the improvement of classification in boundary and non-boundary regions. N abbreviates No and Y abbreviates Yes. \\begin{table} \\begin{tabular}{c c c c c} \\hline \\hline \\multirow{2}{*}{Dataset} & Models & MS-CNN1 & MS-CNN2 \\\\ \\hline \\multirow{2}{*}{U. P} & Boundary & 8.03 & **1.75** & 12.39 & **14.2** \\\\ & Non-boundary & 6.42 & **0.84** & 9.42 & **0.62** \\\\ \\hline \\multirow{2}{*}{LP} & Boundary & 18.60 & **8.10** & 19.27 & **8.14** \\\\ & Non-boundary & 13.87 & **3.22** & 12.54 & **1.12** \\\\ \\hline \\multirow{2}{*}{S} & Boundary & 6.69 & **5.72** & 6.18 & **4.26** \\\\ & Non-boundary & 6.09 & **4.80** & 6.12 & **4.15** \\\\ \\hline \\hline \\end{tabular} \\end{table} Table X: Impact of models (metric \\(\\mu\\)) on the improvement of classification in boundary and non-boundary region. \\begin{table} \\begin{tabular}{c c c c c} \\hline \\hline \\multirow{2}{*}{Dataset} & Models & MS-CNN1 & MS-CNN2 \\\\ \\cline{2-5} & TPO & No & Yes & No & Yes \\\\ \\hline \\multirow{2}{*}{U. P} & Boundary & 8.03 & **1.75** & 12.39 & **14.2** \\\\ & Non-boundary & 6.42 & **0.84** & 9.42 & **0.62** \\\\ \\hline \\multirow{2}{*}{LP} & Boundary & 18.60 & **8.10** & 19.27 & **8.14** \\\\ & Non-boundary & 13.87 & **3.22** & 12.54 & **1.12** \\\\ \\hline \\multirow{2}{*}{S} & Boundary & 6.69 & **5.72** & 6.18 & **4.26** \\\\ & Non-boundary & 6.09 & **4.80** & 6.12 & **4.15** \\\\ \\hline \\hline \\end{tabular} \\end{table} Table X: Total misclassification (in %) of pixels (\\(\\tau\\)) in boundary and non-boundary region. 
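A small NumPy sketch of the ARS computation discussed above is given below: it averages the per-band reflectance of the samples selected by a boolean condition. The toy sizes, the class index, and the three selection masks mirror the conditions described in the text but are otherwise illustrative assumptions.

```python
import numpy as np

def average_reflectance_spectrum(pixels, select):
    """Mean reflectance per spectral band over the selected samples.

    pixels : (N, d) array of N pixel spectra with d bands.
    select : (N,) boolean mask choosing the samples to average.
    """
    return pixels[select].mean(axis=0)  # (d,) average reflectance spectrum (ARS)

# Toy data standing in for pixels, labels and the two sets of predictions.
N, d, num_classes = 1000, 204, 16
pixels = np.random.rand(N, d)
labels = np.random.randint(0, num_classes, N)
pred_tpo = labels.copy()                                           # placeholder predictions
pred_no_tpo = np.where(np.arange(N) % 7 == 0, (labels + 1) % num_classes, labels)

cls = 14                                 # index of the class of interest (illustrative)
in_class = labels == cls
both_correct  = in_class & (pred_tpo == labels) & (pred_no_tpo == labels)
only_with_tpo = in_class & (pred_tpo == labels) & (pred_no_tpo != labels)
both_wrong    = in_class & (pred_tpo != labels) & (pred_no_tpo != labels)

for name, mask in [("both correct", both_correct),
                   ("correct only with TPO", only_with_tpo),
                   ("both wrong", both_wrong)]:
    if mask.any():
        print(name, average_reflectance_spectrum(pixels, mask).shape)
```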
Bold fond is used to highlight lower misclassification. \\begin{table} \\begin{tabular}{c c c c c c} \\hline \\hline \\multirow{2}{*}{Dataset} & Models & MS-CNN1 & MS-CNN2 \\\\ \\cline{2-5} & TPO & No & Yes & No & Yes \\\\ \\hline \\multirow{2}{*}{U. P} & Boundary & 8.03 & **1.75** & 12.39 & **14.2** \\\\ & Non-boundary & 6.42 & **0.84** & 9.42 & **0.62** \\\\ \\hline \\multirow{2}{*}{LP} & Boundary & 18.60 & **8.10** & 19.27 & **8.14** \\\\ & Non-boundary & 13.87 & **3.22** & 12.54 & **1.12** \\\\ \\hline \\multirow{2}{*}{S} & Boundary & 6.69 & **5.72** & 6.18 & **4.26** \\\\ & Non-boundary & 6.09 & **4.80** & 6.12 & **4.15** \\\\ \\hline \\hline \\end{tabular} \\end{table} Table X: Total misclassification (in %) of pixels (\\(\\tau\\)) in boundary and non-boundary region. Bold fond is used to highlight lower misclassification. \\begin{table} \\begin{tabular}{c c c c c} \\hline \\hline \\multirow{2}{*}{Dataset} & Models & MS-CNN1 & MS-CNN2 \\\\ \\cline{2-5} & TPO & No & Yes & No & Yes \\\\ \\hline \\multirow{2}{*}{U. P} & Boundary & 8.03 & **1.75** & 12.39 & **14.2** \\\\ & Non-boundary & 6.42 & **0.84** & 9.42 & **0.62** \\\\ \\hline \\multirow{2}{*}{LP} & Boundary & 18.60 & **8.10** & 19.27 & **8.14** \\\\ & Non-boundary & 13.87 & **3.22** & 12.54 & **1.12** \\\\ \\hline \\multirow{2}{*}{S} & Boundary & 6.69 & **5.72** & 6.18 & **4.26** \\\\ & Non-boundary & 6.09 & **4.80** & 6.12 & **4.15** \\\\ \\hline \\hline \\end{tabular} \\end{table} Table X: Total misclassification (in %) of pixels (\\(\\tau\\)) in boundary and non-boundary region. Bold fond is used to highlight lower misclassification. Fig. 15: Average reflectance spectrum of selected samples based on three conditions. TPO\\(\\rightarrow\\)’X’ and Non-TPO\\(\\rightarrow\\)’Y’ represents prediction for the sample by the two models with the _TPO_ scheme is ‘X’ and non-_TPO_ scheme is ‘Y’. Legends show the three conditions. * [5] G. Camps-Valls and L. Bruzzone, \"Kernel-based methods for hyperspectral image classification,\" in IEEE Transactions on Geoscience and Remote Sensing, vol. 43, no. 6, pp. 1351-1362, 2005. * [6] W. Li, S. Prasad, J. E. Fowler and L. M. Bruce, \"Locality-Preserving Dimensionality Reduction and Classification for Hyperspectral Image Analysis,\" in IEEE Transactions on Geoscience and Remote Sensing, vol. 50, no. 4, pp. 1185-1198, 2012. * [7] X. Wang, Y. Kong, Y. Gao and Y. Cheng, \"Dimensionality Reduction for Hyperspectral Data Based on Pairwise Constraint Discriminative Analysis and Nonnegative Sparse Divergence,\" in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 10, no. 4, pp. 1552-1562, 2017. * [8] S. Chen and D. Zhang, \"Semisupervised Dimensionality Reduction With Pairwise Constraints for Hyperspectral Image Classification,\" in IEEE Geoscience and Remote Sensing Letters, vol. 8, no. 2, pp. 369-373, 2011. * [9] W. Zhao and S. Du, \"Spectral-Spatial Feature Extraction for Hyperspectral Image Classification: A Dimension Reduction and Deep Learning Approach,\" in IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 8, pp. 4544-4554, 2016. * [10] F. A. Manji and Y. Zhang, \"Robust Hyperspectral Classification Using Relevance Vector Machine,\" in IEEE Transactions on Geoscience and Remote Sensing, vol. 49, no. 6, pp. 2100-2112, 2011. * [11] A. Samat, P. Du, S. Liu, J. Li and L. 
Cheng, \"\\(\\text{E}^{2}\\text{L}\\text{M}\\text{s}\\) : Ensemble Extreme Learning Machines for Hyperspectral Image Classification,\" in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 7, no. 4, pp. 1060-1069, 2014. * [12] W. Li, C. Chen, H. Su and Q. Du, \"Local Binary Patterns and Extreme Learning Machine for Hyperspectral Imagery Classification,\" in IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 7, pp. 3681-3693, 2015. * [13] T. Lu, S. Li, L. Fang, L. Bruzzone and J. A. Benediktsson, \"Set-to-Set Distance-Based Spectral-Spatial Classification of Hyperspectral Images,\" in IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 12, pp. 7122-7134, 2016. * [14] Guang-Bin Huang, Qin-Yu Zhu, Chee-Kheong Siew, \"Extreme learning machine: Theory and applications,\" Neurocomputing, vol 70, Issues 1-3, pp. 489-501, 2006. * [15] Jie Gui, Zhenan Sun, Wei Jia, Rongxiang Hu, Yingke Lei, Shuiwang Ji, \"Discriminant sparse neighborhood preserving embedding for face recognition,\" Pattern Recognition, vol. 45, Issue 8, pp 2884-2893, 2012. * [16] D. Lunga, S. Prasad, M. M. Crawford and O. Ersoy, \"Manifold-Learning-Based Feature Extraction for Classification of Hyperspectral Data: A Review of Advances in Manifold Learning,\" in IEEE Signal Processing Magazine, vol. 31, no. 1, pp. 55-66, 2014. Fig. 16: Thematic maps resulting from classification using \\(3\\times 3\\)-patch by _MS-CNN1_ for (a)-(b) University of Pavia dataset, (c)-(d) Salinas dataset and (c)-(d) Indian Pines dataset respectively. Color code is similar to its ground truths. White color code is used to show misclassified pixels. Fig. 17: Thematic maps resulting from classification using \\(3\\times 3\\)-patch by _MS-CNN2_ for (a)-(b) Salinas dataset, (c)-(d) University of Pavia dataset, and (e)-(f) Indian Pines dataset respectively. Color code is similar to its ground truths. White color code is used to show misclassified pixels. * [17] Christian Szegedy, Sergey Ioffe and Vincent Vanhoucke, \"Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning,\" in CoRR, vol. abs/1602.07261, arXiv, 2016. * [18] Cohen, J. A Coefficient of Agreement for Nominal Scales. Educational and Psychological Measurement, vol. 20, no. 1, pp. 37-46. 1960. * [19] S. Jia, X. Zhang and Q. Li, \"Spectral-Spatial Hyperspectral Image Classification Using\\(\\varepsilon_{1/2}\\)Regularized Low-Rank Representation and Sparse Representation-Based Graph Cuts,\" in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 8, no. 6, pp. 2473-2484, 2015. * [20] H. Zhang, Y. Li, Y. Jiang, P. Wang, Q. Shen and C. Shen, \"Hyperspectral Classification Based on Lightweight 3-D-CNN With Transfer Learning,\" in IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 8, pp. 5813-5828, 2019. * [21] S. K. Roy, G. Krishna, S. R. Dubey and B. B. Chaudhuri, \"HybridSN: Exploring 3-D-2-D CNN Feature Hierarchy for Hyperspectral Image Classification,\" in IEEE Geoscience and Remote Sensing Letters, pp. 1-5, 2019. * [22] Y. Chen, X. Zhao and X. Jia, \"Spectral-Spatial Classification of Hyperspectral Data Based on Deep Belief Network,\" in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 8, no. 6, pp. 2381-2392, June 2015. * [23] T. Li, J. Zhang and Y. Zhang, \"Classification of hyperspectral image based on deep belief networks,\" IEEE International Conference on Image Processing (ICIP), pp. 5132-5136, 2014. * [24] P. Zhong, Z. 
Gong, S. Li and C. Schonlieb, \"Learning to Diversify Deep Belief Networks for Hyperspectral Image Classification,\" in IEEE Transactions on Geoscience and Remote Sensing, vol. 55, no. 6, pp. 3516-3530, 2017. * [25] P. Zhong, Zhiqiang Gong and C. Schonlieb, \"A DBN-crf for spectral-spatial classification of hyperspectral data,\" 23rd International Conference on Pattern Recognition (ICPR), pp. 1219-1224, 2016. * [26] J. Feng, L. Liu, X. Zhao, R. Wang and H. Liu, \"Hyperspectral image classification based on stacked marginal discriminative autoencoder,\" 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, 2017, pp. 3668-3671. * [27] J. E. Ball and P. Wei, \"Deep Learning Hyperspectral Image Classification using Multiple Class-Based Denoising Autoencoders, Mixed Pixel Training Augmentation, and Morphological Operations,\" IEEE International Geoscience and Remote Sensing Symposium (IGARSS), pp. 6903-6906, 2018. * [28] J. Feng, L. Liu, X. Cao, L. Jiao, T. Sun and X. Zhang, \"Marginal Stacked Autoencoder With Adaptively-Spatial Regularization for Hyperspectral Image Classification,\" in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 11, no. 9, pp. 3297-3311, 2018. * [29] S. Zhou, Z. Xue and P. Du, \"Semisupervised Stacked Autoencoder With Cortaining for Hyperspectral Image Classification,\" in IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 6, pp. 3813-3826, June 2019. * [30] W. Li, G. Wu, F. Zhang and Q. Du, \"Hyperspectral Image Classification Using Deep Pixel-Pair Features,\" in IEEE Transactions on Geoscience and Remote Sensing, vol. 55, no. 2, pp. 844-853, Feb. 2017. * [31] M. Zhang, W. Li and Q. Du, \"Diverse Region-Based CNN for Hyperspectral Image Classification,\" in IEEE Transactions on Image Processing, vol. 27, no. 6, pp. 2623-2634, June 2018. * [32] S. Hao, W. Wang, Y. Ye, T. Nie and L. Rruzzone, \"Two-Stream Deep Architecture for Hyperspectral Image Classification,\" in IEEE Transactions on Geoscience and Remote Sensing, vol. 56, no. 4, pp. 2349-2361, April 2018. * [33] Z. Gong, P. Zhong, Y. Yu, W. Hu and S. Li, \"A CNN With Multiscale Convolution and Diversified Metric for Hyperspectral Image Classification,\" in IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 6, pp. 3599-3618, June 2019. * [34] A. Santara et al., \"BASS Net: Band-Adaptive Spectral-Spatial Feature Learning Neural Network for Hyperspectral Image Classification,\" in IEEE Transactions on Geoscience and Remote Sensing, vol. 55, no. 9, pp. 5293-5301, 2017. * [35] K. Simonyan and A. Zisserman, \"Very Deep Convolutional Networks for Large-Scale Image Recognition\", in CoRR, vol. abs/1409.1556, arXiv, 2014. * [36] B. Liu, X. Yu, P. Zhang, A. Yu, Q. Fu and X. Wei, \"Supervised Deep Feature Extraction for Hyperspectral Image Classification,\" in IEEE Transactions on Geoscience and Remote Sensing, vol. 56, no. 4, pp. 1909-1921, April 2018. * [37] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. E. Reed, D.Anguelov, D. Erhan, V. Vanhoucke and A. Rabinovich, \"Going Deeper with Convolutions\", in CoRR, vol. abs/1409.4842, arXiv, 2014. * [38] K. He, X. Zhang, S. Ren and J. Sun, \"Deep Residual Learning for Image Recognition\",in CoRR, vol. abs/1512.03385, arXiv, 2015. * ECCV, pp. 122-138, 2018. * [40] S. Ioffe and C. Szegedy, \"Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift\", in CoRR, vol. abs/1502.03167, arXiv, 2015. * [41] B. McFee, J. Salamon and J. P. 
Bello, \"Adaptive Pooling Operators for Weakly Labeled Sound Event Detection,\" in IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 26, no. 11, pp. 2180-2193, 2018. * [42] Z. Tian, J. Ji, S. Mei, J. Hou, S. Wan and Q. Du, \"Hyperspectral Classification Via Spatial Context Exploration with Multi-Scale CNN,\" in IEEE International Geoscience and Remote Sensing Symposium (IGARSS), pp.2563-2566, 2018. * [43] X. Lu, Y. Zhong and J. Zhao, \"Multi-Scale Enhanced Deep Network for Road Detection,\" in IEEE International Geoscience and Remote Sensing Symposium (IGARSS), pp. 3947-3950, 2019. \\begin{tabular}{c c} & Jayasree Saha received M.Tech degree from University of Calcutta in 2014. She is currently pursuing the Ph.D degree with the Department of Computer Science and Engineering, Indian Institute of Technology (IIT), Kharagpur, India. Her research interest includes computer vision, remote sensing and deep learning \\\\ \\end{tabular} \\begin{tabular}{c c} & Yuvraj Khanna is currently pursuing the Dual degree with the Department of Electronics and Electrical communications and Engineering, Indian Institute of Technology (IIT), Kharagpur, India. His research interest includes remote sensing and deep learning \\\\ \\end{tabular} \\begin{tabular}{c c} & Jayanta Mukherjee received his B.Tech., M.Tech., and Ph.D. degrees in Electronics and Electrical Communication Engineering from the Indian Institute of Technology (IIT), Kharagpur in 1985, 1987, and 1990, respectively. He is currently a Professor with the Department of Computer Science and Engineering, Indian Institute of Technology (IIT), Kharagpur, India. His research interests are in image processing, pattern recognition, computer graphics, multimedia systems and medical informatics. He has published about 250 research papers in journals and conference proceedings in these areas. He received the Young Scientist Award from the Indian National Science Academy in 1992. Dr. Mukherjee is a Senior Member of the IEEE. He is a fellow of the Indian National Academy of Engineering (INAE). \\\\ \\end{tabular}
Presently, CNNs are a popular choice for handling the challenges of hyperspectral image classification. Although Hyperspectral Images (HSIs) carry rich spectral information, this large spectral dimensionality creates a curse of dimensionality. In addition, the large spatial variability of spectral signatures adds difficulty to the classification problem, and the scarcity of training examples is a bottleneck for using a CNN. In this paper, a novel target-pixel-orientation method is proposed to train a CNN-based network. We also introduce a hybrid 3D-CNN and 2D-CNN network architecture that implements band reduction and feature extraction, respectively. Experimental results show that our method outperforms the accuracies reported by existing state-of-the-art methods. Index terms: Target-Pixel-Orientation, multi-scale convolution, deep learning, hyperspectral classification.
arxiv-format/2209_05996v3.md
# M\({}^{2}\)-3DLaneNet: Exploring Multi-Modal 3D Lane Detection Yueru Luo\({}^{1,2}\) Xu Yan\({}^{1,2}\) Chaoda Zheng\({}^{1,2}\) Chao Zheng\({}^{3}\) Shuqi Mei\({}^{3}\) Tang Kun\({}^{3}\) Shuguang Cui\({}^{2,1}\) Zhen Li\({}^{2,1}\) \({}^{1}\) FNii, CUHK-Shenzhen \({}^{2}\) SSE, CUHK-Shenzhen \({}^{3}\) Tencent Map, T Lab Corresponding author ## 1 Introduction Accurate and robust lane detection is the foundation of safety in autonomous driving. Over the past few years, camera-based 2D lane detection has been extensively studied and has achieved impressive results [8, 10, 11, 18, 19, 24, 30, 48, 49]. However, given 2D lane detection results, accurately localizing lanes in 3D space requires complex post-processing due to depth ambiguity. To circumvent this problem, several open 3D lane datasets [7, 9, 47] have been proposed, which have encouraged the development of algorithms [3, 5, 7, 9, 25, 47] for the detection of 3D lanes. Most previous methods model 3D lane detection in a vision-centric manner, taking front-view camera images as inputs and predicting 3D lanes in the bird's eye view (BEV) space. Due to the lack of depth information, these approaches mainly utilize inverse perspective mapping (IPM) [5, 7, 9, 25] to transform image features from the front view to the BEV space, as shown in Figure 1 (a). Figure 1: (a) Previous methods [3, 7] mainly utilize inverse perspective mapping (IPM) to transform front-view image features to BEV features, through either ground-truth or learned camera poses. (b) Our proposed M\({}^{2}\)-3DLaneNet first lifts image features to 3D space through LiDAR-based depth completion to form a pseudo point cloud, and then fuses multi-modality features in the BEV for 3D lane detection. However, IPM is based on the flat-road assumption, which does not hold for most real-world scenarios (_e.g_., on sloping terrain). To mitigate the distortion caused by the flat-road assumption, a recent work [3] adopted a deformable attention mechanism [50] to improve the view transformation and achieved state-of-the-art results on 3D lane detection. Although images can provide clues about the lanes with rich semantic context, cameras are sensitive to illumination changes. Therefore, these vision-centric methods become unreliable under extreme lighting conditions. Recently, LiDARs have become increasingly popular in autonomous driving systems. The accurate depth information provided by LiDARs, regardless of lighting variations, motivates us to explore _whether LiDAR data can facilitate modern 3D lane detection and how much gain it can bring_. Thanks to their inherently rich 3D geometric information, distinguishing the flat ground from other objects (_e.g_., cars, pedestrians) can be much easier in point clouds. This characteristic is preferable for lane detection, as lanes are typically located on the ground. We also find that lane lines have discriminative _intensity_ compared to road surfaces and other objects in most scenarios, as shown in Figure 5. However, the sparsity and slender appearance of lanes in point clouds result in lanes constituting only a small fraction of the total points. Consequently, lane lines in point clouds often get obscured by the surrounding ground surfaces. Motivated by the above observations, our objective is to enhance 3D lane detection by synergistically leveraging complementary information from images and LiDAR point clouds. This paper begins with an exploration of LiDAR's capabilities in 3D lane detection.
Subsequently, we introduce M\({}^{2}\)-3DLaneNet, a multi-modal framework designed to achieve precise 3D lane detection and overcome the limitations of prior single-modal approaches (Figure 1 (b)). M\({}^{2}\)-3DLaneNet takes geometry-rich point clouds and texture-rich images as inputs, processing them through two modality-specific streams to predict 3D lanes via multi-scale cross-modal feature fusion. To obtain the 3D lane prediction, we propose to conduct feature fusion in a unified BEV space. In the LiDAR stream, we generate multi-scale features in BEV space using a 3D pillar-based backbone [17]. For the camera stream, the input image is initially encoded into multi-scale features and subsequently enhanced using our line-restricted feature aggregation. Afterward, the multi-scale image features are projected into 3D space as pseudo points, aided by the completed depth information obtained from LiDAR. These lifted points are pillarized into the same BEV space as the LiDAR stream. With multi-scale BEV features from both modalities in hand, we perform a bottom-up fusion, transitioning from the large-scale maps to the smallest one, to generate the final 3D lane prediction. Experiments on the OpenLane dataset [3] show that our model surpasses all previous methods by a significant margin. In summary, our main contributions are twofold: * We explore the possibility of utilizing LiDARs to detect 3D lanes alongside modern vision-centric solutions by conducting comprehensive experiments on the Waymo [36]-based 3D lane detection dataset, namely OpenLane. In doing so, we provide insights into how LiDARs can facilitate 3D lane detection and why their utilization is beneficial, investigating their geometric patterns and intensity, a topic that remains underexplored in the current literature. * We propose M\({}^{2}\)-3DLaneNet, a multi-modal framework for accurate 3D lane detection, which serves as a strong baseline for future research. This framework effectively employs complementary features from camera images and LiDAR point clouds to detect 3D lanes. ## 2 Related work ### 2D Lane Detection The goal of 2D lane detection is to detect the positions of lanes in 2D images. Among recent methods, four mainstream strategies are adopted to perform this task: 1) Anchor-based methods [49, 33, 20, 34] leverage line-like anchors, which suit the slender nature of lanes, to detect lanes. [42] utilizes attention between anchors to aggregate global information. Further, [33] uses row anchors to represent lanes, by which lane detection is formulated as a row-selection problem. 2) Segmentation-based methods [45, 30, 11, 24] aim to predict the lane mask through a pixel-wise classification task. 3) Parametric methods [26, 39, 6] take a distinctive approach to lane detection, _i.e_., they learn to predict the parameters of curves that fit the lanes. [26, 39] model lane shape with polynomial parameters, while [6] adopts Bezier curves to capture holistic lane structure. 4) Key-point-based methods [34, 44] formulate the lane detection problem from a key-point perspective and finally obtain lanes by associating points belonging to the same lane. However, since lane annotations are defined on 2D images, these methods lack the ability to accurately localize lanes in 3D space. ### 3D Lane Detection 3D lane detection aims at predicting lane lines in 3D space.
While most deep-learning-based 3D lane detection algorithms generate 3D predictions, they still rely on monocular images due to the lack of public 3D datasets. Among these methods, 3D-LaneNet [7] is the first to detect 3D lanes using monocular front-view images. It utilizes inverse perspective mapping (IPM) to transform the front-view images into the top view, employing camera poses predicted by a learned branch to predict lanes on a 3D plane. Gen-LaneNet [9] proposes a virtual top view as a surrogate to address the misalignment between the anchor representation and the internal features projected by IPM. CLGo [25] realizes a two-stage framework with pose estimation and polynomial estimation. These methods rely on IPM to project image features to top-view features. The recent PersFormer [3] leverages the deformable attention mechanism to mitigate the discrepancies introduced by IPM. However, IPM introduces distortion in uphill or downhill scenarios, which jeopardizes the ability of the model to perceive the scene accurately and consistently. Alternatively, ONCE [47] directly generates 3D lanes without relying on IPM: it detects 2D lanes on images and projects them into 3D space with the help of depth estimation. Nevertheless, these monocular approaches heavily depend on camera features, which suffer from depth ambiguity and sensitivity to lighting conditions. Some approaches attempt to detect 3D lanes using LiDAR data [14, 15, 41], but they often rely heavily on hand-crafted intensity thresholds, making it challenging to determine suitable values across different environments. Fortunately, [3] provides a large-scale 3D lane detection dataset, OpenLane, which is the first public 3D lane dataset containing both camera and LiDAR data with pixel-to-point correspondences. This new dataset offers an opportunity to develop multi-modal approaches that could achieve better 3D lane detection. ### Bird's-Eye-View Perception BEV-based representations have recently drawn considerable attention in 3D perception tasks, owing to their capability and compactness in representing multi-sensor features and in unifying multiple tasks in a shared space. LSS [31] proposed BEV semantic segmentation for an arbitrary camera rig by lifting 2D images to frustums. This 2D-to-3D lifting operation is accomplished with the aid of an estimated depth distribution for each pixel. In this way, LSS generates a unified voxel representation from different views. Inspired by LSS [31], several studies [22, 23, 27, 13] on 3D detection have explored the capabilities of the BEV representation to achieve improved performance. Among them, BEVFusion [23] fuses multi-view image features and LiDAR features into the BEV space, explicitly predicting the depth distributions of image feature pixels. Similarly, a concurrent work, BEVFusion1 [27], unifies 3D detection and segmentation in one framework. Different from LSS, BEVFormer [22] projects BEV grids into images and uses deformable attention to aggregate image features; it then adopts a query-based paradigm in BEV space for the final prediction. Footnote 1: There are two concurrent BEVFusion works in the literature. In line with the spirit of 2D-to-3D projection, but differing in method, we lift multi-scale 2D features into the LiDAR space, considering that features from monocular images alone are suboptimal for 3D perception. Besides, generating a large point cloud using a depth distribution is computationally expensive.
Thus, we employ paired LiDAR data to generate a depth map on the CPU using the image processing algorithm proposed in [16]. ### Multi-Sensor Approaches As cameras and LiDARs capture complementary information, multi-sensor approaches are widely adopted in different fields. In 3D object detection, PointPainting [43] provides point clouds with their corresponding 2D semantics. Taking advantage of the attention mechanism, [21, 4, 2] adaptively model the 2D-3D mapping through multi-modal fusion. SFD [46] proposes an RoI fusion strategy to aggregate multi-modal RoI features and designs color point convolution to extract pseudo point cloud features. Regarding 3D lane detection, few previous approaches exploit multiple sensors due to the lack of a suitable dataset. Figure 2: **Overview of the proposed M\\({}^{2}\\)-3DLaneNet. It generates BEV features in parallel by taking an image and a LiDAR point cloud as inputs, where features from the former are obtained by top-down BEV generation. Afterward, the two BEV features are fused together with bottom-up BEV fusion, and the fused BEV features are finally used to predict 3D lanes.** Early work [1] adopts multi-modal inputs on a private dataset, but it relies on supervision from additional high-definition maps. Furthermore, it does not explore the geometric information in LiDAR point clouds and solely extracts features through 2D CNNs from a three-channel image obtained from raw LiDAR points. ## 3 Method In this section, we present the architecture of M\\({}^{2}\\)-3DLaneNet, designed to effectively leverage multi-modal features for accurate 3D lane detection. We begin by describing the overall structure of our M\\({}^{2}\\)-3DLaneNet in Section 3.1. Then, we explain the process of lifting 2D image features into 3D space in Section 3.2. Finally, we elaborate on the fusion of camera and LiDAR features in the unified 3D space in Section 3.3. By aggregating the two-modal information in a shared BEV space, we achieve a comprehensive representation for the final prediction. ### Two-Stream Architecture As shown in Figure 2, our architecture comprises two parallel streams to generate multi-scale features for each modality. **Camera Stream.** The camera stream processes RGB images and encodes them into multi-scale BEV feature maps supported by our proposed depth-aware lifting. Following [3], we adopt the first meta-stage of EfficientNet-B7 [40] followed by eight convolution layers as our image backbone, producing four different scales of features. These multi-scale features are lifted into 3D space, leveraging their corresponding real depth maps from LiDAR, and finally encoded into BEV space. **LiDAR Stream.** In the LiDAR stream, we encode the LiDAR point cloud using PointPillars [17]. This process involves dividing the point cloud into different pillars and encoding each pillar into a high-dimensional feature vector using a mini-PointNet [32]. Subsequently, all pillars are mapped to the 2D BEV grid based on their corresponding locations. Then, we use CNNs to generate LiDAR BEV features at four scales, similar to the camera stream. ### Camera Stream: Front View to BEV To obtain multi-scale BEV features from the input multi-scale features of the image, we enhance the 2D feature maps using our newly proposed Cross-Scale Line-restricted Deformable Attention (CS-LDA) module, as depicted in Figure 3. CS-LDA is a modification of the deformable attention [50] that restricts sampling locations along straight lines, considering the long and slim nature of lane lines. 
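As a rough illustration of this restriction, the sketch below shows how the per-query sampling locations of one attention head could be tied to a single line in slope-intercept form instead of being predicted as free 2-D offsets; the module and variable names (`LineRestrictedSampler`, `num_points`, etc.) are our own assumptions and not the released implementation.

```python
# A minimal sketch (not the authors' code) of line-restricted sampling:
# instead of predicting K free 2-D offsets per query, predict one line in
# slope-intercept form plus K scalar positions along that line.
import torch
import torch.nn as nn

class LineRestrictedSampler(nn.Module):
    def __init__(self, dim, num_points=4):
        super().__init__()
        self.num_points = num_points
        self.line_head = nn.Linear(dim, 2)            # (slope m, intercept b) per query
        self.step_head = nn.Linear(dim, num_points)   # K scalar steps along the line

    def forward(self, query, ref_xy):
        """
        query : (B, N, C) query features
        ref_xy: (B, N, 2) reference points in normalized [0, 1] image coordinates
        returns (B, N, K, 2) sampling locations, all lying on one line per query
        """
        m, b = self.line_head(query).unbind(-1)        # (B, N), (B, N)
        t = self.step_head(query)                      # (B, N, K) signed steps along x
        x = ref_xy[..., 0:1] + t                       # (B, N, K)
        # y is *not* free: it is tied to x through y = m * x + b,
        # so every sampled point falls on the predicted line.
        y = m.unsqueeze(-1) * x + b.unsqueeze(-1)
        return torch.stack([x, y], dim=-1).clamp(0, 1)
```

The sampled locations would then be fed to the standard deformable-attention aggregation; the cross-scale part of CS-LDA, which uses coarser-level features to help predict these parameters, is described next.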
Additionally, it propagates high-level features to low-level ones in a coarse-to-fine paradigm to predict sampling locations. Our experiments reveal that the line restriction in CS-LDA leads to a 0.2 improvement in the final F-Score without significantly increasing complexity, as shown in Table 4. After enhancing the front-view features, we transform them into BEV space using the following steps: 1) _Lifting_ to 3D space and creating pseudo point clouds (see Section 3.2.1). 2) _Pillarizing_ the lifted pseudo point clouds and generating BEV representations, similar to the process used in the LiDAR stream. 3) _Completing_ the BEV grid to alleviate the effects of occlusion and limited field-of-view (see Section 3.2.2). Figure 4: **Depth-aware Lifting.** To provide a clearer visualization of the lifting process, we use the thick image to represent its corresponding features. The lifting process involves the following steps: 1) Completing the sparse depth map to obtain a dense depth map. 2) Lifting multi-scale image features into 3D space based on the dense depth map. 3) Conducting pillarization to obtain the BEV features from the 3D space. Figure 3: **Cross-Scale Line-restricted Deformable Attention (CS-LDA).** (a) The workflow of CS-LDA, in which the features from two different scales are merged for predicting the current sampling position, and a residual connection are added after attention enhancement. (b) The comparison of previous deformable attention (DA) with our line-restricted deformable attention (LDA). The main difference lies in that we restrict the sampling locations with an explicit slope–intercept line form. Since this operation is applied from the top to the bottom, the input of the first layer is just itself. #### 3.2.1 Depth-aware Lifting To lift the features to 3D space, we first need the corresponding depth of each feature pixel. Given a LiDAR point cloud \\(\\mathbf{P}\\in\\mathbb{R}^{N\\times 3}\\), we first project it onto the image plane, obtaining a sparse depth map \\(\\mathbf{D}\\). This occurs because the original LiDAR points only map to a small proportion of pixels in the image. Then, we apply an efficient depth completion algorithm [16], specifically designed for completing sparse LiDAR-based depth maps using image processing techniques. This algorithm generates a dense depth map \\(\\mathbf{\\hat{D}}\\). With the dense depth map in hand, we can then transform the features from the image plane into 3D space, as shown in Figure 4. In practice, given the input image with the size of \\((H,W)\\), we first generate image coordinates \\(\\mathbb{C}\\) according to the dense depth map \\(\\mathbf{\\hat{D}}\\): \\[\\mathbb{C}=\\{(u,v,d)|\\;u\\in[1,W],v\\in[1,H]\\}, \\tag{1}\\] where \\(d=\\mathbf{\\hat{D}}_{uv}\\). Afterwards, we transform the image coordinates \\(\\mathbb{C}\\) into the 3D space, utilizing the camera intrinsic and extrinsic matrices \\(K\\in\\mathbb{R}^{4\\times 4}\\) and \\(T\\in\\mathbb{R}^{4\\times 4}\\). Specifically, given the \\(i\\)-th image coordinate \\(\\mathbb{C}_{i}=(u_{i},v_{i},d_{i})\\), its corresponding coordinate \\((x_{i},y_{i},z_{i})\\) in the world system can be calculated as: \\[[x_{i},y_{i},z_{i},1]^{\\mathsf{T}}=T^{-1}\\cdot K^{-1}\\cdot[u_{i}\\times d_{i}, v_{i}\\times d_{i},d_{i},1]^{\\mathsf{T}}. \\tag{2}\\] After the above transformation, we lift the multi-scale image features into the 3D space. 
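The lifting in Eqs. (1)-(2) is essentially a per-pixel inverse projection using the completed depth. Below is a minimal NumPy sketch under the same notation; it assumes 4x4 intrinsic and extrinsic matrices as in Eq. (2), and the function name and array layout are ours rather than the authors'.

```python
import numpy as np

def lift_pixels_to_world(dense_depth, K, T):
    """Unproject every pixel (u, v) with depth d into world coordinates (Eq. 2).

    dense_depth: (H, W) completed depth map D^hat
    K, T       : (4, 4) camera intrinsic and extrinsic matrices
    returns    : (H*W, 3) 3-D points in the world frame
    """
    H, W = dense_depth.shape
    v, u = np.meshgrid(np.arange(1, H + 1), np.arange(1, W + 1), indexing="ij")
    d = dense_depth
    # Homogeneous image coordinates [u*d, v*d, d, 1] (right-hand side of Eq. 2).
    pix = np.stack([u * d, v * d, d, np.ones_like(d)], axis=-1).reshape(-1, 4)
    # World coordinates: T^{-1} K^{-1} [u*d, v*d, d, 1]^T.
    world = (np.linalg.inv(T) @ np.linalg.inv(K) @ pix.T).T
    return world[:, :3]
```

Note that, per Eq. (1), pixel indices here start from 1; a zero-based convention would simply shift u and v.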
To unify the image features into a common space, we perform pillarization [17] on the above multi-scale features, obtaining multi-scale BEV features, as shown in Figure 4. #### 3.2.2 Nearest BEV Completion Due to occlusion and limited field-of-view, the BEV maps derived from the camera stream may contain noticeable empty grids. To tackle this problem, we apply the Nearest BEV Completion to fill the empty BEV areas. For each BEV feature map, we first generate an occupancy map, which records the occupancy status of the current lattice cells. After that, for each empty lattice, we perform interpolation using its nearest neighbor. Meanwhile, we record the offset and Euclidean distance to its nearest neighbor and use them as additional features concatenated with the original BEV features. ### Multi-Scale Cross-Modal BEV Fusion We employ a bottom-up fusion approach to integrate the multi-modality features from the camera and LiDAR streams, as demonstrated in Figure 2. For each scale, we begin by concatenating the two feature maps obtained from different modalities. Subsequently, we apply channel-wise attention [12] for single-scale fusion, enhancing the combined feature representation. Afterward, we aggregate multi-scale cross-modal features from the bottom up, culminating in a single fused BEV feature map. This integrated feature map serves as the foundation for our final predictions. ### Prediction and Objective We adopt an anchor-based approach, following [3], to detect 3D lanes. Given an image and its ground-truth annotations \\(\\mathbf{Y}_{(\\cdot)}^{gt}\\), the overall objective function between prediction and ground truth is formulated as follows: \\[\\mathcal{L}_{all}=\\lambda_{1}\\mathcal{L}_{3d}(\\mathbf{Y}_{3d}^{\\prime},\\mathbf{Y}_{3d}^{gt})+\\lambda_{2}\\mathcal{L}_{2d}(\\mathbf{Y}_{2d}^{\\prime},\\mathbf{Y}_{2d}^{gt})+\\lambda_{3}\\mathcal{L}_{seg}(\\mathbf{Y}_{b}^{\\prime},\\mathbf{Y}_{b}^{gt}), \\tag{3}\\] where the 3D prediction \\(\\mathbf{Y}_{3d}^{\\prime}\\) contains three components: 1) regression \\(\\mathbf{Y}_{3d}^{\\prime reg}\\in\\mathbb{R}^{N_{anchor}\\times N_{s}\\times 2}\\); 2) category \\(\\mathbf{Y}_{3d}^{\\prime cls}\\in\\mathbb{R}^{N_{anchor}\\times 1}\\); 3) visibility \\(\\mathbf{Y}_{3d}^{\\prime vis}\\in\\mathbb{R}^{N_{anchor}\\times N_{s}\\times 1}\\). \\(N_{s}\\) denotes the number of sampled points along the \\(y\\)-axis, which is predefined and remains the same for each anchor. \\(\\mathbf{Y}_{3d}^{\\prime reg}\\) denotes the predicted offsets to predefined anchors, trained by optimizing a Smooth-L1 loss. \\(\\mathbf{Y}_{3d}^{\\prime cls}\\) and \\(\\mathbf{Y}_{3d}^{\\prime vis}\\) denote the category and visibility of anchors, respectively. Notably, in the OpenLane dataset, _visibility_ for each point is used solely to indicate the validity of the point on each lane, due to its label generation processing, _e.g_., the lanes in the sky are filtered. To enhance the capability of 2D features, we adopt auxiliary losses of 2D lane detection, which consist of components similar to those of the 3D lane detection, _i.e_., regression, category, and visibility. Moreover, a binary semantic segmentation loss is employed on the fused BEV feature, supervised by the projection of the 3D lane annotation. ## 4 Experiment As OpenLane [3] is currently the only publicly available dataset that includes LiDAR-camera inputs with their correspondence, we conducted all experiments using this dataset. 
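Before turning to the implementation details, we note that the Nearest BEV Completion of Section 3.2.2 can be realized with a Euclidean distance transform; the sketch below is our own simplification (using SciPy) rather than the released code.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def nearest_bev_completion(bev, occupancy):
    """bev: (C, H, W) camera-stream BEV features; occupancy: (H, W), 1 = occupied.

    Returns (C + 3, H, W): completed features plus (dy, dx, distance) channels.
    """
    # For every empty cell, find the distance to (and index of) the nearest
    # occupied cell. distance_transform_edt measures distance to the nearest
    # zero, so we pass the "empty" mask (1 where empty, 0 where occupied).
    empty = (occupancy == 0)
    dist, (ny, nx) = distance_transform_edt(empty, return_indices=True)

    completed = bev[:, ny, nx]                                  # copy nearest occupied feature
    gy, gx = np.indices(occupancy.shape)
    offset = np.stack([ny - gy, nx - gx]).astype(np.float32)    # (2, H, W) offsets
    return np.concatenate([completed, offset, dist[None].astype(np.float32)], axis=0)
```

For cells that are already occupied, the distance is zero and the nearest index is the cell itself, so the original features pass through unchanged.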
### Implementation Details For 2D image feature extraction, we adopt the same structure as Persformer [3], as illustrated in Section 3.1. For point clouds, we follow PointPillars [17] to generate BEV features for both the LiDAR data and the lifted image-feature point clouds. All models are trained with a batch size of \\(16\\) for \\(56\\) epochs. For a fair comparison, we resize input images to the same resolution as previous methods, _i.e_., \\(480\\times 360\\), during both the training and testing phases. The data augmentation includes only random image rotation from \\(-10^{\\circ}\\) to \\(10^{\\circ}\\). Throughout all experiments, we use the same hyperparameters and training protocols. Specifically, we use AdamW [29] as the optimizer, the learning rate is set to \\(2\\times 10^{-4}\\) at the beginning of training, and we use a cosine annealing scheduler [28] with \\(T_{max}=8\\) to update the learning rate.

Comparison with state-of-the-art methods on the OpenLane val set under the 100m and 75m range settings (1.5m threshold; X/Z errors in meters):

| Range | Methods | Modality | F-Score↑ | X error near↓ | X error far↓ | Z error near↓ | Z error far↓ |
|---|---|---|---|---|---|---|---|
| 100m | 3DLaneNet [7] | C | 44.1 | 0.479 | 0.572 | 0.367 | 0.443 |
| 100m | GenLaneNet [9] | C | 32.3 | 0.591 | 0.684 | 0.411 | 0.521 |
| 100m | Persformer [3] | C | 50.5 | 0.485 | 0.553 | 0.364 | 0.431 |
| 100m | M\\({}^{2}\\) C-Stream\\({}^{*}\\) | C+L | 50.9 | 0.433 | 0.536 | 0.333 | 0.452 |
| 100m | Persformer + LiDAR\\({}^{\\dagger}\\) | C+L | 53.0 | 0.458 | 0.519 | 0.357 | 0.416 |
| 100m | M\\({}^{2}\\)-3DLaneNet (ours) | C+L | **55.5** | **0.431** | **0.487** | **0.327** | **0.401** |
| 100m | _Improvement_ | - | ↑5.0 | ↓0.054 | ↓0.066 | ↓0.037 | ↓0.030 |
| 75m | Persformer [3] | C | 53.3 | 0.478 | 0.563 | 0.363 | 0.408 |
| 75m | M\\({}^{2}\\) C-Stream\\({}^{*}\\) | C+L | 59.2 | 0.402 | 0.420 | 0.313 | 0.335 |

### Datasets **OpenLane.** Built on the Waymo Open Dataset [37], OpenLane contains \\(200\\)K frames and \\(880\\)K annotated lanes. Overall, \\(1000\\) segments are split into train and val sets following the original Waymo partition. Each segment of Waymo LiDAR data [37] is sampled at 10Hz for 20s with 64-beam LiDARs. Since OpenLane [3] annotates only the lanes in the front view, we exclude points in the LiDAR data that fall outside the front view. Additionally, OpenLane provides category information for each lane, _e.g_., white dash line and road curbside, resulting in a total of 14 categories. ### Evaluation Metrics Following the official metrics of OpenLane [3], the evaluation of 3D lane detection is formulated as a matching problem between predictions and ground truth based on _edit distance_. After matching, the corresponding metrics can be calculated in terms of F-Score, category accuracy, and X/Z error over the matched lanes. A correct prediction requires the point-wise distances of 75\\(\\%\\) of the points between the prediction and ground truth to be less than the maximum allowed distance, 1.5m. 
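A minimal sketch of this point-wise criterion is given below; it assumes that a matched prediction/ground-truth pair is resampled at the same y positions and that the per-point distance is Euclidean in the X-Z plane, which is our reading of the protocol rather than the official evaluation code.

```python
import numpy as np

def lane_matched(pred_xz, gt_xz, max_dist=1.5, min_ratio=0.75):
    """pred_xz, gt_xz: (N_s, 2) lane points (x, z) sampled at the same y positions.

    Returns True when the prediction counts as correct for this ground-truth lane,
    i.e. at least `min_ratio` of the paired points are closer than `max_dist` meters.
    """
    point_dist = np.linalg.norm(pred_xz - gt_xz, axis=1)   # per-point distance
    return (point_dist < max_dist).mean() >= min_ratio
```

Counting matched predictions and matched ground-truth lanes then yields precision, recall, and hence the F-Score; tightening `max_dist` from 1.5m to 0.5m gives the stricter protocol discussed next.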
The perception distance is set to [3m, 103m] along the Y-axis, dubbed the 100m range setting here. We first evaluate models using the official metrics, the 1.5m distance threshold, and the 100m range setting. Further, to better exploit the capability of point clouds for 3D lane detection, we adopt 75m as the farthest perception distance, which falls within the valid range of LiDAR data in Waymo, dubbed the 75m range setting (see Tables 1 and 2). LiDAR-based and multi-modal methods (see Table 3) are mainly investigated under this setting. Moreover, we examined the performance of models with a more restrictive point-wise distance threshold, reducing it from the original 1.5m to **0.5m** (the lower part of Tables 1 and 2). This is because we believe that a 1.5m bias would seriously hamper the safety of real-world autonomous driving. ### Main Results We compare our approach with existing state-of-the-art 3D lane detection methods on the OpenLane [3] dataset over different metrics. Table 1 shows the F-Score comparison over the val set and six challenging scenarios. Table 2 illustrates more comprehensive evaluation comparisons on the val set. Our proposed M\\({}^{2}\\)-3DLaneNet surpasses its competitors by a significant margin when evaluated on a variety of scenarios and metrics. #### 4.4.1 Comparison with monocular methods **100m Range Evaluation.** As shown in Table 1, although the point clouds are missing from 75m to 103m, our method still surpasses the previous SOTA by a clear margin of 5.0\\(\\%\\) in F-Score, 0.054m/0.066m in X near/far error and 0.037m/0.030m in Z near/far error. The significant improvements demonstrate the effectiveness of our method in exploiting information from point clouds and integrating it with semantics from the image for accurate 3D lane detection. **75m Range Evaluation.** As shown in Table 3, our LiDAR stream (LiDAR\\({}^{\\dagger}\\)) outperforms the previous state-of-the-art method Persformer [3] across all metrics on OpenLane. Equipping LiDAR\\({}^{\\dagger}\\) with our camera stream, M\\({}^{2}\\)-3DLaneNet achieves higher performance than the previous SOTA across all metrics. Specifically, we observe a remarkable 7.2\\(\\%\\) improvement in F-Score, as well as reductions of 0.082m/0.158m in X near/far error and reductions of 0.052m/0.082m in Z near/far error. **More Restrictive Distance Threshold.** As discussed in Section 4.3, we propose adopting a smaller distance threshold that is more suitable for real-world autonomous driving scenarios. Therefore, we employ a more restrictive threshold of **0.5m** to evaluate the performance of different models. As shown in Tables 1 and 2, when using this more challenging threshold, the margins between our results and the previous SOTA (Persformer [3]) become significantly larger across different setups. Specifically, under the 100m range setting, our M\\({}^{2}\\)-3DLaneNet surpasses Persformer [3] by **16.2\\(\\%\\)** in F-Score, even though points beyond 75m are missing. Furthermore, within the valid LiDAR range (75m range setting), our M\\({}^{2}\\)-3DLaneNet achieves even greater improvements, showing an **18.5\\(\\%\\)** increase in F-Score on the overall val set and consistently outperforming the previous SOTA method by remarkable margins in all six challenging scenarios. Figure 5: **Why LiDAR works?** (a) shows the point cloud with RGB retrieved from the image and (b) shows the point cloud with intensity. (c) demonstrates the raw point cloud, in which we zoom in on three specific areas marked with red, blue, and green boxes. 
It can be found that the shape of the point cloud geometry alters at spots where lane lines and road curbs are present. These strong performance results further demonstrate the strength and effectiveness of our design. **Effect of LiDAR.** As evident from the results in Table 1, our model achieves significant gains over six challenging scenes. For instance, when using a 1.5m distance threshold, M\\({}^{2}\\)-3DLaneNet shows improvements of 10.1\\(\\%\\)/10.2\\(\\%\\) and 5.0\\(\\%\\)/11.0\\(\\%\\) in F-Score compared to the existing SOTA under the 100m/75m setting for up&down and night scenarios, respectively. This highlights the crucial importance of LiDAR in 3D lane detection. In the up&down case, our BEV representations exhibit a better capability of perceiving 3D lanes accurately. Moreover, in the night case, where previous methods struggle due to imaging limitations, our multi-modal approach achieves remarkable performance. This is attributed to the effectiveness of our designed multi-modal framework, in which geometry-rich LiDAR can compensate for the deficiencies in low-quality visual information. Similarly, when adopting a 0.5m distance threshold, we observe substantial performance improvements by equipping Persformer [3] with our LiDAR stream. This leads to an increase of about 10 points in F-Score for both the 75m and 100m range settings, as illustrated in Table 2. These findings further reinforce the strength and effectiveness of the LiDAR stream. #### 4.4.2 Comparison with multi-modal methods In this section, we compare our method with other multi-modal techniques to further demonstrate the effectiveness of our proposed method. The comparison results are shown in Table 3, where the metrics are calculated under the 75m range setting. At the top of Table 3, we introduce two baselines: LiDAR\\({}^{\\dagger-}\\) and LiDAR\\({}^{\\dagger}\\). These baselines utilize only the LiDAR branch from our M\\({}^{2}\\)-3DLaneNet. For LiDAR\\({}^{\\dagger-}\\), the inputs contain only the XYZ coordinates of points. In contrast, LiDAR\\({}^{\\dagger}\\) additionally uses point intensity and elongation. As shown in Table 3, LiDAR\\({}^{\\dagger-}\\) does not exhibit an advantage over the pure image-based method. On the other hand, LiDAR\\({}^{\\dagger}\\) shows a noticeable improvement over Persformer [3], indicating that the _intensity is the main discriminative representation_ for lane detection on LiDAR. To gain a better understanding of how LiDAR contributes to 3D lane detection, we provide an intuitive visualization in Figure 5. Apart from the baselines, Table 3 also shows that equipping Persformer [3] with our LiDAR branch can boost its performance from 53.3 to 56.9 in F-Score, demonstrating the effectiveness of our LiDAR stream. Furthermore, based on PointPainting [43], we decorate the LiDAR point clouds with image features and feed the enhanced point clouds to our LiDAR branch for lane detection. Besides, Table 3 shows two variants of the PointPainting models (PointPainting RGB and PointPainting SEG). The former directly decorates the point clouds with RGB colors from images, while the latter uses the pixel labels predicted by a pretrained 2D lane segmentation model [35]. The poor performance of the SEG version may be attributed to inadequate segmentation results of the pretrained model. This issue arises due to the domain gap between OpenLane images and the training set of the segmentation models, as well as the inherent difficulty of lane segmentation. 
Finally, we compare our model with BEVFusion [23] and SFD [46], both of which fuse the multi-modal features in a unified 3D space. While we are the first to explore multi-modal fusion exclusively for 3D lane detection, these multi-modal methods cannot be directly applied to this task. To overcome this, we adapt these approaches to lane detection by replacing their task-specific heads with ours. While BEVFusion [23] and SFD [46] have shown promising results in 3D object detection, our investigations reveal that their performances are inferior to our approach when directly applied to the lane detection task. This disparity can be attributed to their lack of consideration for the unique characteristics of lanes, such as their slenderness and sparsity, which are crucial factors for achieving accurate lane detection. It is worth noting that SFD did not surpass LiDAR\\({}^{\\dagger}\\) in terms of F-Score, potentially indicating its sensitivity to the accuracy of depth estimation. Figure 6: **Visualization.** We show the lane detection of M\\({}^{2}\\)-3DLaneNet in the 3D space and the corresponding image in various scenarios, where the green lines represent ground truth and the other colors are determined by different categories. Notably, the yellow line denotes the predicted road curbside, which is a class in the OpenLane dataset. ### Design Analysis To understand the effectiveness of our proposed modules, we conduct ablation studies on each component in M\\({}^{2}\\)-3DLaneNet, as summarized in Table 4. After independently adopting the CS-LDA and NBC, the performance increases by 1.1 and 1.2, respectively. By employing both techniques simultaneously, M\\({}^{2}\\)-3DLaneNet achieves 60.5 (+1.4). Notably, the joint adoption of both CS-LDA and NBC leads to a decrease in error and an increase in F-Score, indicating that their combination improves detection results and enhances localization accuracy. Additionally, we present the results of the original deformable attention (CS-DA), which causes a 0.2 drop in F-Score compared to CS-LDA. ### Model Complexity We summarize the model parameters and test their FPS on one Nvidia Tesla V100-32G GPU, as shown in Table 5. We notice that our model uses fewer parameters than Persformer [3], although we adopt two different modalities as inputs. In other words, we adopt a lightweight branch of about 7.5M parameters (the last line in Table 5) to encode the 3D information. A stronger LiDAR backbone may further boost the performance. Note that 2D auxiliary heads are not included for either Persformer [3] or our M\\({}^{2}\\)-3DLaneNet, as they are only used to aid training. ## 5 Conclusions In this work, we conduct comprehensive experiments to explore the effect of LiDAR on 3D lane detection and present M\\({}^{2}\\)-3DLaneNet, a novel multi-modal 3D lane detection framework that utilizes both camera and LiDAR data simultaneously. By lifting 2D features through the generated dense depth map and fusing features in BEV space, M\\({}^{2}\\)-3DLaneNet effectively extracts useful features from different modalities and integrates them for better 3D lane detection. The architecture is validated on the OpenLane dataset and outperforms previous works. 
Table 4: **Ablation studies on designed modules.** All the ablation experiments are based on the multi-modal framework. **CS-DA**: Cross-Scale Deformable Attention. **CS-LDA**: Cross-Scale Line-restricted Deformable Attention module. **NBC**: the nearest BEV completion.

| CS-DA | CS-LDA | NBC | F-Score | X error near | X error far | Z error near | Z error far |
|---|---|---|---|---|---|---|---|
|  |  |  | 59.1 | 0.412 | 0.420 | 0.321 | 0.337 |
|  |  | ✓ | 60.2 (+1.1) | 0.401 | 0.403 | 0.312 | 0.325 |
|  | ✓ |  | 60.3 (+1.2) | 0.403 | 0.409 | 0.315 | 0.332 |
| ✓ |  |  | 60.1 (+1.0) | 0.402 | 0.413 | 0.319 | 0.331 |
|  | ✓ | ✓ | **60.5** (+1.4) | 0.396 | 0.405 | 0.311 | 0.326 |

Table 3: Comprehensive comparison with other single/multi-modality methods on the OpenLane val set. The evaluation is conducted using the original _1.5m_ threshold and the valid range of LiDAR data, _75m_. The errors on the X and Z axes are both measured in _meters_. **Persformer\\({}^{*}\\):** we evaluate Persformer based on their provided checkpoint. **LiDAR\\({}^{\\dagger-}\\):** our LiDAR stream uses only XYZ coordinates of points as inputs. **LiDAR\\({}^{\\dagger}\\):** our LiDAR stream additionally uses point intensity and elongation as inputs. \\({}^{\\ddagger}\\) denotes that we reimplement the models into the lane detection architecture based on their released codes.

| Methods | Modality | F-Score↑ | X error near↓ | X error far↓ | Z error near↓ | Z error far↓ |
|---|---|---|---|---|---|---|
| Persformer [3]\\({}^{*}\\) | C | 53.3 | 0.478 | 0.563 | 0.363 | 0.408 |
| LiDAR\\({}^{\\dagger-}\\) | L | 49.5 | 0.490 | 0.459 | 0.360 | 0.357 |
| LiDAR\\({}^{\\dagger}\\) | L | 53.8 | 0.462 | 0.435 | 0.332 | 0.343 |
| Persformer [3] + LiDAR\\({}^{\\dagger}\\) | C+L | 56.9 | 0.453 | 0.467 | 0.354 | 0.357 |
| PointPainting [43] RGB | C+L | 54.3 | 0.448 | 0.443 | 0.351 | 0.366 |
| PointPainting [43] SEG | C+L | 51.5 | **0.376** | 0.392 | **0.303** | 0.330 |
| BEVFusion [23]\\({}^{\\ddagger}\\) | C+L | 54.7 | 0.455 | 0.431 | 0.319 | 0.334 |
| SFD [46]\\({}^{\\ddagger}\\) | C+L | 52.5 | 0.452 | 0.441 | 0.331 | 0.347 |
| M\\({}^{2}\\)-3DLaneNet (ours) | C+L | **60.5** | 0.396 | **0.405** | 0.311 | **0.326** |
## 6 Acknowledgment This work was supported in part by NSFC-Youth 61902335, by HZQB-KCZYZ-2021067, by the National Key R&D Program of China with grant No.2018YFB1800800, by Shenzhen Outstanding Talents Training Fund, by Guangdong Research Project No.2017ZT07X152, by Guangdong Regional Joint Fund-Key Projects 2019B1515120039, by the NSFC 61931024&81922046, by helixon biotechnology company Fund and CCF-Tencent Open Fund. ## References * [1] Min Bai, Gellert Mattyus, Namdar Homayounfar, Shenlong Wang, Shrinidhi Kowshika Lakshmikanth, and Raquel Urtasun. Deep multi-sensor lane detection. In _2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_, pages 3102-3109. IEEE, 2018. * [2] Xuyang Bai, Zeyu Hu, Xinge Zhu, Qingqiu Huang, Yilun Chen, Hongbo Fu, and Chiew-Lan Tai. Transfusion: Robust lidar-camera fusion for 3d object detection with transformers. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 1090-1099, 2022. * [3] Li Chen, Chonghao Sima, Yang Li, Zehan Zheng, Jiajie Xu, Xiangwei Geng, Hongyang Li, Conghui He, Jianping Shi, Yu Qiao, and Junchi Yan. Persformer: 3d lane detection via perspective transformer and the openlane benchmark. In _European Conference on Computer Vision (ECCV)_, 2022. * [4] Zehui Chen, Zhenyu Li, Shiquan Zhang, Liangji Fang, Qinghong Jiang, Feng Zhao, Bolei Zhou, and Hang Zhao. Autoalign: Pixel-instance feature aggregation for multi-modal 3d object detection. _arXiv preprint arXiv:2201.06493_, 2022. * [5] Netaleo Efrat, Max Bluvstein, Shaul Oron, Dan Levi, Noa Garnett, and Bat El Shlomo. 3d-lanenet+: Anchor free lane detection using a semi-local representation. _arXiv preprint arXiv:2011.01535_, 2020. * [6] Zhengyang Feng, Shaohua Guo, Xin Tan, Ke Xu, Min Wang, and Lizhuang Ma. Rethinking efficient lane detection via curve modeling. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 17062-17070, 2022. * [7] Noa Garnett, Rafi Cohen, Tomer Pe'er, Roee Lahav, and Dan Levi. 3d-lanenet: end-to-end 3d multiple lane detection. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 2921-2930, 2019. * [8] Raghuraman Gopalan, Tsai Hong, Michael Shneier, and Rama Chellappa. A learning approach towards detection and tracking of lane markings. _IEEE Transactions on Intelligent Transportation Systems_, 13(3):1088-1098, 2012. * [9] Yuliang Guo, Guang Chen, Peitao Zhao, Weide Zhang, Jinghao Miao, Jingao Wang, and Tae Eun Choe. Gen-lanenet: A generalized and scalable approach for 3d lane detection. In _European Conference on Computer Vision_, pages 666-681. Springer, 2020. * [10] Jianhua Han, Xiajun Deng, Xinyue Cai, Zhen Yang, Hang Xu, Chunjing Xu, and Xiaodan Liang. Laneformer: Object-aware row-column transformers for lane detection. _arXiv preprint arXiv:2203.09830_, 2022. * [11] Yuenan Hou, Zheng Ma, Chunxiao Liu, and Chen Change Loy. Learning lightweight lane detection cnns by self attention distillation. In _Proceedings of the IEEE/CVF international conference on computer vision_, pages 1013-1021, 2019. * [12] Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 7132-7141, 2018. * [13] Junjie Huang, Guan Huang, Zheng Zhu, and Dalong Du. Bevdet: High-performance multi-camera 3d object detection in bird-eye-view. _arXiv preprint arXiv:2112.11790_, 2021. * [14] Jiyoung Jung and Sung-Ho Bae. 
Real-time road lane detection in urban areas using lidar data. _Electronics_, 7(11):276, 2018. * [15] Soren Kammel and Benjamin Pitzer. Lidar-based lane marker detection and mapping. In _2008 ieee intelligent vehicles symposium_, pages 1137-1142. IEEE, 2008. * [16] Jason Ku, Ali Harakeh, and Steven L Waslander. In defense of classical image processing: Fast depth completion on the cpu. In _2018 15th Conference on Computer and Robot Vision (CRV)_, pages 16-22. IEEE, 2018. * [17] Alex H Lang, Sourabh Vora, Holger Caesar, Lubing Zhou, Jiong Yang, and Oscar Beijbom. Pointpillars: Fast encoders for object detection from point clouds. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pages 12697-12705, 2019. * [18] Seokju Lee, Junsik Kim, Jae Shin Yoon, Seunghak Shin, Oleksandr Bailo, Namil Kim, Tae-Hee Lee, Hyun Seok Hong, Seung-Hoon Han, and In So Kweon. Vpgnet: Vanishing point guided network for lane and road marking detection and recognition. In _Proceedings of the IEEE international conference on computer vision_, pages 1947-1955, 2017. * [19] Jun Li, Xue Mei, Danil Prokhorov, and Dacheng Tao. Deep neural network for structural prediction and lane detection in traffic scene. _IEEE transactions on neural networks and learning systems_, 28(3):690-703, 2016. * [20] Xiang Li, Jun Li, Xiaolin Hu, and Jian Yang. Line-cnn: End-to-end traffic line detection with line proposal unit. _IEEE Transactions on Intelligent Transportation Systems_, 21(1):248-258, 2019. * [21] Yingwei Li, Adams Wei Yu, Tianjian Meng, Ben Caine, Jiquan Ngiam, Daiyi Peng, Junyang Shen, Yifeng Lu, Denny Zhou, Quoc V Le, et al. Deepfusion: Lidar-camera deep fusion for multi-modal 3d object detection. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 17182-17191, 2022. * [22] Zhiqi Li, Wenhai Wang, Hongyang Li, Enze Xie, Chonghao Sima, Tong Lu, Qiao Yu, and Jifeng Dai. Bevformer: Learning bird's-eye-view representation from multi-camera images via spatiotemporal transformers. _arXiv preprint arXiv:2203.17270_, 2022. * [23] Tingting Liang, Hongwei Xie, Kaicheng Yu, Zhongyu Xia, Zhiwei Lin, Yongtao Wang, Tao Tang, Bing Wang, and ZhiTang. Bevlfusion: A simple and robust lidar-camera fusion framework. _arXiv preprint arXiv:2205.13790_, 2022. * [24] Lizhe Liu, Xiaohao Chen, Siyu Zhu, and Ping Tan. Condlancenet: a top-to-down lane detection framework based on conditional convolution. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 3773-3782, 2021. * [25] Ruijin Liu, Dapeng Chen, Tie Liu, Zhiliang Xiong, and Zejian Yuan. Learning to predict 3d lane shape and camera pose from a single image via geometry constraints. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 36, pages 1765-1772, 2022. * [26] Ruijin Liu, Zejian Yuan, Tie Liu, and Zhiliang Xiong. End-to-end lane shape prediction with transformers. In _Proceedings of the IEEE/CVF winter conference on applications of computer vision_, pages 3694-3702, 2021. * [27] Zhijian Liu, Haotian Tang, Alexander Amini, Xingyu Yang, Huizi Mao, Daniela Rus, and Song Han. Bevlfusion: Multi-task multi-sensor fusion with unified bird's-eye view representation. _arXiv_, 2022. * [28] Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. _arXiv preprint arXiv:1608.03983_, 2016. * [29] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. _arXiv preprint arXiv:1711.05101_, 2017. 
* [30] Davy Neven, Bert De Brabandere, Stamatios Georgoulis, Marc Proesmans, and Luc Van Gool. Towards end-to-end lane detection: an instance segmentation approach. In _2018 IEEE intelligent vehicles symposium (IV)_, pages 286-291. IEEE, 2018. * [31] Jonah Philion and Sanja Fidler. Lift, splat, shoot: Encoding images from arbitrary camera rigs by implicitly unprojecting to 3d. In _European Conference on Computer Vision_, pages 194-210. Springer, 2020. * [32] Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 652-660, 2017. * [33] Zequn Qin, Huanyu Wang, and Xi Li. Ultra fast structure-aware deep lane detection. In _European Conference on Computer Vision_, pages 276-291. Springer, 2020. * [34] Zhan Qu, Huan Jin, Yang Zhou, Zhen Yang, and Wei Zhang. Focus on local: Detecting lane marker from bottom up via key point. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 14122-14130, 2021. * [35] Eduardo Romera, Jose M Alvarez, Luis M Bergasa, and Roberto Arroyo. Erfnet: Efficient residual factorized convnet for real-time semantic segmentation. _IEEE Transactions on Intelligent Transportation Systems_, 19(1):263-272, 2017. * [36] Pei Sun, Henrik Kretzschmar, Xerxes Dotiwalla, Aurelien Chouard, Vijaysai Patnaik, Paul Tsui, James Guo, Yin Zhou, Yuning Chai, Benjamin Caine, et al. Scalability in perception for autonomous driving: Waymo open dataset. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 2446-2454, 2020. * [37] Pei Sun, Henrik Kretzschmar, Xerxes Dotiwalla, Aurelien Chouard, Vijaysai Patnaik, Paul Tsui, James Guo, Yin Zhou, Yuning Chai, Benjamin Caine, et al. Scalability in perception for autonomous driving: Waymo open dataset. In _Proceedings of the IEEE/CVF Conference on computer vision and pattern recognition_, pages 2446-2454, 2020. * [38] Lucas Tabelini, Rodrigo Berriel, Thiago M Paixao, Claudine Badue, Alberto F De Souza, and Thiago Oliveira-Santos. Keep your eyes on the lane: Real-time attention-guided lane detection. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pages 294-302, 2021. * [39] Lucas Tabelini, Rodrigo Berriel, Thiago M Paixao, Claudine Badue, Alberto F De Souza, and Thiago Oliveira-Santos. Polylanenet: Lane estimation via deep polynomial regression. In _2020 25th International Conference on Pattern Recognition (ICPR)_, pages 6150-6156. IEEE, 2021. * [40] Mingxing Tan and Quoc Le. Efficientnet: Rethinking model scaling for convolutional neural networks. In _International conference on machine learning_, pages 6105-6114. PMLR, 2019. * [41] Michael Thuy and Fernando Puente Leon. Lane detection and tracking based on lidar data. _Metrology and Measurement Systems_, 17:311-321, 2010. * [42] Lucas Tabelini Torres, Rodrigo Ferreira Berriel, Thiago M Paixao, Claudine Badue, Alberto F De Souza, and Thiago Oliveira-Santos. Keep your eyes on the lane: Attention-guided lane detection. _CoRR_, 2020. * [43] Sourabh Vora, Alex H Lang, Bassam Helou, and Oscar Beijjom. Pointpainting: Sequential fusion for 3d object detection. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pages 4604-4612, 2020. * [44] Jinsheng Wang, Yinchao Ma, Shaofei Huang, Tianrui Hui, Fei Wang, Chen Qian, and Tianzhu Zhang. 
A keypoint-based global association network for lane detection. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 1392-1401, 2022. * [45] Dong Wu, Manwen Liao, Weitian Zhang, and Xinggang Wang. Yolop: You only look once for panoptic driving perception. _arXiv preprint arXiv:2108.11250_, 2021. * [46] Xiaopei Wu, Liang Peng, Honghui Yang, Liang Xie, Chenxi Huang, Chengqi Deng, Haifeng Liu, and Deng Cai. Sparse fuse dense: Towards high quality 3d detection with depth completion. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 5418-5427, 2022. * [47] Fan Yan, Ming Nie, Xinyue Cai, Jianhua Han, Hang Xu, Zhen Yang, Chaoqiang Ye, Yanwei Fu, Michael Bi Mi, and Li Zhang. Once-3dlanes: Building monocular 3d lane detection. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 17143-17152, 2022. * [48] Jie Zhang, Yi Xu, Bingbing Ni, and Zhenyu Duan. Geometric constrained joint lane segmentation and lane boundary detection. In _Proceedings of the European Conference on Computer Vision (ECCV)_, pages 486-502, 2018. * [49] Tu Zheng, Yifei Huang, Yang Liu, Wenjian Tang, Zheng Yang, Deng Cai, and Xiaofei He. Clrnet: Cross layer refinement network for lane detection. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 898-907, 2022. * [50] Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, and Jifeng Dai. Deformable detr: Deformable transformers for end-to-end object detection. _arXiv preprint arXiv:2010.04159_, 2020.
Estimating accurate lane lines in 3D space remains challenging due to their sparse and slim nature. Previous works mainly focused on using images for 3D lane detection, leading to inherent projection error and loss of geometry information. To address these issues, we explore the potential of leveraging LiDAR for 3D lane detection, either as a standalone method or in combination with existing monocular approaches. In this paper, we propose M\\({}^{2}\\)-3DLaneNet to integrate complementary information from multiple sensors. Specifically, M\\({}^{2}\\)-3DLaneNet lifts 2D features into 3D space by incorporating geometry information from LiDAR data through depth completion. Subsequently, the lifted 2D features are further enhanced with LiDAR features through cross-modality BEV fusion. Extensive experiments on the large-scale OpenLane dataset demonstrate the effectiveness of M\\({}^{2}\\)-3DLaneNet, regardless of the range (75m or 100m).
Give a concise overview of the text below.
208
arxiv-format/2312_12222v1.md
EarthVQA: Towards Queryable Earth via Relational Reasoning-Based Remote Sensing Visual Question Answering Junjue Wang\\({}^{1}\\), Zhuo Zheng\\({}^{2}\\), Zihang Chen\\({}^{1}\\), Ailong Ma\\({}^{1}\\), Yanfei Zhong\\({}^{1}\\) \\({}^{1}\\)Wuhan University, \\({}^{2}\\)Stanford University {kingdrone, chenzihang, maailong007, zhongyanfei}@whu.edu.cn, [email protected] Corresponding author ## 1 Introduction High-spatial resolution (HSR) remote sensing images can assist us in quickly obtaining essential information [23, 24]. Most research focuses on the perception of object categories and locations, deriving related tasks such as semantic segmentation [11], species detection [20], and urban understanding [21]. However, the existing methods and datasets ignore the relations between the geospatial objects, thus limiting their ability to knowledge reasoning in complex scenarios. Especially in city planning [12, 23], the relations between the transportation hubs and schools, water situations around the farmland, and greenery distributions in residential areas are also significant and urgent to be analyzed. Hence, it is necessary to go beyond object perception and explore object relations, bridging the gap between information and comprehensive knowledge [13]. Visual question answering (VQA) aims to answer customized questions by searching for visual clues in the provided image. Since linguistic questions determine the task properties, the algorithms are flexible and can be developed for reasoning required answers. Recently, preliminary VQA datasets and methods have emerged in the remote sensing field [15, 24, 25]. However, most of these researches have the following drawbacks: 1) As for most datasets, QA pairs are automatically labelled based on existing data, such as Open Street Map (OSM) and classification datasets. Most tasks are simple counting and judging questions with no relational reasoning required. The automatic QA pairs do not match actual needs, limiting their practicalities. 2) The development of the remote sensing VQA model lags, and most research directly fuses the global visual and language features to predict the final answers. They ignore the local semantics and relations, which are unsuitable for the complex reasoning of multiple geospatial objects. To this end, we propose a multi-modal multi-task VQA dataset and a semantic object awareness framework to advance complex remote sensing VQA tasks. Main contributions are as follows: 1. We propose the EarthVQA dataset with triplet samples (image-mask-QA pairs). The 208,593 QA pairs encompass six main categories1. EarthVQA features diverse tasks from easy basic judging to complex relation reasoning and even more challenging comprehensive analysis. Specifically, the residential environments, traffic situations, and renovation needs of waters and unwarfaced roads are explicitly embedded in various questions. Footnote 1: basic judging, basic counting, relational-based judging, relational-based counting, object situation analysis, and comprehensive analysis. 2. To achieve relational reasoning-based VQA, we propose a semantic object awareness framework (SOBA). SOBA utilizes segmentation visual prompts and pseudo masks to generate pixel-level features with accurate locations. The object awareness-based hybrid attention models the relations for object-guided semantics and bidirectionally aggregates multi-modal features for answering. 3. To add distance sensitivity for regression questions, we propose a numerical difference (ND) loss. 
The dynamic ND penalty is seamlessly integrated into cross-entropy loss for the regression task. ND loss introduces the sensitivity of numerical differences into the model training. ## 2 Related Work **General visual question answering.** The vanilla VQA model [1] includes three parts: a convolutional neural network (CNN), a long-short term memory (LSTM), and a fusion classifier. Specifically, CNN extracts visual features for input images, and LSTM embeds the language features for the questions. Global features are interacted in the fusion classifier and finally generate the answer. Based on this architecture, more powerful encoders and fusion modules were proposed. To obtain local visual features, the bottom-up top-down attention (BUTD) mechanism [1] introduced objectness features generated by Faster-RCNN [14] pretrained on Visual Genome [15] data. For computational efficiency, a recurrent memory, attention, and composition (MAC) cell [11] was designed to explicitly model the relations between image and language features. Similarly, the stacked attention network (SAN) [17] located the relevant visual clues guided by question layer-by-layer. By combining objectness features with attention, the modular co-attention network (MCAN) [23] adopted a transformer to model intra- and inter-modality interactions. To alleviate language biases, D-VQA [14] applied an unimodal bias detection module to explicitly remove negative biases. BLIP-2 [13] and Instruct-BLIP [15] bridge the large pre-trained vision and language models using the Q-Former, addressing VQA as a generative task. Besides, many advanced VQA methods [16] eliminate statistical bias by accessing external databases. **Remote sensing visual question answering.** The remote sensing community has some early explorations including both datasets and methods. The QA pairs of the RSVQA dataset [12] are queried from OSM, and images are obtained from Sentinel-2 and other sensors. RSIVQA dataset [15] is automatically generated from the existing classification and object detection datasets, i.e., AID [18], HRRSD [19], etc. The FloodNet [1] dataset was designed for disaster assessment, mainly concerned with the inundation of roads and buildings. Compared with these datasets, the EarthVQA dataset has two advantages: **1) Multi-level annotations.** The annotations include pixel-level semantic labels, object-level analysis questions, and scene-level land use types. Supervision from different perspectives advances a comprehensive understanding of complex scenes. **2) Complex and practical questions.** The existing datasets focus on counting and judging questions, which only involve simple relational reasoning about one or two types of objects. In addition to counting and judging, EarthVQA also contains various object analysis and comprehensive analysis questions. These promote complex relational reasoning by introducing spatial or semantic analysis of more than three types of objects. Only basic judging and counting answers are auto-generated from the LoveDA masks. Other reasoning answers (Figure 1) are manually annotated (reasoning distances, layouts, topologies, sub-properties, etc) for city planning needs. Remote sensing algorithms are mainly modified from general methods, for example, RSVQA is based on vanilla VQA [1]. RSIVQA [15] designed a mutual attention component to improves interactions for multi-modal features. CDVQA [18] introduced VQA into change detection task. 
We novelly introduce pixel-level prompts for the guidance of VQA tasks, making it suitable for scenes with compact objects. ## 3 EarthVQA Dataset The EarthVQA dataset was extended from the LoveDA dataset [19], which encompasses 18 urban and rural regions from Nanjing, Changzhou, and Wuhan. LoveDA dataset provides 5987 HSR images and semantic masks with seven common land-cover types. There are three significant revisions: _1) Quantity expansion_. 8 urban and 5 rural samples are added to expand capacity to 6000 images (WorldView-3 0.3m). _2) Label refinement_. 'playground' class was added as an important artificial facility, and some errors were revised for semantic labels. _3) Addition of QA pairs_. We added 208,593 QA pairs to introduce VQA tasks for city planning. Each urban image has 42 QAs and each rural image has 29 QAs. Following the balanced division [19], train set includes 2522 images with 88166 QAs, val set includes 1669 images with 57202 Figure 1: Urban and rural samples (image-mask-QA pairs) from the EarthVQA dataset. The QA pairs are designed to based on city planning needs, including judging, counting, object situation analysis, and comprehensive analysis types. This multi-modal and multi-task dataset poses new challenges, requiring object-relational reasoning and knowledge summarization. QAs, and test set includes 1809 images with 63225 QAs. **Annotation procedure.** EarthVQA currently does not involve ambiguous questions such as geographical orientations. As for 'Are there any intersections near the school?' in Figure 2(a), by judging the topology, the recognized Road#1 and Road#2 firstly form Intersection#5. Similarly, Ground#4 and Building#3 jointly form the scene of School#6. We use the ArcGIS toolbox to calculate the polygon-to-polygon distance between School#6 and Intersection#5, and obtain \\(94.8\\)m \\(<\\) 100m. Hence, the final answer is 'Yes'. Each step has fixed thresholds and conditions. **Statistics for questions.** As is shown in Figure 2(b), urban and rural scenes have common and unique questions according to the city planning demands. The number of questions for urban and rural is balanced, eliminating geographical statistical bias. Basic questions involve the statistics and inference of a certain type of object, i.e., 'What is the area of the forest?'. Relational-based questions require semantic or spatial relational reasoning between different objects. Comprehensive analysis focuses on more than three types of objects, including a summarization of traffic facilities, water sources around agriculture, land-use analysis, etc. **Statistics for answers.** As shown in Figure 2(c), we selected the top 30 most frequent answers from 166 unique answers in the dataset. Similar to the common VQA datasets, the imbalanced distributions of answers bring more challenges when faced with the actual Earth environment. ## 4 Semantic object awareness framework To achieve efficient relational reasoning, we design the SOBA framework for complex city scenes. SOBA includes a two-stage training: 1) semantic segmentation network training for generating visual prompts and pseudo masks; and 2) hybrid attention training for reasoning and answering. ### Semantic segmentation for visual prompts Faced with HSR scenes containing multiple objects, we novelly adopt a segmentation network for refined guidance. 
For an input image \\(\\mathbf{I}\\in\\mathbb{R}^{H\\times W\\times 3}\\), we utilize the encoder outputs \\(\\mathbf{F}^{v}\\in\\mathbb{R}^{H^{\\prime}\\times W^{\\prime}\\times C}\\) as the visual prompts. \\(C\\) denotes feature dimension and \\(H^{\\prime}=\\frac{H}{32},W^{\\prime}=\\frac{W}{32}\\) according to common settings. Pseudo semantic output \\(\\mathbf{M}^{v}\\in\\mathbb{R}^{H\\times W}\\) is also adopted for object awareness. Compared with the existing Faster-RCNN based algorithms [23, 24] which averages box features in one vector, the pixel-level visual prompts preserve the locations and semantic details inside objects. This contributes to the modeling of various compact objects in HSR scenes. ### Object awareness-based hybrid attention Guided by questions and object masks, Object awareness-based hybrid attention reasons visual cues for final answers. As is shown in Figure 3, there are three components: 1) object-guided attention (OGA), 2) visual self-attention (VSA), and 3) bidirectional cross-attention (BCA). **OGA for object aggregation.** Because segmentation output has object details \\(\\mathbf{M}^{v}\\) (including categories and boundaries), it is adopted to explicitly enhance visual prompts. OGA is proposed to dynamically weight \\(\\mathbf{F}^{v}\\) and \\(\\mathbf{M}^{v}\\) from the channel dimension. Using the nearest interpolation, \\(\\mathbf{M}^{v}\\) is firstly resized into the same size as \\(\\mathbf{F}^{v}\\). One-hot encoding followed with a pre-convolutional embedding then serializes the object semantics. The embedding contains a 3 \\(\\times\\) 3 convolution, a batch normalization, and a ReLU. They are concatenated to obtain object-guided features \\(\\mathbf{F}^{v}_{q}\\) as inputs for OGA. The reduction and reverse projections further refine the features dimensionally. After activation, we use the refined features to calibrate subspaces of \\(\\mathbf{F}^{v}_{q}\\) from the channel dimension. **VSA for feature enhancement.** To capture long-distance relations between geospatial objects, VSA [10] hierarchically transforms the refined features. VSA includes \\(N_{e}\\) transformer blocks, and each includes a multi-head self-attention (MSA) and a feed-forward network (FFN). The refined features are reduced by a \\(1\\times 1\\) convolution and reshaped to generate patches \\(\\mathbf{X}\\in\\mathbb{R}^{P\\times d_{m}}\\). \\(P=\\frac{H}{32}\\times\\frac{W}{32}\\) denotes token size and \\(d_{m}\\) is hidden size. At each block \\(i\\), features are transformed into a triplet: \\(\\mathbf{Q}=\\mathbf{X}^{i-1}\\mathbf{W}^{q},\\mathbf{K}=\\mathbf{X}^{i-1}\\mathbf{ W}^{k},\\mathbf{V}=\\mathbf{X}^{i-1}\\mathbf{W}^{v}\\), where \\(\\mathbf{W}^{q}\\), \\(\\mathbf{W}^{k}\\), \\(\\mathbf{W}^{v}\\in\\mathbb{R}^{d_{m}\\times d_{e}}\\) denote the weights of three linear projections and \\(d_{v}=d_{m}/M\\) is the reduction dim of each head. The self-attention firstly calculates the similarities between each patch and then weight their values: \\(Att(\\mathbf{Q},\\mathbf{K},\\mathbf{V})=softmax(\\frac{\\mathbf{Q}\\mathbf{K}^{T} }{\\sqrt{d_{v}}})\\mathbf{V}\\). MSA repeats the attention operation \\(M\\) times in parallel and concatenates out Figure 2: Details of questions and answers in EarthVQA dataset. Each urban image has a set of 42 questions and each rural image has a set of 29 questions, ensuring relatively balanced for each question. The imbalanced distributions of answers bring more challenges when faced with the actual Earth environment. puts. 
Finally, outputs are fused by a linear projection. Formally, \\(MSA(\\mathbf{Q},\\mathbf{K},\\mathbf{V})=Concat(h_{1}, ,h_{M})\\mathbf{W}^{O}\\), where \\(h_{i}=Att(\\mathbf{Q}_{i},\\mathbf{K}_{i},\\mathbf{V}_{i})\\) and \\(\\mathbf{W}^{O}\\in\\mathbb{R}^{Md_{w}\\times d_{m}}\\) denotes projection weights. MSA models long-distance dependency by calculating the similarities between each geospatial object. FFN consists of two linear transformation layers, and a GELU to improve visual representations. The formulation is shown as \\(FFN(\\mathbf{X}^{i-1})=GELU(\\mathbf{X}^{i-1}\\mathbf{W}_{1})\\mathbf{W}_{2}\\), where \\(\\mathbf{W}_{1}\\in\\mathbb{R}^{d_{m}\\times d_{f}}\\), \\(\\mathbf{W}_{2}\\in\\mathbb{R}^{d_{f}\\times d_{m}}\\) represent the learnable projection parameters. \\(d_{f}\\) denotes the hidden size of FFN. **BCA for multi-modal interaction.** BCA advances the interaction with visual and language features via a bidirectional fusion mechanism. BCA consists of two series of \\(N_{d}\\) transformer blocks. The first stage aggregates useful language features to enhance visual features \\(\\mathbf{X}_{f}\\) and second stage implicitly models object external relations according to keywords, boosting language features \\(\\mathbf{Y}_{f}\\). The implementation can be formulated as follows: \\[\\begin{split}\\mathbf{Q}_{\\mathrm{v}}&=\\mathbf{X} \\mathbf{W}^{q},\\mathbf{K}_{\\mathrm{L}}=\\mathbf{Y}\\mathbf{W}^{k},\\mathbf{V}_{ \\mathrm{L}}=\\mathbf{Y}\\mathbf{W}^{v}\\\\ \\mathbf{X}_{f}&=\\text{Att}(\\mathbf{Q}_{\\mathrm{v}}, \\mathbf{K}_{\\mathrm{L}},\\mathbf{V}_{\\mathrm{L}})\\\\ \\mathbf{Q}_{\\mathrm{L}}&=\\mathbf{Y}\\mathbf{W}^{q}, \\mathbf{K}_{\\mathrm{v}}=\\mathbf{X}_{f}\\mathbf{W}^{k},\\mathbf{V}_{\\mathrm{v}}= \\mathbf{X}_{f}\\mathbf{W}^{v}\\\\ \\mathbf{Y}_{f}&=\\text{Att}(\\mathbf{Q}_{\\mathrm{L}}, \\mathbf{K}_{\\mathrm{v}},\\mathbf{V}_{\\mathrm{v}})\\end{split} \\tag{1}\\] Finally, the fused \\(\\mathbf{X}_{f}\\) and \\(\\mathbf{Y}_{f}\\) are used for the final analysis. Compared with previous research [10] which only uses one-way cross-attention, bidirectional attention mechanism hierarchically aggregates multi-modal features by simulating the human process of finding visual cues [10]. Besides, we have also conducted comparative experiments with alternative cross-attention variants in Table 3 and Table 6. ### Object counting enhanced optimization VQA tasks include both classification and regression (object counting) questions. However, existing methods regard them as a multi-classification task, which is processed with cross-entropy (CE) loss. Eq. (2) represents that CE loss is insensitive to the distance between predicted value and true value, and is therefore not suitable for the regression task. \\[CE(\\vec{p},\\vec{y})=-\\vec{y}\\odot log(\\vec{p})=\\sum_{i=1}^{class}-y_{i}log(p_{ i}) \\tag{2}\\] where \\(\\vec{y}\\) specifies one-hot encoded ground truth and \\(\\vec{p}\\) denotes predicted probabilities. To introduce difference penalty for the regression task, we add a modulating factor \\(d=\\alpha|\\mathbf{y}_{diff}|^{\\gamma}=\\alpha|\\mathbf{y}_{pr}-\\mathbf{y}_{gt}|^{\\gamma}\\) to CE loss. \\(\\mathbf{y}_{pr}\\) and \\(\\mathbf{y}_{gt}\\) represent the predicted and ground truth number, respectively. \\(\\alpha\\geq 0\\) and \\(\\gamma\\geq 0\\) are tunable distance awareness factors. \\(d\\) represents the distance penalty \\(d\\propto\\mathbf{y}_{diff}\\). 
Finally, we design the numerical difference (ND) loss as follows: \\[\\begin{split} ND(\\vec{p},\\vec{y})&=-(1+d)\\vec{y}\\odot log(\\vec{p})\\\\ &=-(1+\\alpha|\\mathbf{y}_{diff}|^{\\gamma})\\vec{y}\\odot log(\\vec{p})\\\\ &=-(1+\\alpha|\\mathbf{y}_{pr}-\\mathbf{y}_{gt}|^{\\gamma})\\sum_{i=1}^{class}y_{i}log(p_{i})\\end{split} \\tag{3}\\] ND loss unifies the classification and regression objectives into one optimization framework. \\(\\alpha\\) controls the overall penalty for regression tasks compared to classification tasks. \\(\\gamma\\) determines the sensitivity of the regression penalty to numerical differences. As \\(\\alpha\\) increases, the overall penalty increases, meaning that the optimization focuses more on the regression tasks. With \\(\\alpha=0\\), the ND loss degenerates into the original CE loss and the penalty is constant (\\(d=0\\) for all \\(|\\mathbf{y}_{diff}|\\in[0,+\\infty)\\)). The sensitivity of the regression penalty increases with \\(\\gamma\\), and when \\(\\gamma>1\\), the penalty curve changes from concave to convex.

## 5 Experiments

**Evaluation metrics.** Following common settings [22], we adopt classification accuracy and root-mean-square error (RMSE) as evaluation metrics; RMSE is used specifically to evaluate the counting tasks. We use mean Intersection over Union (mIoU) to report semantic segmentation performance. All experiments were performed under the PyTorch framework using one RTX 3090 GPU.

Figure 3: **(Left)** The architecture of SOBA includes (a) deep semantic segmentation for visual prompts; (b) object awareness-based hybrid attention (**Right** shows the details); and (c) object counting enhanced optimization.

**Experimental settings.** For comparison, we selected eight general VQA methods (SAN (Yang et al. 2016), MAC (Hudson and Manning 2018), BUTD (Anderson et al. 2018), BAN (Kim et al. 2018), MCAN (Yu et al. 2019), D-VQA (Wen et al. 2021), BLIP-2 (Li et al. 2023), and InstructBLIP (Dai et al. 2023)) and two remote sensing VQA methods (RSVQA (Lobry et al. 2020) and RSIVQA (Zheng et al. 2021)). Because MCAN, BUTD, BAN, and D-VQA need semantic prompts, we provide them with visual prompts from Semantic-FPN (Kirillov et al. 2019) for a fair comparison. All VQA models were trained for 40k steps with a batch size of 16. By default, we use a two-layer LSTM with a hidden size of 384 and ResNet50. For the large vision-language models, BLIP-2 and InstructBLIP train the Q-Former following their original settings; the vision encoder is ViT-g/14 and the language decoder is FlanT5x. Following Wang et al. (2021), Semantic-FPN was trained for 15k steps using the same batch size, generating visual prompts and semantic masks. Segmentation augmentations include random flipping, rotation, scale jittering, and cropping of \\(512\\times 512\\) patches. We used the Adam solver with \\(\\beta_{1}=0.9\\) and \\(\\beta_{2}=0.999\\). The initial learning rate was set to \\(5\\times 10^{-5}\\), and a 'poly' schedule with a power of 0.9 was applied. The hidden size of the language and image features was \\(d_{m}=384\\). The number of heads \\(M\\) is set to 8, and the numbers of layers in the self- and cross-attention modules are \\(N_{E}=N_{D}=3\\). We set \\(\\alpha=1\\) and \\(\\gamma=0.5\\) for the ND loss.

### Comparative experiments

**Main comparative results.** Thanks to the diverse questions, EarthVQA can measure multiple perspectives of VQA models. Table 1 shows that all methods achieve high accuracies on basic judging questions. The models with pixel-level visual prompts obtain higher accuracies, especially for the counting tasks.
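As a concrete illustration of the ND loss in Eq. (3) above, a minimal PyTorch sketch follows. The function name is illustrative, and recovering the predicted count via an argmax over count classes is an assumption, since counting is posed as classification over a discrete range of numbers.

```python
import torch
import torch.nn.functional as F

def nd_loss(logits, target, alpha=1.0, gamma=0.5):
    """Numerical-difference loss of Eq. (3).

    logits: (B, C) class scores, where classes index the possible counts.
    target: (B,) ground-truth class indices (i.e., ground-truth counts).
    """
    log_p = F.log_softmax(logits, dim=-1)
    ce = F.nll_loss(log_p, target, reduction="none")   # per-sample cross-entropy
    y_pr = logits.argmax(dim=-1).float()               # predicted count
    y_gt = target.float()
    d = alpha * (y_pr - y_gt).abs() ** gamma           # distance-aware modulating factor
    return ((1.0 + d) * ce).mean()

# With alpha = 0 the modulating factor vanishes and nd_loss reduces to the CE loss.
```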
This is because the semantic locations provide more spatial details, which benefits the object statistics. Compared with advanced methods, SOBA achieves the best overall performances with similar or lower complexity. **Object guided attention.** OGA introduces object semantics into visual prompts and we compare it with related variants. Table 2 shows compared results for spatial, channel, and combined attentions, i.e, SAWoo et al. (2018), SCSERoy et al. (2018), CBAMWoo et al. (2018), SEHu et al. (2018), GCCao et al. (2019). Channel attentions bring more stable improvements than spatial attentions. Because pseudo masks and visual prompts are concatenated dimensionally, spatial attentions are hard to calibrate the subspaces of visual prompts and object masks. Channel attentions enhance key object semantics and weaken uninterested background features. Hence, our OGA abandoned spatial attention and achieved the best accuracies. **One-way vs. bidirectional cross-attention.** Existing transformer-based methods Yu et al. (2019); Cascante-Bonilla et al. (2022) utilize one-way (vanilla) attention to perform interactions, where visual features are only treated as queries. In contrast, we further gather enhanced visual features via the keywords (language features as queries), simulating the human process of finding visual cues. As cross-attention consists of six transformer blocks, we compare the different combinations. Table 3 shows that in one-way attention, querying visual features outperforms querying the language features. This is because visual features are more informative, and their enhancement brings more improvements. Bidirectional attention outperforms one-way structure due to more comprehensive interactions. ### Module analysis **Architecture of SOBA.** SOBA was disassembled into five sub-modules: 1) VSA, 2) BCA, 3) semantic prompts, 4) OGA, and 5) ND loss. Table 4 shows that each module enhances the overall performance in distinct ways. 
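The OGA comparisons above treat OGA as a channel re-calibration over the concatenated visual prompts and mask embedding. The sketch below is one plausible squeeze-and-excitation-style reading of the "reduction and reverse projections"; it is an assumption for illustration, not the authors' exact design.

```python
import torch
import torch.nn as nn

class OGASketch(nn.Module):
    """Channel-wise calibration of object-guided features (assumed design)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                       # squeeze spatial dims
        self.reduce = nn.Linear(channels, channels // reduction)  # reduction projection
        self.expand = nn.Linear(channels // reduction, channels)  # reverse projection

    def forward(self, f_q):
        # f_q: (B, C, H', W'), concatenation of visual prompts and mask embedding
        b, c, _, _ = f_q.shape
        w = self.pool(f_q).view(b, c)
        w = torch.sigmoid(self.expand(torch.relu(self.reduce(w))))
        return f_q * w.view(b, c, 1, 1)                           # calibrate each channel
```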
BCA produces a more significant improvement than VSA, and they complement each other (jointly obtaining OA=74.91%). OGA further improves the OA by explicitly adding the objectness semantics. ND loss significantly boosts the counting performance from the optimization perspective. All modules are compatible with each other within the SOBA framework.

[Table 1: main comparative results on EarthVQA, reporting per-question-type accuracy, RMSE, OR, parameters, and FLOPs for the compared general and remote sensing VQA methods, with and without semantic prompts.]

**Encoder variants.** Table 5 shows the effects brought by segmentation networks with advanced CNN and Transformer encoders, i.e., HRNet (Wang et al. 2020), Swin Transformer (Liu et al. 2021), Mix Transformer (Xie et al. 2021), and ConvNeXt (Liu et al. 2022). SOBA is compatible with the mainstream encoders, and the VQA performance remains stable at a high level (OA > 77.22%). Although MiT-B3 achieves lower segmentation accuracy than HR-W40, its features provide similar VQA performance. For similar segmentation architectures, larger encoders (Swin-S and ConvX-S) outperform smaller encoders (Swin-T and ConvX-T) on both the segmentation and VQA tasks. With Wikipedia's external knowledge, the pretrained BERT-Base (Kenton and Toutanova 2019) brings stable improvements. With abundant computing power and time, larger encoders are recommended.

**Bidirectional cross-attention variants.** We explored BCA variants with different orders of queries, i.e., V and L processed alternately, in cascade, or in parallel. Table 6 shows that the cascade structure VVV-LLL achieves the best accuracies. VVV hierarchically aggregates language features to enhance the visual features, and LLL compresses the visual features to supplement the language features. Compared with putting LLL first, considering VVV first retains the most information. Hence, VVV-LLL represents an integration process from details to the whole, which conforms to human perception (Savage 2019).
Parallel structure obtains a sub-optimal accuracy, and frequent alternation of cross-attentions may lead to feature confusion. **Optimization analysis.** We compare ND loss with similar optimization algorithms designed to address the sample imbalance problem, including 1) dynamic inverse weighting (DIW) Rajpurkar et al. (2017), 2) Focal loss Lin et al. (2017), 3) online hard example mining (OHEM) Shrivastava et al. (2016), 4) small object mining (SOM) Ma et al. (2022). In Table 7, Focal loss obtains better performance by adaptively balancing weights of easy and hard examples. DIW failed to exceed the CE due to its extreme weighting strategies. OHEM dynamic focuses on hard samples during the training, slightly improving OA (+0.31%). These optimization algorithms only focus on sample imbalances but are not sensitive to numerical distance. They inherently cannot contribute to regression tasks. In contrast, ND loss shows excellent performances on both classification and regression tasks. ### Hyperparameter analysis for ND loss As ND loss introduces two hyperparameters, \\(\\alpha\\) controls overall penalty and \\(\\gamma\\) determines sensitivity to numerical differences. In order to evaluate their effects on performances, we individually vary \\(\\alpha\\) and \\(\\gamma\\) from 0 to 2, and the results are reported in Figure 4. Compared with CE loss, the additional difference penalty can bring stable gains. The suitable \\begin{table} \\begin{tabular}{l|c} Query & \\(\\uparrow\\)OA(\\%) \\\\ \\hline LV-LV-LV & 77.51 \\\\ VL-VL-VL & 77.58 \\\\ LLLL-VVV & 77.57 \\\\ VVV-LLL & **78.14** \\\\ Parallel & 77.98 \\\\ \\end{tabular} \\begin{tabular}{l|c} Optim. & \\(\\uparrow\\)OA(\\%) \\\\ \\hline CE & 77.54 \\\\ DIW & 77.38 \\\\ Focal & 77.57 \\\\ OHEM & 77.85 \\\\ SOM & 77.58 \\\\ **ND** & **78.14** \\\\ \\end{tabular} \\end{table} Table 6: BCA variants \\begin{table} \\begin{tabular}{l|c|c} Cross-Attention & Query & \\(\\uparrow\\)OA(\\%) & \\(\\downarrow\\)OR \\\\ \\hline One-way (vanilla) & LLLLLLL & 77.11 & 0.977 \\\\ VVVVVV & 77.53 & 0.880 \\\\ \\hline Bidirectional & LLL-VVV & 77.57 & 0.867 \\\\ VVV-LLL & **78.14** & **0.839** \\\\ \\end{tabular} \\end{table} Table 3: Compared results between one-way (vanilla) and bidirectional cross-attention. ‘V’ and ‘L’ denote visual and language features, respectively. \\begin{table} \\begin{tabular}{l|c c|c c c} VSA & BCA & Promp. & OGA & ND & \\(\\uparrow\\)OA (\\%) & \\(\\downarrow\\)OR \\\\ \\hline ✓ & & & & & 72.55 & 1.509 \\\\ & ✓ & & & 73.78 & 1.520 \\\\ ✓ & ✓ & & & 74.91 & 1.128 \\\\ ✓ & ✓ & ✓ & & 77.30 & 0.866 \\\\ ✓ & ✓ & ✓ & ✓ & 77.54 & 0.859 \\\\ ✓ & ✓ & ✓ & ✓ & **78.14** & **0.839** \\\\ \\end{tabular} \\end{table} Table 4: Architecture ablation study \\begin{table} \\begin{tabular}{l|c c|c c c} Img Enc & Lan Enc & Param.(M) & \\(\\uparrow\\)moU(\\%) & \\(\\uparrow\\)OA(\\%) \\\\ \\hline HR-W40 & LSTM & 57.87 & 57.31 & 77.92 \\\\ MiT-B3 & LSTM & 60.30 & 56.44 & 77.43 \\\\ Swin-T & LSTM & 43.86 & 56.89 & 77.22 \\\\ Swin-S & LSTM & 65.17 & 57.44 & 78.01 \\\\ ConvX-T & LSTM & 44.16 & 57.17 & 78.24 \\\\ ConvX-S & LSTM & 65.79 & **57.34** & 78.43 \\\\ \\hline Swin-T & BERT-Base & 153.42 & 56.89 & 77.63 \\\\ Swin-S & BERT-Base & 174.74 & 57.44 & 78.23 \\\\ ConvX-S & BERT-Base & 175.36 & **57.34** & **78.65** \\\\ \\end{tabular} \\end{table} Table 5: Encoder variants analysis Figure 4: Experimental results with varied \\(\\alpha\\) and \\(\\gamma\\) for ND loss. 
The optimal values range from 0.125 to 1.25 with wide ranges of hyperparameters selection. The mean values and standard deviations are reported after five runs. value of \\(\\alpha\\) ranges from 0.125 to 1.25 and reaches the highest OA at 1. When \\(\\alpha>\\)1.25, the performance drops because the large loss will bring instability during training. When \\(\\alpha\\) is fixed at 1, the optional \\(\\gamma\\) also ranges from 0.125 to 1.25, and OA floats between 77.99% and 78.14%. When \\(\\gamma>1\\), the influence curve changes from concave to convex, resulting in a significant increase in difference penalties. The model performance is not very sensitive to the hyperparameters introduced by ND loss, which reflects high fault tolerance and robustness. Overall, our ND loss is superior to the CE baseline, with wide ranges of hyperparameter selection. ND loss comprises two components, i.e., the original classification loss and an enhanced regression loss. Figure 5 illustrates the effects of varying \\(\\alpha\\) and \\(\\gamma\\) on these two types of loss. It is evident that changes have little impact on classification optimization, as the difference penalty is only added to the regression loss. As the values of \\(\\alpha\\) and \\(\\gamma\\) increase, the regression losses become larger and more unstable. However, as training progresses, the regression losses gradually stabilize and eventually converge. Figure 5 shows that these two parameters control the numerical difference penalty in different ways. This decomposition analysis of training loss can also provide references for tuning \\(\\alpha\\) and \\(\\gamma\\). ### Visualizations on bidirectional cross-attention To analyze the mechanism of multi-modal feature interaction, we visualize the attention maps in each layer of BCA according to different queries. The question in Figure 6(a) is 'How many intersections are in this scene?', and 'intersections' is selected as a query word. The first attention map shows some incorrect activations on the scattered roads and playground tracks. However, as the layer deepens, BCA successfully reasons the right spatial relation for the key roads, and the attention map focuses on the intersection in the upper left corner. Similarly, Figure 6(b) shows another example, which displays the process of gradually attending to the'residential' area. The third example shows a rural scene, and we select 'water' to query the visual features. The attention map initially focuses on some trees and waters due to their similar spectral values. Then the correct waters are enhanced, and uninterested trees are filtered out. ## 6 Conclusion To go beyond information extraction, we introduce the VQA to remote sensing scene understanding, achieving relational reasoning-based judging, counting, and situation analysis. Based on the city planning needs, we designed a multi-modal and multi-task VQA dataset named EarthVQA. Besides, a two-stage semantic object awareness framework (SOBA) is proposed to advance complex VQA tasks. The extensive experiments demonstrated the superiority of the proposed SOBA. We hope the proposed dataset and framework serve as a practical benchmark for VQA in Earth observation scenarios. Future work will explore the interactions between segmentation and VQA tasks. ## 7 Acknowledgments This work was supported by National Natural Science Foundation of China under Grant Nos. 42325105, 42071350, and 42171336. 
Figure 5: The training losses of classification and regression tasks with different \\(\\alpha\\) and \\(\\gamma\\). The changes of \\(\\alpha\\) and \\(\\gamma\\) mainly affect the regression task optimization. Figure 6: Visualization of attention maps in BCA with language features as queries. From left to right are the \\(l_{1},l_{2}\\) and \\(l_{3}\\). Three examples are queried by different keywords: ‘intersections’, ‘residents’, and ‘water’. ## References * P. Anderson, X. He, C. Buehler, D. Teney, M. Johnson, and L. Zhang (2018)Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6077-6086. Cited by: SS1. * S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. L. Zitnick, and D. Parikh (2015)VQA: visual question answering. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2425-2433. Cited by: SS1. * X. Bai, P. Shi, and Y. Liu (2014)Society: realizing china's urban dream. Nature509 (7499), pp. 158-160. Cited by: SS1. * Y. Cao, J. Xu, S. Lin, F. Wei, and H. Hu (2019)GCNet: non-local networks meet squeeze-excitation networks and beyond. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, pp. 0-0. Cited by: SS1. * P. Cascante-Bonilla, H. Wu, L. Wang, R. S. Feris, and V. Ordonez (2022)Simvqa: exploring simulated environments for visual question answering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5056-5066. Cited by: SS1. * W. Dai, J. Li, D. Li, A. M. H. Tiong, J. Zhao, W. Wang, B. Li, P. Fung, and S. Hoi (2023)InstructBLIP: towards general-purpose vision-language models with instruction tuning. arXiv:2305.06500. Cited by: SS1. * A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby (2021)An image is worth 16x16 words: transformers for image recognition at scale. In International Conference on Learning Representations, Cited by: SS1. * J. Hu, L. Shen, and G. Sun (2018)Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7132-7141. Cited by: SS1. * May 3, 2018, Conference Track Proceedings, Cited by: SS1. * J. D. M. Kenton and L. K. Toutanova (2019)BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT, pp. 4171-4186. Cited by: SS1. * J. Kim, J. Jun, and B. Zhang (2018)Bilinear attention networks. Advances in neural information processing systems31. Cited by: SS1. * A. Kirillov, R. Girshick, K. He, and P. Dollar (2019)Panoptic feature pyramid networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6399-6408. Cited by: SS1. * R. Krishna, Y. Zhu, O. Groth, J. Johnson, K. Hata, J. Kravitz, S. Chen, Y. Kalantidis, L. Li, D. A. Shamma, et al. (2017)Visual genome: connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision123 (1), pp. 32-73. Cited by: SS1. * F. Li and R. Krishna (2022)Searching for computer vision north stars. Deedalus151 (2), pp. 85-99. Cited by: SS1. * J. Li, D. Li, S. Savarese, and S. C. Hoi (2023)BLIP-2: bootstrapping language-image pre-training with frozen image encoders and large language models. In International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, pp. 1730-1742. Cited by: SS1. * T. 
Lin, P. Goyal, R. Girshick, K. He, and P. Dollar (2017)Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2980-2988. Cited by: SS1. * Y. Liu, Y. Zhong, A. Ma, J. Zhao, and L. Zhang (2023)Cross-resolution national-scale land-cover mapping based on noisy label learning: a case study of china. International Journal of Applied Earth Observation and Geoinformation118, pp. 103265. Cited by: SS1. * Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo (2021)Swin transformer: hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 10012-10022. Cited by: SS1. * Z. Liu, H. Mao, C. Wu, C. Feichtenhofer, T. Darrell, and S. Xie (2022)A convnet for the 2020s. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11976-11986. Cited by: SS1. * S. Lobry, D. Marcos, J. Murray, and D. Tuia (2020)RSVQA: visual question answering for remote sensing data. IEEE Transactions on Geoscience and Remote Sensing58 (12), pp. 8555-8566. Cited by: SS1. * A. Ma, J. Wang, Y. Zhong, and Z. Zheng (2022)FactSeg: foreground activation-driven small object semantic segmentation in large-scale remote sensing imagery. IEEE Transactions on Geoscience and Remote Sensing60, pp. 1-16. Cited by: SS1. * K. Marino, X. Chen, D. Parikh, A. Gupta, and M. Rohrbach (2021)Krisp: integrating implicit and symbolic knowledge for open-domain knowledge-based vqa. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14111-14121. Cited by: SS1. * M. Rahnemoonfar, T. Chowdhury, A. Sarkar, D. Varshney, M. Yari, and R. R. Murphy (2021)FloodNet: a high resolution aerial imagery dataset for post flood scene understanding. IEEE Access9, pp. 89644-89654. Cited by: SS1. * P. Rajpurkar, J. Irvin, K. Zhu, B. Yang, H. Mehta, T. Duan, D. Ding, A. Bagul, C. Langlotz, K. Shpanskaya, et al. (2017)Chexnet: radiologist-level pneumonia detection on chest x-rays with deep learning. arXiv preprint arXiv:1711.05225. Cited by: SS1. * S. Ren, K. He, R. Girshick, and J. Sun (2015)Faster r-CNN: towards real-time object detection with region proposal networks. Advances in neural information processing systems28, pp.. Cited by: SS1. * A. G. Roy, N. Navab, and C. Wachinger (2018)Recalibrating fully convolutional networks with spatial and channel \"squeeze and excitation\" blocks. IEEE Transactions on Medical Imaging38 (2), pp. 540-549. Cited by: SS1. * N. Savage (2019)How AI and neuroscience drive each other forwards. Nature571 (7766), pp. S15-S15. Cited by: SS1. * S. Shi, Y. Zhong, Y. Liu, J. Wang, Y. Wan, J. Zhao, P. Lv, L. Zhang, and D. Li (2023)Multi-temporal urban semantic understanding based on GF-2 remote sensing imagery: from tri-temporal datasets to multi-task mapping. International Journal of Digital Earth16 (1), pp. 3321-3347. Cited by: SS1. * A. Shrivastava, A. Gupta, and R. Girshick (2016)Training region-based object detectors with online hard example mining. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 761-769. Cited by: SS1. * J. Wang, K. Sun, T. Cheng, B. Jiang, C. Deng, D. Zhao, D. Liu, Y. Mu, M. Tan, X. Wang, et al. (2020)Deep high-resolution representation learning for visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence. Cited by: SS1. * J. Wang, Z. Zheng, A. Ma, X. Lu, and Y. 
Zhong (2021)LoveDA: a remote sensing land-cover dataset for domain adaptive semantic segmentation. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks, Vol. 1, pp.. Cited by: SS1. * Z. Wen, G. Xu, M. Tan, Q. Wu, and Q. Wu (2021)Debiased visual question answering from feature and sample perspectives. Advances in Neural Information Processing Systems34, pp. 3784-3796. Cited by: SS1. * S. Woo, J. Park, J. Lee, and I. S. Kweon (2018)CBAM: convolutional block attention module. In Proceedings of the European conference on computer vision (ECCV), pp. 3-19. Cited by: SS1. * G. Xia, J. Hu, F. Hu, B. Shi, X. Bai, Y. Zhong, L. Zhang, and X. Lu (2017)AID: a benchmark data set for performance evaluation of aerial scene classification. IEEE Transactions on Geoscience and Remote Sensing55 (7), pp. 3965-3981. Cited by: SS1. * Y. Xiao, Q. Yuan, K. Jiang, J. He, Y. Wang, and L. Zhang (2023)From degrade to upgrade: learning a self-supervised degradation guided adaptive network for blind remote sensing image super-resolution. Information Fusion96, pp. 297-311. Cited by: SS1. * E. Xie, W. Wang, Z. Yu, A. Anandkumar, J. M. Alvarez, and P. Luo (2021)SegFormer: simple and efficient design for semantic segmentation with transformers. Advances in Neural Information Processing Systems34, pp. 12077-12090. Cited by: SS1. * Z. Yang, X. He, J. Gao, L. Deng, and A. Smola (2016)Stacked attention networks for image question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 21-29. Cited by: SS1. * Z. Yu, J. Yu, Y. Cui, and Q. Tian (2019)Deep modular co-attention networks for visual question answering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6281-6290. Cited by: SS1. * Z. Yuan, L. Mou, Z. Xiong, and X. X. Zhu (2022)Change detection meets visual question answering. IEEE Transactions on Geoscience and Remote Sensing60, pp. 1-13. Cited by: SS1. * Y. Zhang, Y. Yuan, Y. Feng, and X. Lu (2019)Hierarchical and robust convolutional neural network for very high-resolution remote sensing object detection. IEEE Transactions on Geoscience and Remote Sensing57 (8), pp. 5535-5548. Cited by: SS1. * H. Zhao, Y. Zhong, X. Wang, X. Hu, C. Luo, M. Boitt, R. Piiroinen, L. Zhang, J. Heiskanen, and P. Pellikka (2022)Mapping the distribution of invasive tree species using deep one-class classification in the tropical montane landscape of Kenya. ISPRS Journal of Photogrammetry and Remote Sensing187, pp. 328-344. Cited by: SS1. * X. Zheng, B. Wang, X. Du, and X. Lu (2021)Mutual attention inception network for remote sensing visual question answering. IEEE Transactions on Geoscience and Remote Sensing60, pp. 1-14. Cited by: SS1. * I. Zvonkov, G. Tseng, C. Nakalembe, and H. Kerner (2023)OpenMapFlow: a library for rapid map creation with machine learning and remote sensing data. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 37, pp. 14655-14663. Cited by: SS1.
Earth vision research typically focuses on extracting geospatial object locations and categories but neglects the exploration of relations between objects and comprehensive reasoning. Based on city planning needs, we develop a multi-modal multi-task VQA dataset (**EarthVQA**) to advance relational reasoning-based judging, counting, and comprehensive analysis. The EarthVQA dataset contains 6000 images, corresponding semantic masks, and 208,593 QA pairs with urban and rural governance requirements embedded. As objects are the basis for complex relational reasoning, we propose a Semantic Object Awareness framework (**SOBA**) to advance VQA in an object-centric way. To preserve refined spatial locations and semantics, SOBA leverages a segmentation network for object semantics generation. The object-guided attention aggregates object interior features via pseudo masks, and bidirectional cross-attention further models object external relations hierarchically. To optimize object counting, we propose a numerical difference loss that dynamically adds difference penalties, unifying the classification and regression tasks. Experimental results show that SOBA outperforms both advanced general and remote sensing methods. We believe this dataset and framework provide a strong benchmark for Earth vision's complex analysis. The project page is at [https://Junjue-Wang.github.io/homepage/EarthVQA](https://Junjue-Wang.github.io/homepage/EarthVQA).
arxiv-format/1911_03267v1.md
# Algorithmic Design and Implementation of Unobtrusive Multistatic Serial LiDAR Image Chi Ding, Zheng Cao, Matthew S. Emigh, Jose C. Principe _Department of Electrical and Computer Engineering_ _University of Florida_ Gainesville, FL [email protected] Bing Ouyang, Anni Vuorenkoski Fraser Dalgleish, Brian Ramos, Yanjun Li _Ocean Visibility and Optics Laboratory_ _Harbor Branch / Florida Atlantic Univ._ Fort Pierce, FL [email protected] ## I Introduction We present the proposed algorithmic module of UMSLI - a Lidar-based underwater imaging system. Gaining an understanding of the underwater scene is not an easy task, especially in a visually degraded environment with low contrast, non-uniform illumination, and ubiquitous backscattering noise. It usually takes multiple modules working together to achieve the final goal. This paper discusses how these modules cooperate with each other to achieve this goal. The paper is organized as follows: the system and image data for the imaging system are discussed in Section 2. Then we dive into details of specific methods for detection, and classification in Section 3 as our main topics. Experimental results are discussed in Section 4. We will conclude the paper in Section 5. ## II System Description ### _The Unobtrusive Multi-static Serial LiDAR Imager (UMSLI)_ The imaging system sensing front-end employs a red laser with a power density of 31.8 nJ/cm\\({}^{2}\\), which is well below 700 nJ/cm\\({}^{2}\\) - the maximum permissible exposure (MPE) for human. Therefore, the system is unobtrusive and eye-safe to marine life since their eyes can focus less light than human eyes [1]. The 638nm red laser was chosen since this wavelength is beyond their visibility range of the marine animals, and thus their behavior is not disturbed by the system. There are 6 fixed transmitters and receivers (cameras) deployed around the device to fully illuminate a complete 3-dimensional spherical volume so that the underwater surrounding environment can be effectively monitored. More details can be referred to from [1]. ### _LiDAR Image Data_ Compared with the conventional optical camera and imaging sonar, instead of fully explore high-resolution images or the ability to see further away, the UMSLI reaches a trade-off. There are two modes for capturing images: Sparse and Dense mode. In the sparse-mode, a pulse scan is performed by transmitters emitting red laser with lower density through a wide range, which makes the system see further and wider. The dense mode applies a high-density pulse scan for a narrower range, which gives us images with higher resolution and more detailed information within a focused area [1]. The data has 3 dimensions: \\(x\\), \\(y\\) and time \\(t\\). \\(x\\) and \\(y\\) depict the pixel locations while \\(t\\) is the time interval between the transmitter emits the laser pulse and receiver received reflected back laser signal. The range of z can be derived from t. Therefore, the signal amplitude at a spatial pixel (x,y) and at a time t represents the reflected intensity at a certain spatial point I(x,y,z). This, in turn, allows the construction of a 3-D point cloud of the scene. The advantage of this data is that it provides us with depth information and we can use this information for image preprocessing [1]. A 2-D I(x,y) image can be formed by summing over the time axis. Currently, our algorithms are still developed under 2D image data without using depth information. 
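As described above, a 2-D intensity image is formed by summing the LiDAR cube over the time axis, and the range can be recovered from the round-trip time. A minimal NumPy sketch follows; the array layout, the use of the strongest return per pixel, and the speed of light in water are assumptions for illustration.

```python
import numpy as np

def cube_to_image(cube, dt, c_water=2.25e8):
    """cube: (H, W, T) received intensity per pixel and time bin; dt: bin width in seconds."""
    image = cube.sum(axis=2)            # 2-D intensity image I(x, y)
    t_peak = cube.argmax(axis=2) * dt   # round-trip time of the strongest return
    z = 0.5 * c_water * t_peak          # per-pixel range (half the round-trip distance)
    return image, z
```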
The depth information usage and automation for image restoration algorithms will be discussed in our future work. ## III The Algorithmic Framework After the image-acquisition module, a series of signal processing blocks have been developed for marine animal detection and classification. Before detection, some preprocessing techniques are applied to mitigate the typical underwater image problems such as backscattering-induced low contrast and low signal-to-noise ratio (SNR). This process enables the detection algorithm to reduce false alarms. After these steps, to achieve the final goal classification, tracking and segmentation methods are adopted for capturing and extracting detailed object information for our classifier. While our initial imageenhancement relied on the LiDAR image enhancement technique outlined in [2], further enhancement has been developed and implemented to achieve the desired results. ### _Illumination Correction_ When the system is operated in the sparse mode for detection, backscattering and attenuation in the turbid water are the main reasons for false alarms. As is shown in Fig. 1 left, the backscattering across the entire image can cause false detection among the area underneath the turtle with high-intensity values. It is observed that this non-uniformity is very similar to the non-uniform illumination problem. An illumination correction method was devised to correct the non-uniformity in the images [3]. Firstly, morphological open with a large structuring element \\(\\mathbf{s}\\) is applied over the image \\(\\mathbf{I}\\) to produce a background estimation \\(\\mathbf{B}\\). This step helps to remove small-scale details from the object-of-interest area and preserve background noise in \\(\\mathbf{B}\\). Then we subtract \\(\\mathbf{B}\\) from the original image to obtain an enhanced image \\(\\mathbf{E}\\). \\[B=I\\circ s \\tag{1}\\] \\[E=I-B \\tag{2}\\] The effectiveness of the illumination correct for the sparse-mode object detection is illustrated in Fig. 1. However, in the dense-mode, this morphological opening method is less effective because a large structuring element has difficult to remove foreground and preserve noise when the object area is larger. A more specialized method for image restoration is required. ### _Detection_ To localize salient objects, we apply a detection algorithm gamma saliency proposed by Burt _et al._[4] This algorithm defines a convolutional mask Gamma Kernel by equation 3 and saliency map is obtained by convolving the kernel with underwater images. Gamma kernel enhances local contrast by approximating statistics of objects and the surrounding area. The kernels are displayed as donut shapes with a radius of approximately \\(\\frac{k}{\\mu}\\)[4]. When object size is similar to kernel size, the neighborhood is highlighted after convolution. However, if kernel radius is small, image is center-focused on each pixel. Then by subtracting neighbor-highlighted image by center-focused image, we keep high-intensity values from the object and compress areas outside of the object. To construct effective gamma kernels, we use multiple gamma kernels from equation 4 with different kernel order \\(k\\) and decay factor \\(\\mu\\) to make sure objects with different sizes can be all detected [4]. 
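Before turning to the gamma-kernel formulas in Eqs. (3)-(4), here is a minimal OpenCV sketch of the illumination correction in Eqs. (1)-(2) above; the structuring-element size and shape are assumed parameters.

```python
import cv2

def illumination_correction(img, se_size=51):
    """Subtract a background estimate obtained by a large morphological opening."""
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (se_size, se_size))
    background = cv2.morphologyEx(img, cv2.MORPH_OPEN, se)  # B = I o s
    enhanced = cv2.subtract(img, background)                # E = I - B
    return enhanced
```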
\\[g_{k,\\mu}=\\frac{\\mu^{k+1}}{2\\pi k!}\\left(\\sqrt{n_{1}^{2}+n_{2}^{2}}\\right)^{k-1}\\exp\\left(-\\mu\\sqrt{n_{1}^{2}+n_{2}^{2}}\\right) \\tag{3}\\] \\[g_{total}=\\sum_{m=0}^{M-1}(-1)^{m}g_{m}(k_{m},\\mu_{m}) \\tag{4}\\]

### _Tracking_

After detection, the dense-mode scan must be triggered to capture more detailed information about the detected objects. However, the imager captures the scene via serial scanning, so a marine animal may swim away while the imager is still reading and processing data, before the dense scan is complete. Therefore, short-term tracking and state prediction are necessary to guide the system to the right position for the dense scan. Currently, a Kalman filter is applied as a preliminary step for simulation purposes, but given the unpredictability of real-world data and the low frame rate of our system, a better algorithm will likely be needed. Considering the length of this paper, the tracking algorithm is not discussed in detail; we will address this issue in our future study.

### _Classification_

Extensive state-of-the-art alternatives are available, such as CNN-based methods [5]. However, due to the lack of real-world training data, traditional computer vision techniques are preferred over CNN-based methods at the current stage, and thus we use the information point set registration algorithm proposed in previous work [6]. This algorithm uses the shape context as the descriptor for query objects. We first extract the query shape point set \\(\\mathbf{X}\\) and the template shape point sets \\(\\mathbf{Y}\\) from the thresholded segmentation map. Then, shape contexts are computed from the point sets \\(\\mathbf{X}\\) and \\(\\mathbf{Y}\\); they are denoted as \\(\\mathbf{SC_{X}}\\) and \\(\\mathbf{SC_{Y}}\\). When classifying a query, the cosine distances between the shape context of this query and those of the templates from each class are computed; they are denoted as \\(\\mathbf{d_{ij}}\\), where \\(\\mathbf{i}\\), \\(\\mathbf{j}\\) indicates template \\(\\mathbf{i}\\) of class \\(\\mathbf{j}\\). The mean distance between this query and one specific class is calculated by averaging \\(\\mathbf{d_{ij}}\\) over \\(\\mathbf{i}\\) for class \\(\\mathbf{j}\\). The query is then assigned to the class \\(\\mathbf{j}\\) with the smallest average distance. \\[d_{ij}=1-\\frac{SC_{X}\\cdot SC_{Y_{i,j}}}{\\|SC_{X}\\|\\!\\cdot\\!\\|SC_{Y_{i,j}}\\|} \\tag{5}\\] \\[\\bar{d_{j}}=\\frac{\\sum_{i=1}^{n_{j}}d_{ij}}{n_{j}} \\tag{6}\\] Using the descriptor alone is usually unable to achieve high classification accuracy, because query shapes can be very noisy or distorted under real-world conditions. The noise and distortion can be caused by backscattering, partial bodies, or variations due to the different poses of the object. To make the classifier more robust, we introduce a similarity measure between two aligned shapes. A projection matrix \\(\\mathbf{A}\\) is learned within a certain number of iterations by maximizing the similarity of \\(\\mathbf{XA}\\) and the template point set \\(\\mathbf{Y}\\) under the maximum correntropy criterion (MCC) [7]. In each iteration, \\(\\mathbf{X}\\) is updated by \\(\\mathbf{XA}\\). In this way, the query point set \\(\\mathbf{X}\\) undergoes an affine transformation in each iteration, which enables the query shape to align with the template.

Fig. 1: Result of an underwater LiDAR image. Left: original image. Right: after illumination correction.
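Returning briefly to the detection stage, the gamma-kernel saliency of Eqs. (3)-(4) can be sketched as follows: a center kernel is subtracted from a larger surround ("donut") kernel and the result is convolved with the image. The kernel sizes, orders, and decay factors below are illustrative assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.special import factorial

def gamma_kernel(k, mu, size=81):
    # 2-D gamma kernel of Eq. (3), normalized to unit sum; radius is roughly k / mu.
    n1, n2 = np.meshgrid(np.arange(size) - size // 2, np.arange(size) - size // 2)
    r = np.sqrt(n1 ** 2 + n2 ** 2)
    g = mu ** (k + 1) / (2 * np.pi * factorial(k)) * r ** (k - 1) * np.exp(-mu * r)
    return g / g.sum()

def gamma_saliency(img, k_center=1, mu_center=2.0, k_surround=10, mu_surround=0.5):
    # Eq. (4) with M = 2: surround (neighborhood) kernel minus center kernel.
    g_total = gamma_kernel(k_surround, mu_surround) - gamma_kernel(k_center, mu_center)
    sal = fftconvolve(img, g_total, mode="same")
    return np.clip(sal, 0.0, None)      # keep the object-highlighting positive part
```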
Because each time \\(\\mathbf{X}\\) performs an affine transformation, the correntropy (similarity) between itself and the template \\(\\mathbf{Y}\\) are maximized. Therefore, we calculate the correntropy similarity measure \\(\\mathbf{c_{j}}\\) between the aligned query and the template. The final dissimilarity score \\(\\mathbf{D_{j}}\\) is calculated by dividing \\(\\mathbf{d_{j}}\\) by \\(\\mathbf{c_{j}}\\). Then we assign query \\(\\mathbf{X}\\) to the class \\(\\mathbf{j}\\) with minimum average dissimilarity measure \\(\\bar{\\mathbf{d_{j}}}\\) across all other classes. \\[c_{j}=Corr(XA,Y_{j}) \\tag{7}\\] \\[\\bar{d_{j}}=\\frac{\\sum_{i=1}^{n_{j}}\\frac{d_{i}}{c_{i}j}}{n_{j}} \\tag{8}\\] The advantage of this method is that correntropy captures the higher-order statistic information and it is robust against noise [6, 7, 8]. These properties guarantee efficient alignment when two objects are similar or dissimilar. Another advantage of this method is that there are many other good shape descriptors to choose from, they can be mostly classified into two categories: area-based and boundary-based (Shape context is one of them). Boundary based methods are usually rich in representing details of shapes while area-based methods are more robust to noise. Therefore complementary shape descriptors can be chosen and integrated into our classifier according to the varying real-world conditions. According to our previous work, the shape can also be defined as a single \"view\" and integrated with other views (features) [9], such as texture, to improve classification accuracy. Furthermore, compared with the most popular state-of-art CNN-based methods, this algorithm does not require large amount of training data or a very complicated model, which is hard to update if new classes/animals appear. This method gives us the flexibility of building a new model or adapting the existing model very quickly with smaller efforts. ### _Reinforcement Learning Template Selection: Divergence to Go_ However, it can be troublesome when we have too many instances/templates, especially many of them are highly similar. A correct classification result can always be attained by comparing with as many shape templates as possible if they follow the correct distributions within their own class, but computation cost can be very high. Therefore, we apply the divergence-to-go (DTG) reinforcement learning framework for selecting shape templates to reduce the amount of computation load for our classifier. Since this is a classification task, the templates should be as discriminative as possible over other classes. For example, a template for amberjack should look very different from a template for barracuda. To achieve this goal, we set up a reinforcement learning experiment applying the DTG policy, which maximizes the uncertainty over the next step, to transit from one state (Hu moments [10] of a template within one species) to another. The training process takes certain iterations and we keep track of the number of visits for each state. Then we select the n most visited shapes as they preserve the most uncertainty with respect to other classes. DTG is a quantity that measures the uncertainty given a state-action pair and defined as the expected discounted sum of divergence over time [11]. 
This framework is similar to a Markov Decision Process (MDP) with the 5-tuple (\\(\\chi,A,P,R,\\gamma\\)) substituting reward \\(R\\) as uncertainty, which is measured by divergence that calculates the distance between two transition density distributions. Transition distribution and divergence are estimated using kernel density estimators under the kernel temporal difference (KTD) model [12]. \\[dtg(x,a)=E\\left[\\sum_{t=0}^{\\infty}\\gamma^{t}D(x_{t})\\right] \\tag{9}\\] Applying the dynamic programming framework for temporal credit assignment and we have the DTG update equation: \\[\\delta_{t}=D+\\gamma\\max_{a}\\{dtg_{t+1}(\\vec{x^{{}^{\\prime}}})\\}-dtg_{t}(\\bar{x}) \\tag{10}\\] \\[dtg_{t}(\\bar{x})=D_{0}+\\alpha\\sum_{j=1}^{t}\\delta_{j}k(\\bar{x},\\vec{x_{j}}), \\tag{11}\\] where \\(\\mathbf{D}\\) is divergence measure, \\(\\delta_{\\mathbf{t}}\\) is DTG error and \\(k(.)\\) is a similarity function of two states, here correntropy [8] is adopted. ## IV Experiments and Results ### _Description of the Test Dataset_ Two different datasets were used in this study. In November 2017, a Gen-I UMSLI prototype was deployed in the DOE test facility operated by the Marine Sciences Laboratory at Pacific Northwest National Laboratory (PNNL). During the two-week exercise, a substantial amount of images were acquired using the UMSLI prototype. These include images of the two artificial targets (turtle and barracuda) and images of harbor seals and other natural fishes. In addition, a bench-top system of Gen-II UMSLI with the improved optical and electronic system was used to validate the automated switching from sparse-mode scanning and dense-mode scanning after detection. ### _Detection_ We compare the performance of gamma saliency and 6 other most highly cited state-of-art methods based on detection datasets acquired at DOE PNNL/MSL test site in this study. These methods are Hou and Zhang [13], B. Schauerte _et al._[14], Achanta _et al._[15], Margolin _et al._[16], Goferman _et al._[17], and Jiang _et al._[18], denoted as SR, QT, IG, PCA, CA, and MC respectively. Reasons for choosing these six methods are twofold: _Diversity:_ PCA, CA are complicated region based methods [19]. PCA computes the distance between every patch and the average patch in PCA coordinates to maximize inter-class variations. CA incorporates both single-scale local saliency measured by surrounding patches and multi-scale global contrast information. MC is a segmentation method based on superpixels and absorbing Markov chain. Saliency is computed by weighted sum of absorbed time from transient state to the absorbed state. SR and QT are spectral based fast models by looking at salient regions as residual information in image spectral space by using Fourier Transform and Quaternion Fourier Transform respectively. IG uses band-pass filter to contain a wide range of frequencies information and then finds the saliency map by subtracting filtered image by the arithmetic mean pixel value to get rid of texture details. _Speed Limit:_ PCA, CA are computationally extensive while SR, QT, IG, and MC are fast. To measure performance of all methods, four widely-used and universally agreed metrics are applied for comparison purpose: 1. _Precision and Recall (PR) Curve_ 2. _Receiver Operating Characteristic (ROC) Curve_ 3. _F-measure_ 4. _Area Under Curve (AUC)_ Before calculating the PR curve, saliency maps are binarized with threshold values from 0 to 255 to get the binary masks\\(\\mathbf{M}\\). 
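A minimal sketch of the kernel temporal-difference update for divergence-to-go in Eqs. (10)-(11) is given below. The divergence estimate D and the state representation (e.g., Hu moments) are assumed to be supplied by the caller, and a Gaussian kernel stands in for the correntropy-style similarity \\(k(\\cdot,\\cdot)\\).

```python
import numpy as np

class DTGLearner:
    """Kernel TD estimate: dtg(x) = D0 + alpha * sum_j delta_j * k(x, x_j) (Eq. (11))."""
    def __init__(self, alpha=0.1, gamma=0.9, sigma=1.0, d0=0.0):
        self.alpha, self.gamma, self.sigma, self.d0 = alpha, gamma, sigma, d0
        self.centers, self.deltas = [], []   # stored states x_j and their TD errors delta_j

    def _kernel(self, x, y):
        return np.exp(-np.sum((x - y) ** 2) / (2 * self.sigma ** 2))

    def value(self, x):
        return self.d0 + self.alpha * sum(
            d * self._kernel(x, c) for c, d in zip(self.centers, self.deltas))

    def update(self, x, next_state_values, divergence):
        # Eq. (10): delta_t = D + gamma * max_a dtg(x') - dtg(x)
        delta = divergence + self.gamma * max(next_state_values) - self.value(x)
        self.centers.append(np.asarray(x, dtype=float))
        self.deltas.append(delta)
        return delta
```

During template selection, the states visited under the learned DTG policy are counted and the most-visited templates are retained, as described above.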
Then precision and recall are calculated by comparing the binary mask \\(\\mathbf{M}\\) with the ground truth \\(\\mathbf{G}\\). PR curve evaluates overall performance in terms of positive classes while ROC curve evaluates both positive and negative classes. For calculating F-measure, according to the adaptive binarization method Achanta _et al._ proposed [15], adaptive threshold value is chosen as parameter \\(\\alpha\\) times the mean value of saliency map. Instead of using fixed \\(\\alpha\\), we evaluate the overall F-measure by using values from 2 to 11 to realize fair comparison, because saliency maps based on different algorithms are sensitive to different adaptive threshold values. Then all F-measure values given different \\(\\alpha\\) are averaged as our final result as is shown in table 1. \\[F_{\\beta}=\\frac{(1+\\beta^{2})Precision\\times Recall}{\\beta^{2}Precision+Recall} \\tag{12}\\] According to [15], \\(\\beta^{2}=0.3\\) is selected since precision is more important than recall as high recall can be easily achieved by lowering threshold value so that binary mask mostly covers ground truth. Qualitative results of saliency detection on a natural small fish, artificial barracuda and turtle models are shown in Fig. 5. In Fig. 3, gamma saliency outperforms all the fast algorithms but not PCA or CA, which means gamma saliency demonstrates better accuracy than fast algorithms but produces more false positives than computational extensive region-based models. However, when the recall value is less than 0.65, gamma dominates all other methods. This result implies that gamma is useful for object localization without considering the fine details. This is demonstrated in Fig 5 that gamma only preserves saliency blob. According to the nature of our application, localization is the most important task before segmentation and classification, which can be achieved by gamma saliency efficiently. According to the ROC curve in Fig. 4, the gamma saliency outperforms other methods. The intersection of gamma saliency and CA around 0.5 FPR indicates that gamma captures more saliency than CA in terms of regions other than Fig. 3: PR curve. Comparisons for performances of gamma saliency against six methods: SR, QT, IG, PCA, CA, MC. Fast algorithms are plotted as dashed lines to seperate from computationally extensive algorithms. Fig. 2: Illustration of the PNNL/MSL Test site, instrument and sample test images acquired during the experiments. (a) PNNL/MSL test site (b) GEN-I UMSL1 prototype (c) Artificial turtle (d) Artificial barracudda (e) Natural harbor seal the object of interest because gamma kernel picks up wherever saliency appeared while CA integrates both local and global contrast information [17]. Therefore, a higher threshold value is needed for gamma to approach better quality saliency map. Even though CA performs better than gamma saliency, CA cannot meet the requirement for real-time saliency detection. More details are shown in table I. The area-under-curve (AUC) is also provided in Table I to validate and support our conclusion that gamma saliency maintains an overall good performance with low computation time since the AUC value of gamma is higher than all fast methods and slightly lower than computational extensive methods. Saliency maps for three representative images of different methods are provided in Fig. 
5 The first row is a natural fish captured by our underwater imager, the second row is the artificial barracuda model appears in the center of the image with extremely low contrast (in the far-field), and the third row is the artificial turtle model in turbid water. As shown in the first row of Fig. 5, all methods capture the fish accurately under clear water background even though the object is small. However, when contrast between foreground and background is low, other methods either recognize noise as salient region (SR, QT) or unable to distinctively separate noise from object (PCA, MC). In the worst-case, IG cannot distinguish between the salient region and background at all. CA and gamma detect the object but only gamma successfully separate object from a noisy background. In conclusion, gamma satisfies the following properties for detection in a real-time manner: * Localize the whole salient object with high accuracy. * High robustness against noise. * Computationally efficient. ### _Classification_ There are 256 instances for each class generated by projecting a 3D shape model into 2D from different angles, and they are used as shape template sets \\(\\mathbf{Y}\\) as mentioned before. The experiment for classification is related to \\(DTG\\) policy selecting templates. We select 10 templates by DTG policy and compare its performance with k-means clustering from previous work [20]. Because each class has 256 templates, the previous method calculates \\(256\\times 256\\) similarity matrix and each row represents one shape template. Then k-means clustering is applied to cluster all templates into 10 categories, the representative of each category is simply selected by the one shape vector that has the least L1-norm [20]. The selection is based on the idea of fully representing the class with fewer instances, which gives reasonably good results for a small data set. However, discrimination between classes is not included under such scenario. In Fig. 6, it is obvious that templates for turtle are very different from both barracuda and amberjack. However, there are clear similarities between templates of barracuda and amberjack, such as the first-row third-column and second-row second-column in Fig. 6. Therefore, it is necessary that discrimination among different classes are introduced into the proposed method for selecting templates. DTG policy is a model-based reinforcement learning method, we can achieve the task of introducing discrimination by manipulating the transition model. Simply saying, if we want to train a policy that maximizes the uncertainty (divergence) of amberjack with respect to barracuda, then we train the DTG policy of amberjack based on the transition model of barracuda. We calculate Hu-momments for all shape templates as their corresponding states. Hu-momments can effectively discourage dissimilarity caused by translation, scale, rotation, and reflection [10]. These properties are important because many templates are highly similar to each other even though they are shifted, rotated or translated and applying Hu-moments can discard these variations and focus more on inherent information of shape itself. The experiment set-up is given as follows: Firstly calculating similarity matrix for all three classes that we have: Fig. 4: ROC curve. Comparisons for performances of gamma saliency against six methods: SR, QT, IG, PCA, CA, MC. Fast algorithms are plotted as dashed lines to seperate from contatiously extensive algorithms. Fig. 
6: 10 templates chosen by k-means clustering for each classes. First row: amberjack, second row: barracuda, third row: turtle [20]. amberjack, baracuda, and turtle. Then we apply k-means to cluster each class into 10 clusterings respectively and select 20 representatives that have the least L1-norm from the 10 clusterings. Then we define action list [-10, , -1, 1, 2,, 10] (0 is eliminated because staying at the same state should be avoided for DTG policy). Each action **a** transits state **i** into state \\((a+i)\\ mod\\ 20\\). The transition model is built by running this environment by random policy for 5000 steps and then storing all transitions: [\\(x_{i}\\), \\(x_{i+1}\\), \\(a\\), \\(r\\)]. After building transition models for all classes, we start training DTG policies within certain steps. Then we narrow down the 20 templates into 10 by selecting the templates that correspond to the 10 most highly visited states within the 2,000 steps with respect to other classes. We compare the performance of templates selected by DTG and the original k-means method as well as random policy. The original k-means method calculates confusion matrix based on previous dataset composed of 8 amberjacks, 6 barracuda and 8 turtles [20]. Because this dataset is too small, we also compare the classification accuracy based on the same dataset for detection task, which only has 2 classes: barracuda and turtle (because templates for other natural fishes we encountered are not available). For the comparing experiment based on 3 classes, DTG policy of one class is trained under the concatenation of all transition models of other classes. For the experiment involving the only 2 classes, the DTG policy is only trained under the transition model of the other class. Classification results for two different dataset are provided in Fig. 7 and Table II respectively. In the confusion matrix, it is clearly demonstrated that discrimination is introduced into template selection since there are fewer false positives and more true positives. Classification accuracy is the highest among all methods that we apply for templates selection. ### _Camera System Implementation Results_ Currently, the detection algorithm has been implemented in the embedded system to realize real-time automatic switching from the sparse-mode to the dense-mode when an object is detected. A series of experiments were conducted at the optical test tank at HBOI to validate this implementation. Offline detection results are shown in Fig. 8 and real-time mode switching results are shown in Fig. 9 For real time object detection and mode switching, we hide bounding box for acquiring clear data. Fig. 5: _Comparison of saliency map results. From left to right: Original image, ground truth, SR, QT, IG, PCA, CA, MC, gamma._ Fig. 7: Confusion matrix of classification result given by templates selected by different methods where a, b and t stands for amberjack, barracuda and turtle respectively. (a) k-means (b) DTG ## Conclusion First of all, using the field data acquired at PNNL/MSL test site, we demonstrated both quantitatively and qualitatively that the optimality of the gamma saliency algorithm in real-time detection of undersea animals. One critical step in achieving this success is that the illumination correction based image processing was able to mitigate backscattering and attention induced image distortion that is typical in the turbid water. 
Furthermore, the gamma saliency algorithm has been integrated into the GEN-II UMSLI benchtop prototype to demonstrate the ability of automatic switching from the sparse-scan mode to the dense-scan mode critical for the comparatively low-frame-rate UMSLI imager to successfully acquire the high-resolution images needed for the subsequent classification task. The DTG reinforcement learning-based template selection method introduced in this paper was based on the belief that templates representing a class should be both representative and discriminative. The simulation results demonstrate the effectiveness of this technique. The experiments for classification demonstrated initial success in developing a highly flexible shape-matching framework that can evolve to incorporate more features [9] or additional templates to when additional data become available. This ability is critical to the success of the UMSLI deployment at any MHK site where the encounters with underwater animals are in general scarce. ## Acknowledgement This work was partially supported by USDOE contracts DE-0006787 and DE-ee0007828. The authors will also want to thank Mr. Michael Young for fabricating the system components and engineers at PNNL/MSL: Dr. Geneva Harker-Klimes, Mr. Garrett Staines, Mr. Stanley Tomich, Mr. John Vavrinec and Ms. Shon Zimmerman for their support. ## References * Abernetics_, June 2017, pp. 1-5. * [2] B. Ouyang, F. R. Dalgleish, F. M. Caimi, A. K. Vuorenkoski, T. E. Giddings, and J. J. Shirron, \"Image enhancement for underwater pulsed laser line scan imaging system,\" 2012. * [3] Rgiis CLOURAD, \"Tutorial: Illumination correction,\" [https://cloudar.users.grgcy.fr/Pantheon/experiments/illumination-correction/](https://cloudar.users.grgcy.fr/Pantheon/experiments/illumination-correction/) index-en.html, July 2011. * [4] R. Burt, E. Santana, J. C. Principe, N. Thigpen, and A. Keil, \"Predicting visual attention using gamma kernels,\" in _2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_, March 2016, pp. 1606-1610. * MTS/IEEE Washington_, Oct 2015, pp. 1-6. * [6] Z. Cao, J. C. Principe, and B. Ouyang, \"Information point set registration for shape recognition,\" in _2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_, March 2016, pp. 2603-2607. * [7] Yunlong Feng, Xiaolin Huang, Lei Shi, Yuning Yang, and Johan A.K. Suykens, \"Learning with the maximum correntropy criterion induced losses for regression,\" _Journal of Machine Learning Research_, vol. 16, no. 30, pp. 993-1034, 2015. * [8] W. Liu, P. P. Pokharel, and J. C. Principe, \"Correntropy: Properties and applications in non-gaussian signal processing,\" _IEEE Transactions on Signal Processing_, vol. 55, no. 11, pp. 5286-5298, Nov 2007. * [9] Z. Cao, S. Yu, B. Ouyang, F. Dalgleish, A. Vuorenkoski, G. Alsenas, and J. C. Principe, \"Marine animal classification with correntropy-loss-based multiview learning,\" _IEEE Journal of Oceanic Engineering_, pp. 1-14, 2018. * [10] Ming-Kuei Hu, \"Visual pattern recognition by moment invariants,\" _IRE Transactions on Information Theory_, vol. 8, no. 2, pp. 179-187, February 1962. * [11] M. Emigh, E. Kriminger, and J. C. Principe, \"A model based approach to exploration of continuous-state mdps using divergence-to-go,\" in _2015 IEEE 25th International Workshop on Machine Learning for Signal Processing (MLSP)_, Sep. 2015, pp. 1-6. * [12] J. Bae, L. S. Giraldo, P. Chhatbar, J. Francis, J. Sanchez, and J. 
Principe, "Stochastic kernel temporal difference for reinforcement learning," in _2011 IEEE International Workshop on Machine Learning for Signal Processing_, Sep. 2011, pp. 1-6. * [13] X. Hou and L. Zhang, "Saliency detection: A spectral residual approach," in _2007 IEEE Conference on Computer Vision and Pattern Recognition_, June 2007, pp. 1-8. * [14] Volume Part II_, Berlin, Heidelberg, 2012, ECCV'12, pp. 116-129, Springer-Verlag. * [15] R. Achanta, S. Hemami, F. Estrada, and S. Susstrunk, "Frequency-tuned salient region detection," in _2009 IEEE Conference on Computer Vision and Pattern Recognition_, June 2009, pp. 1597-1604. * [16] R. Margolin, A. Tal, and L. Zelnik-Manor, "What makes a patch distinct?," in _2013 IEEE Conference on Computer Vision and Pattern Recognition_, June 2013, pp. 1139-1146. * [17] S. Goferman, L. Zelnik-Manor, and A. Tal, "Context-aware saliency detection," _IEEE Transactions on Pattern Analysis and Machine Intelligence_, vol. 34, no. 10, pp. 1915-1926, Oct 2012. * [18] B. Jiang, L. Zhang, H. Lu, C. Yang, and M. Yang, "Saliency detection via absorbing Markov chain," in _2013 IEEE International Conference on Computer Vision_, Dec 2013, pp. 1665-1672. * [19] Ali Borji, Ming-Ming Cheng, Huaizu Jiang, and Jia Li, "Salient object detection: A survey," _CoRR_, vol. abs/1411.5878, 2014. * [20] Zheng Cao, Jose C. Principe, Bing Ouyang, Fraser Dalgleish, Anni Vuorenkoski, Brian Ramos, and Gabriel Alsenas, "Marine animal classification using UMSLI in HBOI optical test facility," _Multimedia Tools and Applications_, vol. 76, no. 21, pp. 23117-23138, Nov 2017. Fig. 8: Detection results on real-world data. Fig. 9: Real-time automatic switching from sparse-mode to dense-mode after the object is detected: more details (texture, object shape) are available.
To fully understand interactions between marine hydrokinetic (MHK) equipment and marine animals, a fast and effective monitoring system is required to capture relevant information whenever underwater animals appear. A new automated underwater imaging system, composed of LiDAR (Light Detection and Ranging) imaging hardware and a scene understanding software module and named the Unobtrusive Multistatic Serial LiDAR Imager (UMSLI), has been developed to supervise the presence of animals near turbines. UMSLI integrates the front-end LiDAR hardware and a series of software modules to achieve image preprocessing, detection, tracking, segmentation and classification in a hierarchical manner. Index terms: detection, tracking, segmentation, classification
# Multispectral and Hyperspectral Image Fusion by MS/HS Fusion Net Qi Xie\({}^{1}\), Minghao Zhou\({}^{1}\), Qian Zhao\({}^{1}\), Deyu Meng\({}^{1,}\)1, Wangmeng Zuo\({}^{2}\), Zongben Xu\({}^{1}\) \({}^{1}\)Xi'an Jiaotong University, \({}^{2}\)Harbin Institute of Technology [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] Corresponding author. ## 1 Introduction A hyperspectral (HS) image consists of various bands of images of a real scene captured by sensors under different spectra, which can facilitate a fine delivery of more faithful knowledge of real scenes, as compared to traditional images with only one or a few bands. The rich spectra of HS images tend to significantly benefit the characterization of the imaged scene and greatly enhance performance in different computer vision tasks, including object recognition, classification, tracking and segmentation [10, 37, 35, 36]. In real cases, however, due to the limited amount of incident energy, there are critical tradeoffs between spatial and spectral resolution. Specifically, an optical system usually can only provide data with either high spatial resolution but a small number of spectral bands (e.g., the standard RGB image) or with a large number of spectral bands but reduced spatial resolution [23]. Therefore, the research issue of merging a high-resolution multispectral (HrMS) image and a low-resolution hyperspectral (LrHS) image to generate a high-resolution hyperspectral (HrHS) image, known as MS/HS fusion, has attracted great attention [47]. The observation models for the HrMS and LrHS images are often written as follows [12, 24, 25]: \[\mathbf{Y} =\mathbf{XR}+\mathbf{N}_{y}, \tag{1}\] \[\mathbf{Z} =\mathbf{CX}+\mathbf{N}_{z}, \tag{2}\] where \(\mathbf{X}\in\mathbb{R}^{HW\times S}\) is the target HrHS image1 with \(H\), \(W\) and \(S\) as its height, width and band number, respectively, \(\mathbf{Y}\in\mathbb{R}^{HW\times s}\) is the HrMS image with \(s\) as its band number (\(s<S\)), \(\mathbf{Z}\in\mathbb{R}^{hw\times S}\) is the LrHS image with \(h\), \(w\) and \(S\) as its height, width and band number (\(h<H\), \(w<W\)), \(\mathbf{R}\in\mathbb{R}^{S\times s}\) is the spectral response of the multispectral sensor as shown in Fig. 1 (a), \(\mathbf{C}\in\mathbb{R}^{hw\times HW}\) is a linear operator which is often assumed to be composed of a cyclic convolution operator \(\mathbf{\phi}\) and a down-sampling matrix \(\mathbf{D}\) as shown in Fig. 1 (b), and \(\mathbf{N}_{y}\) and \(\mathbf{N}_{z}\) are the noises contained in the HrMS and LrHS images, respectively. Figure 1: (a)(b) The observation models for HrMS and LrHS images, respectively. (c) Learning bases \(\hat{\mathbf{Y}}\) by a deep network, with HrMS \(\mathbf{Y}\) and LrHS \(\mathbf{Z}\) as the input of the network. (d) The HrHS image \(\mathbf{X}\) can be linearly represented by \(\mathbf{Y}\) and the to-be-estimated \(\hat{\mathbf{Y}}\), in a formulation of \(\mathbf{X}\approx\mathbf{Y}\mathbf{A}+\hat{\mathbf{Y}}\mathbf{B}\), where the rank of \(\mathbf{X}\) is \(r\). Many methods have been designed based on (1) and (2), and achieved good performance [40, 14, 24, 25]. Since directly recovering the HrHS image \(\mathbf{X}\) is an ill-posed inverse problem, many techniques have been exploited to recover \(\mathbf{X}\) by assuming certain priors on it.
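To make the two observation models concrete, the following minimal NumPy sketch simulates an HrMS image \(\mathbf{Y}\) and an LrHS image \(\mathbf{Z}\) from a given HrHS matrix \(\mathbf{X}\). The image sizes, the random spectral response \(\mathbf{R}\), the noise level, and the plain block-averaging realization of \(\mathbf{C}\) are all illustrative assumptions, not the settings used later in the experiments.

```python
import numpy as np

# Illustrative sizes: a 64x64 HrHS image with S=31 bands, s=3 MS bands,
# and a spatial downsampling factor of 8 (all of these values are assumptions).
H, W, S, s, factor = 64, 64, 31, 3, 8
rng = np.random.default_rng(0)

X = rng.random((H * W, S))            # HrHS image, unfolded to HW x S

# Observation model (1): Y = X R + N_y, with a random nonnegative
# spectral response R whose columns sum to one (an assumption).
R = rng.random((S, s))
R /= R.sum(axis=0, keepdims=True)
Y = X @ R + 0.01 * rng.standard_normal((H * W, s))

# Observation model (2): Z = C X + N_z, with C realized here as simple
# block averaging over factor x factor pixel blocks (an assumption).
def block_average(img_flat, H, W, factor):
    cube = img_flat.reshape(H, W, -1)
    h, w = H // factor, W // factor
    cube = cube.reshape(h, factor, w, factor, -1).mean(axis=(1, 3))
    return cube.reshape(h * w, -1)

Z = block_average(X, H, W, factor) + 0.01 * rng.standard_normal((H * W // factor**2, S))

print(Y.shape, Z.shape)   # (4096, 3) (64, 31)
```

In this toy setting the spectral response and the downsampling operator are known; the point of the rest of the paper is precisely that, in practice, they are unknown and are either estimated beforehand or, as proposed here, learned end-to-end.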
For example, to encode such priors, [54, 2, 11] utilize the prior knowledge of HrHS that its spatial information could be sparsely represented under a dictionary trained from HrMS. Besides, [27] assumes a local spatial smoothness prior on the HrHS image and uses total variation regularization to encode it in their optimization model. Instead of exploring spatial prior knowledge from HrHS, [52] and [26] assume a more intrinsic spectral correlation prior on HrHS, and use low-rank techniques to encode such a prior along the spectrum to reduce spectral distortions. Albeit effective for some applications, the rationality of these techniques relies on the subjective prior assumptions imposed on the unknown HrHS to be recovered. An HrHS image collected from real scenes, however, could possess highly diverse configurations both along space and across spectrum. Such conventional learning regimes thus cannot always flexibly adapt to different HS image structures and still leave room for performance improvement. Methods based on Deep Learning (DL) have outperformed traditional approaches in many computer vision tasks [34] in the past decade, and have been introduced to the HS/MS fusion problem very recently [28, 30]. As compared with conventional methods, these DL-based ones are superior in that they need fewer assumptions on the prior knowledge of the to-be-recovered HrHS, while they can be directly trained on a set of paired training data simulating the network inputs (LrHS and HrMS images) and outputs (HrHS images). The most commonly employed network structures include CNN [7], 3D CNN [28], and residual net [30]. As in other image restoration tasks to which DL has been successfully applied, these DL-based methods have also achieved good performance on the MS/HS fusion task. However, the current DL-based MS/HS fusion methods still have evident drawbacks. The most critical one is that these methods use general frameworks designed for other tasks, which are not specifically tailored to MS/HS fusion. This makes them lack interpretability specific to the problem. In particular, they totally neglect the observation models (1) and (2) [28, 30], especially the operators \(\mathbf{R}\) and \(\mathbf{C}\), which facilitate an understanding of how the LrHS and HrMS images are generated from the HrHS one. Such an understanding, however, should be useful for recovering HrHS images. Besides this generalization issue, current DL methods also neglect the general prior structures of HS images, such as spectral low-rankness. Such priors are intrinsically possessed by all meaningful HS images, and the neglect of such priors implies that DL-based methods still have room for further enhancement. In this paper, we propose a novel deep learning-based method that integrates the observation models and image prior learning into a single network architecture. This work mainly contains the following three-fold contributions: Firstly, we propose a novel MS/HS fusion model, which not only takes the observation models (1) and (2) into consideration but also exploits the approximate low-rankness prior structure along the spectral mode of the HrHS image to reduce spectral distortions [52, 26].
Specifically, we prove that if and only if observation model (1) can be satisfied, the matrix of the HrHS image \(\mathbf{X}\) can be linearly represented by the columns in the HrMS matrix \(\mathbf{Y}\) and a to-be-estimated matrix \(\hat{\mathbf{Y}}\), i.e., \(\mathbf{X}=\mathbf{Y}\mathbf{A}+\hat{\mathbf{Y}}\mathbf{B}\) with coefficient matrices \(\mathbf{A}\) and \(\mathbf{B}\). One can see Fig. 1 (d) for easy understanding. We then construct a concise model by combining the observation model (2) and the linear representation of \(\mathbf{X}\). We also exploit the proximal gradient method [3] to design an iterative algorithm to solve the proposed model. Secondly, we unfold this iterative algorithm into a deep network architecture, called MS/HS Fusion Net or MHF-net, to implicitly learn the to-be-estimated \(\hat{\mathbf{Y}}\), as shown in Fig. 1 (c). After obtaining \(\hat{\mathbf{Y}}\), we can then easily achieve \(\mathbf{X}\) with \(\mathbf{Y}\) and \(\hat{\mathbf{Y}}\). To the best of our knowledge, this is the first deep-learning-based MS/HS fusion method that fully considers the intrinsic mechanism of the MS/HS fusion problem. Moreover, all the parameters involved in the model can be automatically learned from training data in an end-to-end manner. This means that the spatial and spectral responses (\(\mathbf{R}\) and \(\mathbf{C}\)) no longer need to be estimated beforehand, as most of the traditional non-DL methods do, nor to be fully neglected, as current DL methods do. Thirdly, we have collected or implemented current state-of-the-art algorithms for the investigated MS/HS fusion task, and compared their performance on a series of synthetic and real problems. The experimental results comprehensively substantiate the superiority of the proposed method, both quantitatively and visually. In this paper, we denote scalars, vectors, matrices and tensors by non-bold, bold lower-case, bold upper-case and calligraphic upper-case letters, respectively. ## 2 Related work ### Traditional methods The pansharpening technique in remote sensing is closely related to the investigated MS/HS problem. This task aims to obtain a high spatial resolution MS image by the fusion of an MS image and a wide-band panchromatic image. A heuristic approach to perform MS/HS fusion is to treat it as a number of pansharpening sub-problems, where each band of the HrMS image plays the role of a panchromatic image. There are mainly two categories of pansharpening methods: component substitution (CS) [5, 17, 1] and multiresolution analysis (MRA) [20, 21, 4, 33, 6]. These methods often suffer from high spectral distortion, since a single panchromatic image contains little spectral information as compared with the expected HS image. In the last few years, machine learning based methods have gained much attention on the MS/HS fusion problem [54, 2, 11, 14, 52, 48, 26, 40]. Some of these methods used the sparse coding technique to learn a dictionary on the patches across an HrMS image, which delivers spatial knowledge of HrHS to a certain extent, and then learn a coefficient matrix from LrHS to fully represent the HrHS [54, 2, 11, 40]. Some other methods, such as [14], use sparse matrix factorization to learn a spectral dictionary for LrHS images and then construct HrHS images by exploiting both the spectral dictionary and HrMS images.
The low-rankness of HS images can also be exploited with non-negative matrix factorization, which helps to reduce spectral distortions and enhances the MS/HS fusion performance [52, 48, 26]. The main drawback of these methods is that they are mainly designed based on human observations and strong prior assumptions, which may not be very accurate and would not always hold for diverse real world images. ### Deep learning based methods Recently, a number of DL-based pansharpening methods were proposed by exploiting different network structures [15, 22, 42, 43, 29, 30, 32]. These methods can be easily adapted to MS/HS fusion problem. For example, very recently, [28] proposed a 3D-CNN based MS/HS fusion method by using PCA to reduce the computational cost. This method is usually trained with prepared training data. The network inputs are set as the combination of HrMS/panchromatic images and LrHS/multispectral images (which is usually interpolated to the same spatial size as HrMS/panchromatic images in advance), and the outputs are the corresponding HrHS images. The current DL-based methods have been verified to be able to attain good performance. They, however, just employ networks assembled with some off-the-shelf components in current deep learning toolkits, which are not specifically designed against the investigated problem. Thus the main drawback of this technique is the lack of interpretability to this particular MS/HS fusion task. In specific, both the intrinsic observation model (1), (2) and the evident prior structures, like the spectral correlation property, possessed by HS images have been neglected by such kinds of \"black-box\" deep model. ## 3 MS/HS fusion model In this section, we demonstrate the proposed MS/HS fusion model in detail. ### Model formulation We first introduce an equivalent formulation for observation model (1). Specifically, we have following theorem2. Footnote 2: All proofs are presented in supplementary material. **Theorem 1**.: _For any \\(\\mathbf{X}\\in\\mathbb{R}^{HW\\times S}\\) and \\(\\tilde{\\mathbf{Y}}\\in\\mathbb{R}^{HW\\times s}\\), if \\(\\text{rank}(\\mathbf{X})=r>s\\) and \\(\\text{rank}(\\tilde{\\mathbf{Y}})=s\\), then the following two statements are equivalent to each other: (a) There exists an \\(\\mathbf{R}\\in\\mathbb{R}^{S\\times s}\\), subject to,_ \\[\\tilde{\\mathbf{Y}}=\\mathbf{X}\\mathbf{R}. \\tag{3}\\] _(b) There exist \\(\\mathbf{A}\\in\\mathbb{R}^{s\\times S}\\), \\(\\mathbf{B}\\in\\mathbb{R}^{(r-s)\\times S}\\) and \\(\\hat{\\mathbf{Y}}\\in\\mathbb{R}^{HW\\times(r-s)}\\), subject to,_ \\[\\mathbf{X}=\\tilde{\\mathbf{Y}}\\mathbf{A}+\\hat{\\mathbf{Y}}\\mathbf{B}. \\tag{4}\\] In reality, the band number of an HrMS image is usually not large, which makes it full rank along spectral mode. For example, the most commonly used HrMS images, RGB images, contain three bands, and their rank along the spectral mode is usually also three. Thus, by letting \\(\\tilde{\\mathbf{Y}}=\\mathbf{Y}-\\mathbf{N}_{y}\\) where \\(\\mathbf{Y}\\) is the observed HrMS in (1), it is easy to find that \\(\\tilde{\\mathbf{Y}}\\) and \\(\\mathbf{X}\\) satisfy the conditions in **Theorem 1**. Then the observation model (1) is equivalent to \\[\\mathbf{X}=\\mathbf{Y}\\mathbf{A}+\\hat{\\mathbf{Y}}\\mathbf{B}+\\mathbf{N}_{x}, \\tag{5}\\] where \\(\\mathbf{N}_{x}=-\\mathbf{N}_{y}\\mathbf{A}\\) is caused by the noise contained in the HrMS image. 
In (5), \\([\\mathbf{Y},\\hat{\\mathbf{Y}}]\\) can be viewed as \\(r\\) bases that represent columns in \\(\\mathbf{X}\\) with coefficients matrix \\([\\mathbf{A};\\mathbf{B}]\\in\\mathbb{R}^{r\\times S}\\), where only the \\(r-s\\) bases in \\(\\hat{\\mathbf{Y}}\\) are unknown. In addition, we can derive the following corollary: **Corollary 1**.: _For any \\(\\tilde{\\mathbf{Y}}\\in\\mathbb{R}^{HW\\times s}\\), \\(\\tilde{\\mathbf{Z}}\\in\\mathbb{R}^{hw\\times S}\\), \\(\\mathbf{C}\\in R^{hw\\times HW}\\), if \\(\\text{rank}(\\tilde{\\mathbf{Y}})=s\\) and \\(\\text{rank}(\\tilde{\\mathbf{Z}})=r>s\\), then the following two statements are equivalent to each other: (a) There exist \\(\\mathbf{X}\\in\\mathbb{R}^{HW\\times S}\\) and \\(\\mathbf{R}\\in\\mathbb{R}^{S\\times s}\\), subject to,_ \\[\\tilde{\\mathbf{Y}}=\\mathbf{X}\\mathbf{R},\\ \\ \\tilde{\\mathbf{Z}}=\\mathbf{C}\\mathbf{X},\\ \\ \\text{rank}(\\mathbf{X})=r. \\tag{6}\\] _(b) There exist \\(\\mathbf{A}\\in\\mathbb{R}^{s\\times S}\\), \\(r>s\\), \\(\\mathbf{B}\\in\\mathbb{R}^{(r-s)\\times S}\\) and \\(\\hat{\\mathbf{Y}}\\in\\mathbb{R}^{HW\\times(r-s)}\\), subject to,_ \\[\\tilde{\\mathbf{Z}}=\\mathbf{C}\\left(\\tilde{\\mathbf{Y}}\\mathbf{A}+\\hat{\\mathbf{Y}}\\mathbf{B}\\right). \\tag{7}\\] By letting \\(\\tilde{\\mathbf{Z}}=\\mathbf{Z}-\\mathbf{N}_{z}\\), it is easy to find that, when being viewed as equations of the to-be-estimated \\(\\mathbf{X}\\), \\(\\mathbf{R}\\) and \\(\\mathbf{C}\\), the observation model (1) and model (2) are equivalent to the following equation of \\(\\tilde{\\mathbf{Y}}\\), \\(\\mathbf{A}\\), \\(\\mathbf{B}\\) and \\(\\mathbf{C}\\): \\[\\mathbf{Z}=\\mathbf{C}\\left(\\mathbf{Y}\\mathbf{A}+\\hat{\\mathbf{Y}}\\mathbf{B}\\right)+\\mathbf{N}, \\tag{8}\\] where \\(\\mathbf{N}=\\mathbf{N}_{z}-\\mathbf{C}\\mathbf{N}_{y}\\mathbf{A}\\) denotes the noise contained in HrMS and LrHS image. By (8), we design the following MS/HS fusion model: \\[\\min_{\\hat{\\mathbf{Y}}}\\left\\|\\mathbf{C}\\left(\\mathbf{Y}\\mathbf{A}+\\hat{\\mathbf{Y}}\\mathbf{B}\\right)-\\bm {Z}\\right\\|_{F}^{2}+\\lambda f\\left(\\hat{\\mathbf{Y}}\\right), \\tag{9}\\] where \\(\\lambda\\) is a trade-off parameter, and \\(f(\\cdot)\\) is a regularization function. We adopt regularization on the to-be-estimated bases in \\(\\hat{\\mathbf{Y}}\\), rather than on \\(\\mathbf{X}\\) as in traditional methods. This will help alleviate destruction of the spatial detail information in the known \\(\\mathbf{Y}\\)3 when representing \\(\\mathbf{X}\\) with it. Footnote 3: Many regularization terms, such as total variation norm, will lead to loss of details like the sharp edge, lines and high light point in the image. It should be noted that for the same data set, the matrices \\(\\mathbf{A}\\), \\(\\mathbf{B}\\) and \\(\\mathbf{C}\\) are fixed. This means that these matrices can be learned from the training data. In the later sections we will show how to learn them with a deep network. 
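As a quick numerical sanity check of the representation in (5) and of the setup behind model (9), the sketch below builds a random rank-\(r\) matrix \(\mathbf{X}\), forms \(\tilde{\mathbf{Y}}=\mathbf{X}\mathbf{R}\) as in (1) without noise, and recovers matrices \(\mathbf{A}\), \(\mathbf{B}\) and extra bases \(\hat{\mathbf{Y}}\) such that \(\mathbf{X}=\tilde{\mathbf{Y}}\mathbf{A}+\hat{\mathbf{Y}}\mathbf{B}\). The least-squares/SVD construction of \(\hat{\mathbf{Y}}\) is only one convenient choice for illustration, and all sizes are assumptions; in MHF-net these quantities are learned instead.

```python
import numpy as np

rng = np.random.default_rng(1)
HW, S, r, s = 500, 31, 6, 3   # assumed sizes with rank(X) = r > s

# Random HrHS matrix X with rank r, and Y_tilde = X R (model (1) without noise).
X = rng.random((HW, r)) @ rng.random((r, S))
R = rng.random((S, s))
Y_tilde = X @ R                                   # generically of rank s

# Project X onto the column space of Y_tilde; A is the least-squares coefficient.
A, *_ = np.linalg.lstsq(Y_tilde, X, rcond=None)
residual = X - Y_tilde @ A

# The residual has rank r - s, so r - s extra bases suffice for an exact fit.
U, sv, Vt = np.linalg.svd(residual, full_matrices=False)
Y_hat = U[:, :r - s] * sv[:r - s]                 # extra bases (r - s columns)
B = Vt[:r - s]                                    # their coefficients

print(np.allclose(X, Y_tilde @ A + Y_hat @ B))    # expected: True
```

With noise present, the equality becomes approximate, which is exactly the gap that the regularized model (9) and the final adjustment stage of the network are meant to absorb.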
### Model optimization We now solve (9) using a proximal gradient algorithm [3], which iteratively updates \\(\\hat{\\mathbf{Y}}\\) by calculating \\[\\hat{\\mathbf{Y}}^{(k+1)}=\\arg\\min_{\\hat{\\mathbf{Y}}}Q\\left(\\hat{\\mathbf{Y}},\\hat{\\mathbf{Y}}^{( k)}\\right), \\tag{10}\\] where \\(\\hat{\\mathbf{Y}}^{(k)}\\) is the updating result after \\(k-1\\) iterations, \\(k=1,2,\\cdots,K\\), and \\(Q(\\hat{\\mathbf{Y}},\\hat{\\mathbf{Y}}^{(k)})\\) is a quadratic approximation [3] defined as: \\[\\begin{split} Q\\!\\left(\\!\\hat{\\mathbf{Y}},\\hat{\\mathbf{Y}}^{(k)}\\!\\right) \\!=& g\\left(\\!\\hat{\\mathbf{Y}}^{(k)}\\!\\right)\\!+\\!\\left\\langle \\hat{\\mathbf{Y}}-\\hat{\\mathbf{Y}}^{(k)},\ abla g\\left(\\!\\hat{\\mathbf{Y}}^{(k)}\\!\\right) \\right\\rangle\\\\ &+\\frac{1}{2\\eta}\\left\\|\\hat{\\mathbf{Y}}-\\hat{\\mathbf{Y}}^{(k)}\\right\\|_ {F}^{2}+\\lambda f\\left(\\hat{\\mathbf{Y}}\\right),\\end{split} \\tag{11}\\] where \\(g(\\hat{\\mathbf{Y}}^{(k)})=\\|\\mathbf{C}(\\mathbf{Y}\\mathbf{A}+\\hat{\\mathbf{Y}}^{(k)}\\mathbf{B})-\\mathbf{Z}\\| _{F}^{2}\\) and \\(\\eta\\) plays the role of stepsize. It is easy to prove that the problem (10) is equivalent to: \\[\\min_{\\hat{\\mathbf{Y}}}\\frac{1}{2}\\left\\|\\hat{\\mathbf{Y}}-\\!\\left(\\!\\hat{\\mathbf{Y}}^{(k) }\\!-\\!\\eta\ abla g\\left(\\hat{\\mathbf{Y}}^{(k)}\\!\\right)\\!\\right)\\right\\|_{F}^{2} \\!\\!+\\!\\lambda\\eta f\\left(\\hat{\\mathbf{Y}}\\right). \\tag{12}\\] For many kinds of regularization terms, the solution of Eq. (12) is usually in a closed-form [8], written as: \\[\\hat{\\mathbf{Y}}^{(k+1)}=\\text{prox}_{\\lambda\\eta}\\left(\\!\\hat{\\mathbf{Y}}^{(k)}\\!-\\! \\eta\ abla g\\left(\\hat{\\mathbf{Y}}^{(k)}\\right)\\!\\right). \\tag{13}\\] Since \\(\ abla g\\left(\\hat{\\mathbf{Y}}^{(k)}\\right)=\\mathbf{C}^{T}\\!\\left(\\mathbf{C}\\left(\\mathbf{Y} \\mathbf{A}+\\hat{\\mathbf{Y}}^{(k)}\\mathbf{B}\\right)\\!-\\!\\mathbf{Z}\\right)\\!\\mathbf{B}^{T}\\), we can obtain the final updating rule for \\(\\hat{\\mathbf{Y}}\\): \\[\\hat{\\mathbf{Y}}^{(k+1)}\\!=\\!\\text{prox}_{\\lambda\\eta}\\!\\left(\\!\\hat{\\mathbf{Y}}^{(k) }\\!-\\!\\eta\\mathbf{C}^{T}\\!\\left(\\mathbf{C}\\!\\left(\\mathbf{Y}\\mathbf{A}+\\hat{\\mathbf{Y}}^{(k)}\\! \\mathbf{B}\\right)\\!-\\!\\mathbf{Z}\\right)\\!\\mathbf{B}^{T}\\!\\!\\right)\\!. \\tag{14}\\] In the later section, we will unfold this algorithm into a deep network. ## 4 MS/HS fusion net Based on the above algorithm, we build a deep neural network for MS/HS fusion by unfolding all steps of the algorithm as network layers. This technique has been widely utilized in various computer vision tasks and has been substantiated to be effective in compressed sensing, dehazing, deconvolution, etc. [44, 45, 53]. The proposed network is a structure of \\(K\\) stages implementing \\(K\\) iterations in the iterative algorithm for solving Eq. (9), as shown in Fig. 3 (a) and (b). Each stage takes the HrMS image \\(\\mathbf{Y}\\), LrHS image \\(\\mathbf{Z}\\), and the output of the previous stage \\(\\hat{\\mathbf{Y}}\\), as inputs, and outputs an updated \\(\\hat{\\mathbf{Y}}\\) to be the new input of next layer. 
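Before describing the unfolding, the update rule (14) itself can be written compactly in plain NumPy, as sketched below. The soft-thresholding operator stands in for the proximal mapping of an assumed \(\ell_{1}\)-type regularizer \(f\), and all matrices, the stepsize and the weight are random or hand-picked placeholders; in MHF-net the proximal operator and these parameters are learned from data instead.

```python
import numpy as np

def soft_threshold(M, tau):
    """Proximal operator of tau * ||.||_1 (an illustrative choice for f)."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def prox_grad_step(Y_hat, Y, Z, A, B, C, eta, lam):
    """One update of Eq. (14): a gradient step on the data term, then a prox."""
    grad = C.T @ (C @ (Y @ A + Y_hat @ B) - Z) @ B.T
    return soft_threshold(Y_hat - eta * grad, lam * eta)

# Assumed toy dimensions: HW = 400 HrHS pixels, hw = 25 LrHS pixels,
# S = 31 bands, s = 3 MS bands, r = 6 (so r - s = 3 unknown bases).
rng = np.random.default_rng(2)
HW, hw, S, s, r = 400, 25, 31, 3, 6
Y = rng.random((HW, s)); Z = rng.random((hw, S))
A = rng.random((s, S)); B = rng.random((r - s, S))
C = rng.random((hw, HW)) / HW            # crude nonnegative stand-in for the downsampler

Y_hat = np.zeros((HW, r - s))
for k in range(50):
    Y_hat = prox_grad_step(Y_hat, Y, Z, A, B, C, eta=0.5, lam=1e-3)

print(np.linalg.norm(C @ (Y @ A + Y_hat @ B) - Z))  # data-fit residual after 50 steps
```

Each network stage in the next subsection mimics exactly one such step, with the matrix multiplications, the downsampling/upsampling, and the proximal mapping replaced by learnable layers.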
### Network design **Algorithm unfolding.** We first decompose the updating rule (14) into the following four sequential parts: \\[\\mathbf{X}^{(k)}=\\mathbf{Y}\\mathbf{A}+\\hat{\\mathbf{Y}}^{(k)}\\mathbf{B}, \\tag{15}\\] \\[\\mathbf{E}^{(k)}=\\mathbf{C}\\mathbf{X}^{(k)}-\\mathbf{Z}, \\tag{16}\\] \\[\\mathbf{G}^{(k)}=\\eta\\mathbf{C}^{T}\\mathbf{E}^{(k)}\\mathbf{B}^{T}, \\tag{17}\\] \\[\\hat{\\mathbf{Y}}^{(k+1)}=\\text{prox}_{\\lambda\\eta}\\left(\\hat{\\mathbf{Y}}^{(k)}-\\mathbf{G}^ {(k)}\\right). \\tag{18}\\] In the network framework, we use the images with their tensor formulations (\\(\\mathcal{X}\\in\\mathbb{R}^{H\\times W\\times S}\\), \\(\\mathcal{Y}\\in\\mathbb{R}^{H\\times W\\times s}\\) and \\(\\mathcal{Z}\\in\\mathbb{R}^{h\\times w\\times S}\\)) instead of their matrix forms to protect their original structure knowledge and make the network structure (in tensor form) easily designed. We then design a network to approximately perform the above operations in tensor version. Refer to Fig. 2 for easy understanding. In tensor version, Eq. (15) can be easily performed by the two multiplications between a tensor and a matrix along the \\(3^{rd}\\) mode of the tensor. Specifically, in the TensorFlow4 framework, multiplying \\(\\mathcal{Y}\\in\\mathbb{R}^{H\\times W\\times s}\\) with matrix \\(\\mathbf{A}\\in\\mathbb{R}^{s\\times S}\\) along the channel mode can be easily performed by using the 2D convolution function with a \\(1\\times 1\\times s\\times S\\) kernel tensor \\(\\mathcal{A}\\). \\(\\hat{\\mathcal{Y}}\\) and \\(\\mathbf{B}\\) can be multiplied similarly. In summary, we can perform the tensor version of (15) by: Footnote 4: [https://tensorflow.google.cn/](https://tensorflow.google.cn/) \\[\\mathcal{X}^{(k)}=\\mathcal{Y}\\times_{3}\\mathbf{A}^{T}+\\hat{\\mathcal{Y}}^{(k)} \\times_{3}\\mathbf{B}^{T}, \\tag{19}\\] Figure 2: An illustration of relationship between the algorithm with matrix form and the network structure with tensor form. where \\(\\times_{3}\\) denotes the mode-3 Multiplication for tensor5. Footnote 5: For a tensor \\(\\mathcal{U}\\in\\mathbb{R}^{I\\times J\\times K}\\) with \\(u_{ijk}\\) as its elements, and \\(\\mathbf{V}\\in\\mathbb{R}^{K\\times L}\\) with \\(v_{kl}\\) as its elements, let \\(\\mathcal{W}=\\mathcal{U}\\times_{3}\\mathbf{V}\\), the elements of \\(\\mathcal{W}\\) are \\(w_{ijl}=\\sum_{k=1}^{K}u_{ijk}v_{lk}\\). Besides, \\(\\mathcal{W}=\\mathcal{U}\\times_{3}\\mathbf{V}\\Leftrightarrow\\mathbf{W}=\\mathbf{U}\\mathbf{V}^{T}\\). In Eq. (16), the matrix \\(\\mathbf{C}\\) represents the spatial downsampling operator, which can be decomposed into 2D convolutions and down-sampling operators [12, 24, 25]. Thus, we perform the tensor version of (16) by: \\[\\mathcal{E}^{(k)}=\\text{downSample}_{\\theta_{d}^{(k)}}\\left(\\mathcal{X}^{(k)} \\right)-\\mathcal{Z}, \\tag{20}\\] where \\(\\mathcal{E}^{(k)}\\) is an \\(h\\times w\\times S\\) tensor, \\(\\text{downSample}_{\\theta_{d}^{(k)}}(\\cdot)\\) is the downsampling network consisting of 2D channel-wise convolutions and average pooling operators, and \\(\\theta_{d}^{(k)}\\) denotes filters involved in the operator at the \\(k^{th}\\) stage of network. In Eq. (17), the transposed matrix \\(\\mathbf{C}^{T}\\) represents a spatial upsampling operator. This operator can be easily performed by exploiting the 2D transposed convolution [9], which is the transposition of the combination of convolution and downsampling operator. 
By exploiting the 2D transposed convolution with filter in the same size with the one used in (20), we can approach (17) in the network by: \\[\\mathcal{G}^{(k)}=\\eta\\cdot\\text{upSample}_{\\theta_{u}^{(k)}}\\left(\\mathcal{E }^{(k)}\\right)\\times_{3}\\mathbf{B}, \\tag{21}\\] where \\(\\mathcal{G}^{(k)}\\in\\mathbb{R}^{H\\times W\\times S}\\), \\(\\text{upSample}_{\\theta_{u}^{(k)}}(\\cdot)\\) is the spacial upsampling network consisting of transposed convolutions and \\(\\theta_{u}^{(k)}\\) denotes the corresponding filters in the \\(k^{th}\\) stage. In Eq. (18), \\(\\text{prox}(\\cdot)\\) is a to-be-decided proximal operator. We adopt the deep residual network (ResNet) [13] to learn this operator. We then represent (18) in our network as: \\[\\hat{\\mathcal{Y}}^{(k+1)}=\\text{proxNet}_{\\theta_{p}^{(k)}}\\left(\\hat{ \\mathcal{Y}}^{(k)}-\\mathcal{G}^{(k)}\\right), \\tag{22}\\] where \\(\\text{proxNet}_{\\theta_{p}^{(k)}}(\\cdot)\\) is a ResNet which represents the proximal operator in our algorithm and the parameters involved in the ResNet at the \\(k^{th}\\) stage are denoted by \\(\\theta_{p}^{(k)}\\). With Eq. (19)-(22), we can now construct the stages in the proposed network. Fig. 3 (b) shows the flowchart of a single stage of the proposed network. **Normal stage.** In the first stage, we simply set \\(\\hat{\\mathcal{Y}}^{(1)}=\\mathbf{0}\\). By exploiting (19)-(22), we can obtain the first network stage as shown in Fig. 3 (c). Fig. 3 (d) shows the \\(k^{th}\\) stage (\\(1<k<K\\)) of the network obtained by utilizing (19)-(22). **Final stage.** As shown in Fig. 3(e), in the final stage, we can approximately generate the HrHS image by (19). Note that \\(\\mathbf{X}^{(K)}\\) (the unfolding matrix of \\(\\mathcal{X}^{(K)}\\)) has been intrinsically encoded with low-rank structure. Moreover, according to **Theorem 1**, there exists an \\(\\mathbf{R}\\in\\mathbb{R}^{S\\times s}\\), s.t., \\(\\mathbf{Y}=\\mathbf{X}^{(K)}\\mathbf{R}\\), which satisfies the observation model (1). However, HrMS images \\(\\mathcal{Y}\\) are usually corrupted with slight noise in reality, and there is a little gap between the low rank assumption and the real situation. This implies that \\(\\mathbf{X}^{(K)}\\) is not exactly equivalent to the to-be-estimated HrHS image. Therefore, as shown in Fig. 3 (e), in the final stage of the network, we add a ResNet on \\(\\mathcal{X}^{(K)}\\) to adjust the gap between the to-be-estimated HrHS image and the \\(\\mathbf{X}^{(K)}\\): \\[\\hat{\\mathcal{X}}=\\text{resNet}_{\\theta_{r}}\\left(\\mathcal{X}^{(K)}\\right). \\tag{23}\\] In this way, we design an end-to-end training architecture, dubbed as HSI fusion net. We denote the entire MS/HS fusion net as \\(\\hat{\\mathcal{X}}=\\text{MHFnet}\\left(\\mathcal{Y},\\mathcal{Z},\\Theta\\right)\\), where \\(\\Theta\\) represents Figure 3: (a) The proposed network with \\(K\\) stages implementing \\(K\\) iterations in the iterative optimization algorithm, where the \\(k^{th}\\) stage is denoted as \\(\\mathcal{S}_{k},(k=1,2,\\cdots,K)\\). (b) The flowchart of \\(k^{th}\\) (\\(k<K\\)) stage. (c)-(e) Illustration of the first, \\(k^{th}\\) (\\(1<k<K\\)) and final stage of the proposed network, respectively. When setting \\(\\hat{\\mathcal{Y}}^{(k)}=\\mathbf{0}\\), \\(\\mathcal{S}_{k}\\) is equivalent to \\(\\mathcal{S}_{1}\\). 
all the parameters involved in the network, including \\(\\mathbf{A}\\), \\(\\mathbf{B}\\), \\(\\{\\theta_{d}^{(k)},\\theta_{u}^{(k)},\\theta_{p}^{(k)}\\}_{k=1}^{K-1}\\), \\(\\theta_{d}^{(K)}\\) and \\(\\theta_{r}\\). Please refer to supplementary material for more details of the network design. ### Network training **Training loss.** As shown in Fig. 3 (e), the training loss for each training image is defined as following: \\[L=\\|\\hat{\\mathcal{X}}-\\mathcal{X}\\|_{F}^{2}+\\alpha\\sum\ olimits_{k=1}^{K}\\| \\mathcal{X}^{(k)}-\\mathcal{X}\\|_{F}^{2}+\\beta\\|\\mathcal{E}^{(K)}\\|_{F}^{2}, \\tag{24}\\] where \\(\\hat{\\mathcal{X}}\\) and \\(\\mathcal{X}^{(k)}\\) are the final and per-stage outputs of the proposed network, \\(\\alpha\\) and \\(\\beta\\) are two trade-off parameters6. The first term is the pixel-wise \\(L_{2}\\) distance between the output of the proposed network and the ground truth \\(\\mathcal{X}\\), which is the main component of our loss function. The second term is the pixel-wise \\(L_{2}\\) distance between the output \\(\\mathcal{X}^{(k)}\\) and the ground truth \\(\\mathcal{X}\\) in each stage. This term helps find the correct parameters in each stage, since appropriate \\(\\hat{\\mathcal{Y}}^{(k)}\\) would lead to \\(\\mathcal{X}^{(k)}\\approx\\mathcal{X}\\). The final term is the pixel-wise \\(L_{2}\\) distance of the residual of observation model (2) for the final stage of the network. Footnote 6: We set \\(\\alpha\\) and \\(\\beta\\) with small values (\\(0.1\\) and \\(0.01\\), respectively) in all experiments, to make the first term play a dominant role. **Training data.** For simulation data and real data with available ground-truth HrHS images, we can easily use the paired training data \\(\\{(\\mathcal{Y}_{n},\\mathcal{Z}_{n}),\\mathcal{X}_{n}\\}_{n=1}^{N}\\) to learn the parameters in the proposed MHF-net. Unfortunately, for real data, HrHS images \\(\\mathcal{X}_{n}\\)s are sometimes unavailable. In this case, we use the method proposed in [30] to address this problem, where the Wald protocol [50] is used to create the training data as shown in Fig. 4. We downsample both HrMS images and LrHS images, so that the original LrHS images can be taken as references for the downsampled data. Please refer to supplementary material for more details. **Implementation details.** We implement and train our network using TensorFlow framework. We use Adam optimizer to train the network for 50000 iterations with a batch size of 10 and a learning rate of 0.0001. The initializations of the parameters and other implementation details are listed in supplementary materials. ## 5 Experimental results We first conduct simulated experiments to verify the mechanism of MHF-net quantitatively. Then, experimental results on simulated and real data sets are demonstrated to evaluate the performance of MHF-net. **Evaluation measures.** Five quantitative picture quality indices (PQI) are employed for performance evaluation, including peak signal-to-noise ratio (PSNR), spectral angle mapper (SAM) [49], errour relative globale adimensionnelle de synthese (ERGAS [38]), structure similarity (SSIM [39]), feature similarity (FSIM [51]). SAM calculates the average angle between spectrum vectors of the target MSI and the reference one across all spatial positions and ERGAS measures fidelity of the restored image based on the weighted sum of MSE in each band. PSNR, SSIM and FSIM are conventional PQIs. 
They evaluate the similarity between the target and the reference images based on MSE and structural consistency, perceptual consistency, respectively. The smaller ERGAS and SAM are, and the larger PSNR, SSIM and FSIM are, the better the fusion result is. ### Model verification with CAVE data To verify the efficiency of the proposed MHF-net, we first compare the performance of MHF-net with different settings on the CAVE Multispectral Image Database [46]7. The database consists of 32 scenes with spatial size of \\(512\\times 512\\), including full spectral resolution reflectance data from 400nm to 700nm at 10nm steps (31 bands in total). We generate the HrMS image (RGB image) by integrating all the ground truth HrHS bands with the same simulated spectral response \\(\\mathbf{R}\\), and generate the LrHS images via downsampling the ground-truth with a factor of \\(32\\) implemented by averaging over \\(32\\times 32\\) pixel blocks as [2, 16]. Footnote 7: [http://www.cs.columbia.edu/CAVE/databases/](http://www.cs.columbia.edu/CAVE/databases/) To prepare samples for training, we randomly select \\(20\\) HS images from CAVE database and extract \\(96\\times 96\\) overlapped patches from them as reference HrHS images for training. Then the utilized HrHS, HrMS and LrHS images are of size \\(96\\times 96\\times 31\\), \\(96\\times 96\\times 3\\) and \\(3\\times 3\\times 31\\), respectively. The remaining \\(12\\) HS images of the database are used for validation, where the original images are treated as ground truth HrHS images, and the HrMS and LrHS images are generated similarly as the training samples. We compare the performance of the proposed MHF-net under different stage number \\(K\\). In order to make the competition fair, we adjust the level number \\(L\\) of the ResNet used in proxNet\\({}_{\\theta_{p}^{(k)}}\\) for each situation, so that the total level number of the network in each setting is similar to each other. Moreover, to better verify the efficiency of the proposed network, we implement another network for competition, which only uses the ResNet in (22) and (23) without using other structures in MHF-net. This method is simply denoted as \"ResNet\". In this method, we set the input as \\([\\mathcal{Y},\\mathcal{Z}_{up}]\\), where \\(\\mathcal{Z}_{up}\\) is obtained by interpolating the LrHS image \\(\\mathcal{Z}\\) (using a bicubic filter) to the dimension of \\(\\mathcal{Y}\\) as [28] did. We set the level number of ResNet to be 30. Figure 4: Illustration of how to create the training data when HrHS images are unavailable. Table 1 shows the average results over 12 testing HS images of two DL methods in different settings. We can observe that MHF-net with more stages, even with fewer net levels in total, can significantly lead to better performance. We can also observe that the MHF-net can achieve better results than ResNet (about 5db in PSNR), while the main difference between MHF-net and ResNet is our proposed stage structure in the network. These results show that the proposed stage structure in MHF-net, which introduces interpretability specifically to the problem, can indeed help enhance the performance of MS/HS fusion. ### Experiments with simulated data We then evaluate MHF-net on simulated data in comparison with state-of-art methods. **Comparison methods.** The comparison methods include: FUSE [41]8, ICCV15 [18]9, GLP-HS [31]10, SFIM-HS [19]10, GSA [1]10, CNMF [48]11, M-FUSE [40]12 and SASFM [14]13, representing the state-of-the-art traditional methods. 
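All methods are scored with the five PQIs introduced above. For reproducibility, a minimal NumPy sketch of the two less standard indices, SAM and ERGAS, is given below; the averaging conventions and the scale-ratio constant in ERGAS vary slightly across the literature, so this should be read as an assumed implementation rather than the exact code used to produce the tables here.

```python
import numpy as np

def sam(ref, est, eps=1e-12):
    """Spectral angle mapper (degrees), averaged over all spatial positions.
    ref, est: arrays of shape (H, W, S)."""
    r = ref.reshape(-1, ref.shape[-1])
    e = est.reshape(-1, est.shape[-1])
    cos = (r * e).sum(1) / (np.linalg.norm(r, axis=1) * np.linalg.norm(e, axis=1) + eps)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))).mean()

def ergas(ref, est, ratio):
    """ERGAS with spatial resolution ratio = (low-res pixel count) / (high-res), e.g. 1/32."""
    r = ref.reshape(-1, ref.shape[-1])
    e = est.reshape(-1, est.shape[-1])
    rmse_per_band = np.sqrt(((r - e) ** 2).mean(axis=0))
    mean_per_band = r.mean(axis=0)
    return 100.0 * ratio * np.sqrt(((rmse_per_band / mean_per_band) ** 2).mean())

# Toy usage with random data (assumed shapes only).
rng = np.random.default_rng(3)
ref = rng.random((96, 96, 31))
est = ref + 0.05 * rng.standard_normal(ref.shape)
print(f"SAM = {sam(ref, est):.3f} deg, ERGAS = {ergas(ref, est, ratio=1/32):.3f}")
```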
In addition to these traditional methods, we also compare the proposed MHF-net with the implemented ResNet method. Footnote 8: [http://wei.perso.enseeiht.fr/publications.html](http://wei.perso.enseeiht.fr/publications.html) Footnote 9: [https://github.com/lenhse/SupResPALM](https://github.com/lenhse/SupResPALM) Footnote 10: [http://openremotesensing.net/Knowledgebase/hyperspectral-and-multispectral-data-fusion/](http://openremotesensing.net/Knowledgebase/hyperspectral-and-multispectral-data-fusion/) Footnote 11: [http://naotoyokoya.com/Download.html](http://naotoyokoya.com/Download.html) Footnote 12: [https://github.com/wp245/BlindFuse](https://github.com/wp245/BlindFuse) Footnote 13: We write the code by ourselves. **Performance comparison with CAVE data.** With the same experimental setting as in the previous section, we compare the performance of all competing methods on the 12 testing HS images (\(K=13\) and \(L=2\) in MHF-net). Table 2 lists the average performance over all testing images of all comparison methods. From the table, it is seen that the proposed MHF-net method can significantly outperform the other competing methods with respect to all evaluation measures. Fig. 5 shows the 10th band (490 nm) of the HS image _chart and stuffed toy_ obtained by the competing methods. It is easy to observe that the proposed method performs better than the other competing ones, in the better recovery of both finer-grained textures and coarser-grained structures. More results are depicted in the supplementary material. **Performance comparison with Chikusei data.** The Chikusei data set [47]14 is an airborne HS image taken over Chikusei, Ibaraki, Japan, on 29 July 2014. The data set is of size \(2517\times 2335\times 128\) with the spectral range from 0.36 to 1.018 \(\mu\)m. We view the original data as the HrHS image and simulate the HrMS (RGB image) and LrHS (with a factor of 32) image in a similar way as in the previous section. Footnote 14: [http://naotoyokoya.com/Download.html](http://naotoyokoya.com/Download.html) We select a \(500\times 2210\)-pixel-size image from the top area of the original data for training, and extract \(96\times 96\) overlapped patches from the training data as reference HrHS images for training. The input HrHS, HrMS and
\\begin{table} \\begin{tabular}{c c c c c c} \\hline \\hline & PSNR & SAM & ERGAS & SSIM & FSIM \\\\ \\hline FUSE & 30.95 & 13.07 & 188.72 & 0.842 & 0.933 \\\\ ICCV15 & 32.94 & 10.18 & 131.94 & 0.919 & 0.961 \\\\ GLP-HS & 33.07 & 11.58 & 126.04 & 0.891 & 0.942 \\\\ SFIM-HS & 31.86 & 7.63 & 147.41 & 0.914 & 0.932 \\\\ GSA & 33.78 & 11.56 & 122.50 & 0.884 & 0.959 \\\\ CNMF & 33.59 & 8.22 & 122.12 & 0.929 & 0.964 \\\\ M-FUSE & 32.11 & 8.82 & 151.97 & 0.914 & 0.947 \\\\ SASFM & 26.59 & 11.25 & 362.70 & 0.799 & 0.916 \\\\ ResNet & 32.25 & 16.14 & 141.28 & 0.865 & 0.966 \\\\ MHF-net & **37.23** & **7.30** & **81.87** & **0.962** & **0.976** \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 2: Average performance of the competing methods over 12 testing images of CAVE date set with respect to 5 PQIs. Figure 5: (a) The simulated RGB (HrMS) and LrHS (left bottom) images of _chart and staffed toy_, where we display the 10th (490nm) band of the HS image. (b) The ground-truth HrHS image. (c)-(l) The results obtained by 10 comparison methods, with two demarcated areas zoomed in 4 times for easy observation. LrHS samples are of sizes \\(96\\times 96\\times 128\\), \\(96\\times 96\\times 3\\) and \\(3\\times 3\\times 128\\), respectively. Besides, from remaining part of the original image, we extract 16 non-overlap \\(448\\times 544\\times 128\\) images as testing data. More details about the experimental setting are introduced in supplementary material. Table 3 shows the average performance over 16 testing images of all competing methods. It is easy to observe that the proposed method significantly outperforms other methods with respect to all evaluation measures. Fig. 6 shows the composite images of a test sample obtained by the competing methods, with bands 70-100-36 as R-G-B. It is seen that the composite image obtained by MHF-net is closest to the ground-truth, while the results of other methods usually contain obvious incorrect structure or spectral distortion. More results are listed in supplementary material. ### Experiments with real data In this section, sample images of _Roman Colosseum_ acquired by World View-2 (WV-2) are used in our experiments15. This data set contains an HrMS image (RGB image) of size \\(1676\\times 2632\\times 3\\) and an LrHS image of size \\(419\\times 658\\times 8\\), while the HrHS image is not available. We select the top half part of the HrMS (\\(836\\times 2632\\times 3\\)) and LrHS (\\(209\\times 658\\times 8\\)) image to train the MHF-net, and exploit the remaining parts of the data set as testing data. We first extract the training data into \\(144\\times 144\\times 3\\) overlapped HrMS patches and \\(36\\times 36\\times 3\\) overlapped LrHS patches and then generate the training samples by the method as shown in Fig. 4. The input HrHS, HrMS and LrHS samples are of size \\(36\\times 36\\times 8\\), \\(36\\times 36\\times 3\\) and \\(9\\times 9\\times 8\\), respectively. Footnote 15: [https://www.harrisgeospatial.com/DataImagery/SatelliteImagery/HighResolution/WorldView-2.aspx](https://www.harrisgeospatial.com/DataImagery/SatelliteImagery/HighResolution/WorldView-2.aspx) Fig. 6 shows a portion of the fusion result of the testing data (left bottom area of the original image). Visual inspection evidently shows that the proposed method gives the better visual effect. By comparing with the results of ResNet, we can find that the results of both methods are clear, but the color and brightness of result of the proposed method are much closer to the LrHS image. 
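Since no HrHS ground truth exists for this real data set, the training pairs above are generated with the Wald protocol described in Fig. 4: both observed images are further downsampled so that the original LrHS patch can serve as the reference. A minimal sketch of this pair-generation step is given below; the factor-4 scale gap and the plain block-average downsampling are assumptions for illustration, while the patch shapes follow the sizes reported in the text.

```python
import numpy as np

def block_downsample(img, factor):
    """Spatially downsample an (H, W, C) image by averaging factor x factor blocks."""
    H, W, C = img.shape
    return img.reshape(H // factor, factor, W // factor, factor, C).mean(axis=(1, 3))

def wald_training_pair(hrms_patch, lrhs_patch, factor=4):
    """Create one training triplet (input MS, input HS, reference HS) when no
    ground-truth HrHS exists: downsample both inputs and keep the original
    LrHS patch as the reference (Wald protocol, Fig. 4)."""
    ms_in = block_downsample(hrms_patch, factor)   # e.g., 144x144x3 -> 36x36x3
    hs_in = block_downsample(lrhs_patch, factor)   # e.g., 36x36x8  -> 9x9x8
    reference = lrhs_patch                          # original LrHS as reference
    return ms_in, hs_in, reference

# Toy usage with random patches matching the sizes reported in the text.
rng = np.random.default_rng(4)
hrms_patch = rng.random((144, 144, 3))
lrhs_patch = rng.random((36, 36, 8))
ms_in, hs_in, ref = wald_training_pair(hrms_patch, lrhs_patch)
print(ms_in.shape, hs_in.shape, ref.shape)   # (36, 36, 3) (9, 9, 8) (36, 36, 8)
```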
Figure 6: (a) The simulated RGB (HrMS) and LrHS (left bottom) images of a test sample in Chikusei data set. We show the composite image of the HS image with bands 70-100-36 as R-G-B. (b) The ground-truth HrHS image. (c)-(l) The results obtained by 10 comparison methods, with a demarcated area zoomed in 4 times for easy observation. \\begin{table} \\begin{tabular}{c c c c c c} \\hline \\hline & PSNR & SAM & ERGAS & SSIM & FSIM \\\\ \\hline FUSE & 26.59 & 7.92 & 272.43 & 0.718 & 0.860 \\\\ ICCV15 & 27.77 & 3.98 & 178.14 & 0.779 & 0.870 \\\\ GLP-HS & 28.85 & 4.17 & 163.60 & 0.796 & 0.903 \\\\ SFIM-HS & 28.50 & 4.22 & 167.85 & 0.793 & 0.900 \\\\ CSA & 27.08 & 5.39 & 238.63 & 0.673 & 0.835 \\\\ CNAMF & 28.78 & 3.84 & 173.41 & 0.780 & 0.898 \\\\ M-FUSE & 24.85 & 6.62 & 282.02 & 0.642 & 0.849 \\\\ SASFM & 24.93 & 7.95 & 369.35 & 0.636 & 0.845 \\\\ ResNet & 29.35 & 3.69 & 144.12 & 0.866 & 0.930 \\\\ MHF-net & **32.26** & **3.02** & **109.55** & **0.890** & **0.946** \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 3: Average performance of the competing methods over 16 testing samples of Chikusei data set with respect to 5 PQIs. Figure 7: (a) and (b) are the HrMS (RGB) and LrHS images of the left bottom area of _Roman Colosseum_ acquired by World View-2 (WV-2). We show the composite image of the HS image with bands 5-3-2 as R-G-B. (c)-(l) The results obtained by 10 comparison methods, with a demarcated area zoomed in 5 times for easy observation. ## 6 Conclusion In this paper, we have provided a new MS/HS fusion network. The network takes the advantage of deep learning that all parameters can be learned from the training data with fewer prior pre-assumptions on data, and furthermore takes into account the generation mechanism underlying the MS/HS fusion data. This is achieved by constructing a new MS/HS fusion model based on the observation models, and unfolding the algorithm into an optimization-inspired deep network. The network is thus specifically interpretable to the task, and can help discover the spatial and spectral response operators in a purely end-to-end manner. Experiments implemented on simulated and real MS/HS fusion cases have substantiated the superiority of the proposed MHF-net over the state-of-the-art methods. ## References * [1] B. Aiazzi, S. Baronti, and M. Selva. Improving component substitution pansharpening through multivariate regression of ms + pan data. _IEEE Transactions on Geoscience and Remote Sensing_, 45(10):3230-3239, 2007. * [2] N. Akhtar, F. Shafait, and A. Mian. Sparse spatio-spectral representation for hyperspectral image super-resolution. In _European Conference on Computer Vision_, pages 63-78. Springer, 2014. * [3] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. _SIAM journal on imaging sciences_, 2(1):183-202, 2009. * [4] P. J. Burt and E. H. Adelson. The laplacian pyramid as a compact image code. In _Readings in Computer Vision_, pages 671-679. Elsevier, 1987. * [5] P. Chavez, S. C. Sides, J. A. Anderson, et al. Comparison of three different methods to merge multiresolution and multispectral data- landsat tm and spot panchromatic. _Photogrammetric Engineering and remote sensing_, 57(3):295-303, 1991. * [6] M. N. Do and M. Vetterli. The contourlet transform: an efficient directional multiresolution image representation. _IEEE Transactions on image processing_, 14(12):2091-2106, 2005. * [7] C. Dong, C. C. Loy, K. He, and X. Tang. Image super-resolution using deep convolutional networks. 
_IEEE transactions on pattern analysis and machine intelligence_, 38(2):295-307, 2016. * [8] D. L. Donoho. De-noising by soft-thresholding. _IEEE transactions on information theory_, 41(3):613-627, 1995. * [9] V. Dumoulin and F. Visin. A guide to convolution arithmetic for deep learning. _arXiv preprint arXiv:1603.07285_, 2016. * [10] M. Fauvel, Y. Tarabalka, J. A. Benediktsson, J. Chanussot, and J. C. Tilton. Advances in spectral-spatial classification of hyperspectral images. _Proceedings of the IEEE_, 101(3):652-675, 2013. * [11] C. Grohnfeldt, X. Zhu, and R. Bamler. Jointly sparse fusion of hyperspectral and multispectral imagery. In _IGARSS_, pages 4090-4093, 2013. * [12] R. C. Hardie, M. T. Eismann, and G. L. Wilson. Map estimation for hyperspectral image resolution enhancement using an auxiliary sensor. _IEEE Transactions on Image Processing_, 13(9):1174-1184, 2004. * [13] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 770-778, 2016. * [14] B. Huang, H. Song, H. Cui, J. Peng, and Z. Xu. Spatial and spectral image fusion using sparse matrix factorization. _IEEE Transactions on Geoscience and Remote Sensing_, 52(3):1693-1704, 2014. * [15] W. Huang, L. Xiao, Z. Wei, H. Liu, and S. Tang. A new pan-sharpening method with deep neural networks. _IEEE Geoscience and Remote Sensing Letters_, 12(5):1037-1041, 2015. * [16] R. Kawakami, Y. Matsushita, J. Wright, M. Ben-Ezra, Y.-W. Tai, and K. Ikeuchi. High-resolution hyperspectral imaging via matrix factorization. In _Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on_, pages 2329-2336. IEEE, 2011. * [17] C. A. Laben and B. V. Brower. Process for enhancing the spatial resolution of multispectral imagery using pansharpening, Jan. 4 2000. US Patent 6,01,875. * [18] C. Lanaras, E. Baltsavias, and K. Schindler. Hyperspectral super-resolution by coupled spectral unmixing. In _Proceedings of the IEEE International Conference on Computer Vision_, pages 3586-3594, 2015. * [19] J. Liu. Smoothing filter-based intensity modulation: A spectral preserve image fusion technique for improving spatial details. _International Journal of Remote Sensing_, 21(18):3461-3472, 2000. * [20] L. Loncan, L. B. Almeida, J. M. Bioucas-Dias, X. Briottet, J. Chanussot, N. Dobigeon, S. Fabre, W. Liao, G. A. Licciardi, M. Simoes, et al. Hyperspectral pansharpening: A review. _arXiv preprint arXiv:1504.04531_, 2015. * [21] S. G. Mallat. A theory for multiresolution signal decomposition: the wavelet representation. _IEEE transactions on pattern analysis and machine intelligence_, 11(7):674-693, 1989. * [22] G. Masi, D. Cozzolino, L. Verdoliva, and G. Scarpa. Pansharpening by convolutional neural networks. _Remote Sensing_, 8(7):594, 2016. * [23] S. Michel, M.-J. LEFEVRE-FONOLLOSA, and S. HOSFORD. Hypxim-a hyperspectral satellite defined for science, security and defence users. _PAN_, 400(800):400, 2011. * [24] R. Molina, A. K. Katsaggelos, and J. Mateos. Bayesian and regularization methods for hyperparameter estimation in image restoration. _IEEE Transactions on Image Processing_, 8(2):231-246, 1999. * [25] R. Molina, M. Vega, J. Mateos, and A. K. Katsaggelos. Variational posterior distribution approximation in bayesian super resolution reconstruction of multispectral images. _Applied and Computational Harmonic Analysis_, 24(2):251-267, 2008. * [26] Z. H. Nezhad, A. Karami, R. Heylen, and P. Scheunders. 
Fusion of hyperspectral and multispectral images using spectral unmixing and sparse coding. _IEEE Journal of SelectedTopics in Applied Earth Observations and Remote Sensing_, 9(6):2377-2389, 2016. * [27] F. Palsson, J. R. Sveinsson, and M. O. Ulfarsson. A new pansharpening algorithm based on total variation. _IEEE Geoscience and Remote Sensing Letters_, 11(1):318-322, 2014. * [28] F. Palsson, J. R. Sveinsson, and M. O. Ulfarsson. Multispectral and hyperspectral image fusion using a 3-D-convolutional neural network. _IEEE Geoscience and Remote Sensing Letters_, 14(5):639-643, 2017. * [29] Y. Rao, L. He, and J. Zhu. A residual convolutional neural network for pan-shaprening. In _Remote Sensing with Intelligent Processing (RSIP), 2017 International Workshop on_, pages 1-4. IEEE, 2017. * [30] G. Scarpa, S. Vitale, and D. Cozzolino. Target-adaptive cnn-based pansharpening. _IEEE Transactions on Geoscience and Remote Sensing_, (99):1-15, 2018. * [31] M. Selva, B. Aiazzi, F. Butera, L. Chiarantini, and S. Baronti. Hyper-sharpening: A first approach on sim-ga data. _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, 8(6):3008-3024, 2015. * [32] Z. Shao and J. Cai. Remote sensing image fusion with deep convolutional neural network. _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, 11(5):1656-1669, 2018. * [33] J.-L. Starck, J. Fadili, and F. Murtagh. The undecimated wavelet decomposition and its reconstruction. _IEEE Transactions on Image Processing_, 16(2):297-309, 2007. * [34] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 1-9, 2015. * [35] Y. Tarabalka, J. Chanussot, and J. A. Benediktsson. Segmentation and classification of hyperspectral images using minimum spanning forest grown from automatically selected markers. _IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)_, 40(5):1267-1279, 2010. * [36] M. Uzair, A. Mahmood, and A. S. Mian. Hyperspectral face recognition using 3d-dct and partial least squares. In _BMVC_, 2013. * [37] H. Van Nguyen, A. Banerjee, and R. Chellappa. Tracking via object reflectance using a hyperspectral video camera. In _Computer Vision and Pattern Recognition Workshops (CVPRW), 2010 IEEE Computer Society Conference on_, pages 44-51. IEEE, 2010. * [38] L. Wald. _Data Fusion: Definitions and Architectures: Fusion of Images of Different Spatial Resolutions_. Presses des IEcole MINES, 2002. * [39] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. _IEEE Trans. Image Processing_, 13(4):600-612, 2004. * [40] Q. Wei, J. Bioucas-Dias, N. Dobigeon, J.-Y. Tourneret, and S. Godsill. Blind model-based fusion of multi-band and panchromatic images. In _Multisensor Fusion and Integration for Intelligent Systems (MFI), 2016 IEEE International Conference on_, pages 21-25. IEEE, 2016. * [41] Q. Wei, N. Dobigeon, and J.-Y. Tourneret. Fast fusion of multi-band images based on solving a sylvester equation. _IEEE Transactions on Image Processing_, 24(11):4109-4121, 2015. * [42] Y. Wei and Q. Yuan. Deep residual learning for remote sensor imagery pansharpening. In _Remote Sensing with Intelligent Processing (RSIP), 2017 International Workshop on_, pages 1-4. IEEE, 2017. * [43] Y. Wei, Q. Yuan, H. Shen, and L. Zhang. 
Boosting the accuracy of multispectral image pansharpening by learning a deep residual network. _IEEE Geosci. Remote Sens. Lett_, 14(10):1795-1799, 2017. * [44] D. Yang and J. Sun. Proximal dehaze-net: A prior learning-based deep network for single image dehazing. In _Proceedings of the European Conference on Computer Vision (ECCV)_, pages 702-717, 2018. * [45] Y. Yang, J. Sun, H. Li, and Z. Xu. ADMM-Net: A deep learning approach for compressive sensing MRI. _arXiv preprint arXiv:1705.06869_, 2017. * [46] F. Yasuma, T. Mitsunaga, D. Iso, and S. K. Nayar. Generalized assorted pixel camera: postcapture control of resolution, dynamic range, and spectrum. _IEEE transactions on image processing_, 19(9):2241-2253, 2010. * [47] N. Yokoya, C. Grohnfeldt, and J. Chanussot. Hyperspectral and multispectral data fusion: A comparative review of the recent literature. _IEEE Geoscience and Remote Sensing Magazine_, 5(2):29-56, 2017. * [48] N. Yokoya, T. Yairi, and A. Iwasaki. Coupled non-negative matrix factorization (CNMF) for hyperspectral and multispectral data fusion: Application to pasture classification. In _Geoscience and Remote Sensing Symposium (IGARSS), 2011 IEEE International_, pages 1779-1782. IEEE, 2011. * [49] R. H. Yuhas, J. W. Boardman, and A. F. Goetz. Determination of semi-arid landscape endmembers and seasonal trends using convex geometry spectral unmixing techniques. 1993. * [50] Y. Zeng, W. Huang, M. Liu, H. Zhang, and B. Zou. Fusion of satellite images in urban area: Assessing the quality of resulting images. In _Geoinformatics, 2010 18th International Conference on_, pages 1-4. IEEE, 2010. * [51] L. Zhang, L. Zhang, X. Mou, and D. Zhang. FSIM: a feature similarity index for image quality assessment. _IEEE Trans. Image Processing_, 20(8):2378-2386, 2011. * [52] Y. Zhang, Y. Wang, Y. Liu, C. Zhang, M. He, and S. Mei. Hyperspectral and multispectral image fusion using CNMF with minimum endmember simplex volume and abundance sparsity constraints. In _Geoscience and Remote Sensing Symposium (IGARSS), 2015 IEEE International_, pages 1929-1932. IEEE, 2015. * [53] J. Zhang, J. Pan, W.-S. Lai, R. W. Lau, and M.-H. Yang. Learning fully convolutional networks for iterative non-blind deconvolution. 2017. * [54] Y. Zhao, J. Yang, Q. Zhang, L. Song, Y. Cheng, and Q. Pan. Hyperspectral imagery super-resolution by sparse representation and spectral regularization. _EURASIP Journal on Advances in Signal Processing_, 2011(1):87, 2011.
Hyperspectral imaging can help better understand the characteristics of different materials, compared with traditional imaging systems. However, only high-resolution multispectral (HrMS) and low-resolution hyperspectral (LrHS) images can generally be captured at video rate in practice. In this paper, we propose a model-based deep learning approach for merging an HrMS image and an LrHS image to generate a high-resolution hyperspectral (HrHS) image. Specifically, we construct a novel MS/HS fusion model which takes into consideration the observation models of the low-resolution images and the low-rankness knowledge along the spectral mode of the HrHS image. We then design an iterative algorithm to solve the model by exploiting the proximal gradient method. By unfolding the designed algorithm, we further construct a deep network, called MS/HS Fusion Net, which learns the proximal operators and model parameters with convolutional neural networks. Experimental results on simulated and real data substantiate the superiority of our method both visually and quantitatively as compared with state-of-the-art methods along this line of research.
## 1 Introduction

The European Commission defines smart cities as _places where traditional networks and services are made more efficient with the use of digital solutions for the benefit of its inhabitants and business_ (European Commission, 2023). The definition also stresses the importance of digital solutions for better management of the city and its mobility network. In recent years there has been a strong push towards the development and implementation of the smart city concept (Kim, 2022), both at the decision-making and scientific levels. Among the various scientific fields concerned with the topic, the synergy between Geomatics, Information and Communication Technologies, and city planning is remarkable (Mortaheb and Jankowski, 2022; Heidari et al., 2022). Mobility management, intended as the ability to move freely and easily between two points of the city, can be identified as one of the most interesting aspects of this emerging paradigm. In this context, the term smart mobility is often used to highlight its importance for the development of a smart city (Surdonja et al., 2020). To foster smart mobility in urban areas, it is essential to ensure an efficient transportation network, which must comprise private vehicular traffic, public transportation and pedestrian mobility. Focusing on pedestrians, the proper management and maintenance of sidewalks, pedestrian areas, arcades, squares and their connections are of great importance. Moreover, in order to ensure physical accessibility to places, specific actions that take into consideration several issues of the public space (Marconcini, 2018) should be considered. One of them is to gain knowledge of the characteristics of the connection ways (e.g., sidewalks) and their correspondence with national standards concerning physical accessibility.

Within this context, it becomes essential to develop solutions that enable rapid and effective data collection. In particular, an up-to-date inventory of relevant information on the networks that enable urban pedestrian mobility is essential for all future planning, maintenance, and broader management actions on the urban structure. An excellent example of data collection is crowdsourcing, which is used in the case of Project Sidewalk (Saha et al., 2019) and Wheelmap (Mobasheri et al., 2017). Both rely on data provided by users who voluntarily contribute to replenishing the project databases. Another data collection method is the performance of manual in-situ measurements by specialized technicians. Other methods, instead, involve the use of artificial intelligence techniques aimed at analysing geospatial data (Hou and Ai, 2020). The collected data are then generally used to create maps for decision-making and as support for route design and the scheduling of maintenance and interventions.

This article focuses on the analysis of geospatial data exploiting artificial intelligence and automated techniques, with attention to pedestrian mobility networks and the city's sidewalks. In the scientific literature, there are various articles dealing with automatic sidewalk recognition and assessment through point cloud analysis (Ai and Tsai, 2016; Hou and Ai, 2020; Halabya and El-Rayes, 2020).
In those cases, the methods are based on modern and standardized urban infrastructure patterns. For example, a curb separating the street from the sidewalk and a different elevation of the two elements are often considered prerequisites. This paper presents a complete and automated method for the characterization of urban areas navigable by pedestrians in historic sites, obtained by analysing and processing a point cloud. A historic urban area has peculiar elements for which it is not always possible to identify standard layouts. Existing methods to locate and assess sidewalks are designed for standard urban areas and are prone to fail when used in historic cities. Historic urban layouts rarely follow a standardized logic; they are often a set of ad hoc solutions that make the historic urban environment unique. Such solutions are commonly the result of successive interventions over time that attempt to blend with the historic urban tissue of which they are part. By carefully observing various historic cities and historic districts of various towns, we noticed one frequent peculiarity: roadways and sidewalks are at the same elevation and are not separated by curbs. Instead, they can be distinguished because they are paved with different materials. Therefore, the procedures of this paper exploit differences in urban pavings. The difference in pavings is exploited by Machine Learning (ML) and Deep Learning (DL) approaches to segment sidewalks on the point cloud and identify their paving material.

The novel method we propose here improves and extends some preliminary results previously communicated, which focused only on some of the steps. Here we describe a complete workflow that, from a given raw data input (a point cloud), produces a very accurate, correctly spatially referenced vector file containing the network of the navigable space for pedestrians in historical sites. Based on that network, routing analyses are then carried out. The method is developed entirely in _Python_, using specific libraries for point cloud management, ML, and DL workflows. The resulting vector file is published on _GitHub_ and used to update the OpenStreetMap (OSM) dataset of the city selected as case study: Sabbioneta, a historic city and UNESCO site located in northern Italy.

The article is organized as follows. Section 2 presents articles related to point cloud processing for urban accessibility management and sidewalk inventory. Section 3 describes the method, focusing on and detailing each step. Section 4 presents the case study identified for the test phase, the results and the discussion. Section 5 is devoted to conclusions and future works.

## 2 Related work

The use of point cloud processing methods for the management of pedestrian mobility in urban areas is a well-investigated topic in the scientific literature. The role of point clouds is manifold: they can be segmented and analysed in order to search for and inventory specific elements of the urban scene; they can be used as support for the generation of navigable routes within the city; and they can assist the physical accessibility assessment of the city. Regarding the detection of urban objects, there is great interest in curb identification, because curbs are the element of separation between roadway and sidewalk. For example, Serna and Marcotegui (2013) focused on the detection of curbs on Mobile Laser Scanning (MLS) point clouds.
They segmented urban objects using range images as well as height and geodesic features. They performed accessibility analysis using geometrical features and accessibility standards. Then, they built an obstacle map for the generation of adaptive itineraries considering wheelchair users. Curb detection and classification were also performed by Ishikawa et al. (2018), who extracted curbs from MLS data and then classified them into two categories: those that allow access to off-road facilities and those that do not. By categorizing the curb types, they also assessed accessibility. The method was based on the analysis of the angles of adjacent points on a scan line; a voting process was then implemented using the surrounding classification results. A method to automatically classify urban ground elements from MLS data was also proposed by Balado et al. (2018). Their method was based on a combination of topological and geometrical analysis. Element classification was based on adjacency analysis and graph comparison. _Road_, _tread_, _riser_, _curb_ and _sidewalk_ were detected to provide valuable data from an accessibility point of view.

Regarding sidewalk inventory and assessment, Ai and Tsai (2016) extracted sidewalks and curb ramps from a combination of images and mobile LiDAR, based on some specific dimensional characteristics of curbs. The sidewalk features (width and slope) were measured and compared with the Americans with Disabilities Act (ADA) requirements, and the resulting data were stored in a GIS layer. Similarly, Hou and Ai (2020) proposed a deep neural network approach to extract and characterize sidewalks from LiDAR data. The stripe-based sidewalk extraction was also able to detect sidewalk geometry features such as width, grade, and cross slope, and compare them with ADA requirements. To support administrations in assessing the existing conditions of sidewalk networks and their compliance with accessibility requirements, Halabya and El-Rayes (2020) used ML, photogrammetry and point cloud processing to extract sidewalk dimensions and conditions. Further approaches to sidewalk detection first detect road boundaries and then apply curb detection methods to separate the road pavement from the roadside (Ma et al., 2018). Regarding sidewalk materials, Hosseini et al. (2022) proposed a method to classify sidewalk materials in photos, based on a DL technique and capable of recognizing several urban fabrics.

The computation of navigable routes in outdoor environments typically includes a first part concerning the classification of urban elements and their accessibility evaluation, followed by the calculation of routes through a pathfinding algorithm. An example of this approach has been presented by Lopez-Pazos et al. (2017) and Balado et al. (2019), who proposed a method for the direct use of MLS point clouds for the generation of paths for pedestrians with different motor skills, also considering possible barriers for people with reduced mobility. The method involved using an already classified point cloud, obstacle refinement, graph modelling and the creation of paths. Similarly, the project presented by Arenas et al. (2016) and by Corso Sarminento and Casals Fernandez (2017) had the main goal of developing a tool to assess the accessibility of public space and compute optimal routes. The tool was developed both on web and mobile phone platforms. The starting point of the work was TLS point clouds analysed through specific algorithms.
The results were stored in specific GIS raster layers and applied to further accessibility studies. Another example, proposed by Luaces et al. (2021), aimed at computing accessible routes by integrating data from multiple sources. In this work, the starting dataset was OSM, which was improved with information extracted from MLS point clouds (_ramps_, _steps_, _pedestrian crossings_). Obstacles and accessibility problems were detected by analysing social network interactions. The computed routes were generated by considering the needs and limited mobility of each individual and were provided to final users through a specifically developed mobile application. Another comprehensive work was proposed by Ning et al. (2022), who converted street view images into land cover maps, identified sidewalks, computed their widths and generated a sidewalk network. Similarly, Hosseini et al. (2022) developed a sidewalk network dataset based on ML and computer vision techniques applied to aerial images.

In contrast to the previously described works, in this paper the investigated urban area is a historic city. Urban object segmentation cannot be based on curb detection, and the peculiar organization of urban elements should be taken into consideration. In light of that, a novel automatic procedure is presented that allows the generation of an accurate and realistic vector network representing the sidewalks of the historic city, including some of their geometric attributes and their paving information. The topic of accessibility management in historic urban environments through the use of point clouds has been covered by a doctoral thesis (Treccani, 2022), from which the idea of the method presented in this paper is derived. Compared with the aforementioned thesis, the method discussed here is more comprehensive and sophisticated. Concretely, the improvement over the previous work is threefold:

* This paper presents the refined and strengthened version of the general workflow, which was never published as a whole, and where each step is carefully developed and commented on. Previously published results described only some steps of the general workflow and were tested on smaller areas.
* Here, the semantic segmentation of the point cloud into sidewalk and road exploits an ML approach, through a Random Forest (RF) classifier, showing higher performance with respect to the previous work.
* Here, paving material segmentation is based on a DL approach applied to the rasterized point cloud, with a reprojection of the predicted values onto the points of the point cloud.

## 3 Materials and method

The methodology (Fig. 1) is composed of three main steps: preparation of the raw data, data processing, and representation of the data for pedestrian mobility purposes. The pre-processing step consists of the subdivision into Regions of Interest (ROIs) and the computation of several features on the point clouds. The processing phase includes a DL segmentation, applied to raster images generated from the ROI point clouds. The predicted values, related to the paving materials composing the ground surfaces, are re-projected back onto the points. Then, following an ML approach, the ROIs are segmented into road and sidewalk. The clusters of points labelled as sidewalk are then used to compute some of the sidewalks' geometric attributes. The representation of the data involves the vectorization of the sidewalk network and the computed attributes.
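The three stages can be summarised as a simple processing loop over the point cloud. The following skeleton is a purely illustrative sketch of such a pipeline; all function names are placeholders and do not correspond to the code released with this work.

```python
# Hypothetical skeleton of the three-stage workflow described above.
# All function names are placeholders, not the released implementation.

def run_pipeline(point_cloud, trajectory, osm_buildings, osm_roads):
    # 1. Data preparation: split the cloud into ROIs and compute point features
    rois = make_rois(point_cloud, trajectory)            # oriented bounding boxes along the trajectory
    rois = [compute_features(roi, trajectory) for roi in rois]

    # 2. Data processing: paving material (DL) and ground element (ML) segmentation
    for roi in rois:
        roi.paving = segment_paving_materials(roi)       # DeepLabv3 on the rasterised ROI
        roi.ground = segment_ground_elements(roi)        # Random Forest: sidewalk vs. road
        roi.attributes = sidewalk_attributes(roi)        # width, slopes, relative elevation, paving

    # 3. Representation: vectorise the sidewalk network and export it
    network = build_sidewalk_network(rois, osm_roads)
    network.to_file("sidewalk_network.shp")              # hypothetical export, e.g. via GeoPandas
    return network
```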
The output data are used for the generation of accurate pedestrian mobility paths, taking into account physical accessibility needs and national regulations.

### 3.1 Data preparation

The input data is an MLS point cloud of a historic urban environment, correctly georeferenced and provided with trajectory data. The point cloud is subdivided into ROIs and point features are computed, as summarized in Table 1.

\begin{table} \begin{tabular}{l l} \hline \hline Features & Values \\ \hline Global features & Intensity, RGB, HSV \\ Geometric features & Roughness, Omnivariance, Sphericity, Anisotropy, Normal change rate, Verticality, Normal vector \\ Local features & Relative elevation, Relative distance \\ \hline \hline \end{tabular} \end{table} Table 1: Point features computed and used in the workflow. Global features are computed on the whole point cloud; local features and geometric features are computed after the subdivision of the point cloud into ROIs.

#### 3.1.1 Global features

Some point attributes, acquired by the survey instrument, are automatically stored within the MLS point cloud. The ones used in this work are the Intensity and the Red Green Blue (RGB) colour data of each point. The Intensity is a function of several variables, including the distance from the laser, the angle of incidence of the laser beam on the surface and the specific material reflectance (Yuan et al., 2020), and can be interpreted as the amount of energy of the backscattered signal of the instrument. RGB data are acquired using the cameras mounted on the MLS system. The colour space is then changed from RGB to Hue Saturation Value (HSV). By doing so, brightness data are stored within the V channel and the influence of shadows and sunny areas can be controlled (Pierdicca et al., 2020).

#### 3.1.2 Selection of the region of interest

The point cloud is subdivided into several ROIs along the MLS trajectory. The purpose is twofold: the reduction of memory consumption during computation, and the focusing of the analysis by discretizing the road environment into portions of fixed extension along the road trajectory. By doing so, we can control the resolution of the output data. In fact, since the final output is a sidewalk network whose edges and attributes are computed on the basis of the ROIs, its resolution is proportional to the dimension of the ROI itself. ROIs are generated by cropping the point cloud following the survey trajectory (i.e., along the road route), using a set of oriented Bounding Boxes (BBs). The BB orientation is based on two consecutive points at a fixed distance \(d\) extracted along the trajectory line (Fig. 2a). The Z coordinate of the trajectory points corresponds to the instrument position (on top of the car); for the purpose of the BB creation, those points are projected onto the ground surface of the road by adjusting their Z value.

Figure 1: Conceptual scheme of the workflow presented in this paper. The method takes as input data a point cloud of an urban environment and performs automatic computations focusing on the sidewalks of the city. The workflow includes data preparation, data processing through ML and DL, sidewalks' attribute computation, and vectorization of the extracted information. The output data is a representation of the pedestrian navigable space which also contains sidewalk-specific information.
The main BB dimension is the distance \(d\); the other two dimensions are a width \(b\), selected to be larger than the average road width, and a height \(h\), centred on the road level and selected large enough to ensure the selection of points on the ground surface (Fig. 2a). Typically, in historical cities, straight portions of roads are not very long, while sharp bends and crossings with other roads are very common. In those areas, the oriented BB may not be able to include all the necessary points, as in the example provided by Fig. 2b. To cope with that, BBs in those areas are enlarged and their length is increased to \(d^{\prime}\) (Fig. 2c and d). In addition, an overlap between neighbouring BBs is implemented in those specific areas. The value of \(d^{\prime}\) is selected after several empirical tests, looking at the coverage of the ROIs on the point cloud in road bend areas; it is defined such that \(d^{\prime}=1.5d\), and so it is directly related to the chosen value of \(d\). Following this iterative process, starting from the first point of the trajectory and moving, two by two, to the last point, the point cloud is subdivided into ROIs.

Then, on each ROI, a refinement of the selection is implemented to remove points not pertaining to ground surfaces: firstly, the exclusion of points inside buildings, exploiting the OSM building footprint dataset; secondly, the removal of points aligned on vertical surfaces, exploiting the \(N_{z}\) component of the normal vector of each point. During the survey of the city, it may happen that the LiDAR mounted on the instrument also acquires some points of objects inside the houses, passing through windows. To remove those points and other possible noisy points, the OSM dataset can be used. The OSM dataset can be downloaded for free from the website or using specific plugins available for commercial or open-source GIS software. All the layers containing _building_ as key value are downloaded and used to define polygons representing the building footprints of the city. All points of the ROI within these polygons are then removed from the ROI (Fig. 3). Lastly, all the points aligned on the facades of the buildings or on other vertical surfaces are selected by relying on the Z component (\(N_{z}\)) of the normal vector of each point and are removed. In order to do that, all points with a value of \(N_{z}\) lower than a specific threshold \(N_{z,lim}\) are considered points aligned on vertical surfaces and removed. The value of \(N_{z,lim}\) is established after empirical investigation.
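As an illustration of the ROI refinement step just described, the following sketch removes the points falling inside the OSM building footprints and the points lying on vertical surfaces. The libraries used (NumPy, GeoPandas, Shapely) and the variable names are assumptions made for the example; this is not the released implementation.

```python
import numpy as np
import geopandas as gpd
from shapely.geometry import Point

N_Z_LIM = 0.8  # empirical threshold on the Z component of the point normals (see Section 4.2.1)

def refine_roi(xyz, normals, buildings_gdf):
    """Keep only the ground-surface points of a ROI.

    xyz: (N, 3) array of point coordinates
    normals: (N, 3) array of unit normal vectors
    buildings_gdf: GeoDataFrame of OSM building footprint polygons (same CRS as the points)
    """
    # 1. Remove points lying on vertical surfaces (facades, walls): small |N_z|
    horizontal = np.abs(normals[:, 2]) >= N_Z_LIM
    xyz, normals = xyz[horizontal], normals[horizontal]

    # 2. Remove points falling inside building footprints (e.g. scanned through windows)
    footprint = buildings_gdf.unary_union          # single (multi)polygon of all buildings
    inside = np.array([footprint.contains(Point(x, y)) for x, y in xyz[:, :2]])
    return xyz[~inside], normals[~inside]
```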
#### 3.1.3 Geometric features

Geometric features are derived from the covariance matrix of the 3D structure tensor computed on the point neighbourhood (Weinmann et al., 2017, 2015). They describe the arrangement in space of the points within the considered neighbourhood. Geometric features are here computed using the open-source software CloudCompare (www.cloudcompare.org). The software allows the computation of several geometric features; within the workflow, only some features are selected and effectively used (reported in Table 1). Additionally, the normal vectors of the points are computed.

#### 3.1.4 Local features

These features, which we define as local features, are computed on each ROI, rely on the relation of each point to the surrounding environment, and are specifically focused on the road and sidewalk areas. These features are the relative elevation \(Z_{rel}\) and the relative distance \(d_{rel}\), calculated as a function of the centreline of the road derived from the trajectory. The two features are computed as relative and not absolute values, so that they determine the relative position of each point in a generic cross-section of the road. In that way, points on roads of different widths maintain the same proportion and are comparable.

The relative elevation \(Z_{rel,P_{2}}\) of a generic point \(P_{2}\) is computed as the difference between the elevation \(Z_{P_{2}}\) of the point \(P_{2}\) (Fig. 4b) and the average elevation \(Z_{centreline}\) of the centreline of the road, as follows:
\[Z_{rel,P_{2}}=\left|Z_{P_{2}}-Z_{centreline}\right| \tag{1}\]
where the elevation of the centreline (\(Z_{centreline}\)) is computed as the average value of the \(Z\) coordinate of the trajectory points falling within the ROI. This feature is helpful for the exceptional areas where sidewalks have higher elevations with respect to the roadway.

The relative distance \(d_{rel,P_{1}}\) of a generic point \(P_{1}\) is computed by relying on the trajectory of the instrument, which is assumed to correspond to the road centreline. For each ROI the trajectory is considered as a single straight line of equation \(l:\ ax+by+c=0\), where \(a\), \(b\), \(c\) are the coefficients of the general straight-line equation. Then, the relative distance \(d_{rel,P_{1}}\) of the generic point \(P_{1}:(x_{P_{1}},y_{P_{1}})\) (Fig. 4a) is computed as the shortest Euclidean distance \(d_{P_{1},l}\) from point \(P_{1}\) to the line \(l\), divided by the maximum distance \(d_{max}\) identified within the same ROI. The relative distance can be computed as:
\[d_{rel,P_{1}}=\frac{d_{P_{1},l}}{d_{max}} \tag{2}\]
where:
\[d_{P_{1},l}=\frac{\left|ax_{P_{1}}+by_{P_{1}}+c\right|}{\sqrt{a^{2}+b^{2}}} \tag{3}\]
An exception is defined for the relative distance computation in ROIs including road crossings. In those cases, unlike in straight road portions, the trajectory line might be inclined and not parallel to the sidewalks, and it would not be useful for computing the road proportions. To cope with that, the OSM dataset is exploited. In OSM, roads are represented by polylines and can be downloaded using the key _roadway_. These polylines are used as alternative trajectory lines for the representation of the road centreline. In addition, for those ROIs, the relative distance feature is computed differently.

Figure 2: Regions of Interest (ROI) generation detailed schemes. (a) Diagram of the subdivision of the initial point cloud into several ROIs using the trajectory data and a set of bounding boxes of dimensions \(d\), \(b\), \(h\). (b) Example of ROI selection in a corner; it is noticeable how some portions of the point cloud may not be selected, in particular in the corner position. (c) Increase of ROI length for corner areas, from length \(d\) to \(d^{\prime}\). (d) Example of bounding boxes of width \(d\) for straight portions of road and \(d^{\prime}\) for bends, to ensure that all points are also selected in bends.
The modified relative distance \(d^{\prime}_{rel,P}\) of the generic point \(P\) (Fig. 5) in a ROI containing a crossing is computed as the geometric mean of the relative distances \(d_{rel,P_{i}}\) between that point \(P\) and all the OSM lines \(l_{i}\) involved in the crossing, as follows:
\[d^{\prime}_{rel,P}=\sqrt[n]{\prod_{i=1}^{n}d_{rel,P_{i}}} \tag{4}\]
where the \(d_{rel,P_{i}}\) are computed according to Eq. (2), and \(n\) is the number of lines involved in the crossing (generally 2). Fig. 5 shows two examples of crossings, where each point is coloured depending on the computed distance value on a colour scale that goes from blue for the lower values, through green and yellow, to red for the higher values. It is clearly visible how \(d^{\prime}_{rel}\) mimics the proportions of the road.

Figure 4: Scheme of the local feature calculation. (a) The relative distance \(d_{rel,P_{1}}\) of a generic point \(P_{1}\) is defined as the shortest Euclidean distance \(d_{P_{1},l}\) from point \(P_{1}\) to the line \(l\), divided by the maximum distance \(d_{max}\). (b) The relative elevation \(Z_{rel,P_{2}}\) of the generic point \(P_{2}\) is computed as the difference between the elevation \(Z_{P_{2}}\) of the point \(P_{2}\) and the average elevation \(Z_{centreline}\) of the road's centreline.

Figure 3: The OpenStreetMap dataset is used to remove noisy points not related to the road environment. (a) Building dataset downloaded from OSM for the case study presented in this paper; all visible polygons represent the building footprints. (b) Example of a road point cloud with noisy points inside buildings (e.g., elements scanned by the laser passing through windows or open doors). In the method, the noisy points are selected when they are inside the building footprint and then removed.

### 3.2 Data processing

#### 3.2.1 Paving materials segmentation

The goal of this phase is to identify the different paving materials in each ROI. The approach proposed here (Fig. 6) is based on Treccani et al. (2022a), which, after rasterizing the point cloud, performed image segmentation using a Convolutional Neural Network (CNN). In this paper, a different neural network is used and the process is entirely developed in _Python_. Furthermore, the process is applied to the raster images generated from the ROIs, which, after the segmentation, are reprojected back onto the ROIs and stored as a point attribute. The reprojection method implemented is based on the one presented by Paz Mourino et al. (2021), where the correspondence between ROI points and raster image pixel indexes is leveraged.

The ROI is rasterized by subdividing the XY plane into several cells, whose size is defined according to the chosen raster resolution. Each cell corresponds to a pixel in the rasterized image. The R, G, and B channels assigned to each pixel are here used to store not the point colour information but three different point features. In fact, for the DL approach, R, G and B identify the input data that the network is fed with, so it is possible to convey within them data other than colour. The three selected features are Intensity, Roughness and Omnivariance. To complete the rasterization process, the RGB value of each pixel is then set as the average of the values of each specific feature of the points falling inside the cell. Since the RGB channels have a strict range of values, \([0;255]\), the features are normalized before applying the rasterization. The general formula used here for the normalization is \(F_{norm}=(F-F_{min})/(F_{max}-F_{min})*255\), where \(F\) stands for a generic feature. To define \(F_{min}\) and \(F_{max}\), different strategies are used according to each feature's characteristics. Since the Intensity, in the case study dataset, is saved by the instrument processing software (from Leica) in a format with a defined range, \([-2048;2048]\), this range is used to define the maximum and the minimum for the normalization, specifically \(I_{max}=2048\) and \(I_{min}=-2048\). Then, since the other two features do not have an absolute minimum or maximum value, \(F_{max}\) is defined as the maximum value of feature \(F\) over the considered dataset, and \(F_{min}\) as the minimum value of feature \(F\) over the considered dataset.
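A minimal sketch of this feature-to-image rasterisation is reported below, assuming NumPy arrays of point coordinates and per-point features. Variable names are illustrative, and the min/max of Omnivariance and Roughness are here taken over the arrays passed in (rather than over the whole dataset, as described above); the sketch is not the released implementation.

```python
import numpy as np

def normalise(f, f_min, f_max):
    """Min-max normalisation of a feature onto the [0, 255] range of an image channel."""
    return np.clip((f - f_min) / (f_max - f_min) * 255.0, 0.0, 255.0)

def rasterise_roi(xyz, intensity, omnivariance, roughness, cell=0.02):
    """Rasterise a ROI on the XY plane; each pixel stores the mean of three point features."""
    cols = ((xyz[:, 0] - xyz[:, 0].min()) / cell).astype(int)
    rows = ((xyz[:, 1] - xyz[:, 1].min()) / cell).astype(int)
    h, w = rows.max() + 1, cols.max() + 1

    channels = (
        normalise(intensity, -2048.0, 2048.0),                             # R: Intensity (fixed range)
        normalise(omnivariance, omnivariance.min(), omnivariance.max()),   # G: Omnivariance
        normalise(roughness, roughness.min(), roughness.max()),            # B: Roughness
    )

    img = np.zeros((h, w, 3), dtype=np.float64)
    count = np.zeros((h, w), dtype=np.int64)
    for c, values in enumerate(channels):
        np.add.at(img[:, :, c], (rows, cols), values)   # accumulate feature values per cell
    np.add.at(count, (rows, cols), 1)

    nz = count > 0
    img[nz] /= count[nz][:, None]                       # average over the points falling in each cell
    # (rows, cols) give the pixel of every point: keeping them allows the predicted class of
    # each pixel to be reprojected back onto the points after the DL segmentation.
    return img.astype(np.uint8), (rows, cols)
```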
To define \\(F_{min}\\) and \\(F_{max}\\) different strategies are used, according to each feature's characteristics. Since the Intensity, in the case study dataset, is saved by the instrument processing software (from Leica) in a format with a defined range: \\([-2048;2048]\\), this range limit is used to define the maximum and the minimum for the normalization. Specifically \\(I_{max}=2048\\) and \\(I_{max}=-2048\\). Then, since the other two features do not have an absolute minimum or maximum value, \\(F_{max}\\) is defined as the maximum value of feature \\(F\\) over the considered dataset, and \\(F_{min}\\) the minimum value of feature \\(F\\) over the considered dataset. The DL image segmentation is achieved by training a semantic segmentation neural net from _PyTorch_ library, using the architecture _DeepLabv3_. The pre-trained model _Resnet50_ is exploited, and Adam optimization algorithm is used. The training loop is developed using batches of 10 images in each iteration. The training dataset is selected from the areas of the city most representative of the paving materials. The reference values for the segmentation mask are taken from the Ground Truth (GT). The GT is manually created by expert architects, based on the paving materials actually present in the city. Classes for DL segmentation are identified depending on paving materials effectively present in the case study: _cobblestone_, _stone_, _brick_, _sampierini_, _ asphalt_. An additional class is used for the background pixels, named _background_. The resulting trained model is then applied to segment all the ROIs. The process of reprojecting the classified images back to the point cloud take place by recalling the indexes of points falling within the XY cells (i.e. the image pixels). The classes predicted for each pixel are then conveyed to the corresponding points. As a result, points within the ROI have a new feature related to the paving material of the ground surface, this feature is used later within the methodology. #### 3.2.2 Ground elements segmentation The purpose of this step is the segmentation of the urban ground surfaces of the city into _sidewalk_ and _road_, and it is performed for each ROI. The segmentation is carried out using a ML approach, and a RF classifier is implemented from scikit-learn library. The RF classifier constructor provided by the library has several parameters, all of which are set to their default value. The classifier requires also to specify the Features to be used for the classification. They are selected by analysing the Feature Importance plot, which allows evaluation of the importance of each feature on the classification task. This plot is generated by using the scikit-learn library, and each bar of the plot shows the feature's importance. Fig. 7 shows a scheme of the ML approach. The features included in the Feature Importance plot are the previously mentioned Local, Global, and Geometric features. Geometric features are computed with various neighbourhood _radii_, and among them the selected features are the ones with a higher rating in the Feature Importance plot, preferring those computed with a higher radius. The RF classifier is trained and validated on a portion of the dataset. The training is based on the manually created GT. The two classes identified are _sidewalk_ and _road_. The training dataset is selected including portions of the most representative road environments, considering all the possible scenarios that appear within the city. 
The trained model is then used to classify all the ROIs of the dataset. As a result, each point within each ROI is labelled according to the urban ground element.

#### 3.2.3 Sidewalks' attributes computation

Focusing on the _sidewalk_ class points of each ROI, some geometric attributes are computed and assigned to those points. The computed attributes are those deemed useful for technicians and professionals involved in the management, maintenance and design of the city's urban environment. For each sidewalk within the ROI, the computed attributes are the width, the transverse and longitudinal slopes, the elevation with respect to the road surface, and the main paving material. Some of them are estimated on the basis of some point attributes and by applying specific formulas; others are derived simply by retrieving the value of features previously computed.

Figure 7: Scheme of the ML segmentation. The points of each ROI are classified by a Random Forest classifier and two classes are predicted: sidewalk and road.

Figure 8: Schemes showing the calculation of the sidewalk attributes. (a) The average width \(W\) is computed as the difference between the distances of the closest (\(D_{min}\)) and the farthest (\(D_{max}\)) points of the sidewalk cluster of points with respect to the road centreline. (b) The sidewalk relative elevation (\(\Delta Z\)) refers to the jump in elevation (if present) between the sidewalk (\(Z_{mean,sidewalk}\)) and the part of the road in close proximity to it (\(Z_{mean,road}\)). (c) Slopes in the longitudinal and transverse directions (\(Slope_{Long}\), \(Slope_{Trans}\)) computed by leveraging the Principal Component Analysis and exploiting the principal components \(C_{1}\) and \(C_{2}\).

The methods used to calculate the attributes are listed below:

* The **average width** (\(W\)) is computed as the difference between the distances of the closest and the farthest points of the sidewalk cluster of points with respect to the road centreline. \(D_{min}\) is defined as the 5th percentile of all the distances in the ROI, and \(D_{max}\) is identified as the 95th percentile of all the distances. The approach is schematized in Fig. 8a, and the average width is computed as follows: \[W=D_{max}-D_{min} \tag{5}\]
* The **sidewalk relative elevation** (\(\Delta Z\)) refers to the jump in elevation (if present) between the sidewalk and the part of the road in close proximity to it. Exploiting the distance feature, it is possible to select the band of points on the road that are on the boundary with the sidewalk and extract the mean value of their Z coordinate. Similarly, a limited band of sidewalk points, adjacent to the boundary with the road points, is selected and the mean value of \(Z\) is extracted. Having defined the two reference height values (\(Z_{mean,road}\) and \(Z_{mean,sidewalk}\)) (Fig. 8b), it is then possible to compute the height difference between the two urban elements as: \[\Delta Z=\left|Z_{mean,sidewalk}-Z_{mean,road}\right| \tag{6}\]
* The **transverse and longitudinal slopes** (\(Slope_{Long}\), \(Slope_{Trans}\)) are computed by leveraging the Principal Component Analysis (PCA). Computing the PCA of the \(XYZ\) attributes of a point cloud, the resulting eigenvectors can be used to describe the point cloud orientation.
The three principal components are oriented in the three major directions that best fit and describe the point distribution in space. In the case of the sidewalk cluster of points, the three eigenvectors represent respectively the three main directions \(C_{1}\), \(C_{2}\), \(C_{3}\) (Fig. 8c) along which the points are distributed. The eigenvector with the largest eigenvalue is aligned longitudinally with the cluster (\(C_{1}\)), the middle one is aligned transversely (\(C_{2}\)), and the smallest one is aligned in the normal direction (\(C_{3}\)). Considering the vector oriented along the longitudinal direction (\(C_{1}\)) and decomposing it into its three components, it is possible to compute the slope from the ratio between its vertical component \(C_{1,Z}\) and its projection \(C_{1,XY}\) onto the XY plane: \[Slope_{Long}\ [\%]=\frac{C_{1,Z}}{C_{1,XY}}*100=\frac{C_{1,Z}}{\sqrt{C_{1,X}^{2}+C_{1,Y}^{2}}}*100 \tag{7}\] The same reasoning is then applied to eigenvector \(C_{2}\) to compute the transverse slope. The equation is as follows: \[Slope_{Trans}\ [\%]=\frac{C_{2,Z}}{C_{2,XY}}*100=\frac{C_{2,Z}}{\sqrt{C_{2,X}^{2}+C_{2,Y}^{2}}}*100 \tag{8}\]
* The **paving material** of the sidewalk is identified as the most frequent paving material attribute of the analysed cluster of points, as computed in Section 3.2.1.
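A compact sketch of the attribute computation for one sidewalk cluster (width from the distance percentiles, slopes from the PCA eigenvectors) is reported below. It assumes NumPy arrays and illustrative variable names, and is a simplified version of what is described above rather than the released implementation.

```python
import numpy as np

def sidewalk_attributes(xyz, d_to_centreline):
    """Compute width and slopes of a sidewalk cluster of points.

    xyz: (N, 3) array of the points labelled as sidewalk in one ROI
    d_to_centreline: (N,) Euclidean distances of the points from the road centreline
    """
    # Average width: difference between the 95th and 5th percentile distances (Eq. 5)
    d_min, d_max = np.percentile(d_to_centreline, [5, 95])
    width = d_max - d_min

    # PCA of the XYZ coordinates: eigenvectors give the main directions of the cluster
    centred = xyz - xyz.mean(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov(centred.T))
    order = np.argsort(eigval)[::-1]
    c1, c2 = eigvec[:, order[0]], eigvec[:, order[1]]    # longitudinal, transverse directions

    def slope_percent(c):
        # Z component over the XY projection of the eigenvector (Eqs. 7 and 8)
        return abs(c[2]) / np.hypot(c[0], c[1]) * 100.0

    return width, slope_percent(c1), slope_percent(c2)   # width, longitudinal and transverse slopes
```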
### 3.3 Representation

Several methods can be used for representing the sidewalks and the pedestrian mobility information collected from the point cloud. In the scientific literature, some authors used the point cloud directly as a representation type for conveying and disseminating the data to final users, especially regarding 3D city modelling (Wegen et al., 2022; Nys et al., 2021). On the other hand, other authors performed pathfinding directly on the point cloud in urban environments (Balado et al., 2019). In this paper, the selected representation type is a very accurate vector file for pedestrian mobility management. We believe that this type of representation is easier to understand and use by a wider range of final users, independently of their experience. This file is then used to compute pedestrian paths within the city, considering various sidewalk attributes and physical accessibility regulations as constraints.

#### 3.3.1 Crossing identification

In order to correctly perform the vectorization of the sidewalk network, knowledge of the crossings' positions is fundamental. Three typologies of crossings between roads can be defined: we call it a _T-shaped_ crossing when one road enters another road making a T shape; we call it an _X-shaped_ crossing when two roads cross; and we define it as an _L-shaped_ crossing when two roads meet, creating an approximately 90-degree bend. The OSM dataset is used to identify the road crossing positions and types. In OSM, roads are represented by polylines, and as different roads come together and create a crossroads in the city, in a similar way different polylines meet at a single point in the dataset. Crossroads areas can therefore be identified by exploiting the points where polylines meet. Then, to identify also the type of crossing, a simple reasoning is implemented (Fig. 9). The multiplicity of a point is used to identify the type of crossroads: if the point is in common with 3 lines it is a _T-shaped_ crossing, if it is in common with 4 lines it is an _X-shaped_ crossing, and if the multiplicity is 2, it could possibly be an _L-shaped_ crossing, but in this last case a further check should be applied. In fact, a point in common to 2 lines could be any point of a polyline made of several segments. To decide whether it is an _L-shaped_ crossing, the angle between the associated segments is checked. Since we define the _L-shaped_ crossing as a bend of approximately 90 degrees, we choose that if the angle between the lines is in the range [80;100] degrees the point defines an _L-shaped_ crossing; otherwise the point is not considered representative of a crossing.
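The crossing-type reasoning just described can be sketched as follows, assuming the OSM road centrelines are available as Shapely LineStrings. This is an illustrative re-implementation of the idea, not the released code.

```python
import numpy as np
from collections import defaultdict

def crossing_types(road_lines, tol=1e-3):
    """Classify road crossings from OSM centreline polylines.

    road_lines: list of shapely LineString objects (road centrelines from OSM)
    Returns a dict mapping a node (x, y) to 'X', 'T' or 'L'.
    """
    # Count how many polylines share each endpoint
    incident = defaultdict(list)
    for line in road_lines:
        for end in (line.coords[0][:2], line.coords[-1][:2]):
            key = (round(end[0], 3), round(end[1], 3))
            incident[key].append(line)

    crossings = {}
    for node, lines in incident.items():
        if len(lines) >= 4:
            crossings[node] = "X"
        elif len(lines) == 3:
            crossings[node] = "T"
        elif len(lines) == 2:
            # L-shaped only if the two segments meet at roughly 90 degrees
            v1 = _leaving_direction(lines[0], node, tol)
            v2 = _leaving_direction(lines[1], node, tol)
            angle = np.degrees(np.arccos(np.clip(np.dot(v1, v2), -1.0, 1.0)))
            if 80.0 <= angle <= 100.0:
                crossings[node] = "L"
    return crossings

def _leaving_direction(line, node, tol):
    """Unit vector of the first segment of `line` leaving the endpoint `node`."""
    coords = np.asarray(line.coords)[:, :2]
    if np.allclose(coords[0], node, atol=tol):
        p, q = coords[0], coords[1]
    else:
        p, q = coords[-1], coords[-2]
    v = q - p
    return v / np.linalg.norm(v)
```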
#### 3.3.2 Vectorization

The development of the sidewalk network, composed of nodes and edges, follows a method previously presented (Treccani et al., 2022). For each ROI, the centre of each sidewalk cluster of points is computed by averaging the X and Y coordinates. The resulting point is converted into a node of the network. Then the nodes are joined by edges. Consecutive and neighbouring nodes are connected together, always comparing the location of the edges with the city's road network (derived from OSM), in order to avoid redundant edges or incorrect connections. This workflow is summarized in Fig. 10. The topology of the sidewalk network is ensured by the fact that the pattern and organization of the urban elements (sidewalk and road, de facto) within the ROIs are uniform and consistent, and because consecutive ROIs are considered following the road path (i.e., the trajectory) and in continuity with each other. The only discontinuity elements are the crossings. Edge generation near crossing areas is done differently: a constraint is added that does not allow the creation of edges that cross the road (i.e., that intersect the polylines representing the centre of the road). Crossings within the city are already identified and labelled as _L-_, _T-_, or _X-shaped_. Before performing the edge generation in crossing areas as previously described, node regularization in those areas is necessary. The correct identification of the type of intersection allows the computation of best-fitting lines and the regularization of the nodes' positions by slightly adjusting their XY coordinates (Fig. 11). Lastly, a manual check is done to identify and correct any possible errors and to complete the network in any areas of the city that were not surveyed and for which data are missing. During the development of the network, the attributes of the various portions of sidewalk are linked to the respective edges representing them.

The output file of the workflow is a shapefile containing the sidewalk network filled with the attributes of the sidewalks themselves.

#### 3.3.3 Routing analysis

The vector file is used to compute pedestrian paths within the city, taking into consideration accessibility regulations. There is a variety of software solutions, both commercial and open-source, that allow the calculation of pedestrian flows or the computation of routes. These analyses are carried out here using open-source software: QGIS (www.qgis.org). Exploiting the _network analysis_ tool in the processing toolbox of QGIS, the path between two points can be computed. This tool is capable of computing the shortest path between two selected points using as guidelines the edges of the previously computed sidewalk network. Furthermore, instead of the shortest path, the fastest path can also be computed. This path is computed relying on a speed value assigned to each edge. By giving the edges a fictitious speed value, proportional to the sidewalk accessibility attribute, it is possible to generate the most accessible path. A new attribute is created for the edges, representing the fictitious speed. This new attribute is based on the sidewalk geometric attributes and their comparison with national regulations about physical accessibility; a high speed is set for accessible edges and a low speed for non-accessible edges. The resulting path mostly uses the edges that are considered more accessible (i.e., the ones with higher speed).

Figure 9: Simplified diagram of the three crossing types identified and used within the workflow. The OSM dataset and reasoning about the multiplicity of points are exploited to identify the crossing type. (a) _T-shaped_ crossing, when a point is in common to three lines. (b) _X-shaped_ crossing, when the point is in common to four lines. (c) _L-shaped_ crossing, when a point is in common to two lines; in those cases, a check on the angle between the lines is implemented; if the angle is in the range [80;100] degrees, the point is representative of an _L-shaped_ crossing.

## 4 Results and discussion

### 4.1 Case study description

The case study selected is Sabbioneta, a historic city located in northern Italy. Sabbioneta was re-founded in the second half of the 16th century, based on a pre-existing medieval village. The city was built following the _ideal city_ principles of the Italian Renaissance. In 2008 Sabbioneta, together with the nearby city of Mantova, was inserted into the UNESCO World Heritage List. The two cities have been included in the list because they offer an exceptional testimony to the urban, architectural and artistic achievements of the Renaissance, linked together through the ideas and ambitions of the ruling family, the Gonzaga. The historic city has a small areal extent, about 0.4 square kilometres. The street structure is organized into _cardi_ and _decumani_, and it has a chessboard layout. The city consists of 34 blocks, fairly regular, rectangular or square in shape, but varying in size, with a predominantly east-west orientation (Lorenzi, 2020). The streets do not have a fixed width, varying from 5 m to 14 m wide stretches. Sometimes the change in width occurs within the same stretch of road, either slightly or more markedly. In correspondence with this change in width, the paving of the street roadway is sometimes also different. In Sabbioneta, urban pavings assume the role of highlighting the intended use of the urban ground (Fig. 12).
Roadways and sidewalks are often at the same elevation, but they can be distinguished because they are paved with different materials. Specifically, within the city, cobblestone, sampietrini and asphalt are typically used for the roadway surfaces, while bricks and stone (mainly porphyry) are used for the sidewalk surfaces. Table 2 describes the physical aspects of Sabbioneta's pavings. The peculiar organization of the urban structure and the stratification of various pavings within the city make Sabbioneta a proper case study for testing the presented methodology.

Sabbioneta was surveyed with an MLS system, a Leica Pegasus:Two, which mounted a Z+F profiler 9012 as laser scanner. The profiler's main characteristics are summarized in Table 3. The instrument was mounted on top of a car, and almost the entire historic city was surveyed. The resulting point cloud covered almost 7 km of road and was composed of a total of 1.2 billion points. The instrument mounted 360-degree cameras, so the point attributes included the RGB colour. The dataset of Sabbioneta was organized by conducting several acquisition missions while moving along the roads of the city (Fig. 13). The full-density point clouds of each mission (no subsampling of the data was done), together with the trajectory data, are used. For both the ML and DL approaches, one acquisition mission is identified as the most representative of the city: Track C. This acquisition mission includes areas outside the fortified walls, roads of various widths, and areas close to squares and porticoes. This track also includes all the paving materials identified within the city.

Figure 10: Scheme of the data vectorization process. The segmented ROIs (into sidewalk and road) and the sidewalk attributes (average width, relative elevation, longitudinal and transverse slopes, paving material) are used to generate the sidewalk network. The centres of the sidewalk points of the ROIs are transformed into nodes, which are regularized according to the road framework and then connected by edges.

Figure 11: Regularization of the network nodes for the three types of crossing, named _L-shaped_ (a), _T-shaped_ (b), _X-shaped_ (c). The process is the same for all crossings: first, nodes are vectorized, then lines are interpolated through the nodes, taking into consideration the type of crossing and the road shape, and lastly, the nodes are slightly moved to lie on the best-fitted lines.

### 4.2 Results

#### 4.2.1 Data preparation

For the definition of the BB size, after some empirical tests, the parameter \(d\) is set to 2 m, while \(d^{\prime}\) for curved portions is set to 3 m. Then, for the refinement of the selection, the value of the limit parameter \(N_{z,lim}\) is defined after empirical investigation. The \(N_{z,lim}\) value is defined by running some tests, setting it to different values and visually inspecting the point cloud to see which points were selected. For the point cloud of the case study, a value of 0.8 was considered sufficient for the purpose. In fact, by setting it to 0.8 and then removing the points with \(N_{z}<0.8\), the resulting points belonged only to non-vertical surfaces, as desired. Table 4 summarizes the pre-processing parameters. The subdivision into ROIs generates 1530 ROIs. Each ROI contains on average 80,000 points, representing the ground surfaces. Geometric and local features are then computed for each ROI; the geometric features used in the following steps are computed with CloudCompare on the whole point cloud.
Three _radii_ are used for the neighbourhood selection: 0.05, 0.08, and 0.10 m, respectively. From the geometric feature _Point density_ it is possible to determine that the average point density on the ground is 5000 points per square metre.

\begin{table} \begin{tabular}{l l} \hline Parameter & Value \\ \hline \(d\) & 2 m \\ \(d^{\prime}\) & 3 m \\ \(N_{z,lim}\) & 0.8 \\ \hline \end{tabular} \end{table} Table 4: Values of the pre-processing parameters used in the method.

\begin{table} \begin{tabular}{l l} \hline Paving material name & Description \\ \hline Sampietrini & Squared stone blocks, aligned in consecutive radial grids (they can also be aligned on rectangular grids, but not in Sabbioneta). In Sabbioneta they are typically used for the road surface. \\ Bricks & Rectangular bricks arranged in a stretcher bond pattern. In Sabbioneta they are typically used for sidewalks; the largest dimension of the rectangle is orthogonal to the main direction of the sidewalk. \\ Cobblestone & River stones lined up next to each other without any apparent regular arrangement. The individual elements are rounded and protrude from the surface. In Sabbioneta they are typically used for the roadway surface. \\ Stone & Rectangular stone blocks, aligned on a regular grid similar to the English cross bond pattern. Typically used for sidewalks in Sabbioneta. \\ Asphalt & Homogeneous, mostly flat surface. \\ \hline \end{tabular} \end{table} Table 2: Description of the 5 classes of paving materials identified in the case study, the city of Sabbioneta.

Figure 12: Photos of the city of Sabbioneta, the case study for this paper. (a) Views of some streets in the city. (b) Layering of the different pavings used for different elements of the urban area; sidewalks and roads are highlighted by the use of different paving materials.

\begin{table} \begin{tabular}{l l} \hline Feature & Value \\ \hline Rotation speed & 200 Hz \\ Coverage & one profile every 5 cm at a speed of 36 km/h \\ Acquisition range & from 0.3 to 119 m \\ Field of view & 360\({}^{\circ}\) \\ Scan rate & 1.016 million points per second \\ Accuracy & 0.020 m RMS in horizontal and 0.015 m RMS in vertical \\ \hline \end{tabular} \end{table} Table 3: Main features of the Z+F profiler 9012, from the technical data sheet. The profiler was mounted on the instrument used for data acquisition, the Leica Pegasus:Two.

#### 4.2.2 Paving materials segmentation

For the DL classification, the ROIs are rasterized with a cell size of 0.02 m. The pixel values of the RGB channels are set as the average of three specific point features. After a visual inspection of the point cloud, the selected features are the ones considered most representative of the differences in pavings: Intensity, stored in the R channel, Omnivariance, stored in the G channel, and Roughness, stored in the B channel. The features used are calculated with a neighbourhood radius of 0.05 m. For the purpose of the DL classification, the images are resized to 500 \(\times\) 500 pixels and then, after the classification, transformed back to their original size. The parameters are reported in Table 5. The DL classes are defined after an on-site inspection of the pavings, made by experienced technicians. For Sabbioneta, the most representative and widespread pavings, and thus the classes, are _sampietrini_, _bricks_, _cobblestones_, _stones_ and _asphalt_.
These classes are depicted in Fig. 14. An extra class, _background_, is set for the pixels of the image which do not represent any point, as required by the rasterization process. For the training dataset, ROIs from Track C are selected, from the most representative road of the city, including all pavings. The trained model is applied to all the other ROIs to classify the pavings; the classified images are reprojected back onto the points. Focusing on the point cloud and on the predicted paving material of each point, it is possible to compute the confusion matrix (Fig. 15) and the performance metrics (Table 6). The average accuracy of the prediction, computed as the ratio of the correctly classified points over the total, is 99.08%. Fig. 16 shows some rasterized ROIs and the paving material predicted by the DL workflow.

\begin{table} \begin{tabular}{l l} \hline Parameter & Value \\ \hline Cell size & 0.02 m \\ R-channel feature & Intensity \\ G-channel feature & Omnivariance (radius = 0.05 m) \\ B-channel feature & Roughness (radius = 0.05 m) \\ Image re-size & 500 \(\times\) 500 pixels \\ DL classes & sampietrini, bricks, cobblestones, stones, asphalt, background \\ \hline \end{tabular} \end{table} Table 5: Values of the paving material segmentation parameters used in the method.

\begin{table} \begin{tabular}{l l l l} \hline Class & Precision & Recall & F1-score \\ \hline background & 0.99 & 0.98 & 0.99 \\ sampietrini & 0.92 & 0.97 & 0.94 \\ bricks & 0.93 & 0.82 & 0.87 \\ cobblestones & 0.93 & 0.98 & 0.95 \\ stone & 0.84 & 0.93 & 0.89 \\ asphalt & 0.86 & 0.94 & 0.90 \\ \hline \end{tabular} \end{table} Table 6: Performance metrics for the DL classification of the Sabbioneta dataset. They are computed on the point cloud after the classified images are reprojected back onto the ROI points.

Figure 13: Map of the city of Sabbioneta. The survey is conducted with an MLS system, the Leica Pegasus:Two. The city is surveyed in 10 acquisition missions, reported in the legend on the right. Near each scan mission name, its length is reported in kilometres; the total length is almost 7 km. Acquisition track "C" (in black) is used as the training dataset in both the ML and DL approaches.

Figure 14: Classes of materials used for the DL classification of Sabbioneta.

Figure 15: Row-normalized confusion matrix for the paving material segmentation; the classes are background, sampietrini, bricks, cobblestone, stone, asphalt.

Figure 16: Some of the rasterized ROIs from the case study. The corresponding Ground Truth and the DL prediction are reported below each ROI. A legend is reported at the bottom of the image.

#### 4.2.3 Ground elements segmentation

The parameters for the RF classifier are selected as follows. All the parameters of the scikit-learn RF constructor are left at their default values. The feature selection is conducted by exploiting the Feature Importances plot (Fig. 17). All features, global and local, are included in the graph; the geometric features computed using the three neighbourhood _radii_ are included. From this graph, and after reasoning on the meaning of the features, some of them are selected for the ML classification. For the geometric features, only the
The trained model is then applied to segment all the other ROIs. Confusion matrix (Fig. 18) and precision metrics are computed (Table 8). The average accuracy, computed as the ratio between the correctly classified points over the total, is 88.2%. A top view of the classified Sabbioneta point cloud is shown in Fig. 19. #### 4.2.4 Sidewalks' attributes computation The calculation of sidewalk attributes is done by following the equations shown in previous sections. Table 9 summarizes the average values of attributes. The resulting values are also compared with the legal minimums related to physical accessibility with reference to Italian laws (Ministerial Decree n. 236/89 and Decree of the President of the Republic n. 503/96). \\begin{table} \\begin{tabular}{l l l} \\hline Parameter & Value \\\\ \\hline Classes & road, sidewalk \\\\ RF classification features & Intensity, d-red, Zrel, Roughness (0.1), \\\\ & Omnivariance (0.1), H, S, Z, Normal change rate \\\\ & (0.1), Anisotropy (0.1), Sphericity (0.1) \\\\ \\hline \\end{tabular} \\end{table} Table 7: Values of the ground elements segmentation parameters used in the method. \\begin{table} \\begin{tabular}{l l l l} \\hline Class & Precision & Recall & F1-score \\\\ \\hline sidewalk & 0.89 & 0.81 & 0.85 \\\\ road & 0.82 & 0.90 & 0.86 \\\\ \\hline \\end{tabular} \\end{table} Table 8: Precision metrics for the ML classification of the Sabbioneta dataset. Figure 18: Normalized by rows confusion matrix for ground elements segmentation, classes are road and sidewalk. Figure 17: Feature Importances plot computed for the ML classification. This plot is used to determine which features to use for the Random Forest classification and shows the importance of each feature for the ML process. On the \\(x\\)-axis the Features, and on the \\(y\\)-axis their importance value. In order to make the values on the \\(x\\)-axis more readable, numerical indices have been inserted in the graph, and the names of the corresponding features are given below. The geometric features are calculated with 3 different neighbourhood search radii (0.05, 0.08, 0.10 m). #### 4.2.5 Representation In order to generate the vector file, it is necessary to first proceed with the identification of crossings. Specifically, 34 _L-shaped_ crossings, S2 _T-shaped_ crossings, and 10 _X-shaped_ crossings are identified in the city of Sabbioneta. The automatically generated vector file is composed of 1780 nodes and 1720 edges. Manual refinement is done for a few missing or erroneously generated edges. A total of 23 edges are refined, corresponding to 1.3% of the total. In Sabbioneta there are only two zebra crosswalks; according to Italian regulation (Article 190 of the Highway Code), in urban or suburban roads, if there are no crosswalks or the closer zebra crossing is farther than 100 m, it is possible to cross the street without passing on the zebra crossing. In trying to mimic this scenario, additional edges are inserted into the network, so that in the route calculation the crossing of the street, at any point, can be provided. This second-level network is composed of 1357 extra edges (for a total of 3077 edges). Fig. 20 shows the vectorization process applied on Sabbioneta and a portion of the network with the two types of edges: regular ones (in blue), and the ones considering the possibility of crossing the street without using zebra crossing (in fuchsia). These last edges type connects edges on opposite sides of the road. 
They could be used for navigation purposes, allowing the computed path to cross the street at any position. The shapefile with the sidewalk network is published on GitHub (github.com/HeSuTech), in order to make it easily available to all possible interested users. Furthermore, the shapefile is submitted to the Italian OSM community, which approved the file as suitable for uploading to the OSM database. The process has begun, carried out by the community itself, and can be followed on the dedicated web page created on the OSM wiki website (wiki.openstreetmap.org/wiki/Import/Catalogue/Sabbioneta.Sidewalk.Import). The vector file is then used to compute paths within the city. Here we present one example, derived from Treccani et al. (2022). The QGIS tool allows the computation of the shortest path between two points, but by acting on the speed value of each edge it is possible to compute the fastest path. In order to compute an accessible path, for example selecting only sidewalks with an accessible width (at least 0.9 m according to Italian law), a very low speed (0.001 km/h) is assigned to inaccessible edges, while a relatively higher speed is set for accessible edges (4 km/h). The result is a path that tends to stay on accessible sidewalks (Fig. 21). \begin{table} \begin{tabular}{l l l l l} Attribute & Ranges & Most frequent & Reference value & Accessible segments \\ Width & 0.45 m \(\div\) 2.20 m & 0.95 m & \(\geq\) 0.90 m & 72.3\% \\ Transverse slope & 0.05\% \(\div\) 9.75\% & 1.25\% & \(\leq\) 1\% & 8.6\% \\ Longitudinal slope & 0.10\% \(\div\) 9.77\% & 2.1\% & \(\leq\) 5\% & 79.3\% \\ Relative Z difference & 0 m \(\div\) 0.12 m & 0 m & \(\leq\) 0.025 m & 66.5\% \\ \hline \end{tabular} \end{table} Table 9: Results of the attributes computed for each sidewalk segment. For each attribute, the range of values is presented and compared with the Italian law reference value (Ministerial Decree n. 236/89 and Decree of the President of the Republic n. 503/96). Also, the percentage of accessible segments out of the total is reported. Figure 19: Top view of the point cloud of Sabbioneta, superimposed on building polygons from OSM. The point cloud is coloured according to the sidewalk and road segmentation. Misclassified points are depicted in red. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.) ### Discussion The method presented and tested on Sabbioneta proves to be capable of analysing pedestrian mobility in historic urban environments. Although quite limited in size, the city has elements of interest and distinctive features, such as the use of different materials for paving, streets with different widths, and a large number of crossings and one-way streets. The use of this method on Sabbioneta proves its effectiveness in historical cities, and its use can therefore be envisaged in historical settings with similar scenarios. During the preprocessing of the data, several features are computed, which proved to be crucial for the subsequent stages. Although geometric features were calculated with different _radii_, those actually used by the ML and DL procedures are only a few and with specific _radii_ (0.05 m for DL and 0.1 m for ML). The choice of these features is the result of reasoning based on empirical tests and on the statistical graphs made during the workflow. 
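The eigenvalue-based geometric features referred to here were computed with CloudCompare; for reference, a rough NumPy sketch of how omnivariance, anisotropy and sphericity are commonly derived from the points falling within the chosen search radius (standard textbook definitions, not the exact CloudCompare implementation):

```python
import numpy as np

def eigen_features(neighbours):
    """Eigenvalue-based local descriptors of a point neighbourhood
    (all points within the chosen search radius around the query point)."""
    pts = np.asarray(neighbours, dtype=float)
    cov = np.cov(pts.T)                            # 3x3 covariance of the neighbourhood
    lam = np.sort(np.linalg.eigvalsh(cov))[::-1]   # eigenvalues, descending
    lam = np.clip(lam, 1e-12, None)                # avoid division by zero
    return {
        "omnivariance": float(np.cbrt(lam[0] * lam[1] * lam[2])),
        "anisotropy": float((lam[0] - lam[2]) / lam[0]),
        "sphericity": float(lam[2] / lam[0]),
    }
```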
Among these features, the local ones (i.e., \(Z_{rel}\) and \(d_{rel}\)) proved to be of fundamental importance for the proper execution of the method. ROIs were selected with a fixed width of 2 m. This value not only allowed reducing the memory consumption of the computer, but mainly allowed obtaining an output shapefile with sidewalk information at high resolution: data every 2 m along the sidewalk route. To properly select ROIs, the trajectory data is necessary. In case such data is Figure 21: Path computed on the basis of the generated shapefile. Two different paths are computed using the generated vector network and QGIS. (a) The generated sidewalk network of Sabbioneta; sidewalk edges are coloured according to their width: according to Italian law, the width is accessible if \(\geq\) 90 cm (coloured in green), otherwise the sidewalk is not accessible (in red). (b) Computed path to move from \(A\) to \(B\) considering the shortest path. (c) Computed path considering the width value as a weight; the result is the most accessible path according to Italian law for sidewalk widths. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.) Figure 20: Vectorization process in Sabbioneta. (a) Scheme of the vectorization for ROIs of Sabbioneta, where for each ROI the centres of the clusters of points segmented as sidewalk are transformed into nodes for the sidewalk vector network. (b) A portion of the sidewalk network, with two types of edges: the regular ones (in blue) connecting consecutive nodes, and the cross-edges (in fuchsia) used to mimic the possibility of crossing the road everywhere in the absence of a zebra crossing in the vicinity. Source: Images modified from Treccani et al. (2022b). missing, it should be fictitiously generated, for example using road lines from OSM or using other specific methods suited to the purpose. The DL image segmentation approach is capable of correctly identifying the city's pavements. Analysing the performance metrics, it is possible to note that improvements could be made for the _bricks_ and _stone_ classes. It should also be noticed that _asphalt_ paving is present only in a few areas of the city. In some areas of the city, the paving is so damaged that its roughness can lead to errors in segmentation. In such cases, damaged _stone_ or damaged _sampietrini_ are very similar to _cobblestone_ paving. In any case, it is important to note that in the method the paving attribute is assigned as the material most present in the sidewalk cluster of points analysed, so local segmentation errors can be avoided. The choice of cell size and resolution during the rasterization phase is also important. This value should be chosen according to the density of points on the ground. Rasterized images should be such that they have uniformly distributed pixels, so there should be no missing data (background pixels) between other pixels. Segmentation of the point cloud through RF classification is able to correctly segment ROIs into _sidewalk_ and _road_. An important assumption is the presence of sidewalks; in fact, for the vast majority of the city, sidewalks are present on both sides of the street, so in cases where they are present only on one side, some false positives occur in the prediction. From the tests made while developing the method, the \(d_{rel}\) feature proved to be crucial for the ML segmentation. 
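As noted earlier in this discussion, the paving attribute of each sidewalk cluster is taken as the most frequent predicted material among its points, which is what makes the attribute robust to local misclassifications; a minimal sketch of that majority vote (names illustrative):

```python
from collections import Counter

def assign_paving_attribute(point_materials):
    """Assign a sidewalk segment the paving material that is most frequent
    among its points, so that isolated per-point errors are absorbed."""
    counts = Counter(m for m in point_materials if m != "background")
    material, _ = counts.most_common(1)[0]
    return material
```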
Given the good results obtained with ML for _sidewalk_ and _road_ segmentation, we tested a similar ML-based approach to perform the task of paving material segmentation. Since the results obtained with this approach showed very poor accuracy, we decided to perform the paving material segmentation task with a different approach. We decided to apply DL segmentation methods to the rasterized point clouds, which showed very high accuracy results. The choice of these two separate approaches was crucial for reaching higher accuracy for both tasks, which are fundamental for the subsequent phases of the presented workflow. Concerning attribute calculation, the information extracted from the sidewalks provides a large amount of data for analysing the sidewalks of the city. These attributes are very useful because they allow rapid identification of portions on which to focus attention for planned maintenance or improvements, allowing more thoughtful decisions. The choice of the shapefile format is the result of balancing the desire to reach the largest number of end users with maintaining the rigour of the final data. In fact, even if point cloud representation techniques exist, it was considered that a vector file is easier to understand and use by a wider range of final users independently of their experience in Geomatics. Although not presented in this article, the vector file can be easily used to generate thematic maps, by showing different themes on the edges of the shapefile. In addition, by comparing attribute values with national regulations, edges can be coloured according to their compliance or non-compliance. Lastly, the very accurate vector file generated allows the analysis and management of pedestrian mobility within the city. As an example, path calculation proved to be very easy to perform based on the vector file made. Although a specific open-source tool was used in this article, the great interoperability of the shapefile format makes the generated file usable by other software and in other procedures. In addition, the positive feedback from the Italian OSM community on the possibility of uploading the vector file to the OSM database makes it clear how effective and readily usable the output of the presented method is. ## 5 Conclusions In this paper, a method for the automatic characterization of the navigable space for pedestrians in historic urban areas from point clouds is presented. The input point cloud dataset is analysed through ML and DL approaches, identifying paving materials and ground elements. Geometric attributes of sidewalks are then computed and conveyed into a network made available in vector format. The method is successfully tested on an Italian historic city: Sabbioneta. The method aims to propose a complete strategy that, from an initial datum (point cloud), and through a series of automatic procedures, allows obtaining an output datum (a shapefile) that can be easily exploited for an accurate calculation of routes. Apart from the initial geometric features computation, which is done with CloudCompare, and the path computation, which is done with QGIS, the workflow for the production of the output shapefile is completely implemented through _Python_ scripting. The resulting sidewalk network data certainly provides a basis for various future developments, depending on the end user. 
For example, it can be used by technical experts, as a basis for making maintenance and urban design plans; it can be used by public administrations to make informed decisions and guide their policies; and it can be made available in a variety of ways to citizens and tourists, who can use its potential for planning their daily movements in the city. The promising results obtained from this research allow foreseeing various future developments. The method will be tested on a larger scale, on cities of greater extents, and with source data obtained from a different instrument than the presented one, for example using terrestrial laser scanners (TLS), aerial laser scanners (ALS), or portable mobile mapping systems (PMMS). It is also planned to make more use of the output shapefile. Concretely, The possibility of making it usable on a web platform will be explored and a more refined method of calculating routes will be studied. Besides, the output shapefile will be combined with a city model to perform improved mobility analyses. ## CRediT authorship contribution statement D. Treccani: Conceptualization, Methodology, Investigation, Software, Visualization, Writing - original draft, Writing - review & editing. A. Fernandez: Methodology, Software, Validation, Supervision. L. Diaz-Vilarino: Conceptualization, Methodology, Validation, Supervision, Writing - review & editing. A. Adami: Conceptualization, Resources, Validation, Supervision. ### Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. ### Data availability Data will be made available on request ### Acknowledgements Authors would like to thank Leica Geosystem Italy for providing the instrument Leica Pegasus:Two and technical support to raw data processing. This work was partially supported by human resources grant RYC2020-029193-1 funded by MCH/AE/10.13039/501100011033 and FSE \"El FSE invite en tu future\", by grant ED431F2022/08 funded by Xunta de Galicia, Spain-GAIN, and by the projects PID2019-105221RB-C43 funded by MCIN/AE/10.13039/501100011033 and PG2022-132943, funded by MCIN/AE/10.13039/501100011033 and by the European Union \"Next Generation EU\"/PRTR. The statements made herein are solely the responsibility of the authors. ## References * Ai and Tsai (2016) Ai, C., Tsai, Y.J., 2016. Automated sidewalk assessment method for Americans with disabilities act compliance using three-dimensional mobile ldaor. Transp. Res. Rec. 2542 (1), 25-32. [http://dx.doi.org/10.3141/2542-04](http://dx.doi.org/10.3141/2542-04). * Arenas et al. (2016) Arenas, R., Arellan, B., Roca, J., 2016. City without barriers, ICT tools for the universal accessibility. Study cancels in Barcelona. In: International Conference on Virtual City and Territory, \"Back to the Sense of the City\". Krakow, Poland. * Balado et al. (2018) Balado, J., Diaz-Vilariario, L., Aris, P., Gonzalez-Jorge, H., 2018. Automatic classification of urban ground elements from mobile laser scanning. Autom. Contr. 86 (September 2017), 226-239. [http://dx.doi.org/10.1016/j.artonc.2017.09.004](http://dx.doi.org/10.1016/j.artonc.2017.09.004). * Balado et al. (2019) Balado, J., Diaz-Vilariario, L., Aris, P., Lorenzo, H., 2019. Point clouds for direct pedestrian pathfinding in urban environments. ISPRS J. Photogramm. Remote Sens. 148 (January), 184-196. 
[http://dx.doi.org/10.1016/j.ispspirs.2019.01.004](http://dx.doi.org/10.1016/j.ispspirs.2019.01.004). * Congrsiman et al. (2017) Congrsiman, J., Casals Fernandez, J., 2017. Obtaining optimal routes from point cloud surveys. ACS-Archit. City Environ. 11 (33), [http://dx.doi.org/10.5821/acc.11.33.5154](http://dx.doi.org/10.5821/acc.11.33.5154). * European Commission (2023) European Commission, 2023. Smart cities. URL: [https://commission.europa.eu/en-regional-and-urban-development/topics/cities-and-urban-development/city-initiatives/mart-cities,m](https://commission.europa.eu/en-regional-and-urban-development/topics/cities-and-urban-development/city-initiatives/mart-cities,m). (Accessed 10 January 2023). * Halbaya and Rix (2002) Halbaya, A., Rix-Rix, 2002. Automated compliance assessment for sidewalls using machine learning. In: Construction Research Congress, pp. 288-295. [http://dx.doi.org/10.1061/9780784482865.031](http://dx.doi.org/10.1061/9780784482865.031). * Heidari et al. (2022) Heidari, A., Navim
Pedestrian mobility networks play a primary role in historical urban areas, hence knowledge of the navigable space for pedestrians becomes crucial. Collection and inventory of those areas can be conducted by exploiting several techniques, including the analysis of point clouds. Existing point cloud processing techniques are typically developed for modern urban areas, which have standard layouts, and may fail when dealing with historical sites. We present a complete and automated novel method to tackle the analysis of pedestrian mobility in historic urban areas. Starting from a mobile laser scanning point cloud, the method exploits artificial intelligence to identify sidewalks and to characterize them in terms of paving material and geometric attributes. Output data are vectorized and stored in a very accurate high-definition shapefile representing the sidewalk network. It is used to automatically generate pedestrian routes. The method is tested in Sabbioneta, an Italian historic city. Paving material segmentation showed an accuracy of 99.08%; urban element segmentation showed an accuracy of 88.2%; automatic data vectorization required only 1.3% of manual refinement on the generated data. Future advancements of this research will focus on testing the method on similar historical cities, using different survey techniques, and exploiting other possible uses of the generated shapefile.
# DeepGI: An Automated Approach for Gastrointestinal Tract Segmentation in MRI Scans Ye Zhang\\({}^{*}\\) University of Pittsburgh, Pittsburgh, USA Yulu Gong Northern Arizona University, Flagstaff, USA Dongji Cui Cornell University, New York, USA Xinrui Li Cornell University, New York, USA Xinyu Shen Columbia University, Frisco, USA ## 1 Introduction Gastrointestinal (GI) tract cancers pose a significant global health challenge, impacting millions annually. Radiotherapy, a crucial treatment modality, delivers targeted radiation to tumors while minimizing damage to surrounding healthy tissues. Advanced technologies like MR-Linacs (Magnetic Resonance Linear Accelerator systems) offer real-time imaging during treatment, allowing dynamic adjustments to radiation beams based on tumor and organ positioning. Despite these advancements, the current standard practice in radiotherapy planning involves the manual delineation of the GI tract by radiation oncologists. This labor-intensive and time-consuming process is susceptible to inter-observer variability, hindering the efficiency of radiotherapy planning. In response, this paper proposes an innovative solution employing deep learning techniques for the automated segmentation of the GI tract in magnetic resonance imaging (MRI) scans. The primary objective is to develop a model capable of accurately delineating the colon, small intestine, and stomach regions. The proposed model integrates sophisticated architectures, including Inception-V4 for initial classification, UNet++ with a VGG19 encoder for 2.5D data processing, and Edge UNet for grayscale data segmentation. The methodology involves meticulous data preprocessing, incorporating 2.5D and grayscale processing to enhance the model's adaptability and robustness. The integrated segmentation architecture utilizes Inception-V4 for preliminary classification and UNet++ with VGG19 and Edge UNet for detailed segmentation. The strength of the model lies in its ability to provide automated, accurate, and efficient GI tract segmentation, addressing a critical gap in current radiotherapy planning practices. The paper contributes to the field by:1. **Automating GI Tract Segmentation:** Introducing a model that automates the segmentation of the GI tract in MRI scans, significantly reducing the manual effort required in radiotherapy planning. 2. **Integration of Advanced Architectures:** Leveraging state-of-the-art deep learning architectures, including Inception-V4, UNet++ with VGG19, and Edge UNet, to optimize accuracy in GI tract segmentation. 3. **Innovative Data Preprocessing:** Implementing meticulous data preprocessing techniques, including 2.5D and grayscale processing, to enhance the model's adaptability and robustness to various imaging scenarios. 4. **Efficiency Enhancement and Inter-Observer Variability:** Providing an efficient and accurate tool for clinicians to streamline the radiotherapy planning process, ultimately improving patient care and accessibility to advanced treatments while addressing inter-observer variability. The combination of these contributions positions the proposed model as a valuable advancement in the field of GI tract image segmentation, with potential implications for optimizing radiotherapy planning and improving patient outcomes. ## 2 Related Work The dynamic field of medical image segmentation has witnessed a surge in research endeavors, particularly in the context of gastrointestinal cancers. 
Understanding the landscape of related work is paramount to contextualize the advancements made in our study. In the quest for automated segmentation of gastrointestinal organs, our exploration extends to a comprehensive review of existing methodologies and breakthroughs, serving as the bedrock upon which our research stands. Kocak et al.[1] explores the application of deep learning in medical image segmentation, laying the foundation for subsequent advancements in the field. Zhou et al.[2] Investigating the utilization of U-Net in medical imaging, this paper highlights its effectiveness in capturing intricate details while maintaining spatial context. Lu et al.[3] introduces a processor for analyzing power consumption data and enhancing modeling accuracy through comparisons between actual and converted power cycles. Edge U-Net incorporates innovations in edge detection using Holistically-Nested Edge Detection (HED), contributing to improved segmentation by capturing edge features[4]. Tianbo et al.[5] introduces a communication-free swarm intelligence system with robust functionality, integrating adaptive gain control, flocking SWARM algorithms, and object recognition for real-world applications. Zhang et al.[6] Using a transformer module and deep evidential learning, TrEP outperforms existing models on pedestrian intent benchmarks.SuperCon presents a two-stage strategy for imbalanced skin lesion classification, emphasizing feature alignment and classifier fine-tuning, leading to state-of-the-art performance[7]. Maccioni et al.[8] provides insights into the challenges and advancements specific to segmenting gastrointestinal organs, considering anatomical variations and pathological conditions. Ronneberger et al.[9] introduces the U-Net architecture, specifically designed for biomedical image segmentation, laying the foundation for subsequent advancements in the field. The VGG architecture, with its deep convolutional networks, has significantly contributed to large-scale image recognition, forming the basis for subsequent image analysis models[10]. Szegedy et al.[11] proposes improvements to the Inception architecture, enhancing accuracy while reducing computational costs, which has implications for various computer vision tasks. SegNet introduces an encoder-decoder architecture for image segmentation, contributing to the development of efficient segmentation models[12]. Zhang et al.[13] introduced a Machine Vision-based Manipulator Control System at the 2020 Cyber Intelligence Conference, enhancing robotic arms for precise object manipulation in two dimensions. Zhang et al.[14][2023] introduce a novel deep learning model for breast cancer detection, achieving enhanced binary classification accuracy with a new pooling scheme and training method. Liao et al.[15] (2020) propose the Attention Selective Network (ASN) for superior pose-invariant face recognition, achieving realistic frontal face synthesis and outperforming existing methods.J Lin's paper introduces a deep learning framework for Bruch'smembrane segmentation in high-resolution OCT, aiding biomarker investigation in AMD and DR progression on both healthy and diseased eyes [16]. T Xiao et al. [17] present dGLCN, a dual-graph learning convolutional network for interpretable Alzheimer's diagnosis,outperforming in binary classification on ADNI datasets with subject and feature graph learning.J Hu et al. 
[18] present M-GCN, a multi-scale graph convolutional network for superior 3D point cloud classification on ModelNet40, emphasizing efficient local feature fusion.L Zeng et al. [19]suggest a two-phase framework for Alzheimer's diagnosis, enhancing interpretability through weighted assignments in the graph convolutional network.S Chen et al. [20] in Scientific Reports present a high-speed, long-range SS-OCT technology for anterior eye imaging with potential clinical applications. S Chen et al. [21] in Ophthalmology use ultrahigh resolution OCT to differentiate early age-related macular degeneration from normal aging by detecting sub-RPE deposits.CL Quintana et al. [22] in Investigative Ophthalmology Visual Science propose an automatic method using ultrahigh-speed OCT for precise identification of external limbal transition points in scleral lens fitting.L Wang and W Xia in the Journal of Futures Markets propose an analytical framework for volatility derivatives, efficiently incorporating rough volatility and jumps [23]. While prior works have made valuable contributions, they often fall short in addressing the specific challenges posed by GI tract segmentation. Our study aims to fill these gaps by integrating advanced architectures, implementing innovative data preprocessing techniques, and leveraging the strengths of diverse models. This comprehensive approach offers a more tailored and effective solution for accurate GI tract image segmentation in medical imaging applications. ## 3 Methodology ### Overall Architecture The proposed GI-Tract-Image-Segmentation model employs a tri-path approach, integrating advanced neural network architectures to achieve detailed segmentation. The overall architecture, depicted in Figure 1, consists of three distinct pathways, each serving a specific purpose. #### 3.1.1 Inception-V4 Pathway The first pathway initiates with Inception-V4, a state-of-the-art classification algorithm developed by the Google research team. Inception-V4 performs initial classification to identify healthy organs such as the colon, small intestine, and stomach in the input images. If no healthy organs are detected, the segmentation process concludes, generating a blank mask indicating the absence of segmentation. Figure 1: Overall Architecture #### 3.1.2 2.5D U-Net++ Pathway Simultaneously, the second pathway involves 2.5D data processing, incorporating depth information by stacking three consecutive MRI slices to create a 2.5D representation. The processed data is then input into the U-Net++ architecture, where the encoder utilizes VGG19. This pathway focuses on capturing detailed features in the segmented regions. #### 3.1.3 Grayscale Edge U-Net Pathway The third pathway processes input images in grayscale and employs Edge U-Net for segmentation. Edge U-Net integrates Holistically-Nested Edge Detection (HED) for enhanced edge detection. Grayscale data processing is chosen for its simplicity and efficiency in capturing essential information for segmentation tasks. The predictions from the 2.5D U-Net++ and grayscale Edge U-Net pathways are combined with the output from Inception-V4. The ensemble approach, achieved through averaging these predictions, ensures a comprehensive and accurate delineation of the GI tract regions in the input images. This integration leverages the strengths of 2.5D data processing, grayscale data processing, and initial classification with Inception-V4, enhancing the accuracy and robustness of the segmentation. 
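A schematic sketch of this gating-and-averaging step, assuming each segmentation pathway returns per-pixel probabilities of the same shape; the 0.5 binarisation threshold is an assumption for illustration and is not stated in the paper:

```python
import numpy as np

def combine_pathways(organ_detected, probs_unetpp_25d, probs_edge_unet, threshold=0.5):
    """Ensemble of the two segmentation pathways, gated by the Inception-V4
    classifier: a blank mask is returned when no healthy organ is detected,
    otherwise the per-pixel probabilities are averaged and thresholded."""
    if not organ_detected:
        return np.zeros(probs_unetpp_25d.shape, dtype=np.uint8)
    averaged = (probs_unetpp_25d + probs_edge_unet) / 2.0
    return (averaged > threshold).astype(np.uint8)
```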
### Data Preprocessing Our data preprocessing pipeline, as shown in Figure 2 encompasses two distinctive processes tailored to enhance the model's adaptability and generalization. #### 3.2.1 Spatial Augmentation Process The initial preprocessing step involves a spatial augmentation process designed to standardize and enhance the model's adaptability. Input images undergo resizing, ensuring uniformity at a resolution of 320x384 pixels through interpolation algorithms. Following resizing, a series of augmentation operations are applied to augment the dataset. These operations include horizontal flipping to simulate left-right mirror transformations, image rotation for improved directional robustness, elastic transformation mimicking non-linear distortions during image capture, and coarse random dropout simulating occlusions or missing data. The output of this process yields grayscale images, reducing color data dimensions for subsequent analysis and model training. #### 3.2.2 Intensity Augmentation Process - Grayscale In addition to the spatial augmentation process, grayscale images undergo a dedicated intensity augmentation process. After resizing to a consistent resolution of 320x384 pixels, these grayscale images are subjected to a set of intensity adjustments. These adjustments aim to enhance the model's sensitivity to variations in pixel intensity, providing a nuanced understanding of grayscale features. The resulting grayscale images, enriched with enhanced intensity information, contribute to the overall diversity and robustness of the dataset for subsequent analysis and model training. Figure 2: Data processing pipeline #### 3.2.3 2.5D Image Processing The second preprocessing process adopts 2.5D image processing techniques, enhancing traditional 2D image processing methods by introducing additional depth information. In this step, three consecutive MRI slices are stacked to generate 2.5D images simulating 3D volumetric data. This technique, validated in previous studies, preserves flat features while introducing spatial context between slices, providing richer contextual information for the model. Similar to the first preprocessing process, these images undergo the same augmentation operations to increase dataset diversity and improve model training effectiveness. Due to 16-bit RLE-encoded masks in the training labels, visual analysis is challenging. Masks are converted to pixels, and marked regions are highlighted, as shown in Figure 3. ### Model Architectures #### 3.3.1 Inception-V4 for Initial Classification Inception-V4, a product of Google's research team, assumes the role of the initial classifier in our model. The architecture prioritizes accuracy while mitigating computational costs through the utilization of smaller CNN sequences in lieu of more intricate structures. The initial block comprises filters of varying sizes (1x1, 3x3, and 5x5) and a max-pooling layer with a 2x2 filter size. Furthermore, the incorporation of batch normalization and residual connections contributes to additional performance optimization. #### 3.3.2 U-Net++ with VGG19 Encoder The image segmentation model combines the U-Net architecture with VGG19 as the encoder, creating a synergistic U-Net++ framework. In this configuration, VGG19 functions as the encoder, extracting features from the input image, while U-Net++ orchestrates the image segmentation process. 
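For illustration, a U-Net++ with a VGG19 encoder can be instantiated with the segmentation_models_pytorch package; this is an assumed, illustrative configuration rather than the authors' code (the class count of 3 corresponds to stomach, small intestine and colon, and in_channels=3 matches the stacked 2.5D input):

```python
import segmentation_models_pytorch as smp

# U-Net++ decoder with a VGG19 encoder pretrained on ImageNet.
model = smp.UnetPlusPlus(
    encoder_name="vgg19",
    encoder_weights="imagenet",
    in_channels=3,   # three stacked consecutive slices (2.5D input)
    classes=3,       # stomach, small intestine, colon
)
```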
Within U-Net++, the encoder comprises convolutional layers from VGG19, and the decoder consists of multiple convolutional and upsampling layers, intricately connected to the encoder through skip connections. These skip connections play a crucial role in enabling the decoder to leverage the high-resolution features from the encoder for precise image segmentation. #### 3.3.3 Edge U-Net Edge U-Net integrates edge-aware segmentation by substituting U-Net's encoder convolutional blocks with MB-conv blocks, as shown in Figure 4. Additional skip connections are introduced to capture edge data from the input image. Edge detection is performed using the Holistically-Nested Edge Detection (HED) method, generating multi-scale edge images that are fused with the final edge features after upsampling. Figure 3: Label Visualization ### Evaluation Metrics #### 3.4.1 Dice Coefficient (DC) The Dice Coefficient, a fundamental metric in segmentation tasks, quantifies the similarity between predicted masks (PM) and ground truth masks (OM). It is defined as: \[DC(PM,OM)=\frac{2\times|PM\cap OM|}{|PM|+|OM|} \tag{1}\] This metric provides a comprehensive assessment of the overlap between predicted and ground truth masks, offering insights into the segmentation accuracy. #### 3.4.2 3D Hausdorff Distance To evaluate the spatial dissimilarity between two 3D masks, we employ the 3D Hausdorff Distance. This metric measures the largest distance from a voxel in the predicted mask (PM) to the nearest voxel in the ground truth mask (OM), taken symmetrically over both masks: \[HD(PM,OM)=\max\left(\max_{pm\in PM}\min_{om\in OM}\lVert pm-om\rVert,\;\max_{om\in OM}\min_{pm\in PM}\lVert pm-om\rVert\right) \tag{2}\] The 3D Hausdorff Distance serves as a crucial metric for capturing spatial discrepancies between predicted and ground truth masks, providing insights into segmentation robustness. #### 3.4.3 Composite Score The final score is computed as a weighted combination of the Dice Coefficient and 3D Hausdorff Distance: \[\text{Score}=0.4\times\text{Dice Coefficient}+0.6\times 3\text{D Hausdorff Distance} \tag{3}\] This composite score leverages both metrics, offering a balanced evaluation that considers both overlap accuracy and spatial dissimilarity. The combination provides a holistic assessment of the segmentation performance, capturing nuances that individual metrics might overlook. ## 4 Experimental Results Our experiments aimed to evaluate different network models with diverse encoders for GI tract image segmentation on both grayscale and 2.5D preprocessed datasets. The findings and conclusions from these experiments are outlined below: Figure 4: Edge U-Net Architecture ### Grayscale Image Segmentation In the context of grayscale images, a thorough evaluation of various network models employing distinct encoders was conducted. The validation scores are summarized in Table 1. The results underscore that Edge UNet exhibits the most effective performance for grayscale image segmentation, outperforming other models in this domain. ### 2.5D Image Segmentation In the realm of 2.5D images, the focus was on UNet++ with various encoders, with the following validation scores, as shown in Table 2. The validation scores affirm that UNet++ with VGG19 as the encoder excels in the domain of 2.5D images, showcasing superior segmentation performance. ## 5 Conclusion This paper introduces a novel approach to automate the segmentation of gastrointestinal (GI) tract regions in MRI scans for radiotherapy planning. 
The proposed model, integrating advanced deep learning architectures such as Inception-V4, UNet++ with VGG19 encoder, and Edge UNet, addresses the manual and time-consuming segmentation process in current radiotherapy planning. The model's ability to provide automated, accurate, and efficient GI tract segmentation marks a significant advancement in the field, offering a valuable tool for clinicians to streamline the planning process and improve patient care. The experimental results demonstrate the effectiveness of the proposed model, with Edge UNet performing exceptionally well in grayscale image segmentation, and UNet++ with VGG19 excelling in the domain of 2.5D images. These findings highlight the versatility and robustness of the model in handling different types of input data. Overall, this work contributes to the ongoing efforts in medical image segmentation, specifically addressing challenges related to GI tract segmentation, and provides a promising solution for enhancing radiotherapy planning efficiency. \\begin{table} \\begin{tabular}{|c|c|c|} \\hline **Model** & **Encoder** & **Validation Score** \\\\ \\hline UNet & ResNet50 & 0.71599 \\\\ UNet & Inception-V4 & 0.71002 \\\\ UNet & Xception & 0.73761 \\\\ UNet & EfficientNet-B0 & 0.68033 \\\\ UNet & VGG19 & 0.78925 \\\\ \\hline UNet++ & ResNet50 & 0.7899 \\\\ UNet++ & Inception-V4 & 0.80095 \\\\ UNet++ & Xception & 0.79711 \\\\ UNet++ & EfficientNet-B0 & 0.71372 \\\\ UNet++ & VGG19 & 0.80717 \\\\ \\hline Edge UNet & - & 0.84046 \\\\ \\hline \\end{tabular} \\end{table} Table 1: Grayscale Image Segmentation Results \\begin{table} \\begin{tabular}{|c|c|c|} \\hline **Model** & **Encoder** & **Validation Score** \\\\ \\hline UNet++ & ResNet50 & 0.80138 \\\\ UNet++ & Xception & 0.7961 \\\\ UNet++ & VGG19 & 0.84984 \\\\ \\hline \\end{tabular} \\end{table} Table 2: 2.5D Image Segmentation Results ## References * [1]K. Kocak, A. H. Yardimci, S. Yuzkan, A. Keles, O. Altun, E. Bulut, O. N. Bayrak, and A. A. Okumus (2023-06) Transparency in artificial intelligence research: a systematic review of availability items related to open science in radiology and nuclear medicine. Academic Radiology30 (10), pp. 2254-2266. Cited by: SS1. * [2]Z. Chen, J. Li, Y. Zou, and T. Wang (2023-06) Etu-net: edge enhancement-guided u-net with transformer for skin lesion segmentation. Physics in Medicine & Biology69 (1), pp. 015001. Cited by: SS1. * [3]Z. Chen, D. Liu, Y. Zou, and T. Wang (2023-06) Supercon: supervised contrastive learning for imbalanced skin lesion classification. arXiv preprint arXiv:2202.05685. Cited by: SS1. * [4]Z. Chen, X. Li, Y. Zou, and T. Wang (2023-06) ETU-net: edge enhancement-guided u-net with transformer for skin lesion segmentation. Physics in Medicine & Biology69 (1), pp. 015001. Cited by: SS1. * [5]C. Chen, J. Li, Y. Zou, and T. Wang (2023-06) ETU-net: edge enhancement-guided u-net with transformer for skin lesion segmentation. Physics in Medicine & Biology69 (1), pp. 015001. Cited by: SS1. * [6]C. Chen, D. Zhuang, and J. M. Chang (2022-06) Supercon: supervised contrastive learning for imbalanced skin lesion classification. arXiv preprint arXiv:2202.05685. Cited by: SS1. * [7]C. Chen, D. Zhuang, and J. M. Chang (2022-06) Supercon: supervised contrastive learning for imbalanced skin lesion classification. arXiv preprint arXiv:2202.05685. Cited by: SS1. * [8]C. Chen, D. Zhuang, and J. M. Chang (2022-06) Supercon: supervised contrastive learning for imbalanced skin lesion classification. arXiv preprint arXiv:2202.05685. 
Cited by: SS1. * [9]C. Chen, D. Zhuang, and J. M. Chang (2022-06) Supercon: supervised contrastive learning for imbalanced skin lesion classification. arXiv preprint arXiv:2202.05685. Cited by: SS1. * [10]C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna (2016) Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2818-2826. Cited by: SS1. * [11]C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna (2016) Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2818-2826. Cited by: SS1. * [12]C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna (2016) Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2818-2826. Cited by: SS1. * [13]J. Lin and Z. Li (2020) Deep-learning enabled accurate bruch's membrane segmentation in ultrahigh-resolution spectral domain and ultrahigh-speed swept source optical coherence tomography. PhD Thesis, Massachusetts Institute of Technology. Cited by: SS1. * [14]J. Lin, W. Liu, N. Wu, and Z. Wente (2022-06) Decoupled modeling methods and systems. External Links: 2202.05685 Cited by: SS1. * [15]J. Lin (2022) Deep-learning enabled accurate bruch's membrane segmentation in ultrahigh-resolution spectral domain and ultrahigh-speed swept source optical coherence tomography. PhD Thesis, Massachusetts Institute of Technology. Cited by: SS1. * [16]J. Lin, A. Kot, T. Guha, and V. Sanchez (2020-06) Attention selective network for face synthesis and pose-invariant face recognition. In [2020 IEEE International Conference on Image Processing (ICIP)], pp. 748-752. Cited by: SS1. * [17]J. Lin, X. Wang, Z. Liao, and T. Xiao (2023-06) M-gcn: multi-scale graph convolutional network for 3d point cloud classification. In [2023 IEEE International Conference on Multimedia and Expo (ICME)], pp. 924-929. Cited by: SS1. * [18]J. Lin, W. Liu, and Z. Wente (2020-06) Deep learning for imbalanced skin lesion segmentation. In [2020 IEEE International Conference on Computer-Assisted Intervention (ICME)], pp. 1000-1004. Cited by: SS1. * [19]J. Lin, W. Liu, and Z. Wente (2020-06) Decoupled modeling methods and systems. External Links: 2002.05685 Cited by: SS1. [MISSING_PAGE_POST] * [20] Chen, S., Potsaid, B., Li, Y., Lin, J., Hwang, Y., Moult, E. M., Zhang, J., Huang, D., and Fujimoto, J. G., \"High speed, long range, deep penetration swept source oct for structural and angiographic imaging of the anterior eye,\" _Scientific reports_**12**(1), 992 (2022). * [21] Chen, S., Abu-Qamar, O., Kar, D., Messinger, J. D., Hwang, Y., Moult, E. M., Lin, J., Baumal, C. R., Witkin, A., Liang, M. C., et al., \"Ultrahigh resolution oct markers of normal aging and early age-related macular degeneration,\" _Ophthalmology Science_**3**(3), 100277 (2023). * [22] Quintana, C. L., Chen, S., Lin, J., Fujimoto, J. G., Li, Y., and Huang, D., \"Anterior topographic limbal demarcation with ultrawide-field oct,\" _Investigative Ophthalmology & Visual Science_**63**(7), 1195-A0195 (2022). * [23] Wang, L. and Xia, W., \"Power-type derivatives for rough volatility with jumps,\" _Journal of Futures Markets_**42**(7), 1369-1406 (2022).
Gastrointestinal (GI) tract cancers pose a global health challenge, demanding precise radiotherapy planning for optimal treatment outcomes. This paper introduces a cutting-edge approach to automate the segmentation of GI tract regions in magnetic resonance imaging (MRI) scans. Leveraging advanced deep learning architectures, the proposed model integrates Inception-V4 for initial classification, UNet++ with a VGG19 encoder for 2.5D data, and Edge UNet for grayscale data segmentation. Meticulous data preprocessing, including innovative 2.5D processing, is employed to enhance adaptability, robustness, and accuracy. This work addresses the manual and time-consuming segmentation process in current radiotherapy planning, presenting a unified model that captures intricate anatomical details. The integration of diverse architectures, each specializing in unique aspects of the segmentation task, signifies a novel and comprehensive solution. This model emerges as an efficient and accurate tool for clinicians, marking a significant advancement in the field of GI tract image segmentation for radiotherapy planning. Semantic Segmentation, Medical Image Segmentation, Inception-V4, UNet++, Edge UNet, 2.5D Data Processing
# Evaluation of Zika Vector Control Strategies Using Agent-Based Modeling Chathika Gunaratne Corresponding author: [email protected] Adaptive Systems Laboratory Mustafa Ilhan Akbas 1Complex Adaptive Systems Laboratory Ivan Garibay 1Complex Adaptive Systems Laboratory Ozlem Ozmen 1Complex Adaptive Systems Laboratory ## 1 Introduction Zika, first identified in Central Africa as a sporadic epidemic disease, has grown into a pandemic with cases reported from every continent within the span of a year. South America is currently most heavily hit with over 4200 suspected cases reported in Brazil itself [1]. The primary vector of the Zika virus is the Aedes Aegypti mosquito also responsible for the spread of yellow fever, dengue, malaria and chikungunya. At the time of writing, the Centers for Disease Control has issued several reports warning the public of the potential devastation of public health that Zika poses in the US. Florida's warm and humid environment, in particular, providesan excellent breeding ground for A. Aegypti. Public health administration departments like the Florida Keys Mosquito Control District have been monitoring and controling mosquito populations in the region. In addition to the traditional population control methods such as destruction of breeding ground through public cleanups, DDT/insecticide spraying, etc. two biological methods have gained popularity in recent years. The first, the Release of Insects carrying a Dominant Lethal gene (RIDL), involves the release of a large number of genetically engineered mosquitoes into the wild [2]. RIDL uses a'suicidal' gene which prevents the offspring of the genetically modified mosquito from maturing into adulthood. The second method, an incompatible insect technique (IIT), involves the release of mosquitoes infected with the intracellular bacteria, Wolbachia pipentis, which occurs naturally in insects. However at high concentrations, Wolbachia has been proven to reduce the adult lifespan of A. aegypti by up to 50%[3]. Both vector control techniques have potential long-term difficulties despite their ability to reduce mosquito numbers upon release. The inability of offspring resulting from RIDL to survive into adulthood also means that the Dominant Lethal gene will not be inherited [4]. Therefore, regular releases must be made to maintain long-term sustainability of this approach. On the other hand, Wolbachia infection may transmitted from parent to child through reproduction and remain in the population throughout generations. Yet, spatial and climatic constraints may limit Wolbachia-infected adults from finding mates in the wild or result in infected females being killed off prior to oviposition. The production of large volumes of RIDL or Wolbachia-infected A. aegypti may be costly. Attempts to establish a sustained infection of Wolbachia within A. aegypti populations in the wild have been attempted [5]. Therefore, identifying the long-term sustainability and required release volumes of mosquitoes is important. Despite the difficulty of suppressing the mosquito population as a whole, A. aegypti is quite vulnerable to climatic and spatial conditions on an individual scale. In particular, the fetal/aquatic lifespan (time spent in egg, larval and pupal stages), adult lifespan, mortality rates and probability of emergence are highly sensitive to variations in the temperature. 
The Key West, despite having a tropical climate with a yearly average temperature range of 10 \\({}^{\\circ}\\)C, has been shown to have a reasonable fluctuation in mosquito population throughout the year. In addition to climatic variations, mosquito survival is heavily dependent on abundance of vegetation, human hosts and breeding sites. The male mosquito depends on vegetation for food, while it is the female mosquito that feeds on the blood of mammals. The female mosquito is attracted to hosts by CO_2 and pheromone emissions and can detect hosts from upto 30m away [6, 7]. Vegetation zones must be within reasonable proximity of host locations in order for males to be able to reach females for mating. Finally, there must be an abundance of breeding sites (exposed stagnant water) upon which females must oviposition (lay eggs). In an effort to identify the sustainability of the two vector control techniques, we use agent-based modeling to simulate the yearly fluctuation of mosquito pop ulation dynamics in the Key West. A suburban neighborhood is selected and segmented into vegetation, houses(\\(CO_{2}\\) zones) and breeding zones to capture the spatial constraints experienced by the local mosquito population. Satellite imagery of the neighborhood is processed to identify the exact location of these zones. In addition to the spatial constraints, the monthly temperature variation of the Key West is also simulated as a climatic constraint. Mosquito agents are released into this environment and their population characteristics are observed throughout time. After validating the yearly adult population fluctuation produced by this model, we use it to simulate and compare the two vector control strategies mentioned. ## 2 Background Modeling and simulation have been used to study environmental and animal monitoring problems [8][9][10]. For the mosquito population dynamics modeling, there is a variety of approaches in the literature including analytical models, differential equation models and ABMs. One of the more prominent mosquito dynamics models in the literature is CIMSim [11], uses dynamic life-table modeling of life-stage durations of the aedes gonotrophic cycle, as influenced by environmental conditions such as temperature and humidity. Despite its lack of spatial properties, CIMSim has been recognized as the standard mosquito population dynamics model by the UNFCCC (United Nations Framework Convention on Climate Change). Other similar models include DyMSim [12], TAENI2 [13] and the use of Markov chain modeling in [14]. A spatially explicit version of CIMSim, Skeeter Buster is also commonly used for mosquito population estimation [15]. ABMs differ from the other models by capturing the spatial interactions among individuals which emerge into macro scale results of small changes in individual characteristics or behaviour of the agents. Our approach employs a spatial model of the A. aegypti population by integrating an ABM with geographic information. Spatial models are used in epidemiology to study population dynamics or to evaluate methods for population control. Evans and Bishop [2] propose a spatial model based on cellular automata to simulate pulsed releases and observe the effects of different mosquito release strategies in Aedes aegypti population control. The results of the model show the importance of release pulse frequency, number of release sites and the threshold values for release volume. 
Another spatial approach for simulating Aedes aegypti population is SimPopMosq [16], an ABM of representative agents for mosquitoes, some mammals and objects found in urban environments. SimPopMosq is used to study the active traps as a population control strategy and includes no sterile insect agents or techniques. The framework by Arifin et al. [8] integrates an ABM with a geographic information system (GIS) to provide a spatial system for exploring epidemiological landscape changes (distribution of aquatic breeding sites and households) and their effect on spatial mosquito population distribution. Lee et al. [17] also investigate the influence of spatial factors such as the release region size on population control. The method uses a mathematical model to study the relation among the location related parameters. Isidoro et al. [18] used LAIS framework to evaluate the RIDL for Aedes aegypti population. The ABM includes independent decision-making agents for mosquitoes and pre-determined rule based elements for environmental objects such as oviposition spots. However the model lacks important factors such as a realistic map or temperature effects. An observation in most of these studies is the lack incorporation of male mosquito dynamics and their requirement to travel between vegetation for nutrition and mates. There are also approaches integrating the mosquito population control models with epidemic models. Deng et al. [19] proposed an ABM to simulate the spread of dengue, the main vector of which is Aedes aegypti as well. The mobility of the mosquitoes in this model are defined by a utility function, which is affected by the population, wind and landscape features. However, the model lacks a granular spatial discretization and only a small number of agents are used. Moulay and Pigne [20] studied Chikungunya epidemic with a metapopulation network model representing both mosquito and human dynamics on an island. The model is created by considering both the density and mobility of populations and their effects on the transmission of the disease. ## 3 Methodology We model the population dynamics of mosquitoes in an agent-based model implemented in RePast Simphony [21] consisting of agents embodying the behavior of A. aegypti and feeding and breeding off of designated zones in a geographical environment with monthly changes in average temperature. The distribution of the zones provided spatial constraints on the total population while changing temperature applied climatic constraints. Zones were either locations with CO\\({}_{2}\\) (human hosts), vegetation or breeding sites. The distribution of these zones were determined using geographical analysis of a suburban neighborhood in Key West, Florida. The monthly average temperature in Key West was obtained from [22]. ### Life Stages, Processes, Circadian Rhythm and Behavior Modes A. aegypti has four life stages and undergoes metamorphosis between these stages. The first three life stages (egg, larva and pupa) are spent in water while the final stage (adult) is spent as an airborne insect. Adult females feed on blood of mammal hosts, while males gain nutrition from vegetation. Female mosquitoes are attracted to hosts through \\(CO_{2}\\) and pheromones upon which the perform a process known as klinotaxis to reach their host. The female A. aegypti prefers to lay eggs closer to urban areas and is considered a domestic pest. The life stages of A. aegypti are simulated in our model. 
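Although the simulation itself is implemented in RePast Simphony, the state carried by each mosquito agent can be sketched compactly in Python; the field names below are illustrative rather than the model's actual variables:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Stage(Enum):
    FETAL = auto()   # egg, larval and pupal stages, spent in the breeding site
    ADULT = auto()

@dataclass
class Mosquito:
    stage: Stage = Stage.FETAL
    female: bool = True
    age_days: float = 0.0
    energy: float = 0.0       # replenished by blood meals (females) or vegetation (males)
    fertilized: bool = False
    wolbachia: bool = False   # carries the Wolbachia infection
    ridl: bool = False        # carries the dominant lethal gene
```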
The lifecycle of the simulated mosquito agents is described in Fig. 1. For the purpose of our study, the egg, larva, and pupa stages were considered as a single stage, FETAL, and considered to be inanimate. During the FETAL stage the mosquito remains within the confines of the breeding site. A FETAL has a probability of dying \\(M_{F}\\) (mortality rate, probability of maturing: \\(P_{M}=1-M_{F}\\)). Once the FETAL agent has survived for \\(D_{f}\\) days, it emerges into an adult. Emergence is probabilistic and there is \\(M_{E}\\) chance of death during emergence (probability of emergence: \\(P_{E}=1-M_{E}\\)). Emerged ADULTs live for \\(D_{A}\\) days. ADULTs may die during their life processes or due to old age at a rate of \\(M_{A}\\). New FETALs are created through reproduction with a probability of \\(P_{R}\\). \\(P_{R}\\) depends on an individual adults ability to find food sources, feed, seek mates, mate successfully, seek breeding zones and oviposition. These processes are constrained by the spatial distribution of the zones and restriction of \\(D_{A}\\) due to temperature. Further, mating success is probabilistic (probability of successful mating: \\(p_{m}\\)). Adult females may be killed by human hosts while feeding (daily probability of female being killed by human host: \\(p_{h}\\)). \\(M_{A}\\) and \\(P_{R}\\), are therefore, subject to various factors and highly variable depending on the individual mosquito's sex, location in relation to other mosquitoes, location in relation to zones and the temperature of the environment. However, precalculation of \\(M_{A}\\) and \\(P_{R}\\) are not required due to the computational nature of agent-based modeling. In our model, all adult mosquitoes emerge from the FETAL process into the FOOD_SEEKING process. As shown in Figure 2, when in range of an appropriate food source, the agent switches to the FOOD_ENCOUNTERED process. The female mosquito searches for blood meals by seeking out CO\\({}_{2}\\) sources within the environment, while males seek out vegetation zones. After a period of feeding, the mosquitoes enter the mating phases. The female mosquito agents transition Figure 1: Lifecycle of the mosquito agent. to RESTING until fertilization, upon which she enters into the OVIPOSITIONING phase. Meanwhile, male mosquitoes enter the MATING phase and seek out potential mates, until their energy is depleted upon which they enter the RESTING phase. This completes the daily rhythm of the mosquito. There are certain conditions of satisfaction for the mosquito agents to transition from one process to another as described in Figure 2. In order for a mosquito to enter into any of the processes described above other than the FETAL process, it must be in the ADULT phase. In order for a female to produce eggs, it must have enough energy or be fed. To enter OVIPOSITIONING, the female must also be fertilized by a male mate. The adult mosquito agents in the model follow a daily rhythmic behavior depending on their current state. A. aegypti circadian rhythms reported by Chadee [23] demonstrated that blood feeding, oviposition, sugar feeding and copulation occurred mostly between 06-09 hours and between 16-18 hours. The mosquitoes rest for the remaining time of the day except atypical biting. Hence, the daily time was partitioned into eight equal segments in our model. 
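A condensed sketch of the daily stage and mortality bookkeeping implied by this description, continuing the state sketch above; it assumes, purely for illustration, that the mortality rates are applied as per-day hazards, with \(D_{F}\), \(D_{A}\), \(M_{F}\), \(M_{E}\) and \(M_{A}\) being the temperature-dependent parameters defined earlier (the actual rules live in the RePast implementation):

```python
import random

def daily_update(m, M_F, M_E, M_A, D_F, D_A):
    """One day of ageing for a Mosquito: FETAL agents may die or emerge,
    ADULT agents may die of old age or daily hazards. Returns False on death."""
    m.age_days += 1
    if m.stage is Stage.FETAL:
        if random.random() < M_F:          # fetal mortality
            return False
        if m.age_days >= D_F:
            if random.random() < M_E:      # death during emergence
                return False
            m.stage = Stage.ADULT
            m.age_days = 0
    else:
        if m.age_days >= D_A or random.random() < M_A:
            return False                   # old age or adult mortality
    return True
```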
Following the information given by Chadee [23], oviposition was allowed during the second and fifth segments of the day while feeding was allowed during the second, third, fifth and sixth segments of the day. Climatic constraints on mosquitoes are considered to occur through varying monthly temperature. Field studies of A. aegypti in the wild and laboratory experiments have established the relationships of average temperature and mortality rates, probability of adult emergence from pupa and life stage durations. There are several studies in the literature [24, 25], which have fit mathematical models relating aquatic/fetal mortality, adult mortality, fetal duration, adult duration and probability of emergence with temperature. Accordingly the model allows for temperature dependency of several parameters effecting the mosquito lifecycle including FETAL and ADULT mortality rates and durations and oviposition rate. Figure 2: State diagram for the adult mosquito agent. ### Geographical Environment The simulations were run on a suburban neighborhood (Lat: -81.78095, Lon: 24.55350) in the Key West, FL. An area of 29584 \\(m^{2}\\) was simulated consisting of two blocks of housing. Satellite imagery was obtained through Google Earth and processed using QGIS (Fig. 3(top-left)). After geomapping of the satellite image and noise cancelation, the image was converted to grayscale and segmented through a k-means unsupervised learning algorithm searching for two classes by pixel intensity (Fig. 3(top-right)). The resulting polygons were then overlain with a regular grid of points. Each point having 10m spacing between them. The points were then classified according to which class of polygon they intersected on the map image. The result was a representation of the distribution of vegetation zones and urban areas in this neighborhood (Fig. 3(bottom-left)). The point layer was then imported into Repast as an ESRI shapefile. Each point was then made the center of a circular vegetation zone or CO\\({}_{2}\\) source with radius (\\(R_{C}\\)) or (\\(R_{V}\\)), respectively. The prevalence of breeding zones depended on the house index (breeding sites per house per week) in the region. The average house index as reported by Figure 3: Satellite imagery of the suburban neighborhood simulated in the study being processed and converted to zones simulation in RePast. FKMCD was approximately 20% in 2010[26, 27]. Accordingly, 20% of the CO\\({}_{2}\\) zones were, randomly, also designated as breeding zones with radius (\\(R_{B}\\)). An example of the distribution of zones within the simulated region is shown in (Fig. 3)(bottom-right). ### Vector Control Strategies Superinfection of mosquito populations in the wild with the naturally occurring intracellular bacteria, Wolbachia (also referred to as Incompatible Insect Technique (IIT)) result in Cytoplasmic Incompatibility. Crosses between infected males and uninfected females result in no offspring and has been used in suppression of mosquito populations in the wild [28]. Most pathogens transmitted by mosquitoes require a development period before they can be transmitted to a human host [3]. The time period from pathogen ingestion to potential infectivity, the extrinsic incubation period (EIP), is about 10 days for Zika. Wolbachia has been shown to reduce the lifespan of A. aegypti by upto 50% [3]. 
A reduced lifetime of adult female mosquitoes lowers the probability of adult females biting humans and consequently mitigates the transmission of vector-borne diseases such as Zika. Sustained Wolbachia infection has been induced in wild mosquito populations by releasing infected females (crosses between infected females and uninfected or infected males result in Wolbachia-infected offspring) [29, 30, 3]. On the other hand, RIDL depends on the artificial genetic alteration of the mosquito to become dependent on tetracycline. Mosquitoes reared in the laboratory are provided with tetracycline and then released into the wild. The resulting offspring die before reaching adulthood due to the absence of tetracycline in the wild. RIDL mosquitoes are usually male, to avoid increasing human-biting mosquitoes by releasing females [31]. Further, unlike Wolbachia infection, female release is unnecessary since a sustained introduction of RIDL cannot be maintained, as all offspring are killed. Potential disadvantages of RIDL have been discussed in [4].

Mosquito agents in the model could be infected with Wolbachia. Mating between uninfected females and infected males results in \(D_{A}=0\) for all offspring. Mating between infected females and uninfected/infected males results in all offspring being infected with Wolbachia; \(D_{A}\) of these offspring will be halved. Mosquito agents may also carry the RIDL gene. Only released RIDL mosquitoes are able to survive in the environment as adults. All offspring resulting from a RIDL parent will inherit RIDL and have \(D_{A}=0\). Finally, for RIDL mosquitoes \(p_{m}=0.5\), as reported in [4].

## 4 Experiments

The agent-based model was used to estimate the Aedes aegypti population in the Key West. The monthly population fluctuation matched that shown in catch rates from the FKMCD [26, 27]. Populations were lowest during late winter and highest during the summer and late fall months. The model was then used to evaluate the two control strategies (RIDL and Wolbachia infection) over a period of three years. For each experiment, the simulation was allowed to run for 2 simulation years prior to data collection, in order to allow the agents to fit the constraint patterns of the environment. Data collection began after the \(2^{nd}\) simulation year and was performed for 3 simulation years. FKMCD [26, 27] findings indicate that the mean maximum of Aedes aegypti caught in traps set up near households is 20 per trap per night. Hence, our simulations were initialized with 20 larvae in each breeding site. Values of the other parameters used in all simulation experiments and their sources are listed in Table 1.

### Population Estimation

Using the model described above, we estimated the Aedes aegypti population in the Key West neighborhood considered. The adult populations closely followed the monthly temperature fluctuations (Figure 4). As shown in Figures 5 and 6, adult populations were highest during October and lowest during March. The male population slightly exceeded the female population.
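The confidence intervals reported next are aggregated across repeated simulation runs; a minimal sketch of this aggregation (a normal-approximation 95% interval on the mean count, with purely illustrative per-run numbers) is:

```python
import math

def mean_and_ci(counts, z=1.96):
    """Mean monthly adult count across runs with a normal-approximation 95% CI."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    half_width = z * math.sqrt(var / n)
    return mean, (mean - half_width, mean + half_width)

# Illustrative only: per-run October female counts from repeated simulations.
october_females = [980, 990, 985, 992, 988, 979, 991, 986]
print(mean_and_ci(october_females))
```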
During October, the mean count of females was 986 (95% CI: [979, 993]) and the mean count of males was 1031 (95% CI: [1024, 1039]). In March, the mean count of females was 316 (95% CI: [313, 319]) and the mean count of males was 333 (95% CI: [330, 336]).

\begin{table}
\begin{tabular}{c l r r}
\hline
\(Parameter\) & \(Definition\) & \(Value\) & \(Source\) \\
\hline
\(spd\) & displacement speed & 0.5 - 1 m/s & [16, 6] \\
\(D_{f1}\) & Mean duration of egg stage & \(f(T)\) & [32] \\
\(D_{f2}\) & Mean duration of larval and pupal stages & \(f(T)\) & [24] \\
\(D_{F}\) & Mean duration of FETAL stage & \(D_{f1}+D_{f2}\) & \\
\(D_{A}\) & Mean duration of ADULT stage & \(f(T)\) & [24] \\
\(M_{F}\) & FETAL mortality rate & 0.3 & [24] \\
\(m_{l}\) & Probability of successful emergence & 0.3 & [24] \\
\(r_{c}\) & Detection range for CO\({}_{2}\) zones & 30 m & [16, 6] \\
\(r_{v}\) & Detection range for vegetation zones & 30 m & [16, 6] \\
\(r_{v}\) & Detection range for breeding zones & 30 m & [16, 6] \\
\(r_{m}\) & Detection range for mates & 30 m & [16, 6] \\
\(r_{m}\) & Number of mates per male per day & 5 & [33] \\
\(r_{m}\) & Probability of successful mating & 0.7 & [16] \\
\(r_{m}\) & Number of times a female can lay eggs in one lifetime & 5 & [33] \\
\(r_{m}\) & Eggs laid in one oviposition & 63 & [2] \\
\(r_{m}\) & Duration of one oviposition & 3-4 days & [2] \\
\(d_{w}\) & ADULT duration decrease due to Wolbachia & 50\% & [3] \\
\(d_{l}\) & ADULT duration decrease due to lethal gene & 100\% & [4] \\
\(d_{l}\) & Mating success of RIDL males & 50\% & [4] \\
\hline
\end{tabular}
\end{table}
Table 1: Parameters used in the model (\(T:\) Monthly Temperature)

Figure 4: Monthly temperature fluctuation over three years

Figure 5: Simulation results for female adult population over three years

Figure 6: Simulation results for male adult population over three years

### Simulating Vector Control Strategies

We adopted vector control strategies from field trials for both the Wolbachia technique and RIDL. Attempts have been made to establish a sustained Wolbachia infection in the Aedes aegypti population in Machans Beach, Australia [5]. We simulated the same release quantities per urban zone in our model on the Key West, to reflect the release quantities used in the field trial. This resulted in two releases being simulated. The first release consisted of 253 males and 253 females, weekly, over a period of 15 weeks. In the second release, 138 males were released weekly over a period of 10 weeks. Releases were performed at every fourth urban zone (as in the field trials) and initiated in the first week of April. A total of 8970 adults were released. The release strategy for RIDL was adopted from field trials conducted in the Cayman Islands [31]. To allow for comparison, a total of 8970 adults were released over 25 weeks in each simulation run: 368 males were released weekly over the first 24 weeks and 138 in the last week. Again, releases were performed at every fourth urban zone. For both the Wolbachia and RIDL cases, data was aggregated over 90 runs. As the release period for both cases was 25 weeks, the final release occurred in the first week of October in both cases.
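As a quick sanity check, the release totals quoted above (8970 adults for each strategy) follow directly from the stated weekly quantities; the snippet below is only bookkeeping, not part of the model itself.

```python
# Wolbachia releases: 253 males + 253 females weekly for 15 weeks,
# then 138 males weekly for 10 weeks.
wolbachia_total = (253 + 253) * 15 + 138 * 10
assert wolbachia_total == 8970

# RIDL releases: 368 males weekly for 24 weeks, plus 138 in the final week.
ridl_total = 368 * 24 + 138
assert ridl_total == 8970

def weekly_release_counts(strategy):
    """Weekly release counts over the 25-week period (index 0 = first week of April)."""
    if strategy == "wolbachia":
        return [253 + 253] * 15 + [138] * 10
    if strategy == "ridl":
        return [368] * 24 + [138]
    raise ValueError(strategy)
```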
For the purpose of this study, we observed the prevalence of Wolbachia-infected adults and of adults carrying the dominant lethal gene, during and after the release period, for each case. Figure 7 demonstrates the aggregate abundance of Wolbachia-infected adults within the population. It can be seen that Wolbachia infection remained within the population even when the total mosquito population dropped during the colder months. As seen in Figure 8, around 11% of the runs managed to sustain Wolbachia infection within the population for 2 years after the release period. As seen in Figure 9, the number of mosquitoes carrying the dominant lethal gene dropped back to zero as soon as releases were discontinued and the released generation had died out.

Figure 7: Simulation results for the number of adults with Wolbachia infection over three years

Figure 8: Percentage of simulation runs which sustained Wolbachia infection over the three-year period

## 5 Conclusion

We have designed an agent-based model of the mosquito population in the Key West, Florida in an effort to address the control of the Zika pandemic. The primary vector of Zika, Aedes aegypti, was modeled on a geographical space representing a suburban neighborhood. Satellite imagery was used to capture the spatial distribution of households (\(CO_{2}\) zones), vegetation zones and breeding sites. Additionally, the monthly variation in temperature in the Key West was simulated. Using these spatial and climatic constraints, the model replicated the annual cycle of the mosquito population, matching trends demonstrated by weekly catch rates reported in field studies. It was shown that the spatial and climatic constraints in the Key West allowed for a maximum of approximately 986 (95% CI: [979, 993]) females and 1031 (95% CI: [1024, 1039]) males in the late fall, while in the late winter the population remained at a low of 316 (95% CI: [313, 319]) females and 333 (95% CI: [330, 336]) males.

Two vector control strategies were simulated using the described ABM. The first strategy, the release of Wolbachia-infected mosquitoes, involved releasing male and female mosquitoes with high levels of Wolbachia infection. The release strategy, including release quantities, ratios and frequency, followed a field trial performed in Machans Beach, Australia [5]. Infected males that mated with uninfected females would result in dead offspring, while infected females would produce offspring with Wolbachia infection. The second strategy, Release of Insects carrying a Dominant Lethal gene (RIDL), involved releasing males that would produce offspring that could not survive into adulthood. If these males competed successfully with wild males for mates, then the population would reduce as a result. The RIDL release strategy followed a field trial performed in the Cayman Islands [31]. The total number of RIDL males released was equal to the total number of Wolbachia-infected mosquitoes released, to allow for comparison.

It was observed that Wolbachia infection could be established within a population of Aedes aegypti in the Key West. However, the low probability of establishing sustained infection (approximately 11%) suggested that establishment was highly susceptible to uncertainty in the environment. One of the major factors of uncertainty in the model was the spatial orientation of the breeding sites.
Therefore, there is evidence to believe that the spatial orientation of the breeding sites has an impact on where releases must be performed in order to maintain Wolbachia infection within the population. Similar observations have been made in the field [5], however, further analysis must be performed in order to confirm this conclusion. Figure 9: Simulation results for the number of adults carrying the dominant lethal gene over three years Contrastingly, the model also demonstrated the inability of the RIDL technique to be established within the population. This result is expected as the dominant lethal gene is not inherited into future generations due to the death of all progeny of the released mosquitoes. Finally, we have shown that this model can be used to simulate what-if scenarios and experiment with the release volumes and frequencies of vector control strategies for A. aegypti. The spatial and climatic constraints captured in this model allow it to closely represent the distribution of A. aegypti in Key West and the same technique can be applied for any geographical location. ## References * suspected zika cases in brazil,\" [http://portalsaude.saude.gov.br](http://portalsaude.saude.gov.br), accessed: 2016-03-01. * [2] T. P. O. Evans and S. R. Bishop, \"A spatial model with pulsed releases to compare strategies for the sterile insect technique applied to the mosquito aedes aegypti,\" _Mathematical biosciences_, vol. 254, pp. 6-27, 2014. * [3] C. J. McMeniman, R. V. Lane, B. N. Cass, A. W. Fong, M. Sidhu, Y.-F. Wang, and S. L. O'Neill, \"Stable introduction of a life-shortening wolbachia infection into the mosquito aedes aegypti,\" _Science_, vol. 323, no. 5910, pp. 141-144, 2009. * [4] T. Shelly and D. McInnis, \"Road test for genetically modified mosquitoes,\" _Nature biotechnology_, vol. 29, no. 11, pp. 984-985, 2011. * [5] T. H. Nguyen, H. Le Nguyen, T. Y. Nguyen, S. N. Vu, N. D. Tran, T. Le, Q. M. Vien, T. Bui, H. T. Le, S. Kutcher _et al._, \"Field evaluation of the establishment potential of wmelpop wolbachia in australia and vietnam for dengue control,\" _Parasites & vectors_, vol. 8, no. 1, pp. 1-14, 2015. * [6] B. Cummins, R. Cortez, I. M. Foppa, J. Walbeck, and J. M. Hyman, \"A spatial model of mosquito host-seeking behavior,\" _PLoS Comput Biol_, vol. 8, no. 5, p. e1002500, 2012. * [7] M. Gillies and T. Wilkes, \"The range of attraction of animal baits and carbon dioxide for mosquitoes. studies in a freshwater area of west africa,\" _Bulletin of Entomological Research_, vol. 61, no. 03, pp. 389-404, 1972. * [8] S. Arifin, R. R. Arifin, D. d. A. Pitts, M. S. Rahman, S. Nowreen, G. R. Madey, and F. H. Collins, \"Landscape epidemiology modeling using an agent-based model and a geographic information system,\" _Land_, vol. 4, no. 2, pp. 378-412, 2015. * [9] M. R. Brust, M. I. Akbas, and D. Turgut, \"Multi-hop localization system for environmental monitoring in wireless sensor and actor networks,\" _Concurrency and Computation: Practice and Experience_, vol. 25, no. 5, pp. 701-717, 2013. * [10] M. I. Akbas and D. Turgut, \"Lightweight Routing with QoS support in wireless sensor and actor networks,\" in _Global Telecommunications Conference (GLOBECOM)_. IEEE, 2010, pp. 1-5. * [11] D. A. Focks, D. Haile, E. Daniels, and G. A. Mount, \"Dynamic life table model for aedes aegypti (diptera: Culicidae): analysis of the literature and model development,\" _Journal of medical entomology_, vol. 30, no. 6, pp. 1003-1017, 1993. * [12] C. W. Morin and A. C. 
Comrie, \"Modeled response of the west nile virus vector culex quinquefasciatus to changing climate using the dynamic mosquito simulation model,\" _International Journal of Biometeorology_, vol. 54, no. 5, pp. 517-529, 2010. * [13] S. A. Ritchie and C. L. Montague, \"Simulated populations of the black salt march mosquito (aedes taeniorhynchus) in a florida mangrove forest,\" _Ecological modelling_, vol. 77, no. 2, pp. 123-141, 1995. * [14] M. Otero, H. G. Solari, and N. Schweigmann, \"A stochastic population dynamics model for aedes aegypti: formulation and application to a city with temperate climate,\" _Bulletin of mathematical biology_, vol. 68, no. 8, pp. 1945-1974, 2006. * [15] K. Magori, M. Legros, M. E. Puente, D. A. Focks, T. W. Scott, A. L. Lloyd, and F. Gould, \"Skeeter buster: a stochastic, spatially explicit modeling tool for studying aedes aegypti population replacement and population suppression strategies,\" _PLoS Negl Trop Dis_, vol. 3, no. 9, p. e508, 2009. * [16] S. J. de Almeida, R. P. M. Ferreira, A. E. Eiras, R. P. Obermayr, and M. Geier, \"Multi-agent modeling and simulation of an aedes aegypti mosquito population,\" _Environmental modelling & software_, vol. 25, no. 12, pp. 1490-1507, 2010. * [17] S. S. Lee, R. Baker, E. Gaffney, and S. White, \"Modelling aedes aegypti mosquito control via transgenic and sterile insect techniques: Endemics and emerging outbreaks,\" _Journal of theoretical biology_, vol. 331, pp. 78-90, 2013. * [18] C. Isidoro, N. Fachada, F. Barata, and A. Rosa, \"Agent-based model of aedes aegypti population dynamics,\" in _Progress in Artificial Intelligence_. Springer, 2009, pp. 53-64. * [19] C. Deng, H. Tao, and Z. Ye, \"Agent-based modeling to simulate the dengue spread,\" in _Sixth International Conference on Advanced Optical Materials and Devices_. International Society for Optics and Photonics, 2008, pp. 71 431O-71 431O. * [20] D. Moulay and Y. Pigne, \"A metapopulation model for chikungunya including populations mobility on a large-scale network,\" _Journal of theoretical biology_, vol. 318, pp. 129-139, 2013. * [21] M. J. North, N. T. Collier, J. Ozik, E. R. Tatara, C. M. Macal, M. Bragen, and P. Sydelko, \"Complex adaptive systems modeling with repast simphony,\" _Complex adaptive systems modeling_, vol. 1, no. 1, pp. 1-26, 2013. * [22] (2016, Apr.) U.s. climatedata. [Online]. Available: [http://www.usclimatedata.com/climate/key-west/florida/united-states/usfl0244](http://www.usclimatedata.com/climate/key-west/florida/united-states/usfl0244) * [23] D. D. Chadee, \"Resting behaviour of aedes aegypti in trinidad: with evidence for the re-introduction of indoor residual spraying (irs) for dengue control,\" _Parasit Vectors_, vol. 6, p. 255, 2013. * [24] H. Yang, M. Macoris, K. Galvani, M. Andrighetti, and D. Wanderley, \"Assessing the effects of temperature on the population of aedes aegypti, the vector of dengue,\" _Epidemiology and infection_, vol. 137, no. 08, pp. 1188-1202, 2009. * [25] O. J. Brady, M. A. Johansson, C. A. Guerra, S. Bhatt, N. Golding, D. M. Pigott, H. Delatte, M. G. Grech, P. T. Leishham, R. Maciel-de Freitas _et al._, \"Modelling adult aedes aegypti and aedes albopictus survival at different temperatures in laboratory and field settings,\" _Parasites & vectors_, vol. 6, no. 1, pp. 1-12, 2013. * [26] F. K. M. C. District, \"Florida keys mosquito control operations report,\" FKMCD, Tech. Rep., nov 2013. * [27] A. 
Leal, \"Mosquito control measures for aedes aegypti and aedes albopictus,\" Florida Keys Mosquito Control District, Tech. Rep., nov 2013. * [28] S. Zabalou, A. Apostolaki, I. Livadaras, G. Franz, A. Robinson, C. Savakis, and K. Bourtzis, \"Incompatible insect technique: incompatible males from a ceratitis capitata genetic sexing strain,\" _Entomologia Experimentalis et Applicata_, vol. 132, no. 3, pp. 232-240, 2009. * [29] D. A. Joubert, T. Walker, L. B. Carrington, J. T. De Bruyne, D. H. T. Kien, N. L. T. Hoang, N. V. V. Chau, I. Iturbe-Ormaetxe, C. P. Simmons, and S. L. ONeill, \"Establishment of a wolbachia superinfection in aedes aegypti mosquitoes as a potential approach for future resistance management,\" _PLoS Pathog_, vol. 12, no. 2, p. e1005434, 2016. * [30] A. A. Hoffmann, I. Iturbe-Ormaetxe, A. G. Callahan, B. L. Phillips, K. Billington, J. K. Axford, B. Montgomery, A. P. Turley, and S. L. O'Neill, \"Stability of the wmel wolbachia infection following invasion into aedes aegypti populations,\" _PLoS Negl Trop Dis_, vol. 8, no. 9, p. e3115, 2014. * [31] A. F. Harris, D. Nimmo, A. R. McKemey, N. Kelly, S. Scaife, C. A. Donnelly, C. Beech, W. D. Petrie, and L. Alphey, \"Field performance of engineered male mosquitoes,\" _Nature biotechnology_, vol. 29, no. 11, pp. 1034-1037, 2011. * [32] M. Otero, N. Schweigmann, and H. G. Solari, \"A stochastic spatial dynamical model for aedes aegypti,\" _Bulletin of mathematical biology_, vol. 70, no. 5, pp. 1297-1325, 2008. * [33] W. Choochote, P. Tippawangkosol, A. Jitpakdi, K. L. Sukontason, B. Pitasawat, K. Sukontason, and N. Jariyapan, \"Polygamy: the possibly significant behavior of aedes aegypti and aedes albopictus in relation to the efficient transmission of dengue virus.\" _The Southeast Asian journal of tropical medicine and public health_, vol. 32, no. 4, pp. 745-748, 2001.
Aedes aegypti is the vector of several deadly diseases, including Zika. Effective and sustainable vector control measures must be deployed to keep A. aegypti numbers under control. The distribution of A. aegypti is subject to spatial and climatic constraints. Using agent-based modeling, we model the population dynamics of A. aegypti subjected to the spatial and climatic constraints of a neighborhood in the Key West. Satellite imagery was used to identify vegetation and houses (CO\({}_{2}\) zones), both critical to the mosquito lifecycle. The model replicates the seasonal fluctuation of the adult population sampled through field studies and approximates the population at a high of 986 (95% CI: [979, 993]) females and 1031 (95% CI: [1024, 1039]) males in the fall and a low of 316 (95% CI: [313, 319]) females and 333 (95% CI: [330, 336]) males during the winter. We then simulate two biological vector control strategies: 1) Wolbachia infection and 2) Release of Insects carrying a Dominant Lethal gene (RIDL). Our results support the probability of sustained Wolbachia infection within the population for two years after the year of release. In addition to evaluating these control strategies, our approach provides a realistic simulation environment consisting of male and female Aedes aegypti, breeding spots, vegetation and CO\({}_{2}\) sources.
# Need for Speed: Fast Correspondence-Free Lidar-Inertial Odometry Using Doppler Velocity David J. Yoon\\({}^{1}\\), Keenan Burnett\\({}^{1}\\), Johann Laconte\\({}^{1}\\), Yi Chen\\({}^{2}\\), Heethesh Vhavle\\({}^{2}\\), Soeren Kammel\\({}^{2}\\), James Reuther\\({}^{2}\\), and Timothy D. Barfoot\\({}^{1}\\) \\({}^{1}\\)University of Toronto Institute for Aerospace Studies (UTIAS), 4925 Dufferin St, Ontario, Canada. \\({}^{2}\\)Aeva Inc., Mountain View, CA 94043, USA. {david.yoon, keenan.burnett, johann.laconte}@robotics.utias.utoronto.ca, {ychen, heethesh, soeren, jreuther}@aeva.ai, [email protected] ## I Introduction Lidar sensors have proven to be a reliable modality for vehicle state estimation in a variety of applications such as self-driving, mining, and search & rescue. Modern lidars are long range, high resolution, and relatively unaffected by lighting conditions. State-of-the-art estimation is achieved by algorithms that geometrically align lidar pointclouds through an iterative process of nearest-neighbour data association (i.e., Iterative Closest Point (ICP)-based methods [1, 2]). However, alignment relying on scene geometry can fail in degenerate environments such as tunnels, bridges, or long highways with a barren landscape. Frequency-Modulated Continuous Wave (FMCW) lidar is a recent type of lidar sensor that additionally measures per-return relative radial velocities via the Doppler effect (see Figure 1). Incorporating these _Doppler measurements_ into ICP-based methods has recently been demonstrated to substantially improve estimation robustness in these difficult scenarios [3, 4]. ICP-based methods perform accurately, but are relatively expensive in computation due to the iterative data association. More computationally efficient odometry may be desirable to leave compute available for other processes in an autonomous navigation pipeline (e.g., localization, planning, control). We present an efficient lidar odometry method that leverages the Doppler measurements and does not perform data association as it is a major computational bottleneck. In this work, we propose a lightweight odometry method that estimates for the 6-degrees-of-freedom (DOF) vehicle velocity, which can afterward be numerically integrated into a \\(SE(3)\\) pose estimate. Velocity, rather than pose, is estimated because the Doppler measurement model is linear with respect to the vehicle velocity, permitting a linear continuous-time estimation formulation. A caveat is that the vehicle velocity is not fully observable with a single FMCW lidar. We address this problem by also using the gyroscope measurements from an Inertial Measurement Unit (IMU), conveniently built into the Aeries I FMCW lidar that we used for testing. The resulting method produces pose estimates at an average wall-clock time of 5.64ms on a single thread, which is substantially lower than the 100ms time budget required for the 10Hz operating rate of the lidar. In comparison to the state of the art, we believe we offer a compelling trade-off between accuracy and performance. The following are the main contributions of this paper: * A lightweight, linear estimator for the vehicle velocity using Doppler and gyroscope measurements. * An observability study of the vehicle velocity estimated using Doppler measurements, showing in theory that the Doppler measurements from multiple FMCW lidars can result in the observability of all 6 degrees of freedom (encouraging future research on this topic). 
* Experimental results on real-world driving sequences for our proposed odometry method and comparisons to state-of-the-art ICP-based methods.

The remainder of the paper is as follows: Section II presents relevant literature; Section III presents the odometry methodology; Section IV presents the observability study; Section V presents the results and analysis; and finally, Section VI presents the conclusion and future work.

Fig. 1: A 2D illustration of our method. We use the Doppler measurements from a FMCW lidar and the angular velocity measurements from a gyroscope to efficiently estimate 6-DOF vehicle motion without data association.

## II Related Work

### _ICP-based Lidar Odometry_

ICP estimates the relative transformation between two pointclouds by iteratively re-associating point measurements via nearest-neighbour search [1, 2]. Lidar odometry methods that achieve state-of-the-art performance apply this simple-but-powerful concept of nearest-neighbour data association in a low-dimensional space (e.g., Cartesian). In this paper, we refer to these algorithms as ICP-based. LOAM [5], a top contender in the publicly available KITTI odometry benchmark [6], extracts edge and plane features, and iteratively matches them via nearest-neighbour association. SuMa [7] matches measurements using projective data association (i.e., azimuth-elevation space) and leverages GPU computation to perform this operation quickly. Modern lidars output high-resolution, three-dimensional (3D) pointclouds by mechanical actuation. Consequently, pointclouds acquired from a moving vehicle will be motion distorted, similar to a rolling-shutter effect. One can motion-compensate (de-skew, or undistort) the data as a preprocessing step [8, 9]. Alternatively, data can be incorporated at their exact measurement times by estimating a continuous-time trajectory [10, 11, 12]. Continuous-time ICP-based methods have been successfully demonstrated in several works [13, 14]. State-of-the-art lidar odometry methods address the motion compensation problem and are capable of achieving highly accurate, real-time performance [15, 14, 9]. Pan et al. [15] extract low-level geometric features to apply multiple error metrics in their ICP optimization. Dellenbach et al. [14] use a sparse voxel data structure for downsampling and nearest-neighbour search in a single-threaded implementation. Vizzo et al. [9] demonstrate faster performance with comparable accuracy by proposing a simplified registration pipeline that requires few tuning parameters in a multi-threaded implementation. Recently, FMCW lidars have been demonstrated to be beneficial in improving odometry. Hexsel et al. [3] incorporate Doppler measurements into ICP to improve estimation in difficult, geometrically degenerate locations. Wu et al. [4] improve upon this by using a continuous-time estimator, not requiring motion compensation as a preprocessing step. A major bottleneck in lidar odometry is data association due to (i) the vast amount of data, and (ii) the need for iterative data association. With the introduction of Doppler measurements from FMCW lidars, we propose a more efficient odometry method that avoids data association entirely.

### _Inertial Measurements and Lidar_

Lidar odometry algorithms have proven to be highly accurate in nominal conditions, but will struggle to perform in geometrically degenerate environments (e.g., long tunnels, barren landscapes).
Using IMU data is a way of handling these difficult scenarios, with the added benefit of being able to use the IMU to motion-compensate pointclouds as a preprocessing step. Loosely coupled methods may only use the IMU data for motion compensation [16], but can also fuse the pose estimates from pointcloud alignment with IMU data downstream [17]. Zhao et al. [18] implement an odometry estimator for each sensor modality, where each estimator uses the outputs of the other estimators as additional observations. Chen et al. [19] combine their pose estimates from ICP with IMU data using a hierarchical geometric observer. Tightly coupled methods incorporate IMU data into the pointcloud alignment optimization directly, which has been shown using an iterated extended Kalman filter [20, 21] and factor graph optimization over a sliding window [8, 22]. Our work differs from existing lidar-inertial methods in both motivation and implementation. We propose an approach that does not require data association by using the Doppler measurements of a FMCW lidar. Our motivation for using IMU data is to compensate for the degrees of freedom not observable from the Doppler measurements of a single FMCW lidar. We only require gyroscope data (i.e., angular velocities) and exclude the accelerometer1, permitting a linear continuous-time formulation. We do not require pre-integration of the gyroscope data [23], and instead efficiently incorporate data at their exact measurement times. Footnote 1: The gravity vector would require a nonlinear estimator for orientation. ### _Radar Odometry_ FMCW is a relatively new technology for lidar, but not for radar. Radar, in contrast to FMCW lidar, returns two-dimensional (2D) detections (azimuth and range). Similar to our proposed method, Kellner et al. [24] estimate vehicle motion using radar Doppler measurements without data association. Using a single radar, they estimate a 2-DOF vehicle velocity (forward velocity and yaw rotation) by applying a kinematic constraint on the lateral velocity to be zero. Using multiple radars allowed them to estimate a 3-DOF vehicle velocity [25]. As radars produce 2D data in lesser quantities compared to lidar, Kellner et al. [24, 25] limited their experiments to driven sequences that were a few hundred meters in length. Our FMCW lidar produces thousands of 3D measurements at a rate of 10Hz. We take advantage of the richer data by efficiently applying them in a continuous-time linear estimator, and demonstrate reasonably accurate odometry over several kilometers. Kramer et al. [26] estimate the motion of a handheld sensor rig by combining radar Doppler measurements and IMU data. Park et al. [27] estimate for 6-DOF motion by first estimating the 3D translational velocities, then loosely coupling them with IMU data in a factor graph optimization. We similarly use IMU data to help estimate 6-DOF motion. However, we only use the gyroscope data to keep the estimator linear and efficient. We also present an observability study, demonstrating in theory how multiple FMCW sensors constrain all degrees of freedom of the velocity. ## III Methodology ### _Problem Formulation_ We formulate our odometry as linear continuous-time batch estimation for the 6-DOF vehicle body velocities using a Maximum A Posteriori (MAP) [28] objective. This is possible due to how the Doppler velocity and gyroscope measurement models are both linear with respect to the vehicle velocity. 
A continuous-time formulation allows each measurement to be applied at its exact measurement time efficiently. The relative pose estimate can be computed via numerical integration as a final step. The proposed method is extremely lightweight as the estimation problem is linear and we do not require data association for the lidar data. We can apply our method online by incrementally marginalizing out all past velocity state variables (e.g., a linear Kalman filter that handles measurements asynchronously (continuous-time)).

### _Motion Prior_

We apply the continuous-time estimation framework of Barfoot et al. [12] to estimate the trajectory as a Gaussian process (GP). We model our vehicle velocity prior as White-Noise-on-Acceleration (WNOA) [28], \[\dot{\mathbf{\varpi}}(t)=\mathbf{w}(t),\quad\mathbf{w}(t)\sim\mathcal{GP}(\mathbf{0},\mathbf{Q}_{c}\delta(t-t^{\prime})), \tag{1}\] where \(\mathbf{w}(t)\) is a (stationary) zero-mean GP with a power spectral density matrix, \(\mathbf{Q}_{c}\). In addition, we found it beneficial to incorporate vehicle kinematics by penalizing velocities in specific dimensions. We center our vehicle frame at the rear axle of the vehicle and orient it such that the \(x\)-axis points forward, \(y\)-axis points left, and \(z\)-axis points up. We can penalize velocities in the lateral, vertical, roll, and pitch dimensions: \[\mathbf{e}_{k}^{\text{kin}}=\mathbf{H}\mathbf{\varpi}_{k}, \tag{2}\] where a constant \(\mathbf{H}\) extracts the dimensions of interest.

### _Measurement Models_

We use the same Doppler measurement model as presented by Wu et al. [4]. The (scalar) linear error model is \[e_{\text{dop}}^{i}=y_{\text{dop}}^{i}-\frac{1}{(\mathbf{q}_{s}^{i\,T}\mathbf{q}_{s}^{i})^{\frac{1}{2}}}\left[\mathbf{q}_{s}^{i\,T}\quad\mathbf{0}\right]\mathbf{\mathcal{T}}_{sv}\mathbf{\varpi}(t_{i})-h(\mathbf{\psi}^{i}), \tag{3}\] where \(y_{\text{dop}}^{i}\) is the \(i^{\text{th}}\) Doppler measurement, \(\mathbf{q}_{s}^{i}\in\mathbb{R}^{3}\) are the corresponding point coordinates in the sensor frame, \(\mathbf{\mathcal{T}}_{sv}\in\text{Ad}\left(SE(3)\right)\) is the (known) extrinsic adjoint transformation between the sensor and vehicle frames, and \(\mathbf{\varpi}(t_{i})\in\mathbb{R}^{6}\) is our continuous-time vehicle velocity queried at the corresponding measurement time. We are simply projecting the vehicle velocity into the radial direction of the measurement in the sensor frame (see Figure 1). We identified a non-zero bias in the Doppler measurements through experimentation. Wu et al. [4] calibrated this bias using stationary lidar data collected from a flat wall at a fixed distance. Measurements were partitioned into (approximately) uniform bins of azimuth and elevation, and a constant bias was calibrated for each bin. In our recent work, we further identified that the bias has an approximately linear dependence on the range measurement. Therefore, we instead model the Doppler velocity bias using a linear regression model, \(h(\mathbf{\psi})\), with input feature vector, \(\mathbf{\psi}=[1\quad(\mathbf{q}^{T}\mathbf{q})^{\frac{1}{2}}]^{T}\), for each azimuth-elevation bin. We will investigate other input features (e.g., incidence angle, intensity) and nonlinear regression models (if the need arises) in future work to improve performance. Figure 2 demonstrates a before-and-after comparison of applying our learned regression models on a real-world test sequence2.
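To make the measurement model and bias correction concrete, here is a minimal numerical sketch of evaluating (3) for one return and of fitting the per-bin bias model \(h(\mathbf{\psi})\). The adjoint construction and the linear-in-range bias follow the definitions above, while the array layout (velocity ordered as translation then rotation), the least-squares fit, and all numbers are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def skew(v):
    """3x3 skew-symmetric matrix such that skew(v) @ u = v x u."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def adjoint(R_sv, t_sv):
    """6x6 adjoint of the sensor-vehicle extrinsic transform (velocity = [v; omega])."""
    Ad = np.zeros((6, 6))
    Ad[:3, :3] = R_sv
    Ad[:3, 3:] = skew(t_sv) @ R_sv
    Ad[3:, 3:] = R_sv
    return Ad

def doppler_error(y_dop, q_s, varpi, Ad_sv, bias_w):
    """Error model (3): measured Doppler minus predicted radial velocity and bias."""
    r = np.linalg.norm(q_s)
    v_sensor = (Ad_sv @ varpi)[:3]     # translational velocity seen in the sensor frame
    predicted = (q_s / r) @ v_sensor   # projection onto the measurement ray
    bias = bias_w[0] + bias_w[1] * r   # h(psi) with psi = [1, range]
    return y_dop - predicted - bias

def fit_bin_bias(ranges, residuals):
    """Least-squares fit of h(psi) = w0 + w1 * range for one azimuth-elevation bin."""
    A = np.column_stack([np.ones_like(ranges), ranges])
    w, *_ = np.linalg.lstsq(A, residuals, rcond=None)
    return w

# Toy example: forward motion at 10 m/s, identity extrinsics, zero bias.
varpi = np.array([10.0, 0, 0, 0, 0, 0])
Ad_sv = adjoint(np.eye(3), np.zeros(3))
print(doppler_error(9.9, np.array([20.0, 1.0, 0.5]), varpi, Ad_sv, (0.0, 0.0)))
```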
Footnote 2: Evaluated using the velocity estimates from an Applanix POS-LV as groundtruth.

Partitioning the lidar data by azimuth and elevation also has the effect of downsampling since we keep one measurement per azimuth-elevation bin. We uniformly partition along the azimuth by \(0.2^{\circ}\). The data are already partitioned in elevation by the scan pattern of the sensor, which produces 80 horizontal sweeps. This downsampling effectively projects each lidar frame into an \(80\times 500\) image. In practice, a raw lidar frame has approximately 100,000 measurements, which is then downsampled to 10,000 to 20,000 depending on the scene geometry3.

Footnote 3: This is typically an order of magnitude more data than what is used in state-of-the-art ICP-based methods [14, 4].

In addition to the Doppler measurements, we use gyroscope data to compensate for the degrees of freedom not observed by the Doppler velocities of a single lidar (see Section IV for an algebraic study). The error model is \[\mathbf{e}_{\text{gyro}}^{j}=\mathbf{y}_{\text{gyro}}^{j}-\mathbf{R}_{sv}\mathbf{D}\mathbf{\varpi}(t_{j}), \tag{4}\] where \(\mathbf{y}_{\text{gyro}}^{j}\in\mathbb{R}^{3}\) is the \(j^{\text{th}}\) angular velocity measurement in the sensor frame, \(\mathbf{R}_{sv}\in SO(3)\) is the known extrinsic rotation between the sensor frame and vehicle frame, and \(\mathbf{\varpi}(t_{j})\) is our continuous-time velocity of the vehicle frame queried at the corresponding measurement time. The constant \(3\times 6\) projection matrix \(\mathbf{D}\) removes the translational elements of the body velocity, leaving only the angular elements. Similar to the Doppler model, this function is linear with respect to the vehicle velocity. We verified empirically that the gyroscope bias is reasonably constant. We apply an offline calibration for a constant bias on training data. In future work, we plan on improving this aspect of the implementation by including the bias as part of the state.

Fig. 2: A plot of the root-mean-square Doppler error versus measurement range aggregated over a driven test data sequence. Discretizing the azimuth-elevation domain and learning a linear regression model in each bin with range as the input feature noticeably improves the error. Note that the linear dependence on range is not visible in this plot as we aggregate over all azimuth-elevation bins.

### _Estimation_

The measurement factors of our estimation problem are \[\phi^{i}_{\text{dop}}=\frac{1}{2}(e^{i}_{\text{dop}})^{2}\,R_{\text{dop}}^{-1} \tag{5}\] for the Doppler measurements and \[\phi^{j}_{\text{gyro}}=\frac{1}{2}\mathbf{e}^{j\,T}_{\text{gyro}}\mathbf{R}^{-1}_{\text{gyro}}\mathbf{e}^{j}_{\text{gyro}} \tag{6}\] for the gyroscope measurements, where \(R_{\text{dop}}\) is the Doppler measurement variance and \(\mathbf{R}_{\text{gyro}}\) is the gyroscope covariance. The motion prior factor of our WNOA prior is \[\phi^{k}_{\text{wnoa}}=\frac{1}{2}(\mathbf{\varpi}_{k}-\mathbf{\varpi}_{k-1})^{T}\mathbf{Q}^{-1}_{k}(\mathbf{\varpi}_{k}-\mathbf{\varpi}_{k-1}) \tag{7}\] for the set of discrete states, \(\mathbf{\varpi}_{k}\), that we estimate in our continuous-time trajectory. We space these discrete states uniformly in time, corresponding to the start and end times of each lidar frame, making \(\mathbf{Q}_{k}=(t_{k}-t_{k-1})\mathbf{Q}_{c}\) constant.
This prior conveniently results in linear interpolation in time for our velocity-only state [28]: \\[\\mathbf{\\varpi}(\\tau)=(1-\\alpha)\\mathbf{\\varpi}_{k}+\\alpha\\mathbf{\\varpi}_{k+1},\\quad\\alpha =\\frac{\\tau-t_{k}}{t_{k+1}-t_{k}}\\in[0,1], \\tag{8}\\] with \\(\\tau\\in[t_{k},t_{k+1}]\\). The vehicle kinematics factor is \\[\\phi^{k}_{\\text{kin}}=\\frac{1}{2}({\\bf H}\\mathbf{\\varpi}_{k})^{T}{\\bf Q}^{-1}_{z} ({\\bf H}\\mathbf{\\varpi}_{k}), \\tag{9}\\] where \\({\\bf Q}_{z}\\) is the corresponding covariance matrix4. Footnote 4: \\({\\bf R}_{\\text{gyro}}\\), \\({\\bf Q}_{c}\\), and \\({\\bf Q}_{z}\\) were empirically tuned as diagonal matrices. All noise parameter values will be accessible from our implementation. Our MAP objective function is \\[J=\\sum_{k}(\\phi^{k}_{\\text{wnoa}}+\\phi^{k}_{\\text{kin}})+\\sum_{i}\\phi^{i}_{ \\text{dop}}+\\sum_{j}\\phi^{j}_{\\text{gyro}}. \\tag{10}\\] Differentiating this objective with respect to the state and setting it to zero for an optimum will result in a linear system \\(\\mathbf{\\Sigma}^{-1}\\mathbf{\\varpi}^{*}={\\bf b}\\), where \\(\\mathbf{\\varpi}=[\\mathbf{\\varpi}_{1}^{T}\\quad\\mathbf{\\varpi}_{2}^{T}\\quad\\dots\\quad\\mathbf{ \\varpi}_{K}^{T}]^{T}\\) is a stacked vector of all the vehicle velocities, \\(\\mathbf{\\Sigma}^{-1}\\) is the corresponding block-tridiagonal inverse covariance, and \\(\\mathbf{\\varpi}^{*}\\) can be computed using a sparse solver. Figure 3 illustrates the states and factors in our online problem. For the latest lidar frame, \\(k\\), the Doppler measurements are incorporated at their measurement times using our continuous-time interpolation scheme. The gyroscope measurements are similarly handled at their respective measurement times. We incrementally marginalize5 out older state variables, \\(\\mathbf{\\varpi}_{i}\\), where \\(i<k\\), and estimate the latest velocity, \\(\\mathbf{\\varpi}_{k}\\) (i.e., a filter implementation). Footnote 5: In the interest of space, see Barfoot [28] for the details. Footnote 6: The \\((\\cdot)^{\\wedge}\\) here is for \\(\\mathbb{R}^{6}\\) and is an overloading of the \\((\\cdot)^{\\wedge}\\) for \\(\\mathbb{R}^{3}\\). See Barfoot [28] for the details. Footnote 7: Our vehicle frame is oriented such that the \\(x\\)-axis points forward, \\(y\\)-axis points left, and \\(z\\)-axis points up. After the vehicle velocity is estimated, we approximate the relative pose estimate by numerically sampling \\(\\mathbf{\\varpi}(t)\\) with a small timestep, \\(\\triangle t\\), and creating a chain of \\(SE(3)\\) matrices spanning the time interval [11]: \\[{\\bf T}_{k,k-1}\\approx\\exp(\\triangle t\\,\\mathbf{\\varpi}(t_{k})^{\\wedge })\\dots \\tag{11}\\] \\[\\times\\exp(\\triangle t\\,\\mathbf{\\varpi}(t_{k-1}+2\\triangle t)^{\\wedge })\\exp(\\triangle t\\,\\mathbf{\\varpi}(t_{k-1}+\\triangle t)^{\\wedge}),\\] where \\(\\exp(\\cdot)\\) is the exponential map6, \\(\\mathbf{\\varpi}(t)\\) is the vehicle velocity interpolated between boundary velocities \\(\\mathbf{\\varpi}_{k-1}\\) and \\(\\mathbf{\\varpi}_{k}\\), and we have divided the time interval by \\(S\\) steps, making \\(\\triangle t=(t_{k}-t_{k-1})/S\\). In practice, our vehicle pose estimate drifts while the vehicle is stationary (e.g., no movement due to traffic). This is easily detected by checking a tolerance (we use 0.03m/s) on the forward translational speed estimate. 
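The numerical integration in (11) amounts to compounding small exponential-map steps of the interpolated velocity; a sketch using a generic matrix exponential is shown below. The \(se(3)\) hat layout, the step count, and the use of SciPy's general-purpose `expm` (rather than a closed-form \(SE(3)\) exponential) are assumptions made for brevity, consistent with the equations above.

```python
import numpy as np
from scipy.linalg import expm

def se3_hat(varpi):
    """4x4 matrix form of a body velocity varpi = [v; omega]."""
    v, w = varpi[:3], varpi[3:]
    M = np.zeros((4, 4))
    M[:3, :3] = np.array([[0, -w[2], w[1]],
                          [w[2], 0, -w[0]],
                          [-w[1], w[0], 0]])
    M[:3, 3] = v
    return M

def integrate_pose(varpi_km1, varpi_k, dt_frame, steps=100):
    """Approximate T_{k,k-1} by chaining exponentials of the linearly
    interpolated velocity, as in (11)."""
    T = np.eye(4)
    dt = dt_frame / steps
    for s in range(1, steps + 1):
        alpha = s / steps                          # linear interpolation (8)
        varpi = (1 - alpha) * varpi_km1 + alpha * varpi_k
        T = expm(dt * se3_hat(varpi)) @ T          # newest step composed on the left
    return T

# Example: constant forward velocity of 10 m/s over a 0.1 s frame -> ~1 m translation.
T = integrate_pose(np.array([10.0, 0, 0, 0, 0, 0]),
                   np.array([10.0, 0, 0, 0, 0, 0]), 0.1)
```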
If the speed of a boundary estimate is less than the tolerance, we set that boundary estimate \(\mathbf{\varpi}_{i}=\mathbf{0}\) before interpolation to mitigate pose drift.

### _RANSAC for Outlier Rejection_

Outliers in the Doppler measurements are often caused by erroneous reflections and moving objects in the environment. Fortunately, each lidar frame is dense and, in practice, the majority of the measurements are of the stationary environment (inliers). Similar to Kellner et al. [24], we found RANSAC [29] to be a suitable method for outlier filtering. We classify between inliers and outliers using a constant threshold (0.2m/s) on the Doppler error model (3). We run RANSAC on each lidar frame independently. We assume the vehicle velocity is constant throughout each frame and solve for it using two randomly sampled Doppler measurements. The solve is made observable by enforcing vehicle kinematic constraints, i.e., we solve for a 2-DOF velocity7, \(\mathbf{\varpi}=[v\quad 0\quad 0\quad 0\quad 0\quad\omega]^{T}\). We could include the gyroscope measurements and solve for more dimensions, but found the benefits in performance to be minor. In practice, 20 iterations of RANSAC were sufficient for each lidar frame.

Fig. 3: An illustration of the factors involved in the online velocity estimation problem. The Doppler and gyroscope measurements are applied at their exact measurement times using our continuous-time interpolation scheme. There is no data association for the lidar measurements. We marginalize out past state variables (i.e., \(\mathbf{\varpi}_{k-1}\)), resulting in a filter for the latest velocity, \(\mathbf{\varpi}_{k}\).

## IV Observability Study

### _Observability Study - Multiple FMCW Lidars_

We present an observability study for the 6-DOF vehicle velocity using Doppler measurements from multiple FMCW lidars. In order to simplify the proof, we focus on estimating the vehicle velocity over the interval of one lidar frame, assuming that the data from multiple lidars are synchronized and each have \(m\) measurements. We also remove the continuous-time aspect of the problem by assuming the vehicle velocity is constant throughout the frame duration. For the \(i^{\text{th}}\) measurement seen by the \(j^{\text{th}}\) sensor, \[\begin{split}e_{\text{dop}}^{ij}&=y_{\text{dop}}^{ij}-\frac{1}{(\mathbf{q}_{j}^{ijT}\mathbf{q}_{j}^{ij})^{\frac{1}{2}}}\left[\mathbf{q}_{j}^{ijT}\quad\mathbf{0}\right]\mathbf{\mathcal{T}}_{jv}\mathbf{\varpi}\\ &=y_{\text{dop}}^{ij}-\frac{1}{(\mathbf{q}_{j}^{ijT}\mathbf{q}_{j}^{ij})^{\frac{1}{2}}}\left[\mathbf{q}_{j}^{ijT}\mathbf{R}_{jv}\quad\mathbf{q}_{j}^{ijT}\mathbf{t}_{j}^{ij\wedge}\mathbf{R}_{jv}\right]\mathbf{\varpi}\\ &=y_{\text{dop}}^{ij}-\mathbf{c}_{ij}^{T}\mathbf{\varpi},\end{split} \tag{12}\] where the additional superscript \(j\) indicates the sensor8,

Footnote 8: \(\mathbf{q}_{f}^{ij}\) are the coordinates of the \(i^{\text{th}}\) point from the \(j^{\text{th}}\) sensor, expressed in frame \(f\).

\[\mathbf{c}_{ij}^{T}=\left[\hat{\mathbf{\mathrm{d}}}_{v}^{ijT}\quad\hat{\mathbf{\mathrm{d}}}_{v}^{ijT}\mathbf{t}_{v}^{vj\wedge}\right],\quad\hat{\mathbf{\mathrm{d}}}_{v}^{ij}=\frac{\mathbf{R}_{jv}^{T}\mathbf{\mathrm{d}}_{j}^{ij}}{\|\mathbf{\mathrm{d}}_{j}^{ij}\|}.
\\tag{13}\\] Note how the measurement model does not depend on the magnitude (range) of \\(\\mathbf{\\mathrm{q}}\\) since \\(\\hat{\\mathbf{\\mathrm{q}}}\\) are unit vectors. We define the stacked quantity \\(\\mathbf{C}_{j}=[\\mathbf{c}_{1j}\\cdots\\mathbf{c}_{mj}]^{T}\\) for sensor \\(j\\). In the case of \\(N\\) sensors, we have \\[\\mathbf{C}^{T}\\mathbf{C}=\\sum_{j=1}^{N}\\mathbf{C}_{j}^{T}\\mathbf{C}_{j}=\\sum_{j}\\begin{bmatrix} \\mathbf{Q}_{j}&\\mathbf{Q}_{j}\\mathbf{t}_{v}^{vj\\wedge}\\\\ \\mathbf{t}_{v}^{vj\\wedge T}\\mathbf{Q}_{j}&\\mathbf{t}_{v}^{vj\\wedge T}\\mathbf{Q}_{j}\\mathbf{t}_{v}^{ vj\\wedge}\\end{bmatrix}, \\tag{14}\\] where \\(\\mathbf{Q}_{j}=\\sum_{i}\\hat{\\mathbf{\\mathrm{d}}}_{v}^{ij}\\hat{\\mathbf{\\mathrm{d}}}_{v}^{ijT}\\) is the sum of the outer product of the points seen by the \\(j^{\\text{th}}\\) sensor. The velocity is fully observable from a single lidar frame if and only if \\(\\mathbf{C}^{T}\\mathbf{C}\\) is full rank, or equivalently that the nullspace of \\(\\mathbf{C}^{T}\\mathbf{C}\\) has dimension zero [28]. In the following, we assume \\(\\mathbf{Q}_{j}\\) to be full rank (best case scenario), meaning that the unit velocities seen by the \\(j^{\\text{th}}\\) sensor are not all contained in a line or a plane. In the case of a 3D lidar sensor, \\(\\mathbf{Q}_{j}\\) will always be full rank regardless of the environment geometry. **Lemma 1**: _Let \\(\\mathbf{A}\\) and \\(\\mathbf{B}\\) be two symmetric positive semidefinite matrices. Then, we have_ \\[\\mathrm{null}\\left(\\mathbf{A}+\\mathbf{B}\\right)=\\mathrm{null}\\left(\\mathbf{A}\\right)\\cap \\mathrm{null}\\left(\\mathbf{B}\\right).\\] _A proof of this lemma is given in Appendix A._ First, note that each member can be factorized as \\[\\mathbf{C}_{j}^{T}\\mathbf{C}_{j}=\\underbrace{\\left[\\mathbf{1}_{v}^{vj\\wedge T}\\mathbf{1}_{ \\text{full rank}}\\right]}_{\\text{full rank}}\\underbrace{\\begin{bmatrix}\\mathbf{Q}_{j}& \\mathbf{0}\\\\ \\mathbf{0}&\\mathbf{0}\\end{bmatrix}}_{\\text{PSD, rank}=3}\\underbrace{\\begin{bmatrix} \\mathbf{t}_{v}^{vj\\wedge}\\\\ \\mathbf{0}&\\mathbf{1}\\end{bmatrix}}_{\\text{full rank}}, \\tag{15}\\] thus being positive semidefinite. Using Lemma 1, we find the nullspace of \\(\\mathbf{C}^{T}\\mathbf{C}\\) using the nullspace of each member of the sum. The nullspace of \\(\\mathbf{C}_{j}^{T}\\mathbf{C}_{j}\\) is \\[\\begin{split}\\mathrm{null}\\left(\\mathbf{C}_{j}^{T}\\mathbf{C}_{j}\\right)& =\\mathrm{null}\\left(\\begin{bmatrix}\\mathbf{Q}_{j}&\\mathbf{0}\\\\ \\mathbf{0}&\\mathbf{0}\\end{bmatrix}\\begin{bmatrix}\\mathbf{1}&\\mathbf{t}_{v}^{vj\\wedge}\\\\ \\mathbf{0}&\\mathbf{1}\\end{bmatrix}\\right)\\\\ &=\\left\\{\\begin{bmatrix}\\mathbf{1}&\\mathbf{t}_{v}^{vj\\wedge}\\\\ \\mathbf{0}&\\mathbf{1}\\end{bmatrix}^{-1}\\begin{bmatrix}\\mathbf{0}\\\\ \\mathbf{k}\\end{bmatrix},\\mathbf{k}\\in\\mathbb{R}^{3}\\right\\}\\\\ &=\\left\\{\\begin{bmatrix}-\\mathbf{t}_{v}^{vj\\wedge}\\mathbf{k}\\\\ \\mathbf{k}\\end{bmatrix},\\mathbf{k}\\in\\mathbb{R}^{3}\\right\\}.\\end{split} \\tag{16}\\] Thus for one sensor, \\(\\mathbf{C}^{T}\\mathbf{C}\\) is rank deficient by 3. 
For two sensors, the nullspace of \(\mathbf{C}_{1}^{T}\mathbf{C}_{1}+\mathbf{C}_{2}^{T}\mathbf{C}_{2}\) is \[\begin{split}&\left\{\begin{bmatrix}-\mathbf{t}_{v}^{v1\wedge}\mathbf{k}\\ \mathbf{k}\end{bmatrix},\mathbf{k}\in\mathbb{R}^{3}\right\}\cap\left\{\begin{bmatrix}-\mathbf{t}_{v}^{v2\wedge}\mathbf{l}\\ \mathbf{l}\end{bmatrix},\mathbf{l}\in\mathbb{R}^{3}\right\}\\ &=\begin{cases}\left\{\begin{bmatrix}\alpha\,\mathbf{t}_{v}^{v2\wedge}\mathbf{t}_{v}^{v1}\\ \alpha(\mathbf{t}_{v}^{v2}-\mathbf{t}_{v}^{v1})\end{bmatrix},\alpha\in\mathbb{R}\right\}&\text{if }\mathbf{t}_{v}^{v1}\neq\mathbf{t}_{v}^{v2}\\ \left\{\begin{bmatrix}-\mathbf{t}_{v}^{v1\wedge}\mathbf{k}\\ \mathbf{k}\end{bmatrix},\mathbf{k}\in\mathbb{R}^{3}\right\}&\text{if }\mathbf{t}_{v}^{v1}=\mathbf{t}_{v}^{v2},\end{cases}\end{split} \tag{17}\] therefore being of dimension 1 if the two sensors are not at the same position, and dimension 3 otherwise. Adding a third sensor, we obtain \[\begin{split}&\left\{\begin{bmatrix}\alpha\,\mathbf{t}_{v}^{v2\wedge}\mathbf{t}_{v}^{v1}\\ \alpha(\mathbf{t}_{v}^{v2}-\mathbf{t}_{v}^{v1})\end{bmatrix},\alpha\in\mathbb{R}\right\}\cap\left\{\begin{bmatrix}-\mathbf{t}_{v}^{v3\wedge}\mathbf{k}\\ \mathbf{k}\end{bmatrix},\mathbf{k}\in\mathbb{R}^{3}\right\}=\\ &\left\{\begin{bmatrix}\alpha\,\mathbf{t}_{v}^{v2\wedge}\mathbf{t}_{v}^{v1}\\ \alpha(\mathbf{t}_{v}^{v2}-\mathbf{t}_{v}^{v1})\end{bmatrix}\,\middle|\,\mathbf{t}_{v}^{v2\wedge}\mathbf{t}_{v}^{v1}=-\mathbf{t}_{v}^{v3\wedge}\left(\mathbf{t}_{v}^{v2}-\mathbf{t}_{v}^{v1}\right),\;\alpha\in\mathbb{R}\right\}.\end{split} \tag{18}\] Looking at the condition, we remark that \(\mathbf{t}_{v}^{v2\wedge}\mathbf{t}_{v}^{v1},-\mathbf{t}_{v}^{v3\wedge}\left(\mathbf{t}_{v}^{v2}-\mathbf{t}_{v}^{v1}\right)\in\mathbf{t}_{v}^{v2\perp}\cap\mathbf{t}_{v}^{v3\perp}\) is necessary, where \((\cdot)^{\perp}\) denotes the plane orthogonal to the given vector. For three sensors mounted at distinct, generic positions, this condition does not hold, so the nullspace is empty and all 6 degrees of freedom of the vehicle velocity become observable from the Doppler measurements alone. In our setup, only a single FMCW lidar is available; we therefore rely on the gyroscope, whose measurements directly constrain the rotational components of \(\mathbf{\varpi}\),
The lidar includes a Bosch BM160 IMU, which we use for our experiments. We use the post-processed estimates from an Applanix POS LV as our groundtruth. We collected 5 data sequences near the University of Toronto Institute for Aerospace Studies. Sequences 1 to 4 follow the _Glen Shields_ route of the Boreas dataset [30]. Sequence 5 is a different route collected in the same area. Following existing work, we evaluate odometry using the KITTI error metric, which averages errors over path lengths that vary from 100m to 800m in 100m increments. We present results for two variants of our method: an offline batch implementation and an online filter implementation. ### _Implementation_ We ran all experiments using the same compute hardware9. Our C++ filter implementation10 is single-threaded. Average wall-clock times of the main modules in our pipeline are: Footnote 9: Lenovo Thinkpad P53, Intel Core i7-9750H CPU. Footnote 10: Code for the C++ implementation: [https://github.com/utiasASRL/doppler_odom](https://github.com/utiasASRL/doppler_odom). #### V-B1 Preprocessing (4.19ms) Downsampling by azimuth and elevation (most expensive computation) and evaluation of the linear regression model (negligible computation). #### V-B2 RANSAC (0.95ms) RANSAC for the latest lidar frame to filter out outliers. #### V-B3 Solve (0.49ms) Solving for the latest velocity estimate corresponding to the last timestamp of the latest lidar frame. #### V-B4 Numerical integration (0.01ms) Approximate the latest pose estimate using numerical integration (100 steps). The total average wall-clock time for processing the latest lidar frame is 5.64ms. This is well under 100ms, which is the requirement for processing Aeva Aeries I lidar data in real time. We note that the Aeries I firmware does not support outputting the raw azimuths, elevations, and range of the lidar points, requiring re-calculation of these values from the coordinates. These calculations result in an additional 1.5ms on average in the preprocessing step, which we have included in the above timing results. Outputting the raw azimuths and elevations is supported in later sensor models (i.e., Aeries II), which will further reduce our compute time. ### _Results_ We compare our proposed methods (filter and batch) to the current state of the art for ICP-based lidar odometry and demonstrate the trade-offs between computation and accuracy. We present results using STEAM-ICP [4] as a representative of an accurate method that is not optimized for real-time application, while CT-ICP [14] is a real-time capable method that is still very accurate. As our methods require training for the linear regression models and constant gyroscope bias, we train on one sequence and show test results on the remaining four. Table I shows the results for all five possible folds and compares them to STEAM-ICP and CT-ICP. STEAM-ICP performs better than our filter by a factor of \\(\\sim\\)\\(5\\) in translation, but runs slower by a factor of \\(\\sim\\)\\(120\\) on a single thread. A multi-threaded implementation of STEAM-ICP runs slower than our filter (single-threaded) by a factor of \\(\\sim\\)\\(36\\). CT-ICP performs better than our filter by a factor of \\(\\sim\\)\\(4\\) in translation, Fig. 4: Our data collection platform, _Boreas_, was established for the Boreas dataset [30]. It was recently equipped with an Aeva Aeries I FMCW lidar by Wu et al. [4]. Fig. 5: A qualitative plot of the estimated odometry paths on sequence 1 of our collected dataset. 
Our proposed filter runs at 5.64ms on average for each lidar frame on a single thread. See Figure 6 for an example of sequence 5. but runs slower by a factor of \\(\\sim\\)\\(17\\) on a single thread. A multi-threaded implementation of CT-ICP runs slower than our filter (single-threaded) by a factor of \\(\\sim\\)6. Figure 5 and 6 show a qualitative plot of the estimated paths on sequences 1 and 5, respectively. Our method operates on approximately 10,000 to 20,000 measurements for each lidar frame, while STEAM-ICP and CT-ICP operate on approximately 2,000 to 4,000 measurements. We believe our method presents a compelling trade-off between accuracy and computational cost. As seen in Figure 6, our filter performs reasonably over several kilometers such that a loop-closure algorithm could detect points of intersection. This poses a potential application where our lightweight odometry operates at a fast rate, while a slower ICP-based optimization can execute in parallel at a slower rate for localization and/or loop closure. As an additional benefit, our odometry can motion-compensate the lidar data such that rigid ICP can be applied instead of a motion-compensated implementation. Our method is fast on a single thread, leaving threads available for other important tasks in an autonomy pipeline (e.g., localization, planning, control). ## VI Conclusion We presented a continuous-time linear estimator for the 6-DOF vehicle velocity using FMCW lidar and gyroscope measurements. As our method is linear and does not require data association, it is efficient and capable of operating at an average wall-clock time of 5.64ms for each lidar frame. We demonstrate our method on real-world driving sequences over several kilometers, presenting a compelling trade-off in computation versus accuracy compared to existing state-of-the-art lidar odometry. As demonstrated in our observability study, and experimentally by Kellner et al. [25] for 3-DOF motion, multiple FMCW sensors are required for the Doppler measurements to fully constrain the vehicle motion. We plan on using multiple FMCW lidars to estimate vehicle motion without the built-in gyroscope. We will further investigate the Doppler measurement bias and work on improving the regression model by introducing more input features (e.g., angle of incidence). We will also incorporate these features into learning a feature-dependant Doppler noise variance. ### _Proof of Lemma 1_ Assume \\(\\mathbf{x}\\in\\operatorname{null}\\left(\\mathbf{A}\\right)\\cap\\operatorname{null} \\left(\\mathbf{B}\\right)\\). It is straightforward to see that \\(\\mathbf{x}\\in\\operatorname{null}\\left(\\mathbf{A}+\\mathbf{B}\\right)\\), hence \\(\\operatorname{null}\\left(\\mathbf{A}+\\mathbf{B}\\right)\\supseteq\\operatorname{null} \\left(\\mathbf{A}\\right)\\cap\\operatorname{null}\\left(\\mathbf{B}\\right)\\). Then, assume \\(\\mathbf{x}\\in\\operatorname{null}\\left(\\mathbf{A}+\\mathbf{B}\\right)\\), we have \\[\\mathbf{x}^{T}(\\mathbf{A}+\\mathbf{B})\\mathbf{x}=\\underbrace{\\mathbf{x}^{T}\\mathbf{A}\\mathbf{x}}_{\\geq 0} +\\underbrace{\\mathbf{x}^{T}\\mathbf{B}\\mathbf{x}}_{\\geq 0}=0\\] Recall that for any symmetric PSD matrix \\(\\mathbf{M}\\), we have \\(\\mathbf{M}=\\mathbf{S}^{T}\\mathbf{S}\\). 
As such, \\(\\mathbf{x}^{T}\\mathbf{M}\\mathbf{x}=0\\Leftrightarrow\\left(\\mathbf{S}\\mathbf{x}\\right)^{T}\\mathbf{S}\\mathbf{x} =0\\Rightarrow\\mathbf{x}\\in\\operatorname{null}\\left(\\mathbf{S}\\right)\\Rightarrow\\mathbf{x} \\in\\operatorname{null}\\left(\\mathbf{M}\\right)\\). Therefore, \\begin{table} \\begin{tabular}{l|c c c c c c c c|c c c c c} \\hline \\hline & Compute [ms] & \\multicolumn{8}{c|}{Translation Error [\\%]} & \\multicolumn{8}{c}{Rotation Error [\\({}^{\\circ}\\)/(100m)]} \\\\ \\hline **ICP-based** & & 01 & 02 & 03 & 04 & 05 & **AVG** & 01 & 02 & 03 & 04 & 05 & **AVG** \\\\ \\hline STEAM-ICP [4] & 678.74 & **0.24** & **0.24** & **0.23** & **0.21** & **0.25** & **0.23** & **0.087** & **0.083** & **0.083** & **0.080** & **0.084** & **0.083** \\\\ CT-ICP [14] & 93.78 & 0.28 & 0.39 & 0.26 & 0.25 & 0.30 & 0.29 & 0.099 & 0.104 & 0.098 & 0.094 & 0.100 & 0.099 \\\\ \\hline **Train on 01** (Ours) & & 01 & 02 & 03 & 04 & 05 & **AVG** & 01 & 02 & 03 & 04 & 05 & **AVG** \\\\ \\hline Batch & – & 0.94 & 1.20 & 1.22 & 1.03 & 1.03 & 1.12 & 0.317 & 0.403 & 0.392 & 0.356 & 0.376 & 0.382 \\\\ Filter & **5.64** & 1.10 & 1.26 & 1.22 & 1.16 & 1.06 & 1.17 & 0.386 & 0.463 & 0.440 & 0.415 & 0.397 & 0.429 \\\\ \\hline **Train on 02** (Ours) & & 01 & 02 & 03 & 04 & 05 & **AVG** & 01 & 02 & 03 & 04 & 05 & **AVG** \\\\ \\hline Batch & – & 1.20 & 1.00 & 1.02 & 0.96 & 0.94 & 1.03 & 0.387 & 0.344 & 0.357 & 0.340 & 0.344 & 0.357 \\\\ Filter & **5.64** & 1.34 & 1.10 & 1.18 & 1.12 & 1.08 & 1.18 & 0.448 & 0.410 & 0.439 & 0.407 & 0.391 & 0.421 \\\\ \\hline **Train on 03** (Ours) & & 01 & 02 & 03 & 04 & 05 & **AVG** & 01 & 02 & 03 & 04 & 05 & **AVG** \\\\ \\hline Batch & – & 1.11 & 1.09 & 0.97 & 0.91 & 0.85 & 0.99 & 0.368 & 0.380 & 0.335 & 0.324 & 0.325 & 0.349 \\\\ Filter & **5.64** & 1.22 & 1.21 & 1.12 & 1.12 & 1.06 & 1.15 & 0.414 & 0.450 & 0.415 & 0.403 & 0.383 & 0.412 \\\\ \\hline **Train on 04** (Ours) & & 01 & 02 & 03 & 04 & 05 & **AVG** & 01 & 02 & 03 & 04 & 05 & **AVG** \\\\ \\hline Batch & – & 1.07 & 1.06 & 0.98 & 0.89 & 0.85 & 0.99 & 0.354 & 0.369 & 0.340 & 0.323 & 0.328 & 0.348 \\\\ Filter & **5.64** & 1.19 & 1.18 & 1.12 & 1.10 & 1.03 & 1.13 & 0.408 & 0.442 & 0.419 & 0.401 & 0.383 & 0.413 \\\\ \\hline **Train on 05** (Ours) & & 01 & 02 & 03 & 04 & 05 & **AVG** & 01 & 02 & 03 & 04 & 05 & **AVG** \\\\ \\hline Batch & – & 1.03 & 1.03 & 1.04 & 0.92 & 0.89 & 1.01 & 0.346 & 0.354 & 0.347 & 0.326 & 0.332 & 0.343 \\\\ Filter & **5.64** & 1.17 & 1.14 & 1.15 & 1.09 & 1.00 & 1.14 & 0.408 & 0.425 & 0.424 & 0.398 & 0.378 & 0.414 \\\\ \\hline \\hline \\end{tabular} \\end{table} TABLE I: Quantitative results using the KITTI odometry error metrics on our data sequences. For our methods, we train on one sequence and test on the others. All folds are presented with the training sequence in grey (not counted in the average). The average wall-clock time (single-threaded) per lidar frame is shown in brackets for each online method. Multi-threading reduces the runtimes to 201.69ms and 33.91ms for STEAM-ICP and CT-ICP, respectively. Best results are in bold font. ## Acknowledgments We would like to thank the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Ontario Research Fund: Research Excellence (ORF-RE) program for supporting this work. ## References * [1]P. Besl and N. D. McKay (1992) A method for registration of 3-d shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence14 (2), pp. 239-256. Cited by: SSI. * [2]P. Furgale, T. D. Barfoot, and G. 
Sibley (2012) Continuous-time batch estimation using temporal basis functions. In ICRA.
* [9] F. Pomerleau, F. Colas, and R. Siegwart (2015) A review of point cloud registration algorithms for mobile robotics.
In this paper, we present a fast, lightweight odometry method that uses the Doppler velocity measurements from a Frequency-Modulated Continuous-Wave (FMCW) lidar without data association. FMCW lidar is a recently emerging technology that enables per-return relative radial velocity measurements via the Doppler effect. Since the Doppler measurement model is linear with respect to the 6-degrees-of-freedom (DOF) vehicle velocity, we can formulate a linear continuous-time estimation problem for the velocity and numerically integrate for the 6-DOF pose estimate afterward. The caveat is that angular velocity is not observable with a single FMCW lidar. We address this limitation by also incorporating the angular velocity measurements from a gyroscope. This results in an extremely efficient odometry method that processes lidar frames at an average wall-clock time of 5.64ms on a single thread, well within the 100ms frame period of the 10Hz lidar we tested. We show experimental results on real-world driving sequences and compare against state-of-the-art Iterative Closest Point (ICP)-based odometry methods, presenting a compelling trade-off between accuracy and computation. We also present an algebraic observability study, where we demonstrate in theory that the Doppler measurements from multiple FMCW lidars are capable of observing all 6 degrees of freedom (translational and angular velocity).
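To make the linearity claim above concrete, the following numpy sketch builds a standard per-return Doppler measurement row and solves a small linear least-squares problem for the 6-DOF body velocity, with the gyroscope constraining the angular part. The measurement model \\(u_{i}=d_{i}^{T}v+(p_{i}\\times d_{i})^{T}\\omega\\) with \\(d_{i}=p_{i}/\\lVert p_{i}\\rVert\\), the sign convention, the frame handling, and the pseudo-measurement weighting are illustrative assumptions rather than the exact formulation of the paper.

```python
import numpy as np

def doppler_rows(points):
    """One linear row per return for x = [v; omega]:
    u_i = d_i^T v + (p_i x d_i)^T omega, with d_i = p_i / ||p_i||
    (sign convention assumed; flip if range rate is defined the other way)."""
    d = points / np.linalg.norm(points, axis=1, keepdims=True)
    return np.hstack([d, np.cross(points, d)])

def solve_body_velocity(points, radial_speeds, gyro, gyro_weight=1e3):
    """Weighted linear least squares for the 6-DOF body velocity; the gyro
    enters as strong pseudo-measurements of the (weakly observable) omega."""
    A = np.vstack([doppler_rows(points),
                   gyro_weight * np.hstack([np.zeros((3, 3)), np.eye(3)])])
    b = np.concatenate([radial_speeds, gyro_weight * gyro])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]  # linear velocity, angular velocity

# Tiny example with three returns and a yaw-rate-only gyro reading.
pts = np.array([[10.0, 0.0, 0.0], [0.0, 10.0, 0.0], [5.0, 5.0, 1.0]])
v, w = solve_body_velocity(pts, np.array([9.8, 0.1, 7.0]),
                           gyro=np.array([0.0, 0.0, 0.02]))
```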
isprs/0d1e3a11_632d_473d_ab10_81945278ba47.md
Extracting dimensions and locations of doors, windows, and door thresholds out of mobile lidar data using object detection to estimate the impact of floods S. Van Ackere 1,* J. Verbeurgt 1 L. De Sloover 1 A. De Wulf 1 N. Van de Weghe 1 P. De Maeyer 1 ## 1 Introduction For a variety of applications, like the evaluation of the effect of (architectural) design, various construction methods, and engineering applications on the damage due to flood events, flood damage and risk assessment would benefit from the consideration of the distinctiveness of buildings [1]. In such an effective case-by-case analysis of damage to a building at micro level, building components that resist against flood impacts and are unique to each building need to be taken into account [2]. Therefore, acquiring the dimensions of doors and windows is, among other things, of high importance in flood risk assessment studies on micro level. The locations and dimensions of these open, weak spots in buildings are decisive factors in whether or not the water of a flood can easily penetrate, damage or destroy building contents, and affect inhabitants [1, 3, 4]. Moreover, the information of location and dimensions of doors and windows, and other openings can be taken into account when evaluating local flood protection (e.g., temporary barriers like sand bags). On the other hand, in some cases, openings in load-bearing walls (which for example support the elevated building) are necessary to relieve the pressure of standing or slow-moving water against the structure (called hydrostatic loads) [5]. As a result of these openings, the flood water reaches equal levels on all sides of the construction and thus lesser the potential for damage caused by a difference in hydrostatic loads on opposite sides of the structure. Although it is already possible to extract the dimensions of doors, windows and basement holes from Energy Performance Certificates (EPC) [6], extracting the exact location of these objects or weak spots from these documents is not possible. On the other hand, it is possible to extract the orientation of the normal vector of these doors and windows from EPC documents, thus making it possible to align these doors and windows on walls of the building with the same normal vector orientation. Moreover, information on door threshold dimensions, for example, cannot be extracted from EPC documents. Therefore, an algorithm that can detect the exact location of doors and windows adds enormous value to flood risk management and flood disaster risk reduction in the future. ### Indoor Social Impact Regarding the activity and place of the victims at the time of a flood event, research shows that a significant percentage of fatalities occur indoors [7, 8, 9]. Diakakis, M. (2016) conducted research indicating that from mortality numbers due to flood events in Greece, 14.8% of all victims passed away indoors [7]. Research conducted by Jonckman et al. (2009) showed that even a higher portion of fatal incidents occurred indoors as a result of Hurricane Katherina. In this case, the majority of victims (53%) passed away in individual residences [8]. Important to mention is that fieldwork showed that many of these residential buildings were unleaved or elevated less than three feet, single-story homes [8]. Although a portion of these victims died when their houses collapsed due to the powerful force of the flood, many others drowned in their home due to a high horizontal and rising flood velocity. 
Flood water can penetrate through the weak spots of buildings (e.g., doors and windows), affecting inhabitants. Therefore, it is crucial to determine the locations and dimensions of these weak spots. It then becomes possible to estimate and assess the flood risk of inhabitants and to calculate the indoor flood characteristics (e.g., indoor horizontal flood velocity, vertical flood velocity, water depth and duration) with specific flood models. In addition to determining the dimensions and location of doors, windows and other weak spots against the force of floods, considering the human impact is also essential for estimating the direct economic impact due to flood events. The dimensions and locations of doors and windows determine the indoor flood characteristics and thus the direct indoor economic impact when a flood permeates these areas. Moreover, the location of doors and windows and the height of door thresholds can exclude houses to be affected by a flood event and enabling emergency services to work in a more effective manner. For example, a recent pluvial flood simulation conducted by engineering company Arcadis for a vulnerability study of the city of Ghent showed that 72-88% of buildings are, in reality, not affected by this specific pluvial flood event (with a return period of 20 years) when a door threshold for every door of 10 cm or 15 cm respectively is assumed (see Table 1). ### LiDAR Data and the Point Cloud Extension LiDAR (Light Detection And Ranging) is an optical remote-sensing technique that uses laser light to produce highly dense and accurate (x, y, z) measurements. Besides containing only x, y and z values, LiDAR sensors can capture dozens of other variables, such as intensity and return number, red, green and blue colour values and return times. Handling LiDAR data is a complex challenge due to the millions of rapidly produced points with large numbers of variables measured on each point by LiDAR sensors. This data must be stored efficiently while allowing quick and convenient access to the stored point cloud data afterwards. Many Lidar Information Systems (LIS), which have a spatial relational database architecture as a core, have been developed over the past years in response to storing difficulties (e.g., Point Cloud extension in PostgreSQL [10], Oracle [11] ). For this research, the Point Cloud extension, together with the PostgreSQL database, is used. The Point Cloud extension, created by Blottere P., stores point clouds into so-called patches of several hundred points each (see Figure 1) [10]. Instead of having a table with billions of points, the table is reduced to tens of millions of rows, which is more tractable. PostgreSQL Pointcloud deals with all this variability by using a so-called schema document to describe the contents of any particular LiDAR point. Each point can contain several variables: X, Y, Z, intensity and return number, red, green, and blue values, return times, etc. The schema document format used by PostgreSQL Pointcloud is the same one as used by the Point cloud Data Abstraction Library (PDAL) library [12]. The PDAL library is a C++ BSD library for translating and manipulating point cloud data quickly and fluently. ## 2 Methodology ### Data Preparation Although some research is conducted on running object detection and semantic segmentation on panorama images, in most scientific studies, spherical images are first converted into a less distorted format. 
360° spherical panorama images are converted to cube boxes via the so-called cube mapping process. Cube mapping is a method of environment mapping that uses the six faces of a cube as the map shape, with every face of the cube consisting of undistorted, perspective images (up, down, left, right, forward and backward), whereas the equirectangular format is a single stitched image of 360° horizontally and 180° vertically. Because the cubic format suffers from less distortion than the equirectangular format, it becomes possible to detect objects more accurately. In order to convert the equirectangular projection to cube box projection, the spherical coordinates are used. First, the pixel coordinates \\((i,j)\\) of the spherical image are normalised to the range \\([-1,1]\\), depending on the position of the pixel:

\\[x=\\frac{2i}{w}-1,\\qquad y=1-\\frac{2j}{h}\\tag{1}\\]

where \\(w\\) and \\(h\\) denote the width and height of the equirectangular image.

Figure 1: Point cloud of mobile LiDAR is stored in patches of 600 points, Ostend (Belgium)

Table 1: Percentage of buildings not affected by a pluvial flood event (for a return period of T20, T100 and T20 for the in situ situation of 2050) due to door threshold consideration.

Figure 2: Cube mapping of spherical panorama images allows for more accurate detection of objects

For the case of Ostend, on average, 70,000 spherical images were first converted into cube boxes (see Figure 2). By labelling thousands and thousands of images, it becomes possible to train a neural network in detecting objects or creating semantic segmentation [22, 23]. These NN approaches learn to perform tasks by considering examples without accounting for predefined features and generally without being programmed with any task-specific rules. There are two different approaches when it comes to facade segmentation: top-down methods [24, 25, 26] and bottom-up methods [27, 28, 29, 30]. The former method, top-down, uses shape grammar to parse a facade into a set of production rules and element attributes [25]. This method starts with the philosophy that building facades are highly structured due to architectural design choices and construction constraints [24, 25, 26]. For example, a door will often only appear on street level, and windows are not placed randomly but typically at the same height and in a vertical ordering. Therefore, this method searches for the best possible derivation of every object, using a specific shape grammar. Unfortunately, until now, grammar-based methods have achieved poor accuracy of pixel-wise classification [25, 31]. Moreover, this method is time-inefficient during training and inference [32].
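Returning to the cube-mapping conversion above, the following is a minimal numpy sketch of how one cube face can be sampled from an equirectangular panorama using normalised coordinates in the sense of Eq. (1). The face layout, the rotation convention for selecting faces, and the nearest-neighbour sampling are illustrative assumptions, not the conversion script used for the Ostend data.

```python
import numpy as np

def cube_face(equi, face_size, R_face):
    """Sample one cube face from an equirectangular image `equi` (H x W x 3).
    R_face rotates the canonical forward-looking face to the desired face."""
    H, W = equi.shape[:2]
    # Normalised face coordinates in [-1, 1].
    u = np.linspace(-1.0, 1.0, face_size)
    v = np.linspace(-1.0, 1.0, face_size)
    uu, vv = np.meshgrid(u, v)
    # Rays through the canonical face of a unit cube, rotated to the target face.
    rays = np.stack([uu, -vv, np.ones_like(uu)], axis=-1) @ R_face.T
    x, y, z = rays[..., 0], rays[..., 1], rays[..., 2]
    lon = np.arctan2(x, z)                                 # longitude in (-pi, pi]
    lat = np.arcsin(y / np.linalg.norm(rays, axis=-1))     # latitude in [-pi/2, pi/2]
    # Back to equirectangular pixel indices (nearest neighbour).
    i = ((lon / np.pi + 1.0) * 0.5 * (W - 1)).round().astype(int)
    j = ((0.5 - lat / np.pi) * (H - 1)).round().astype(int)
    return equi[j.clip(0, H - 1), i.clip(0, W - 1)]

# Example: forward and right-looking 512x512 faces of a placeholder panorama.
equi = np.zeros((1024, 2048, 3), dtype=np.uint8)           # stand-in for a real image
front = cube_face(equi, 512, np.eye(3))
R_right = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [-1.0, 0.0, 0.0]])
right = cube_face(equi, 512, R_right)
```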
On the other hand, bottom-up methods classify pixels, taking context (e.g., neighbouring pixels) into account [28, 29]. This method employs a pipeline architecture in which each part of the pipeline tries to correct wrongly classified pixels or optimise the segments created by previous iterations. Currently, this method is more efficient and of a higher quality compared to the top-down method. In recent years, much progress has been made on object detection, mainly through the development and use of convolutional neural networks (CNNs). We can consider Faster R-CNN (region-based convolutional neural networks) [33], R-FCN (region-based fully convolutional network) [34] and SSD (single-shot detector) [35]. Overall, the best instance segmentation algorithm depends on the desired trade-off between accuracy, speed and memory (see Figure 5) [36]. Important to note is that a false-positive object detection would, in this case, indicate a higher socio-economic damage than occurs in reality. As aforementioned, multiple algorithms can be used to train and run an instance segmentation on perspective images (converted from spherical images). For example, He, K. et al. (2017) developed Mask-RCNN, which detects objects in an image while simultaneously generating a high-quality segmentation mask for each instance (see Figure 6) [37]. Starting from an instance segmentation on perspective images (converted from spherical images) allows for detection of doors and windows in mobile LiDAR data.

Figure 5: Accuracy versus speed for an instance segmentation algorithm [36]

Figure 6: Examples of outputs from the Mask-RCNN algorithm [37]
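As a minimal illustration of this instance-segmentation step, the sketch below runs a COCO-pretrained Mask R-CNN from torchvision on a single cube-face image. COCO does not contain door or window classes, so a usable detector would have to be fine-tuned on the labelled facade images described above; the file name, the score threshold and the model choice are illustrative assumptions.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# COCO-pretrained Mask R-CNN; a door/window detector would be fine-tuned
# on labelled facade images, but the inference interface stays the same.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
model.eval()

img = to_tensor(Image.open("cube_face.jpg").convert("RGB"))  # hypothetical file
with torch.no_grad():
    out = model([img])[0]            # dict with boxes, labels, scores, masks

keep = out["scores"] > 0.7           # illustrative confidence threshold
boxes = out["boxes"][keep]           # (x1, y1, x2, y2) per kept instance
masks = out["masks"][keep, 0] > 0.5  # binary per-instance masks (H x W)
```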
### Extraction of Door Dimensions out of Point Clouds

Images do not always visualise the whole object of interest (e.g., door or window) because the line of sight is often obstructed by other objects or part of the building itself. This is undoubtedly the case when the point of view of the image is located at a slight angle from the object (see Figure 7). Consequently, automatically extracting the exact dimensions of doors or windows out of the object segmentation is impossible. Therefore, the correct dimensions need to be extracted from the point cloud based on the instance segmentation. Since the instance segmentation algorithm has yet to give desirable results, labelled training data is used to further develop the processing algorithm that extracts the dimensions and locations of doors and windows from a point cloud.

Figure 7: Training set data is used to further develop the point cloud processing algorithm (door labelled by red polygon)

Detecting doors, windows and door thresholds and assessing their locations and dimensions can be done by running a semantic segmentation on point clouds [38, 39, 40]. Unfortunately, for the case of Flanders, the point cloud does not have extra metadata apart from the information about the location (e.g., no intensity, scan direction flag or edge of flight line, and no classification). Therefore, a semantic segmentation on the point cloud is challenging or even impossible to perform accurately. Another method is required to detect the locations and dimensions of doors and windows. Research conducted in 2005 showed that it is also possible to create a distance-value-added panoramic image [41], where every pixel holds the distance value measured from the location where the images are taken. Similarly, it is possible to create 'dimension-added-value' panorama images, making it possible to extract the location and dimensions after completing the object detection or semantic segmentation. This method provides the benefit of quickly extracting only relevant point cloud data, whereby point cloud analysis is reduced to a minimum. Moreover, with the use of multiple 'dimension-added-value' panorama images, it becomes possible to run semantic segmentation from multiple points of view. As a result, it is feasible to detect doors and windows even if an obstacle (e.g., a car or tree) blocks the line of sight from one specific point of view. After detecting the object (e.g., door, window) with a semantic segmentation algorithm on the spherical images, the metadata of the pixels that are classified as a door or window is extracted and stored in a database. Although mobile point clouds can give highly accurate measurements of dimensions, this geometry acquisition method unfortunately inevitably includes measurement noise at varying degrees. This noise is caused by signal backscattering of the measured targets and the materials of the targets' surface [42]. Density-Based Spatial Clustering of Applications with Noise (DBSCAN) is used to deal with the noise of the mobile mapping acquisition. DBSCAN groups points that are closely packed together (points with many nearby neighbours). In order to run the DBSCAN clustering, two parameters are required: the maximum distance between points \\(\\varepsilon\\) and the minimum number of points required to form a dense region [43]. First, all so-called core points with a predefined minimum number of points inside the \\(\\varepsilon\\) neighbourhood of every point are selected. Next, a connected component is created of all core points that are in the neighbour graph. Hereafter, every non-core point is assigned to a formed cluster if the non-core point lies within the \\(\\varepsilon\\) distance of a cluster. All remaining points are labelled as noise and can be ignored (see Figure 9). Consequently, all points within the DBSCAN cluster are mutually density-connected, and if one point is density-reachable from any point of the cluster, it is part of the cluster as well. After reducing the noise in the point cloud samples, the detection of a door and window plane is completed. The planes are not created using normal vectors (see Discussion) but rather by calculating the line of best fit in the x,y plane (see Figure 10). Although noise is ignored with the use of DBSCAN clustering, the line of best fit is created by considering the possible existence of outliers. Because of this, it becomes possible to define doors quickly.

Figure 8: Point cloud of Ostend, captured from a mobile platform

Figure 9: Besides containing the actual measurement of walls, doors and windows, mobile LiDAR data (purple points) contains noise (red circles), which makes it challenging to extract the points that represent doors (red points).

Figure 10: Pixels are automatically selected based on the training data set, whereafter the relevant point cloud dimensions can be used

Unfortunately, due to a wide range of door shapes (e.g., ornamentation and sculpting on front doors), the proposed algorithm does not give satisfactory results after a visual evaluation. Therefore, more extensive research is needed to improve this proposed algorithm pipeline so that the locations and dimensions of doors and windows can later be used in the flood risk assessment methodology in Flanders.
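A minimal sketch of the de-noising and plane-trace step described above, using scikit-learn's DBSCAN and an ordinary least-squares line fit in the x,y plane. The parameter values, the choice of the largest cluster, and the plain polyfit (a robust fit such as RANSAC could be substituted to better tolerate outliers) are illustrative assumptions rather than the tuned pipeline.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def door_trace_from_points(pts_xy, eps=0.05, min_samples=20):
    """Cluster candidate door points in the x,y plane with DBSCAN
    (eps = neighbourhood radius in metres, min_samples = core-point
    threshold), drop the noise label (-1), and fit a line y = a*x + b to the
    largest cluster as the horizontal trace of the door plane."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts_xy)
    keep = labels != -1
    if not np.any(keep):
        return None
    # Largest density-connected component.
    best = np.argmax(np.bincount(labels[keep]))
    cluster = pts_xy[labels == best]
    a, b = np.polyfit(cluster[:, 0], cluster[:, 1], deg=1)
    return a, b, cluster
```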
## 3 Discussion ### Accuracy of the Point Cloud The point cloud of the mobile mapping has an accuracy between 1 to 2 cm, which means that the extracted location and dimensions of doors, windows and door thresholds will be accurate enough to use in flood risk assessment studies [44]. Nevertheless, this accuracy needs to be included and mentioned together with the output of this calculation so that it can be accounted for in the decision-making process of flood risk management. ### Difference in Region-Dependent Appearance While the appearance of doors and windows seem only to slightly change from region to region, sometimes these differences can be significant, resulting in decreased accuracy of instance segmentation algorithms. Thus, when an object detection algorithm is trained, special attention must be given to training the model based on images of doors and windows in the specific region of the application (e.g., Flanders). ### Type of Materials In contrast with algorithms that only use segmentation of LiDAR data to detect doors and windows, this algorithm can provide more information than the dimensions and location of these objects. Because this prototype contains an object detection script, it is possible to incorporate a material detection algorithm, which can detect whether a door or window is made of wood or metal. Although this material detection will remain a rudimentary estimate, this information can be used to estimate the stability of these weak spots for flood events in buildings. Furthermore, the object detection algorithm can also detect the presence of barrier gutters around doors and windows. Moreover, cat doors and mailboxes in front doors can be detected and considered in flood risk assessments. ### Conversion Time Spherical to Cube Box Image The conversion from the spherical image to cube box images take, on average, seven minutes since the algorithm does not support multifreeding on the graphics processing unit (GPU). Instead, everything is purely calculated on central processing units (CPU), which is computation intensive. Fortunately, it was possible to convert the images in parallel using the High-Performance Computing (HPC) infrastructure of Flanders [45]. As a result, it was possible to convert 70,000 images in a few hours, instead of 34 days. Nevertheless, changing this conversion script into a script that supports multifreeding on a GPU will be necessary for future use. ### Applicability and Scale of Prototype Although this prototype is tailored for the Flanders region, it can be used in other regions as well, after some additional script is embedded. The spherical panorama images and mobile LiDAR data can be extracted from the Google StreetView panorama images [46, 47]. Cavello M. et al. (2015) suggested a method to reconstruct a point cloud based on multiple different Google StreetView panoramic images along a street [47]. By using the reconstructed point cloud and panorama images from Google StreetView, this prototype can also be used to detect the dimensions and locations of doors, windows and door thresholds. ### Median Clustering of Normal Vectors In the development of the prototype, the median cluster method of normal vectors was not used. With the clustering of normal vectors of a point cloud, it becomes possible to get segmentations of planes. Unfortunately, due to an overload of noise at windows and windows in doors and the lack of LiDAR data of the glass, it is challenging to extract planes of doors and windows by clustering normal vectors. 
Moreover, not all front doors have a perfectly flat surface. For example, ornamentation and sculpting on front doors make the detection of the door plane extremely challenging. Nevertheless, a combination of the normal vector clustering method and the line of best fit (in the x,y plane) method, could offer an improved, complementary methodology. ### Upgrading the Prototype As cited above, the mentioned script requires further research and development to detect the dimensions and locations of doors, windows and door thresholds automatically. At the moment, the script is not ready for valorisation without further improvement in object detection and the final door and window segmentation. ## 4 Conclusions Consideration of the location and dimensions of doors and windows plays a crucial role in increasing the accuracy of flood risk assessment in Flanders. Until now, there has been a lack of data concerning the design and construction of flood-prone building structures. However, the combination of LiDAR data and panoramic images available in Flanders could be used to provide valuable insight into the matter. With the use of instance segmentation on 360deg images and processing and analysis of point cloud data, it becomes possible to obtain information on weak spots. This paper reports on the current state of research in the areas of object detection and instance segmentation on images to detect doors and windows in mobile LiDAR data. ## References * [1] Amirebrahimi, S.; Rajabifard, A.; Mendis, P.; Ngo, T. D.; Ngo, T. _A data model for integrating GIS and BIM for assessment and 3D visualisation of flood damage to building_; * [2] Pistrika, A. _Flood Damage Estimation based on Flood Simulation Scenarios and a GIS Platform_; 2010; Vol. 30; * [3] Amirebrahimi, S.; Rajabifard, A.; Mendis, P.; Ngo, T. A BIM-GIS integration method in support of the assessment and 3D visualisation of flood damage to a building. _J. Spat. Sci._**2016**, doi:10.1080/14498596.2016.1189365. * _Amirebrahimi et al. (2016)_ Amirebrahimi, S.; Rajabifard, A.; Mendis, P.; Ngo, T. A Planning Decision Support Tool for Assessment and 3D Visualisation of Flood Risk to Buildings. In _2016 Floodplain Management Association National Conference_; 2016; pp. 1-15. * _Openings in Foundation Walls and Walls of Enclosures Below Elevated Buildings in Special Flood Hazard Areas in accordance with the National Flood Insurance Program_; * _Van Ackere et al. (2018)_ 6. Van Ackere, S.; Beullens, J.; De Wulf, A.; De Maeyer, P. Data Extraction Algorithm for Energy Performance Certificates ( EPC ) to Estimate the Maximum Economic Damage of Buildings for Economic Impact Assessment of Floods in Flanders, Belgium. _Int. J. Geo-information_**2018**, \\(7\\), doi:10.3390/ijg7070272. * _Diakakis (2016)_ 7. Diakakis, M. Have flood mortality qualitative characteristics changed during the last decades? The case study of Greece. _Environ. Hazards_**2016**, _15_, 148-159, doi:10.1080/17477891.2016.1147412. * _Jonkman et al. (2009)_ Jonkman, S. N.; Maaskant, B.; Boyd, E.; Levitan, M. L. Loss of life caused by the flooding of New Orleans after hurricane Katrina: Analysis of the relationship between flood characteristics and mortality. _Risk Anal._**2009**, _29_, 676-698, doi:10.1111/j.1539-6924.2008.01190.x. * _Lew et al. (2015)_ 9. Lew, E. O.; Wetli, C. V. Mortality from Hurricane Andrew. _J. Forensic Sci._**2015**, _41_, 13933J, doi:10.1520/jfs13933j. * Paul (2019)_ 10. Paul, B. 
A PostgreSQL extension for storing point cloud (LIDAR) data Available online: [https://github.com/pgpointcloud/pointcloud](https://github.com/pgpointcloud/pointcloud). * _Ravada et al. (2009)_ 11. Ravada, S.; Kazar, B. M.; Kothuri, R. Query Processing in 3D Spatial Databases: Experiences with Oracle Spatial 11g. In _3D Geo-Information Sciences_; Springer Berlin Heidelberg: Berlin, Heidelberg, 2009; pp. 153-173. * Point Data Abstraction Library - - pdalio Available online: [https://pdal.io/](https://pdal.io/) (accessed on May 27, 2019). * _Zhao et al. (2003)_ 13. Zhao, W.; Chellappa, R.; Phillips, P. J.; Rosenfeld, A. Face recognition. _ACM Comput. Surv._**2003**, _35_, 399-458, doi:10.1145/954339.954342. * _Suarez et al. (2012)_ 14. Suarez, J.; Murphy, R. R. Hand gesture recognition with depth images: A review. In _2012 IEEE ROMAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication_; IEEE, 2012; pp. 411-417. * _Ziaefard and Bergevin (2015)_ 15. Ziaefard, M.; Bergevin, R. Semantic human activity recognition: A literature review. _Pattern Recognit._**2015**, _48_, 2329-2345, doi:10.1016/J.PATCOG.2015.03.006. * _Geronimo et al. (2010)_ 16. Geronimo, D.; Lopez, A. M.; Sapra, A. D.; Graf, T. Survey of Pedestrian Detection for Advanced Driver Assistance Systems. _IEEE Trans. Pattern Anal. Mach. Intell._**2010**, _32_, 1239-1258, doi:10.1109/TPAMI.2009.122. * _Zehang Sun et al. (2006)_ 17. Zehang Sun; Bebis, G.; Miller, R. On-road vehicle detection: a review. _IEEE Trans. Pattern Anal. Mach. Intell._**2006**, _28_, 694-711, doi:10.1109/TPAMI.2006.104. * _Anguelov et al. (2019)_ 18. Anguelov, D.; Koller, D.; Parker, E.; Thrun, S. _Detecting and Modeling Doors with Mobile Robots_; 19. * _Parmar (2019)_ 19. Parmar, R. Detection and Segmentation through ConvNets Available online: [https://towardsdatascience.com/detection-and-segmentation-through-convnets-47aa42de27ea](https://towardsdatascience.com/detection-and-segmentation-through-convnets-47aa42de27ea) (accessed on Jun 11, 2019). * _Stoeter et al. (2012)_ 20. Stoeter, S. A.; Le Mauff, F.; Papanikolopoulos, N. P. Real-time door detection in cluttered environments. In _Proceedings of the 2000 IEEE International Symposium on Intelligent Control. Held jointly with the 8th IEEE Mediterranean Conference on Control and Automation (Cat. No.00CH37147)_; IEEE, pp. 187-192. * _Munoz-Salinas et al. (2003)_ 21. Munoz-Salinas, R.; Aguirre, E.; Garcia-Silvente, M.; Gonzalez, A. _Door-detection using computer vision and fuzzy logic_; 22. * _Cicirelli et al. (2003)_ 22. Cicirelli, G.; D'orazio, T.; Distante, A. Target recognition by components for mobile robot navigation. _J. Exp. Theor. Artif. Intell._**2003**, _15_, 281-297, doi:10.1080/0952813021000039430. * _Cokal and Erden (2013)_ 23. Cokal, E.; Erden, A. Development of an image processing system for a special purpose mobile robot navigation. In _Proceedings Fourth Annual Conference on Mechatronics and Machine Vision in Practice_; IEEE Comput. Soc; 246-252. * _Delo et al. (2013)_ 24. Delo, A. -'; Martinovic'c, M.; Van Gool, L. Bayesian Grammar Learning for Inverse Procedural Modeling._**2013**, doi:10.1109/CVPR.2013.33. * _Teboul et al. (2012)_ 25. Teboul, O.; Kokkinos, I.; Simon, ic; Koutsourakis, P.; Paragios, N. _Shape Grammar Parsing via Reinforcement Learning_; 26. * _Riemenschneider et al. (2012)_ 26. Riemenschneider, H.; Krispel, U.; Thaller, W.; Dongoser, M.; Havemann, S.; Fellner, D.; Bischof, H. Irregular lattices for complex shape grammar facade parsing. 
In _2012 IEEE Conference on Computer Vision and Pattern Recognition_; IEEE, 2012; pp. 1640-1647. * _Martinovic et al. (2018)_ 27. Martinovic, A.; Mathias, M.; Weissenberg, J.; Gool, L. Van _A Three-Layered Approach to Facade Parsing_; 28. * _Tylecek and Radimsara (2019)_ 28. Tylecek, R.; Radimsara, R. R. _Spatial Pattern Templates for Recognition of Objects with Regular Structure_; 29. * _Yang et al. (2011)_ 29. Yang, M. Y.; Forstner, W. A hierarchical conditional random field model for labeling and classifying images of man-made scenes. In _2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops)_; IEEE, 2011; pp. 196-203. * _Cohen et al. (2014)_ 30. Cohen, A.; Schwing, A. G.; Pollefeys, M. Efficient Structured Parsing of Facades Using Dynamic Programming. In _2014 IEEE Conference on Computer Vision and Pattern Recognition_; IEEE, 2014; pp. 3206-3213. 31. Gadde, R.; Marlet, R.; Nikos, P. Learning Grammars for Architecture-Specific Facade Parsing. _Int. J. Comput. Vis._**2014**, _117_, 290-316. * 32. Kozinski, M.; Gadde, R.; Zagoruyko, S.; Obozinski, G.; Marlet, R. A MRF shape prior for facade parsing with occlusions. In _2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_; IEEE, 2015; pp. 2820-2828. * 33. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal * 34. Dai, J.; Li, Y.; He, K.; Sun, J. R-FCN: Object Detection via Region-based Fully Convolutional Networks. **2016**. * 35. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A. C. SSD: Single Shot MultiBox Detector. **2015**, doi:10.1007/978-3-319-46448-0_2. * 36. Huang, J.; Rathod, V.; Sun, C.; Zhu, M.; Korattikara, A.; Fathi, A.; Fischer, I.; Wojna, Z.; Song, Y.; Guadarrama, S.; Murphy, K. Speed/accuracy trade-offs for modern convolutional object detectors. **2016**. * 37. He, K.; Gkioxari, G.; Dollar, P.; Girshick, R. Mask **2017**. * 38. Qi, C. R.; Su, H.; Mo, K.; Guibas, L. J. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation 2017, 652-660. * 39. Koppula, H. S.; Anand, A.; Joachims, T.; Saxena, A. Semantic Labeling of 3D Point Clouds for Indoor 2011, 244-252. * 40. Hackel, T.; Wegner, J. D.; Schindler, K. Fast semantic segmentation of 3D point clouds with strongly varying density., doi:10.5194/sspressan-III-3-177-2016. * 41. Verbree, E.; Zlatanova, S.; Dijkman, S. _Distance-value-added panoramic images as the base data model for 3D-GIS_; 2011, * 42. Guan, H.; Yu, Y.; Li, J.; Liu, P.; Zhao, H.; Wang, C. Automated extraction of manhole covers using mobile LiDAR data. _Remote Sens. Lett._**2014**, \\(5\\), 1042-1050, doi:10.1080/2150704X.2014.994716. * 43. Schubert, E.; Sander, J.; Ester, M.; Kriegel, H. P.; Xu, X. DBSCAN Revisited, Revisited: Why and How You Should (Still) Use DBSCAN. _ACM Trans. Database Syst._**2017**, _42_, 1-21. doi:10.1145/3068335. * 44. Teccon Mobile mapping in cijfers Available online: [https://teccon.be/diensen/mobile-mapping/visualiseren/](https://teccon.be/diensen/mobile-mapping/visualiseren/) (accessed on Jun 28, 2019). * 45. Vlaams Supercomputer Centrum (VSC) Supercomputing \\(|\\) VSC \\(|\\) Vlaams Supercomputer Centrum \\(|\\) Flanders Available online: [https://www.vscentrum.be/](https://www.vscentrum.be/) (accessed on Jun 13, 2019). * 46. 
nscomputer Creating point clouds with Google Street View \\(-\\) nscomputer \\(-\\) Medium Available online: [https://medium.com/@nocomputer/creating-point-clouds-with-google-street-view-185fad94ee](https://medium.com/@nocomputer/creating-point-clouds-with-google-street-view-185fad94ee) (accessed on May 27, 2019). * 47. Cavallo, M. _3D City Reconstruction From Google Street View_;
Increasing urbanisation, changes in land use (e.g., more impervious area) and climate change have all led to an increasing frequency and severity of flood events and increased socio-economic impact. In order to deploy an urban flood disaster and risk management system, it is necessary to know what the consequences of a specific urban flood event are to adapt to a potential event and prepare for its impact. Therefore, an accurate socio-economic impact assessment must be conducted. Unfortunately, until now, there has been a lack of data regarding the design and construction of flood-prone building structures (e.g., locations and dimensions of doors and door thresholds and presence and dimensions of basement ventilation holes) to consider when calculating the flood impact on buildings. We propose a pipeline to detect the dimensions and locations of doors and windows based on mobile LiDAR data and 360° images. This paper reports on the current state of research in the domain of object detection and instance segmentation of images to detect doors and windows in mobile LiDAR data. The use and improvement of this algorithm can greatly enhance the accuracy of socio-economic impact assessment of urban flood events and, therefore, can be of great importance for flood disaster management.
isprs/222d8f34_dd4d_4b4c_8af7_3f66cf80fd81.md
Stitching Large Maps from Videos taken by a Camera Moving Close Over a Plane Using Homography Decomposition E. Michaelsen Fraunhofer IOSB, Gutleuthausstrasse 1, 76275 Ettlingen, Germany [email protected] ## 1 Introduction ### Intended Applications In particular underwater robot vision is restricted to keep the distance between a structure to be monitored and the platform on which the camera is mounted short. There are ideas to enlarge the allowable distance by using gated viewing devices [9], but in the waters found where the application is supposed to be located there will always be a maximal distance where no considerable image quality is allowed due to floating obfuscation. Good image quality can often be expected from imagery taken at distances such as one meter. On the other hand the structure to be mapped may well have an extension of several hundred meters. Here we restrict ourselves to roughly and locally planar structures - such as retaining walls, harbour structures or underwater biotopes. The goal is to stitch a kind of orthophoto from a long video sequence. Under water the drift problem - as outlined in section 1.2 - is very serious. But it also occurs in unmanned aerial vehicle mapping, where the platform may be cruising in about a hundred meter height over a roughly and locally planar world of much larger extension, e.g. several kilometres in extension. ### Problem The standard state-of-the-art method for stitching of an image sequence into a larger panorama is driven by successive planar homography estimation from image to image correspondences between interest points. Most often it is assumed tacitly or explicitly that the camera should only rotate round the input pupil and not move around in space. If the scene is strictly planar, there is - in principle - no difference between the image obtained by a wide-angle view from close up (in pin-hole geometry and taken normally) and a view from further away, or even an ortho-normal map. So the stitching of large views using homographies should be equivalent to taking an ortho-normal map. However, deviations from the planar scene form, e.g. when a retaining wall is only locally planar but cylindrical in its global shape, cannot be treated this way. Moreover, if the first frame of the video sets the reference - as is often done - it may well not be exactly normal to the scene. Then there exists a distance in which the plane through the camera location and normal to its focal axis will intersect the scene plane at a line somewhere. Points on this line will be mapped to infinity if the homography estimation were precise - and points beyond this line would appear on the opposite end of the panorama. If we are only one meter away from a structure of hundreds of meters this is to be expected. More seriously, the homography sequence approach accumulates the inevitable errors in large chains of matrix multiplications. Such drift may contain un-biased parts from uncertainty in the interest point locations, but it also may contain biased parts. E.g. homography estimation tends to hide un-modelled lens distortions in the rotational part of the homography [2]. ### Related Work Many panorama stitching software packages are commercially available or can be downloaded for free from the web such as HUGIN [1]. 
The theory of optimal estimation of homogeneous entities, such as planar homography matrices, with more entries than degrees of freedom, from image to image correspondences with proper uncertainty propagation has reached a high level of sophistication [5]. RANSAC methods for robust estimation of such entities are standard today [4, 7], but there are also alternatives such as iterative reweighting or GoodSAC [11]. Under water panorama stitching has been addressed e.g. by [2] with particular emphasis on the lens distortion induced drift.

## 2 Stitching Local Panoramas into a Large Map

### Homography Estimation

A planar homography is a mapping \\(x'=h(x)\\) from one image into the other keeping straight lines straight. Here \\(x\\) and \\(x'\\) respectively are the points in the images. Homographies form an algebraic group with the identity as one-element. Using homogeneous coordinates the homographies turn out to be linear: \\(x'=Hx\\), where \\(H\\) is a 3x3 matrix whose entries depend on the choice of the coordinate system in the images. This linear description hides the highly non-linear nature of homographies in the division when transforming \\(x'\\) back into inhomogeneous image coordinates. Thus homographies may map a finite point into infinity, and they are not invariant for statistically important entities such as the centre of gravity or normal distributions. Still, there is consensus today that homographies can be estimated from a set of four or more correspondences of interest points using the linear matrix equation provided that 1) a coordinate system is used that balances the entries into the equation system such that signs have equal frequencies and absolutes are close to unity [7], and 2) \\(H\\) is not too far away from the identity matrix (in particular the "projective" entries \\(H_{31}\\) and \\(H_{32}\\) should be small). In a video sequence 1) can be forced and 2) can be assumed. Thus, we follow the usual procedure using interest points, correlation for correspondence inspection, and RANSAC [4] as robust estimator. The activity diagram in Figure 1 gives the details of the procedure. In each frame of the video a set of interest points \\((p_{i}:i=1,\\ldots,k)\\) is extracted using the well-known squared averaged gradient operator in its Kothe variant [6, 8]. These are tracked back in the previous frame also using standard functions - here optical flow including image pyramids from OpenCV [12]. Among these a consensus set is selected and simultaneously an optimal homography is estimated using linear estimation and RANSAC on the correspondences of the \\(p_{i}\\) in coordinates transformed accordingly [7]. Homographies can not only be estimated for successive video frames but also for frames further apart from each other as long as there is sufficient overlap. However, if there is no sufficient overlap anymore, the homographies must be chained in a sequence - by successive multiplication of matrices. Since there is uncertainty in the entries of this product there will be a drift - also in the "projective" entries \\(H_{31}\\) and \\(H_{32}\\). Sooner or later points from an image far away from the first frame will thus be mapped to infinity.
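The per-frame loop of Section 2.1 can be sketched with OpenCV as follows. This is a minimal illustration, not the original implementation: cv2.goodFeaturesToTrack stands in for the Kothe structure-tensor interest operator, and the corner count, pyramid settings and RANSAC threshold are assumed values.

```python
import cv2
import numpy as np

def frame_to_frame_homography(prev_gray, cur_gray):
    """Estimate the homography mapping the current frame into the previous
    one: detect interest points, track them back into the previous frame
    with pyramidal optical flow, and fit H robustly with RANSAC (cf. Fig. 1)."""
    pts = cv2.goodFeaturesToTrack(cur_gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=8)
    prev_pts, status, _ = cv2.calcOpticalFlowPyrLK(cur_gray, prev_gray, pts, None)
    ok = status.ravel() == 1
    H, inlier_mask = cv2.findHomography(pts[ok], prev_pts[ok],
                                        cv2.RANSAC, ransacReprojThreshold=2.0)
    if H is None:
        return None, 0                      # failure branch of the diagram
    return H, int(inlier_mask.sum())

# Chaining: H_0k maps frame k into the reference frame of the current patch,
# H_0k = H_01 @ H_12 @ ... @ H_(k-1)k, accumulating the drift discussed above.
```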
### Homography Decomposition and Rectification

Here \\(H\\) must be given in the normalized form, i.e. with the image coordinate system transformed such that the focal length equals unity and the principal point of the camera equals the origin of the coordinate system. So focal length and principal point should be known in good approximation. The standard decomposition of the matrix \\(H\\) in the form

\\[H=\\lambda R+tn^{T}\\tag{1}\\]

is known since [3]. Here \\(R\\) is the rotation matrix of the camera between the images, \\(t\\) is a translation vector, \\(n\\) is the surface normal of the planar scene, and \\(\\lambda\\) a scalar factor. \\(t\\) can also be interpreted as a homogeneous entity. Then it is the image of the other camera centre, the epipole. \\(n\\) can also be interpreted as a homogeneous line equation. Then it is the line at infinity or the horizon of the scene. This is the most important result here. The application demands that a proper - close to orthonormal - mapping of the scene should yield \\(n_{1}=n_{2}=0\\), i.e. the normal identical to the viewing direction. After decomposition of \\(H\\) this can be achieved by applying appropriate rotations round the \\(x\\) and \\(y\\) axes:

\\[\\left[\\begin{array}{ccc}1&0&0\\\\ 0&\\cos\\alpha&\\sin\\alpha\\\\ 0&-\\sin\\alpha&\\cos\\alpha\\end{array}\\right]\\ \\mbox{and}\\ \\left[\\begin{array}{ccc}\\cos\\beta&0&-\\sin\\beta\\\\ 0&1&0\\\\ \\sin\\beta&0&\\cos\\beta\\end{array}\\right]\\tag{2}\\]

where \\(\\beta=\\arctan(n_{1}/n_{3})\\) and \\(\\alpha=\\arctan(n_{2}/n_{3})\\), the latter taken after the rotation round the \\(y\\) axis. With this transformation the view should be rectified. We refer to [10] for a detailed analysis of the decomposition. There also a purely analytical solution to the decomposition can be found using only roots. Here the classical singular value decomposition version is used, decomposing \\(H\\) into a product \\(H=UDV^{T}\\). The entries of the central diagonal matrix, \\(d_{11}\\), \\(d_{22}\\), \\(d_{33}\\), are the critical parts. They must be of sufficiently different sizes. Their differences are used as denominators while solving the quadratic equation system. Two significantly different solutions appear, among which we pick the one with \\(n\\) closest to \\((0,0,1)^{T}\\). The other solutions are flipped-sign versions of no interest. But if the two solutions are equally close to \\((0,0,1)^{T}\\) or if the singular values are too similar, the decomposition fails (resulting in a failure branch in the flow in Figure 2).

Figure 1: Activity diagram for partial homography chain estimation

### Stitching the Local Patches into a Large Panorama

It is our intention to treat all rectified panorama patches equally. No projective distortion should be applied to them anymore - since this was corrected by homography estimation, decomposition and rectification - nor any shearing - since this is excluded by sensor construction - nor any scaling - since we assume that the platform is capable of sensing, controlling and keeping its distance to the scene plane. The rotations round the \\(x\\) and \\(y\\) axes were fixed in the rectification step. We will also assume that the camera is not rotating round the \\(z\\) axis in the long run by means of appropriate other sensors on board the platform - e.g. a gravity sensor under water or a compass on a UAV. The only two remaining degrees of freedom are the shifts in \\(x\\) and \\(y\\) direction. This translation can easily be obtained by averaging the shift between the \\(p_{i,\\mathit{last}}\\) and \\(p_{i,\\mathit{first}}\\) of two successive patches. Recall that the first image of a new patch is identical with the last image of the previous patch.
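Before turning to how the patches are joined, the decomposition and rectification of Section 2.2 can be sketched with OpenCV, whose cv2.decomposeHomographyMat follows the analysis of Malis and Vargas [10] and returns up to four candidate (R, t, n) triples. Selecting the candidate with n closest to (0,0,1)^T and building a single axis-angle rotation that aligns n with the optical axis (instead of the two elementary rotations of Eq. (2), with the same aligning effect) are choices made only for this sketch; the intrinsics K and all thresholds are assumed.

```python
import cv2
import numpy as np

def rectifying_rotation(H, K):
    """Decompose H (Eq. (1)) and return a rotation aligning the recovered
    plane normal n with the viewing direction e3 = (0, 0, 1)^T."""
    _, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
    # Candidate whose normal is closest to the viewing direction.
    n = max((nm.ravel() for nm in normals), key=lambda nm: nm[2])
    n = n / np.linalg.norm(n)
    e3 = np.array([0.0, 0.0, 1.0])
    axis = np.cross(n, e3)
    s, c = np.linalg.norm(axis), float(n @ e3)
    if s < 1e-12:
        return np.eye(3)                        # already aligned
    rvec = (axis / s) * np.arctan2(s, c)        # axis-angle mapping n onto e3
    R_rect, _ = cv2.Rodrigues(rvec.reshape(3, 1))
    return R_rect

# The rectified patch is obtained by warping with the conjugated homography
# K @ R_rect @ np.linalg.inv(K), e.g. via cv2.warpPerspective.
```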
Running the interest operator with the same parameters on the same image will give the same number of interest points in the same sequence. Such algorithms are deterministic. \\(p_{i,\\mathit{last}}\\) and \\(p_{i,\\mathit{first}}\\) of two successive patches are subject to different homographies, \\(p_{i,\\mathit{last}}\\) as result of a chained homography estimation plus a rectification and \\(p_{i,\\mathit{thyr}}\\) as result of only a rectification. So there will be a resolution in this averaging process, which quantifies the success of the approach. But there cannot be any outliers. Again a UML activity diagram gives an overview over this procedure (Figure 2). Here the stitching of a local panorama patch - i.e. the estimation of a partial homography chain as given in Figure 1 is hidden in one node. This is still dead reckoning - since there is a possibly long sum of successive vectors with uncertainty drift - but it is much more stable than the multiplicative drift of the matrix chain. It is impossible that image points will ever be mapped to infinity ### Resampling a Panorama from a Video The main output of both a local estimation for patches as well as the global estimation for a panorama is a chain of homographies. So for each frame \\(i\\) of the video there is a homography \\(h_{i}\\) mapping a location - i.e. a line- and column index - of the panorama \\((h_{i}\\), \\(c_{i})^{T}\\) to a location in the \\(i\\)-th video frame \\((d_{i+}\\), \\(c_{i})^{T}\\) - \\(h_{i}\\)\\(d_{i}\\)\\(p_{i}\\), \\(c_{i})^{T}\\). However, a homography is a function mapping continuous coordinates into continuous coordinates. So if the panorama has similar or higher resolution than the video some type of interpolation will be required in order to fill the panorama with gray-values or colours from the video. Here the panorama is usually of lower resolution. So the coordinates in the video frame can be obtained simply by rounding \\((h_{i+}\\), \\(c_{i})^{T}\\). Moreover, several frames of the video may contribute to the gray-value or colour to be displayed in one panorama pixel. The following possibilities are discussed: * Averaging the value from all accessible frame locations \\((l_{u+}\\), \\(c_{u})^{T}\\); \\(l\\leq l_{u}\\leq l_{\\mathit{max}}\\) and \\(l\\leq c_{u}\\leq c_{\\mathit{max}}\\). This treats all information equally, but may give fuzzy results. * Maximizing the gray-value over the index \\(i\\). This is fast and easy, because all the non accessible positions either yield zero or NAN, but it has a bias towards brighter areas. * Minimizing the distance to the centre \\((l_{\\mathit{f}}\\), \\(c_{d})^{T}\\) using any metric \\(d_{i}=d(l_{u+}\\), \\(c_{u})^{T}\\), \\((l_{e}\\), \\(c_{d})^{T}\\)) picks the gray-value from one particular frame. Here faults in the estimation may show up as sharp edges. We used this option here in order to explicitly show such problems. * but leads to best and seamless results. ## 3 Experiments Some experiments where done outside of the water with a video taken by an Olympus PEN E-P1 camera with the standard zoom lens set to the extreme wide angle f=14mm. At this setting the lens shows considerable distortions giving slightly bending wall grooves (see Figure 3). This was not calibrated or modelled. The camera moves in a distance of about 0.7 meter along a wall constructed from large roughly fixed stones. The scene is roughly planar with deviations of about three centimetres. The camera was kept mostly normal to the surface - but free handed. 
This fairly well mimics the kind of videos that could be expected from an underwater vehicle cruising along a retaining wall. On the other hand, outside of the water we can easily step back and take a groundtruth picture with a longer focal length and less distortion. The one presented in Figure 6 was taken with a PentaxistD S using a standard SMC 1:2 f=35mm lens. Still this is not calibrated - however it is sufficiently free of distortions since this is not a zoom lens, and it can also be used Figure 2: Activity diagram for global homography chain composition on the larger 35mm film frame. Moreover, only a section from the image centre is used. From the HD video taken with the Olympus PEN local panorama patches were stitched using the standard flow outlined in section 2.1 (Figure 1). Rather arbitrarily we set the number of frames to be composed into one patch to one hundred. One such patch can be seen in Figure 4. While the view seems fairly normal on the left hand side - where the initializing frame was - it is evident that the projective drift effects already start at the right hand side of the patch (the moment that no overlap is given). We can see the kind of problem homography stitching has by using our knowledge that the stones are truly rectangular - see groundtruth in the upper picture of Figure 6. A certain drift - in particular in the \"projective\" entries \\(H_{3l}\\) and \\(H_{2z}\\) is inevitable. Figure 3 shows the rectification of this patch using the decomposition method described in Section 2.2 on the homography corresponding to the patch. It finds a reasonable compromise correcting the mistakes. In particular the rectangular structure is reproduced better. Some shear drift remains. This rectified patch is than part of the larger panorama displayed as lower picture in Figure 6, which was obtained by the method indicated in Section 2.3. It can be seen that a beginning drift is sometimes corrected by force introducing considerable non-continuous steps into the homography chain. ## 4 Conclusion and Outlook Here we could only present a very preliminary overview of the intended system. It was mainly tested on videos from outside water with mild distortions and rather good quality. Less favourable data can be expected from under water platforms. On such data often nothing can be seen. If something is seen the lighting may well be quite inhomogeneous, there will probably be floating clutter in front of the interesting scene, and the lens may be out of focus - autofocus should be off in order not to be disturbed by the clutter. Lately, we obtained such a video, and one frame of it is presented in Figure 7. The processing chain as indicated in the activity diagrams above hast to be adjusted to such situations. The same parameter values, e.g. for the interest operator, as applied to the test sequence of section 3 (just the usual default settings) give less than four interest points sometimes and on many other occasions RANSAC still fails to come up with a plausible solution. The computational flow only took the \"else\" path of the partial homography chain diagram once in more than thousand images of the example video, while persistently staying on that side for hundreds of (successive) frames of the Figure 4: Panorama patch from a hundred frames using standard homography estimation Figure 5: Rectified panorama patch using homography decomposition following [3] Figure 3: One frame of the test video Figure 6: Lower: A panorama stitched from 5 patches, i.e. 
500 frames, leading far away from the original frame on the leftmost side; Upper: Groundtruth picture of the same scene taken from further away underwater video with the default parameters. Leaving the default settings in direction to more liberal ones of course gives less stable behavior of the whole thing. Still, a preliminary result displayed in Figure 8 indicates sufficient stability to cope with such data. It is a good advice for underwater inspection to steer the vehicle as close to the structure of interest as possible. It also becomes evident that the projective drift problem occurring when large sequences of such close-up videos are stitched can be mitigated by allowing full homography only on a local scale and keeping the global transform fixed to simple 2D-translation. The decomposition of the homography between the first and last frame of a patch giving an estimate of the surface orientation of the scene turns out to be an important help for rectification and subsequent joining of the patches. Obviously there is a trade-off between large patches from long camera movements allowing a stable decomposition with little error on the surface normal and epipole estimation on the one hand and the indicated projective drift problems that immediately begin to occur when there is no sufficient overlap anymore. Setting this parameter to a hundred frames can only be a first guess that has to be replaced by a mathematical investigation searching for the optimal patch size. Of course we look forward to making more experiments with challenging under water videos in the future. There remains a lot of room for improvement in all steps of the method. ## References * [1] d'Angelo, P. HUGIN 2010.4.0, free panorama stitcher, [http://hugin.sourceforge.net/download/](http://hugin.sourceforge.net/download/) (accessed 28 Apr. 2011) * [2] Elibol, A., Moeller, B., Garcia, R., October 2008. Perspectives of Auto-Correcting Lens Distortions in Mosaic-Based Underwater Navigation. _Proc. of 23rd IEEE Int. Symposium on Computer and Information Sciences (ISCIS '08)_, Istanbul, Turkey, pp. 1-6. * [3] Faugeras, O., Lustman, F., 1988. Motion and structure from motion in a piecewise planar environment. _International Journal of Pattern Recognition and Artificial Intelligence_, 2(3), pp. 485-508. * [4] Fischler, M. A., Bolles, R. C., 1981. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. _Communications of the Association for Computing Machinery_, 24(6), pp. 381-395. * [5] Foerstner, W., 2010. Minimal Representations for Uncertainty and Estimation in Projective Spaces. _ACCV_ (2), pp. 619-632 * [6] Harris, C., Stephens, M., 1988. A combined corner and edge detector. _Proceedings of the 4th Alvey Vision Conference_. pp. 147-151. [http://www.bmva.org/bmvc/1988/avc-88-023.pdf](http://www.bmva.org/bmvc/1988/avc-88-023.pdf) * [7] Hartley, R., Zisserman, A., 2000. _Multiple View Geometry in Computer Vision_. Cambridge University Press, Cambridge. * [8] Kothe, U., 2003. Edge and Junction Detection with an Improved Structure Tensor. In: Michaelis, B., Krell, G. (Eds.): _Pattern Recognition, Proceedings_ 25\\({}^{\\text{th}}\\)_-D4GM_, Springer LNCS 2781, Berlin, pp. 25-32 * [9] Laser Optronics, Underwater Gated Viewing Cameras, [http://www.laseroptronix.se/gated/aaly.html](http://www.laseroptronix.se/gated/aaly.html), (accessed 28 Apr. 2011) * [10] Malis, E., Vargas M., Sep. 2007. Deeper understanding of the homography decomposition for vision-based control. 
INRIA report no. 6303, Sophia Antipolis, France. [http://hal.inria.fi/docs/00/17/47/39/PDF/RR-6303.pdf](http://hal.inria.fi/docs/00/17/47/39/PDF/RR-6303.pdf) (accessed 28 Apr. 2011) * [11] Michaelsen, E., von Hansen, W., Kirchof, M., Meidow, J., Stilla, U., 2006. Estimating the Essential Matrix: GOODSAC versus RANSAC. In: Foerstner, W., Steffen, R. (eds) Proceedings Photogrammetric Computer Vision and Image Analysis. _International Archives of Photogrammetry, Remote Sensing and Spatial Information Science_, Vol. XXXVI Part 3. * [12] Open CV Sources and DLLs, in particular [http://opencv.willowgarage.com/documentation/cpp/video_moti](http://opencv.willowgarage.com/documentation/cpp/video_moti) on_analysis_and_object_tracking.html?highlight=opticalflow#c alcOpticalFlowPyt.KL (accessed 26 Jun. 2011) * [13] Ren, Y., Chua, C.-S., Ho, Y.-H., 2003. Statistical background modeling for non-stationary camera. Pattern Recognition Letters (24), pp. 183-196 Figure 8: An underwater panorama stitched from 6 patches, i.e. 600 frames Figure 7: One frame of a typical underwater video
For applications such as underwater monitoring, a platform with a camera will be moving close to a large, roughly planar scene. The idea of mapping the scene by stitching a panorama using planar homographies suggests itself. However, serious problems occur with drift caused by uncertainty in the estimation of the matrices and by un-modelled lens distortions: sooner or later image points will be mapped to infinity. Instead, this contribution recommends using the homographies only for the composition of local patches. The homography obtained between the first and the last frame of such a patch can then be decomposed, giving an estimate of the surface normal. Thus the patches can be rectified and finally stitched into a global panorama using only shifts in \\(x\\) and \\(y\\). The paper reports on preliminary experiments carried out with a video taken on dry ground, but a first underwater video has also been processed.
Give a concise overview of the text below.
180
arxiv-format/1906_03050v3.md
# Optimization of light fields in ghost imaging using dictionary learning Chenyu Hu 1,2 Zhishen Tong 1,2 Zhentao Liu 1 Zengfeng Huang 3 Jian Wang3,* and Shensheng Han1,2 1Key Laboratory for Quantum Optics and Center for Cold Atom Physics of CAS, Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing 100049, China 3School of Data Science, Fudan University, Shanghai 200433, China *[email protected] ## 1 Introduction As a novel technique for optical imaging, ghost imaging (GI) was initially implemented with quantum-entangled photons two decades ago [1, 2]. In recent years, owing to its realization with thermal light and other new sources [3, 4, 5, 6, 7, 8], GI has gained new attention and developed applications in various imaging areas, such as remote sensing [9, 10], imaging through scattering media [11, 12], spectral imaging [13, 14], photon-limited imaging [15, 16] and X-ray imaging [17, 18]. Different from conventional imaging techniques that are based on the first-order correlation of light fields, GI extracts information of an object by calculating the second-order correlation between the light fields of the reference and the object arms [4, 5]. Theoretically, calculation of the second-order correlation requires infinite number of samplings of the light fields at both arms. In practice, however, the number of samplings is always finite, which often leads to reconstructed images of degraded signal-to-noise ratio (SNR) [19, 20, 21]. To address this issue, much effort has been made in designing more effective reconstruction methods for GI. On the one hand, approaches improving the second-order correlation have been proposed, which can increase the SNR of reconstructed images with theoretical guarantees [22, 23, 24, 21]. On the other hand, by exploiting sparsity of the objects' images in transform bases (e.g., wavelets [25]), methods built upon the compressed sensing (CS) theory [26, 27, 28] have also been developed [29, 30]. In general, the CS based methods have superior performance over those relying on the second-order correlation, especially for imaging smooth objects [31]. While improving the reconstruction methods has greatly promoted the practical applications of GI, there has been increasing evidence that the reconstruction quality of GI may be fundamentally restricted by the sampling efficiency [32, 33], i.e., how well information of objects is acquired in the samplings. To enable a satisfactory reconstruction from limited number of samplings in GI, a natural way is to enhance the sampling efficiency. In fact, this can be realized byoptimizing the light fields of GI; see [32, 33, 34, 35] and the references therein. Considering an orthogonal sparsifying basis, Xu _et al._[35] optimized the sampling matrix in order for their product, so called the _equivalent sampling matrix_, to have the minimum mutual coherence, which results in much refinement of the imaging quality. Though orthogonal basis is widely suitable for sparse representation of natural images, for images from a specific category, dictionary learning [36, 37] usually produces much sparser representation coefficients, suggesting room for further improvements of the reconstruction quality. Motivated by this, in this paper we propose to optimize the light fields of GI for a sparsifying basis obtained via dictionary learning. 
By minimizing the mutual coherence of the equivalent sampling matrix, the proposed scheme enhances the sampling efficiency and thus achieves an improved reconstruction quality. In comparison with the state-of-the-art optimization methods for light fields in GI, the superiority of our scheme is confirmed via both simulations and experiments. The main advantages of the proposed scheme is summarized as follows: * Inspired from some previous researches in CS [38, 39], we formulate the problem of minimizing the mutual coherence of the equivalent sampling matrix in GI as a Frobenius-norm minimization problem, which yields a closed-form solution that depends on the sparsifying basis only. To the best of our knowledge, the suggested solution of the light fields is the first closed-form result in the GI optimization field. * The proposed scheme enables successive samplings. In GI, successive samplings means that when more samplings are available (or needed), one can simply augment new rows to the currently optimized sampling matrix in order to form a new one, without the need to perform additional optimization over the entire matrix. Such feature can bring great convenience to the practical applications of GI and was not addressed in previous works. It is worth mentioning that matrix optimization based on dictionary learning has also been studied in the CS literature, see, e.g., [38, 39, 40]. However, the optimizations in [38, 39] were carried out over sampling matrices of fixed sizes, which does not allow successive samplings. Although Duarte's method [40] dealt with the matrix optimization problem of alterable sampling size, it is also not compatible to GI because of the demanding quantization accuracy. Moreover, those methods all fail to cope with the non-negative nature of sampling matrices in GI. ## 2 The Proposed Scheme The detection process in GI can be approximately formulated as [30] \\[\\mathbf{y}=\\mathbf{\\Phi}\\mathbf{x}+\\mathbf{n}, \\tag{1}\\] where \\(\\mathbf{y}\\in\\mathcal{R}^{M}\\) stands for the signal measured by the detector in the object arm, \\(\\mathbf{\\Phi}\\in\\mathcal{R}^{M\\times N}\\) is the sampling matrix consisting of the light-field intensity distribution recorded by the detector in the reference arm, \\(\\mathbf{x}\\in\\mathcal{R}^{N}\\) signifies the object's information to be retrieved, and \\(\\mathbf{n}\\) denotes the detection noise. Let \\(\\mathbf{\\Psi}\\) be the sparsifying basis obtained via dictionary learning, in which \\(\\mathbf{x}\\) can be sparsely represented as \\(\\mathbf{x}=\\mathbf{\\Psi}\\mathbf{z}\\), where \\(\\mathbf{z}\\) is the sparse coefficient vector. Also, consider the equivalent sensing matrix \\(\\mathbf{D}:=\\mathbf{\\Phi}\\mathbf{\\Psi}\\). Then, (1) can be rewritten as \\[\\mathbf{y}=\\mathbf{D}\\mathbf{z}+\\mathbf{n}, \\tag{2}\\] Evidences from the CS theory have revealed that a matrix \\(\\mathbf{D}\\) well preserving information of the sparse vector \\(\\mathbf{z}\\) guarantees a faithful reconstruction [26, 27, 28]. 
As a powerful measure of information preservation, the mutual coherence \\(\\mu(\\mathbf{D})\\) characterizes how incoherent each column pairs in \\(\\mathbf{D}\\) are [26, 41], namely, \\[\\mu\\left(\\mathbf{D}\\right)=\\max_{1\\leq i<j\\leq K}\\frac{\\left|\\left\\langle \\mathbf{d}_{i},\\mathbf{d}_{j}\\right\\rangle\\right|}{\\left\\|\\mathbf{d}_{i} \\right\\|_{2}\\left\\|\\mathbf{d}_{j}\\right\\|_{2}} \\tag{3}\\]with \\(\\mathbf{d}_{i}\\) being the \\(i\\)-th column of \\(\\mathbf{D}\\), \\(K\\) the number of columns in \\(\\mathbf{D}\\) and \\(\\|\\cdot\\|_{2}\\) the \\(\\ell_{2}\\)-norm. For its simplicity and ease of computation, the mutual coherence \\(\\mu\\left(\\mathbf{D}\\right)\\) has been widely used to describe the performance guarantees of CS reconstruction algorithms. For example, exact recovery of sparse signals via orthogonal matching pursuit (OMP) [42] is ensured by \\(\\mu\\left(\\mathbf{D}\\right)<\\frac{1}{2k-1}\\)[43], where \\(k\\) is the sparsity level of input signals. In this work, with the aim of enhancing the sampling efficiency, we employ \\(\\mu(\\mathbf{D})\\) as the objective function to be minimized in our optimization scheme. In particular, our proposed scheme consists of the following two main steps: * Firstly, an over-complete dictionary \\(\\mathbf{\\Psi}\\) is learned from a collection of images, under the constraint that its first column has identical entries \\(N^{-1/2}\\), while each of the other columns has entries summing to zero. Specifically, given \\(\\mathbf{X}=[\\mathbf{x}^{(1)},\\mathbf{x}^{(2)},\\cdots,\\mathbf{x}^{(K)}]\\in \\mathcal{R}^{N\\times L}\\), in which each column is a reshaped vector of the training image sample, the sparsifying dictionary \\(\\mathbf{\\Psi}\\in\\mathcal{R}^{N\\times K}\\) is learned by solving the following problem: \\[\\min_{\\mathbf{\\Psi},\\mathbf{Z}} \\|\\mathbf{X}-\\mathbf{\\Psi}\\mathbf{Z}\\|_{F}^{2}\\] subject to \\[\\mathbf{\\Psi}_{11}=\\cdots=\\mathbf{\\Psi}_{N1}=N^{-1/2},\\] (4) where \\(\\|\\cdot\\|_{F}\\) and \\(\\|\\cdot\\|_{0}\\) are the Frobenius- and \\(\\ell_{0}\\)-norm, respectively, \\(\\mathbf{Z}=\\left[\\mathbf{z}_{1},\\mathbf{z}_{2},\\cdots,\\mathbf{z}_{L}\\right] \\in\\mathcal{R}^{K\\times L}\\) represents the sparse coefficient matrix of training images, and \\(T_{0}\\) denotes the predetermined sparsity level of vectors \\(\\mathbf{z}_{i}\\). In this work, we shall employ \\(K\\)-SVD as a representative method to perform the dictionary learning task, which results in simultaneous sparse representation of input images in the learned dictionary \\(\\mathbf{\\Psi}\\). Readers are referred to [37] for more details of the \\(K\\)-SVD method. * Secondly, the sampling matrix \\(\\mathbf{\\Phi}\\) is optimized by minimizing the mutual coherence of the equivalent sampling matrix \\(\\mathbf{D}\\). Put formally, \\[\\min_{\\mathbf{\\Phi}}\\mu\\left(\\mathbf{D}\\right)\\quad\\text{subject to}\\quad \\mathbf{\\Phi}_{ij}\\geq 0\\text{ and }\\mathbf{D}=\\mathbf{\\Phi}\\mathbf{\\Psi}.\\] (5) The non-negative constraint \\(\\mathbf{\\Phi}_{ij}\\geq 0\\) is imposed due to the fact that the intensity of light fields is always non-negative. We now proceed to solve the optimization problem in (5). Without loss of generality, assume that matrix \\(\\mathbf{D}\\) has \\(\\ell_{2}\\)-normalized columns, that is, \\(\\|\\mathbf{d}_{i}\\|_{2}=1\\) for \\(i=1,\\cdots,K\\). Then, \\[\\mu(\\mathbf{D})=\\max_{1\\leq i<j\\leq K}|\\langle\\mathbf{d}_{i},\\mathbf{d}_{j} \\rangle|. 
\\tag{6}\\] To optimize \\(\\mu(\\mathbf{D})\\), it suffices to minimize the off-diagonal entries of the Gram matrix \\(\\mathbf{D}^{\\top}\\mathbf{D}\\), each of which corresponds to the coherence between two different columns in \\(\\mathbf{D}\\) (i.e., \\((\\mathbf{D}^{\\top}\\mathbf{D})_{ij}=|\\langle\\mathbf{d}_{i},\\mathbf{d}_{j} \\rangle|\\), \\(i\ eq j\\)). In particular, we would like the Gram matrix to be as close to the identity matrix as possible, namely, \\(\\mathbf{\\Psi}^{\\top}\\mathbf{\\Phi}^{\\top}\\mathbf{\\Phi}\\mathbf{\\Psi}\\approx \\mathbf{I}\\). Since replacing the identity matrix with \\(\\mathbf{\\Psi}^{\\top}\\mathbf{\\Psi}\\) yields a sampling matrix robust to the sparse representation error of images [44], we propose to optimize \\(\\mathbf{\\Phi}\\) via \\[\\min_{\\mathbf{\\Phi}}\\left\\|\\mathbf{\\Psi}^{\\top}\\mathbf{\\Phi}^{\\top}\\mathbf{ \\Phi}\\mathbf{\\Psi}-\\mathbf{\\Psi}^{\\top}\\mathbf{\\Psi}\\right\\|_{F}^{2}. \\tag{7}\\] By multiplying \\(\\mathbf{\\Psi}\\) and \\(\\mathbf{\\Psi}^{\\top}\\) on the left- and right-hand sides of both terms inside the Frobenius norm, respectively, one has \\[\\min_{\\mathbf{\\Phi}}\\left\\|\\mathbf{\\Psi}\\mathbf{\\Psi}^{\\top}\\mathbf{\\Phi}^{ \\top}\\mathbf{\\Phi}\\mathbf{\\Psi}\\mathbf{\\Psi}^{\\top}-\\mathbf{\\Psi}\\mathbf{\\Psi }^{\\top}\\mathbf{\\Psi}\\mathbf{\\Psi}^{\\top}\\right\\|_{F}^{2}. \\tag{8}\\]After substituting \\(\\mathbf{\\Psi\\Psi}^{\\top}\\) with its eigenvalue decomposition \\(\\mathbf{V\\Lambda V}^{\\top}\\), and also denoting \\(\\mathbf{W}:=\\mathbf{\\Lambda V}^{\\top}\\mathbf{\\Phi}^{\\top}\\), (8) can be rewritten as \\[\\min_{\\mathbf{W}}\\left\\|\\mathbf{VWW}^{\\top}\\mathbf{V}^{\\top}-\\mathbf{V\\Lambda}^ {2}\\mathbf{V}^{\\top}\\right\\|_{F}^{2}, \\tag{9}\\] or equivalently, \\[\\min_{\\mathbf{W}}\\left\\|\\mathbf{\\Lambda}^{2}-\\sum_{i=1}^{M}\\mathbf{w}_{i} \\mathbf{w}_{i}^{\\top}\\right\\|_{F}^{2}\\text{ where }\\mathbf{W}=\\left[\\mathbf{w}_{1},\\cdots, \\mathbf{w}_{M}\\right]. \\tag{10}\\] Denoting \\(\\mathbf{\\Lambda}=\\left[\\mathbf{r}_{1},\\cdots,\\mathbf{r}_{N}\\right]\\), (10) further becomes \\[\\min_{\\mathbf{W}}\\left\\|\\sum_{j=1}^{N}\\mathbf{r}_{j}\\mathbf{r}_{j}^{\\top}- \\sum_{i=1}^{M}\\mathbf{w}_{i}\\mathbf{w}_{i}^{\\top}\\right\\|_{F}^{2}. \\tag{11}\\] Clearly problem (11) has the solution \\(\\widehat{\\mathbf{W}}=\\mathbf{\\Lambda}_{1}^{\\top}\\), where \\(\\mathbf{\\Lambda}_{1}\\) is the matrix consisting of the first \\(M\\) columns of \\(\\mathbf{\\Lambda}\\), which is obtained by setting \\(\\mathbf{w}_{k}=\\mathbf{r}_{k}\\), \\(k=1,\\cdots,M\\). Recalling that \\(\\mathbf{W}:=\\mathbf{\\Lambda V}^{\\top}\\mathbf{\\Phi}^{\\top}\\), the optimized sampling matrix \\(\\mathbf{\\Phi}\\) can be simply calculated as \\[\\widehat{\\mathbf{\\Phi}}=\\widehat{\\mathbf{W}}^{\\mathrm{T}}\\Big{(}\\mathbf{ \\Lambda}^{-1}\\Big{)}^{\\mathrm{T}}\\mathbf{V}^{\\mathrm{T}}=\\mathbf{\\Lambda}_{1 }\\Big{(}\\mathbf{\\Lambda}^{-1}\\Big{)}^{\\mathrm{T}}\\mathbf{V}^{\\mathrm{T}}= \\left[\\begin{array}{cc}\\mathbf{I}_{M\\times M}&\\mathbf{0}\\end{array}\\right] \\left[\\begin{array}{c}\\mathbf{V}_{1}^{\\mathrm{T}}\\\\ \\mathbf{V}_{2}^{\\mathrm{T}}\\end{array}\\right]=\\mathbf{V}_{1}^{\\mathrm{T}}, \\tag{12}\\] where matrix \\(\\mathbf{V}_{1}\\) consists of the first \\(M\\) columns of \\(\\mathbf{V}\\). 
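To make the closed-form result of Eq. (12) concrete, the following is a minimal NumPy sketch (not the authors' implementation) of the light-field optimization, together with the non-negative lifting introduced in the next paragraph and the mutual-coherence measure of Eq. (3). Sorting the eigenvalues in descending order and the helper names are assumptions made for illustration only.

```python
import numpy as np

def optimize_sampling_matrix(Psi, M):
    """Closed-form light-field optimization of Eq. (12).

    Psi : (N, K) learned dictionary whose first column is constant N**-0.5
          and whose remaining columns sum to zero.
    M   : number of samplings (rows of the sampling matrix).
    """
    # Eigen-decomposition of Psi Psi^T = V Lambda V^T (symmetric, so eigh applies).
    eigvals, V = np.linalg.eigh(Psi @ Psi.T)
    # eigh returns ascending eigenvalues; assume the leading directions come first (descending order).
    order = np.argsort(eigvals)[::-1]
    V = V[:, order]
    Phi_hat = V[:, :M].T                 # Eq. (12): Phi_hat = V_1^T, shape (M, N)
    # Non-negative lifting (see Eqs. (13)-(14) below): shift so all entries are >= 0.
    c = max(0.0, -Phi_hat.min())
    Phi_nn = Phi_hat + c
    D_hat = Phi_nn @ Psi                 # equivalent sampling matrix after lifting
    return Phi_hat, Phi_nn, D_hat

def mutual_coherence(D):
    """Mutual coherence of Eq. (3): largest normalized inner product between distinct columns."""
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
    G = np.abs(Dn.T @ Dn)
    np.fill_diagonal(G, 0.0)
    return G.max()
```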
When more samplings become available, interestingly, it suffices to update \\(\\widehat{\\mathbf{\\Phi}}\\) by augmenting more rows of \\(\\mathbf{V}^{\\top}\\) to the previous one, thereby enabling a successive sampling. As aforementioned, the feature of successive sampling is of vital importance to the practical applications of GI. Furthermore, due to the fact that the intensity of light fields is always non-negative, additional treatments are needed to make sure that elements of the sampling matrix are non-negative (NN). To the end, we propose a NN lifting, which adds a constant matrix to the optimized matrix \\(\\widehat{\\mathbf{\\Phi}}\\) in (12) as \\[\\widehat{\\mathbf{D}}=\\big{(}\\widehat{\\mathbf{\\Phi}}+c\\mathbf{1}_{M\\times N} \\big{)}\\mathbf{\\Psi}, \\tag{13}\\] where \\(\\mathbf{1}_{M\\times N}\\) is an \\(M\\)-by-\\(N\\) matrix with entries being ones and \\[c:=\\begin{cases}-\\min_{i,j}\\widehat{\\mathbf{\\Phi}}_{ij}&\\text{if }\\min_{i,j} \\widehat{\\mathbf{\\Phi}}_{ij}<0,\\\\ 0&\\text{if }\\min_{i,j}\\widehat{\\mathbf{\\Phi}}_{ij}\\geq 0.\\end{cases} \\tag{14}\\] As aforementioned, the first column of \\(\\mathbf{\\Psi}\\) has identical entries and other columns have entries summing to zero. Thus, \\[\\widehat{\\mathbf{D}}\\;=\\;\\widehat{\\mathbf{\\Phi}}\\mathbf{\\Psi}+cN^{-1/2}\\big{[} \\mathbf{1}_{M\\times 1},\\underbrace{\\mathbf{0},\\ldots,\\mathbf{0}}_{M\\times(K-1)} \\big{]}. \\tag{15}\\] It can be noticed that after the NN lifting, \\(\\widehat{\\mathbf{D}}\\) and \\(\\widehat{\\mathbf{\\Phi}}\\mathbf{\\Psi}\\) differ only in the first column. Nevertheless, the mutual coherence \\(\\mu(\\widehat{\\mathbf{D}})\\) is not much affected by the NN lifting, as confirmed by our extensive empirical test. ## 3 Results ### Simulations To evaluate the effectiveness of the proposed optimization scheme, both simulations and experiments are performed. MNIST handwritten digits of size \\(28\\times 28\\) pixels [45] are chosen to be the imaging objects, and the dictionary \\(\\mathbf{\\Psi}\\) is learned based on 20,000 digits randomly selected from the training set. Moreover, the optimized sampling matrix \\(\\widehat{\\mathbf{\\Phi}}\\) is obtained from (12), followed by the NN lifting. A subset of atoms in the learned dictionary \\(\\mathbf{\\Psi}\\) and the optimized light-field intensity distributions \\(\\widehat{\\mathbf{\\Phi}}\\) are shown as Fig. 1(a) and Fig. 1(b), respectively. For comparative purposes, our simulation includes other four methods: 1) Gaussian method, 2) Duarte's method [40], 3) Xu's method [35] and 4) normalized GI (NGI) method [24]. **Table 1** gives a brief summary of the methods under test. In the Gaussian method, the sampling matrices are random Gaussian matrices, whose entries are drawn independently from the standard Gaussian distribution (\\(\\mathbf{\\Phi}_{ij}\\sim\\mathcal{N}(0,1)\\)). To meet the NN constraint \\(\\mathbf{\\Phi}_{ij}\\geq 0\\) of GI, the matrices \\(\\mathbf{\\Phi}\\)'s generated from the Gaussian and Duarte's methods are also inflicted with the NN lifting. For the Gaussian, Duarte's and our proposed methods, the images are retrieved in two steps. 
Firstly, the sparse coefficient vector \\(\\mathbf{\\hat{z}}\\) of image under the learned dictionary \\(\\mathbf{\\Psi}\\) is obtained by solving the \\(\\ell_{0}\\)-minimization problem: \\[\\mathbf{\\hat{z}}=\\operatorname*{arg\\,min}_{\\mathbf{z}}\\|\\widehat{\\mathbf{D}} \\mathbf{z}-\\mathbf{y}\\|_{2}^{2}\\quad\\text{subject to}\\ \\ \\|\\mathbf{z}\\|_{0}\\leq T_{0} \\tag{16}\\] \\begin{table} \\begin{tabular}{c|c|c|c} \\hline **Method** & **Sampling matrix** & **Dictionary** & \\(\\mathbf{\\Phi}_{ij}\\geq 0\\) \\\\ \\hline \\hline Proposed & Eq. (12) & \\(K\\)-SVD & NN-lifting \\\\ \\hline Gaussian & Random Gaussian & \\(K\\)-SVD & NN-lifting \\\\ \\hline Duarte [40] & Matrix optimization & \\(K\\)-SVD & NN-lifting \\\\ \\hline Xu [35] & Matrix optimization & DCT & Zero-forcing \\\\ \\hline NGI [24] & Random Gaussian & None & NN-lifting \\\\ \\hline \\end{tabular} \\end{table} Table 1: A summary of test methods Figure 1: (Left) A subset of atoms in the learned dictionary \\(\\mathbf{\\Psi}\\); (right) a subset of the optimized light-field intensity distributions via the OMP algorithm [42]. Secondly, the object's image is reconstructed as \\[\\mathbf{\\widehat{x}}=\\mathbf{\\Psi\\widehat{z}}. \\tag{17}\\] For Xu's method, the Discrete Cosine Transform (DCT) basis is chosen as the orthogonal basis. And the images in methods 3) and 4) are reconstructed via approaches proposed in their corresponding references. In our simulation, we first adopt matrices with entries of double-type in MATLAB. Fig. 2 shows the simulation results of different methods. In Fig. 2(a), the reconstructed images at the different sampling ratios (SR's) (i.e., SR = 0.10, 0.20, and 0.51) are displayed, where the SR is computed by dividing the number of samplings by the number of image pixels. Fig. 2(b) Figure 2: Simulation results with sampling matrix of high accuracy. Fig. 2(a) shows the reconstructed images reconstructed via different methods under different SR. Fig. 2(b) and 2(c) illustrate the PSNR and SSIM of reconstructed images via different methods as a function of SR, respectively. and 2(c) depict the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) index of the reconstructed images as functions of the SR, respectively. Given the reference image \\(X\\) and the reconstructed image \\(Y\\), the PSNR and SSIM are defined as follows, \\[\\text{MSE}\\left(X,Y\\right)=\\frac{1}{mn}\\sum_{i=1}^{m}\\sum_{j=1}^{n }\\big{[}X\\left(i,j\\right)-Y\\left(i,j\\right)\\big{]}^{2}, \\tag{18a}\\] \\[\\text{PSNR}\\left(X,Y\\right)=10\\log_{10}\\left[\\frac{B^{2}}{\\text{ MSE}\\left(X,Y\\right)}\\right],\\] (18b) \\[\\text{SSIM}\\left(X,Y\\right)=\\frac{\\left(2\\mu_{X}\\mu_{Y}+c_{1} \\right)\\left(2\\sigma_{XY}+c_{2}\\right)}{\\left(\\mu_{X}^{2}+\\mu_{Y}^{2}+c_{1} \\right)\\left(\\sigma_{X}^{2}+\\sigma_{Y}^{2}+c_{2}\\right)}, \\tag{18c}\\] where the pixel size of image is \\(m\\times n\\), \\(B\\) denotes the dynamic range of image pixels, which takes the value 255 in this paper, \\((\\mu_{X},\\sigma_{X})\\) and \\((\\mu_{Y},\\sigma_{Y})\\) are the means and variances of \\(X\\) and \\(Y\\), respectively, \\(\\sigma_{XY}\\) is the covariance of \\(X\\) and \\(Y\\), \\(c_{1}=(0.01B)^{2}\\) and \\(c_{2}=(0.03B)^{2}\\). These two metrics measure the difference and similarity between the reconstructed images and the original ones, respectively. For each SR under test, the PSNR and SSIM are averaged over 500 reconstructed digit images to plot the curves. 
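As a reference for how the curves in Figs. 2(b) and 2(c) are computed, below is a minimal NumPy sketch of the two quality metrics defined in Eq. (18); it is an illustrative implementation rather than the authors' code. Following the usual SSIM convention, \\(\\sigma_{X}^{2}\\) and \\(\\sigma_{Y}^{2}\\) are read as the variances of \\(X\\) and \\(Y\\), and the index is evaluated globally over the whole image as written in the text (common library implementations use local windows instead).

```python
import numpy as np

def psnr(X, Y, B=255.0):
    """Peak signal-to-noise ratio of Eqs. (18a)-(18b); B is the dynamic range of pixel values."""
    mse = np.mean((X.astype(float) - Y.astype(float)) ** 2)
    return 10.0 * np.log10(B ** 2 / mse)

def ssim(X, Y, B=255.0):
    """Global SSIM index of Eq. (18c), computed over the whole image."""
    X = X.astype(float)
    Y = Y.astype(float)
    c1, c2 = (0.01 * B) ** 2, (0.03 * B) ** 2
    mu_x, mu_y = X.mean(), Y.mean()
    var_x, var_y = X.var(), Y.var()          # interpreted as sigma_X^2, sigma_Y^2
    cov_xy = np.mean((X - mu_x) * (Y - mu_y))
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```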
By comparing them, the reconstruction quality of different methods are compared empirically. From Fig. 2, it can be observed that the reconstruction quality of the proposed scheme is uniformly better than the other methods under test. In particular, it achieves 2dB to 4dB gain of PSNR and up to 10% higher SSIM over the Gaussian method, owing to the optimized light fields. Compared to Xu's method [35], the Gaussian method have a notable advantage in the low SR region, which gradually converges as the SR approaches one. The performance gap is mainly attributed to the utilization of dictionary learning that can better incorporate the sparsity prior of images. Among all test methods, the PSNR and SSIM of the NGI method lie in the lowest level. This is mainly due to the image noise, which often happens to the correlation-based reconstruction methods of GI, especially when the SR is low. It can also be observed from Fig. 2 that Duarte's method [40] performs comparably with the proposed method in the low SR region, but deteriorating dramatically when the SR increases. Such phenomenon seems unreasonable at first glance, but can be interpreted from the condition number perspective. To be specific, the sampling matrix \\(\\mathbf{\\Phi}\\) of Duarte's method has larger condition number as the SR increases (see detailed explanations in Footnote 7 of [40]). Thus, when \\(\\mathbf{\\Phi}\\) multiplies with the representation error \\(\\mathbf{e}=\\mathbf{x}-\\mathbf{\\Psi}\\mathbf{z}\\), it could significantly amplify this error, and eventually degrade the reconstruction quality. Indeed, in this case one would need to reconstruct the sparse vector \\(\\mathbf{z}\\) from the samplings \\(\\mathbf{y}=\\mathbf{\\Phi}\\mathbf{x}+\\mathbf{n}=\\mathbf{\\Phi}\\mathbf{\\Psi} \\mathbf{z}+\\mathbf{\\Phi}\\mathbf{e}+\\mathbf{n}=\\mathbf{D}\\mathbf{z}+(\\mathbf{ \\Phi}\\mathbf{e}+\\mathbf{n})\\), which can be difficult since the largely amplified error \\(\\mathbf{\\Phi}\\mathbf{e}\\) essentially becomes part of noise for the reconstruction. In practice, detectors measure the intensity signals with quantization, which means that \\(\\mathbf{\\Phi}\\) is actually a quantized sampling matrix. Thus, we also simulate the case where the sampling matrices are quantized to 8-bit of precision and plot the results in Fig. 3. Similarly, Fig. 3(b) and 3(c) show the curves of PSNR and SSIM with averaged values over 500 reconstruction trails, respectively. We observe that the overall behavior is similar to that of Fig. 2(b) and 2(c) except that both the PSNR and SSIM curves of Duarte's method [40] fluctuate in the lowest level for the whole SR region. Accordingly, Duarte's method [40] also fails to retrieve the images in Fig. 3(a). This is mainly because Duarte's method [40] is demanding in the quantization accuracy. When large quantization errors are introduced, sparse coefficients of the test images may not be correctly calculated by the reconstruction algorithm. The PSNR and SSIM curves of the NGI method lie in a low level similar to that of Duarte's method, and the images are only vaguely reconstructed, as shown in Fig.3(a). We also observe that the performance of Xu's method [35] becomes worse in the quantized case, although the images can still be retrieved. Overall, our proposed method performs the best for both the high accurate as well as the quantized scenarios. ### Experimental Results We also experimentally compare the proposed method with the Gaussian method, Duarte's method [40] and NGI method [24]. 
The schematic diagram of experimental setup is shown as Fig. 4. The light-field patterns are first displayed on the digital micro-mirror device (DMD) after preloaded via the computer. Next, light from a light-emitting diode (LED) source is modulated by a Kohler illumination system to be evenly incident on DMD. The light reflected by DMD is then projected onto the imaging object by a lens system. Finally, the whole light reflected from the object is collected by the lens and measured by the detector. In our experiments, the light-field patterns are displayed at a rate of 10Hz to avoid frame dropping of the detector, so that the sampling procedure lasts for one minute or so. After the samplings, the subsequent image Figure 3: Simulation results with 8-bit quantized sampling matrix. Fig. 3(a) shows the reconstructed images via different methods under different SR. Fig. 3(b) and 3(c) show PSNR and SSIM of the reconstructed images via different methods as a function of SR, respectively. retrieval steps for each method are the same as those in the simulation test. The reconstruction was carried out on an industrial computer with 32GB RAM and Intel(R) Core(TM)-I7 2600 CPU @3.4GHz, and the consuming time of matrix optimization and reconstruction for different methods is specified in Table 2. The reconstruction time for each method is given as a time period, since it varies according to the sampling rate. The comparison of reconstructed images by different methods is shown as Fig. 5(a), where the ground truth is obtained by pixel-wise detection and serve as a reference image. By comparing the reconstructed images with the ground truth, again, the PSNR and SSIM are calculated and plotted in Fig. 5(b) and 5(c), respectively. Overall, the experimental results demonstrate that the reconstructed quality of the proposed optimization scheme is superior to that of other methods under test, which well matches our simulation results. \\begin{table} \\begin{tabular}{c|c|c} \\hline \\hline **Method** & **Matrix Optimization (sec.)** & **Reconstruction (sec.)** \\\\ \\hline \\hline Proposed & 0.365 & 0.037 to 0.150 \\\\ \\hline Gaussian & – & 0.037 to 0.158 \\\\ \\hline Duarte [40] & 0.345 & 0.039 to 0.157 \\\\ \\hline Xu [35] & 90.18 (for 100 iterations) & 0.028 to 0.360 \\\\ \\hline NGI [24] & – & 0.001 to 0.007 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 2: Running time of test methods Figure 4: Schematic diagram of experimental setup. Light-field patterns are generated by DMD and projected onto the object, afterwards collected by the detector. ### Discussions We would like to point out some interesting points that arise from the simulation and experimental results. * Firstly, the superiority of the proposed method is mainly owing to two factors: i) optimization of sampling matrix and ii) dictionary learning. Indeed, the proposed optimization scheme outperforms the Gaussian method, even though they share the same spasifying basis obtained by dictionary learning. This is because our method essentially performs a \"global\" optimization of light fields that incorporates the image statistics captured in the dictionary learning process, thereby enhancing the sampling efficiency. Besides, the improvement of the proposed method over Xu's method can be attributed to the use of both dictionary learning and our optimized sampling matrix. Figure 5: Experimental results. Fig. 5(a) shows the reconstructed images via different methods under different SR. Fig. 
5(b) and 5(c) show PSNR and SSIM of the reconstructed images via different methods as a function of SR, respectively. * Secondly, the PSNR and SSIM curves of the proposed method tend towards flat after the SR reaches a critical value. This in turn implies that the inherent information of the imaging object acquired at this very SR value already suffices to produce a satisfactory reconstruction. The critical SR can thus be utilized to evaluate the capability of information acquisition and also allows the comparison of different approaches. * Thirdly, we would like to point out a practical limitation of the proposed scheme in handling images of large size due to the use of dictionary learning. Specifically, while dictionary learning in our scheme can bring in some performance gain, it is usually demanding in the requirements of storage and computational cost. Thus the patch size used in dictionary learning should not be large, which, however, poses a limitation to the image size that we can handle. Nevertheless, efficient dictionary learning methods dealing with images of larger scales have recently been proposed [46, 47, 48], in which the handled image size can go beyond \\(64\\times 64\\) pixels. To demonstrate the effectiveness of the proposed method for images of larger size, we carry out simulations over the LFWcrop database [49], which consists of more than \\(13,000\\) images of \\(64\\times 64\\) pixels. The dictionary is trained offline using the algorithm in [47] with \\(12,000\\) images, which takes about \\(20\\) hours in our industrial computer. The results of the proposed method and the Gaussian method, which involve dictionary learning, are shown as Fig. 6 for comparison. * Finally, we mention that if one wish to deal with images of even larger size such that existing dictionary learning methods fail to handle or cannot learn the images offline, then the proposed light-field optimization scheme can still be applied by using explicit dictionaries (e.g., Cropped Wavelets [47]) to incorporate the sparse prior of images. ## 4 Conclusion In this paper, an optimization scheme of light fields has been proposed to improve the imaging quality of GI. The key idea is to minimize the mutual coherence of the equivalent sampling matrix in order to enhance the sampling efficiency. A closed-form solution of the sampling matrix has been derived, which enables successive sampling. Simulation and experimental results have shown that the proposed scheme is very effective in improving the reconstruction quality of images, compared to the state-of-the-art methods for GI. The proposed scheme can thus be used to imaging specific targets with higher quality. We would also like to point out a technical limitation in our scheme. Recall that we have employed a NN lifting to cope with the constraint \\(\\mathbf{\\Phi}_{ij}\\geq 0\\). This operation, however, may severely influence the incoherence of the equivalent sampling matrix in the worst case, though such situation rarely happens as confirmed by our empirical test. Deriving analytical results can better address the non-negative issue while also would require a bit more effort, and our future work will be directed towards this avenue. ## Funding National Key Research and Development Program of China (2017YFB0503303, 2017YFB0503300); National Natural Science Foundation of China (NSFC) (11627811). ## References * [1] T. B. Pittman, Y. H. Shih, D. V. Strekalov, and A. V. Sergienko, \"Optical imaging by means of 2-photon quantum entanglement,\" Phys. Rev. 
A. At. Mol. & Opt. Phys. **52**, R3429-R3432 (1995). * [2] D. Strekalov, A. Sergienko, D. Klyshko, and Y. Shih, \"Observation of two-photon \"ghost\" interference and diffraction,\" Phys. Rev. Lett. **74**, 3600 (1995). * [3] R. S. Bennink, S. J. Bentley, and R. W. Boyd, \"\"two-photon\" coincidence imaging with a classical source,\" Phys. Rev. Lett. **89**, 113601 (2002). * [4] A. Gatti, E. Brambilla, M. Bache, and L. A. Lugiato, \"Ghost imaging with thermal light: comparing entanglement and classicalcorrelation,\" Phys. Rev. Lett. **93**, 093602 (2004). * [5] J. Cheng and S. Han, \"Incoherent coincidence imaging and its applicability in x-ray diffraction,\" Phys. Rev. Lett. **92**, 093903 (2004). * [6] D. Zhang, Y.-H. Zhai, L.-A. Wu, and X.-H. Chen, \"Correlated two-photon imaging with true thermal light,\" Opt. Lett. **30**, 2354-2356 (2005). * [7] R. I. Khakimov, B. Henson, D. Shin, S. Hodgman, R. Dall, K. Baldwin, and A. Truscott, \"Ghost imaging with atoms,\" Nature **540**, 100 (2016). * [8] S. Li, F. Cropp, K. Kabra, T. Lane, G. Wetzstein, P. Musumeci, and D. Ratner, \"Electron ghost imaging,\" Phys. Rev. Lett. **121**, 114801 (2018). * [9] M. Malik, O. S. Magana-Loaiza, and R. W. Boyd, \"Quantum-secured imaging,\" Appl. Phys. Lett. **101**, 241103 (2012). * [10] C. Zhao, W. Gong, M. Chen, E. Li, H. Wang, W. Xu, and S. Han, \"Ghost imaging lidar via sparsity constraints,\" Appl. Phys. Lett. **101**, 141123 (2012). * [11] W. Gong and S. Han, \"Correlated imaging in scattering media,\" Opt. Lett. **36**, 394-6 (2011). * [12] M. Bina, D. Magatti, M. Molteni, A. Gatti, L. A. Lugiato, and F. Ferri, \"Backscattering differential ghost imaging in turbid media,\" Phys. Rev. Lett. **110**, 083901 (2013). * [13] Y. Wang, J. Suo, J. Fan, and Q. Dai, \"Hyperspectral computational ghost imaging via temporal multiplexing,\" IEEE Photonics Technol. Lett. **28**, 288-291 (2016). Figure 6: Simulation results of LFWcrop face images. Fig. 6(a) shows the reconstructed images via different methods under different SR. Fig. 6(b) and 6(c) show PSNR and SSIM of the reconstructed images via different methods as a function of SR, respectively. * [14] Z. Liu, S. Tan, J. Wu, E. Li, X. Shen, and S. Han, \"Spectral camera based on ghost imaging via sparsity constraints,\" Sci. Reports **6**, 25718 (2016). * [15] P. A. Morris, R. S. Aspden, J. E. Bell, R. W. Boyd, and M. J. Padgett, \"Imaging with a small number of photons,\" Nat. Commun. **6**, 5913 (2015). * [16] X. Liu, J. Shi, X. Wu, and G. Zeng, \"Fast first-photon ghost imaging,\" Sci. Reports **8**, 5012 (2018). * [17] D. Pelliccia, A. Rack, M. Scheel, V. Cantelli, and D. M. Paganin, \"Experimental x-ray ghost imaging,\" Phys. Rev. Lett. **117**, 113902 (2016). * [18] H. Yu, R. Lu, S. Han, H. Xie, G. Du, T. Xiao, and D. Zhu, \"Fourier-transform ghost imaging with hard x rays,\" Phys. Rev. Lett. **117**, 113901 (2016). * [19] X. Shen, Y. Bai, T. Qin, and S. Han, \"Experimental investigation of quality of lensless ghost imaging with pseudo-thermal light,\" Chin. Phys. Lett. **25**, 3968 (2008). * [20] B. I. Erkmen and J. H. Shapiro, \"Signal-to-noise ratio of gaussian-state ghost imaging,\" Phys. Rev. A **79**, 023833 (2009). * [21] F. Ferri, D. Magatti, L. Lugiato, and A. Gatti, \"Differential ghost imaging,\" Phys. Rev. Lett. **104**, 253603 (2010). * [22] W. Gong and S. Han, \"A method to improve the visibility of ghost images obtained by thermal light,\" Phys. Lett. A **374**, 1005-1008 (2010). * [23] G. Brida, M. Genovese, and I. R. 
Berchera, \"Experimental realization of sub-shot-noise quantum imaging,\" Nat. Photonics **4**, 227 (2010). * [24] B. Sun, S. S. Welsh, M. P. Edgar, J. H. Shapiro, and M. J. Padgett, \"Normalized ghost imaging,\" Opt. Express **20**, 16892-16901 (2012). * [25] M. Antonini, M. Barlaud, P. Mathieu, and I. Daubechies, \"Image coding using wavelet transform.\" IEEE Trans. Image Process. **1**, 205-220 (1992). * [26] D. L. Donoho, \"Compressed sensing,\" IEEE Trans. Inform. Theory **52**, 1289-1306 (2006). * [27] E. J. Candes and T. Tao, \"Decoding by linear programming,\" IEEE Trans. Inform. Theory **51**, 4203-4215 (2005). * [28] E. J. Candes and T. Tao, \"Near-optimal signal recovery from random projections: Universal encoding strategies?\" IEEE Trans. Inform. Theory **52**, 5406-5425 (2006). * [29] O. Katz, Y. Bromberg, and Y. Silberberg, \"Compressive ghost imaging,\" Appl. Phys. Lett. **95**, 131110 (2009). * [30] S. Han, H. Yu, X. Shen, H. Liu, W. Gong, and Z. Liu, \"A review of ghost imaging via sparsity constraints,\" Appl. Sci. **8**, 1379 (2018). * [31] W. Gong, Z. Bo, E. Li, and S. Han, \"Experimental investigation of the quality of ghost imaging via sparsity constraints,\" Appl. Opt. **52**, 3510-3515 (2013). * [32] M. Chen, E. Li, and S. Han, \"Application of multi-correlation-scale measurement matrices in ghost imaging via sparsity constraints,\" Appl. Opt. **53**, 2924-2928 (2014). * [33] S. M. Khamoushi, Y. Nosrati, and S. H. Tavassoli, \"Sinusoidal ghost imaging,\" Opt. Lett. **40**, 3452-3455 (2015). * [34] E. Li, M. Chen, W. Gong, H. Yu, and S. Han, \"Mutual information of ghost imaging systems,\" Acta Opt. Sinica **33**, 93-98 (2013). * [35] X. Xu, E. Li, X. Shen, and S. Han, \"Optimization of speckle patterns in ghost imaging via sparse constraints by mutual coherence minimization,\" Chin. Opt. Lett. **13**, 071101 (2015). * [36] B. A. Olshausen and D. J. Field, \"Natural image statistics and efficient coding,\" Network: Comput. Neural Syst. **7**, 333-339 (1996). * [37] M. Aharon, M. Elad, and A. Bruckstein, \"K-svd: An algorithm for designing overcomplete dictionaries for sparse representation,\" IEEE Trans. Signal Process. **54**, 4311-4322 (2006). * [38] M. Elad, \"Optimized projections for compressed sensing,\" IEEE Trans. Signal Process. **55**, 5695-5702 (2007). * [39] V. Abolghasemi, S. Ferdowsi, and S. Sanei, \"A gradient-based alternating minimization approach for optimization of the measurement matrix in compressive sensing,\" Signal Process. **92**, 999-1009 (2012). * [40] J. M. Duarte-Carvajalino and G. Sapiro, \"Learning to sense sparse signals: Simultaneous sensing matrix and sparsifying dictionary optimization,\" IEEE Trans. Image Process. **18**, 1395-1408 (2009). * [41] D. L. Donoho and M. Elad, \"Optimally sparse representation in general (nonorthogonal) dictionaries via \\(l1\\) minimization,\" Proc. Natl. Acad. Sci. **100**, 2197-2202 (2003). * [42] Y. C. Pati, R. Rezaiifar, and P. S. Krishnaprasad, \"Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition,\" in _Proceedings of 27th Asilomar Conference on Signals, Systems and Computers_, (IEEE, 1993), pp. 40-44. * [43] J. A. Tropp, \"Greed is good: Algorithmic results for sparse approximation,\" IEEE Trans. Inform. Theory **50**, 2231-2242 (2004). * [44] N. Cleju, \"Optimized projections for compressed sensing via rank-constrained nearest correlation matrix,\" Appl. Comput. Harmon. Analysis **36**, 495-507 (2014). * [45] L. 
Deng, \"The mnist database of handwritten digit images for machine learning research [best of the web],\" IEEE Signal Process. Mag. **29**, 141-142 (2012). * [46] L. Le Magoarou and R. Gribonval, \"Chasing butterflies: In search of efficient dictionaries,\" in _2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),_ (IEEE, 2015), pp. 3287-3291. * [47] J. Sulam, B. Ophir, M. Zibulevsky, and M. Elad, \"Trainlets: Dictionary learning in high dimensions,\" IEEE Trans. Signal Process. **64**, 3180-3193 (2016). * [48] C. F. Dantas, M. N. Da Costa, and R. da Rocha Lopes, \"Learning dictionaries as a sum of kronecker products,\" IEEESignal Process. Lett. **24**, 559-563 (2017). * [49] C. Sanderson and B. C. Lovell, \"Multi-region probabilistic histograms for robust and scalable identity inference,\" in _International Conference on Biometrics,_ (Springer, 2009), pp. 199-208.
Ghost imaging (GI) is a novel imaging technique based on the second-order correlation of light fields. Due to the limited number of samplings in practice, traditional GI methods often reconstruct objects with unsatisfactory quality. To improve the imaging results, many reconstruction methods have been developed, yet the reconstruction quality is still fundamentally restricted by the modulated light fields. In this paper, we propose to improve the imaging quality of GI by optimizing the light fields, which is realized via matrix optimization for a learned dictionary incorporating the sparsity prior of objects. A closed-form solution of the sampling matrix, which enables successive sampling, is derived. Simulation and experimental results show that the proposed scheme leads to better imaging quality than state-of-the-art light-field optimization methods, especially at a low sampling rate.
Condense the content of the following passage.
170
isprs/a61f0c3b_e92c_4f41_b1bf_9f96513468e5.md
SINGLE COD CAMERA BASED THREE DIMENSIONAL MEASUREMENT SYSTEM FOR NOVING OBJECT Lu Jian, Lin Zongjian,Pas Dongchao Department of Photogrammetry and Remote Sensing Vahan Technical University of Surveying & Mapping 38 Loy Road, Wuhan 420070, CHINA ISPRS Commission V KEY WORDS: Computer Vision, Pattern Recognition, Image Match, Stereo-Photographing, CD Camera, Sub-Pixel Detect, DLT-Direct Linear Transformation. 1. INTRODUCTION Since seventies, the research of the motion analysis system has been very quick developed along with advanced of the computer technic. So far as based technic method, as stated above, to measure and analysis moving objects. The present-day realtime photogrammetry method normal? sdopts two and more TV or CCD cameras to perform stereophotography, and then the 3D-coor-dimates of the object points are derived by image matching. But this method has not yet popularised because of the complexity of the derivation by image match and mass amount of data to be processed. In practice, we can simplify this method according as the requirement for a small-tical task.For example, for simplifies the most complex steps of image matching, the relevant characteristic points and the body are marked by variable patterns, and illuminated by fishing light sources, whose direction of emission is coaxial to TV cameras and the realtime detecting and recording this signing points,The trajecto-ries of little points describe the movement of the body. Since seventies, the research of the motion analysis system has been very quick developed along with advanced of the computer technic. So far as based technic method, as stated above, they are based on the theory of the stereo-pho-tagrometry, which extracted 3D-coordination of the feature points from dynamic images of the moving body. There are typical systems, example SEBNOT system of Swedish(Molting and Marsoliais, 1980), VICON system of England Oxford Company (Jarrett et al, 1976), Elite system of Italy (Perrigo and Pedotti, 1980), and Coda system (Michelson, 1970). The new commercial product using interruptedly push out.These systems have the high-grade headware, and the function of the software has been improved by step.But they are expensive, therefore not yet popularized in the developing countries. In this paper a motion analysis system, which has a simpler compose and a computer function than abovementioned system will be described. In this system a CCD state matrix camera, front which attached a double light-paths collecting device,is used to capture the stereo images of a body. These stereoimages of the moving body can be realize recorded and restored as high quality by a reeerder, and with a A/D converter inputed to : micro-computer image processing system-CVGR system( 'Computer Vision for Graph Reading' be called sth CV2. Lin Zengjian and Lu Jian et al, 1991). Because of stereoimages are got by a CCD camera, this two images were keep absolutely the synchronism that it is benefit to survey for the moring body. CV2 system has multifunctional software, that can used to detect the marked points of the moving body with subpixel accuracy , to correct geometric distortion of the stereo- image, to perform 3D-coordinates derivation, and to analyse for the moving trajectories. This sys- ten now being used for the study of human body motion, and obtain the moving trajectories of the joints on the four limbs, when the human is val- king, running, and jumping. When a field of view is 2.6m, this system can be to observe a complete cycle of human gaits. 
The accuracy of the sur- verying coordinates can be as point nought five percent on the first test. This system is desi- genced as a low cost system. 2. SYSTEM COMPONENTS This system can be composed of a CV2 system added to a double light-path collecting device and a image recorder. CV2 is designed as a low stereoimage cost system. The main hardware configuration of it comprises : -- A personal computer Super 286 -- A video frame graaber -- WTSM-CV2 board -- A CCD camera - OOOO LR-1002 -- An image color monitor CTX -- A double light path collecting device -- A video image recorder The systems is shown as Fig. 1. consists of 20 targets arranged in a wall and two vertical pillars (shown in Fig. 6). The coordinates of the marked control points are determined by intersection of directions measured by two 2' theodolites located six metres from the test field.Each control point has 30-coordi-mates with accuracy of 0.3 mm. This control field is used for the calibration of CCD camera. 4.1.3 Calibration of Geometric of CCD Camera (1) Calibration of the interior and exterior orientation parameters. The interior and exterior orientation parameters can be determined by DLT or spatial resection. (2) Calibration of lens radial distortion. In order to reduce the lens radial distortion, the following correction function is added in DLT equation : dz = k1 - (x-x0) - r\" dy = k1 - (y-70) - r\" where, r\" = (x-x0)*(y-y0)\", k1 is the lens radial distortion coefficient. By calculation, k1 is obtained about 10\". 4.2 Mark Point Location 4.2.1 Mark Pattern Design In the test selected the mark pattern show Fig. 7 The test results show, that using pattern (a), the accuracy of location is highest.The pattern (b) for seeing it on the moving body is easy and it can located with sub-pixel the precision. The pattern (c) is made of a small plastic ball with diameter 8 m pasted with a reflecting paper. It has strong reference under a general illu-limination by artificial light. On its image the (d) / ratio is higher. The system with uncoded C0 camera has been detected about 0.26 pixel in X direction. The RMS (root mean square) is about 0.08 pixel. In this test we are used four plumb-lines, and detected results in Fig. 6. (a) (b) (c) Fig. 7 mark pattern 4.2.2 Mark Detecte Algorithms The measurement accuracy of the 3d- coordinate is depend on detecte accuracy of feature point in the image. Therefore center coordinates of the mark must be located with the sub-pixel preti- sion. The system has one important feature, that is the mark detection algorithms, which work on the shape and size of the mark rather than on their brightness. IT used Multistage Matching-Pit- ting Method(Fig. 8), which consisted of following algorithms. -- Mark Pattern Match Fig. 6 Line Jitter Detected Results-- Weighted gravity center algorithms. -- Gradient Parabolic Interpolation -- Sample Moment Match -- Least Squares Straight-line Fitting The test results show that, for location mark of the control point using the mark pattern match and least squares straight-line fitting algorithm, the precision of location is higher, for location mark of the feature point on the moving body using the weighted gravity center algorithm times the precision of location is satisfactory and the computing is easy and stable. ``` 1DMASK 2MATCB 3DATCB 4MATCB 5DETDET 6DETDET 7LOCATE 8DETDET 9Fig.8MarkPointDetectedProcedure ``` The equation of 3D-DLTDirectlineartransformation ) can be written as : ``` x:k1+(x-x0)+r2-(L1*L1*L2*L2*L4) (L3* Fig. 
9 Trajectories of feature points on a moving human
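As a rough illustration of the weighted gravity-centre rule named in the mark-detection pipeline above, the following NumPy sketch locates a mark with sub-pixel precision inside a small image window. The windowing, the background threshold and all names are assumptions made for illustration; the paper does not spell out its exact formulation.

```python
import numpy as np

def weighted_gravity_center(window, threshold=0.0):
    """Sub-pixel mark location by the weighted gravity-centre (intensity centroid) rule.

    window    : 2-D array of grey values containing a single bright mark.
    threshold : grey values at or below this level are treated as background.
    Returns (row, col) of the mark centre in the window's coordinate frame.
    """
    w = window.astype(float) - threshold
    w[w < 0] = 0.0
    total = w.sum()
    if total == 0:
        raise ValueError("no mark found above the threshold")
    rows, cols = np.mgrid[0:w.shape[0], 0:w.shape[1]]
    return (rows * w).sum() / total, (cols * w).sum() / total
```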
This paper presents a sub-realtime 3-D coordinate measuring system with a single CCD camera for a moving body. The signalised points of a moving body can be tracked and recorded in real time, 50 times per second. A double light-path collecting device, placed in front of the CCD camera, is used to capture the stereoscopic image of the moving body. For 3-D measurement, the stereoscopic pair absolutely keeps the synchronism of the double images. The double light-path collecting device can be a simple double-reflecting-mirror or imaging optical-fiber device. The video images are digitized with an A/D converter and input to a micro-computer image processing system, which is used for the determination of feature points and the derivation of coordinates. A 3-D close-range test field is used for the determination of the interior and exterior orientation of the camera and of the parameters of lens distortion. The signalised points on the moving body are detected by an automatic matching algorithm with sub-pixel accuracy. This system has been used for the study of human gaits. Using a simple reflecting-mirror device in a 6-metre view field, the accuracy of the first test can be as good as point nought five percent.
Write a summary of the passage below.
250
arxiv-format/2209_13500v1.md
# Dense-TNT: Efficient Vehicle Type Classification Neural Network Using Satellite Imagery

Ruikang Luo, Yaofeng Song, Han Zhao, Yicheng Zhang, Yi Zhang, Nanbin Zhao, Liping Huang, and Rong Su

Ruikang Luo is affiliated with Continental-NTU Corporate Lab, Nanyang Technological University, 50 Nanyang Avenue, 639798, Singapore. Email: [email protected]
Yaofeng Song is affiliated with School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798. Email: song0223e.ntu.edu.sg
Han Zhao is affiliated with School of Electrical and Electronic Engineering, Nanyang Technological University, 639798, Singapore. Email: ZHAQ0278e.ntu.edu.sg
Yicheng Zhang is affiliated with Institute for Infocomm Research (I2R), Agency for Science, Technology and Research (ASTAR), 138632, Singapore. Email: [email protected]
Yi Zhang is affiliated with Institute for Infocomm Research (I2R), Agency for Science, Technology and Research (ASTAR), 138632, Singapore. Email: [email protected]
Nanbin Zhao is affiliated with School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798. Email: [email protected]
Liping Huang is affiliated with Continental-NTU Corporate Lab, Nanyang Technological University, 50 Nanyang Avenue, 639798, Singapore. Email: [email protected]
Rong Su is affiliated with Division of Control and Instrumentation, School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798. Email: [email protected]

## I Introduction

Vehicle type classification is one of the most important components of an intelligent traffic system. Vehicle classification results can contribute to traffic parameter statistics, regional traffic demand and supply analysis, time-series traffic information prediction [1] and transportation facilities usage management [2][3]. Combined with appropriate data processing techniques, such as missing data imputation and map matching, further traffic management guidance can be provided [4].
Traditional vehicle type classification methods are mainly based on sensor feedback, such as magnetic induction and ultrasonic sensors [5][6]. With the extensive use of UAV surveillance and satellite remote sensing data, image-based solutions for intelligent traffic systems have developed rapidly. Image processing approaches can be divided into appearance-based methods and deep learning based methods. Appearance-based methods usually generate a 3D parametric model to represent the vehicle for classification. Deep learning based methods apply image recognition algorithms to extract object features and classify vehicles. Although remarkable efforts have been made in remote sensing classification tasks, these methods are not ideal when applied to real applications. There are three main limitations on processing remote sensing data. Firstly, high-resolution satellite remote sensing images are expensive, and most accessible open-source datasets are of low resolution [7]. The poor quality of the images constrains model performance. Secondly, optical remote sensing images are highly affected by weather conditions [8]. Complex weather conditions, such as fog and haze, lead to degraded and blurred images. Thirdly, some modern, progressive car designs make the boundaries between vehicle types ambiguous. It is essential to determine vehicle types by considering both local and global dependencies, which places higher requirements on the model design. To overcome these issues, existing studies provide solutions in two directions. The first line of work focuses on haze removal or visibility enhancement by utilizing methods such as Image Super Resolution (ISR) [9] to sharpen edges and further improve the resolution. Some methods adopt denoising operations to remove haze and obtain clearer processed images [10]. However, due to the pixel degradation caused by the inherent statistical features of fog and haze, haze removal methods may not be effective when processing satellite remote sensing images. The other line of work tries to build end-to-end deep learning algorithms that outperform existing methods. However, as stated, it is quite a challenge to capture both local and global information under such diverse conditions. This study proposes a novel Dense-TNT model for all-weather vehicle classification. It combines a DenseNet layer with a TNT layer, which was recently introduced for object detection based on the general transformer architecture, to extract image features more effectively. In summary, there are three main contributions, as follows:

* The novel Dense-TNT model containing the DenseNet layer and the TNT layer is proposed to recognize vehicle types. Based on existing knowledge, the proposed method has the ability to better understand the global pattern of objects.
* The study performs extensive analysis of vehicle type classification over remote sensing images collected from three different regions and validates a higher recognition capability than other baselines.
* Apart from the three-region real-world data under the normal weather condition, an appropriate filter is added to simulate light, medium and heavy haze conditions. The evaluation results show around 80% classification accuracy even under the heavy-fog condition and around 5%-10% accuracy improvement over baseline algorithms. The feasibility has been verified.

In the next several sections, the content is organized as follows. In Section Two, recent research on vehicle classification is reviewed for comparison.
In Section Three, the proposed Dense-TNT framework is described in detail. In Section Four, the experimental settings are given and the vehicle recognition performance is evaluated on various datasets and weather conditions against other baseline models. In Section Five, we give the conclusion and future plans.

## II Related Work

Existing vehicle type classification methods can be divided into three categories: appearance-driven methods [11][12][13][14][15], model-based methods [16][17][18] and deep learning based methods. Appearance-driven methods focus on extracting vehicle appearance features and try to classify vehicle types by comparison with known vehicle features. In [19], the authors propose a method to extract distinctive invariant features and perform robust matching against a known database based on matching probabilities. The quantity and quality of the known data largely determine the classification performance, and it is difficult for the model to recognize a vehicle accurately when the features of the target object lie outside the database collection, which is common because car designs differ across eras. Model-based methods focus on computing vehicle 3D parameters and try to recover a 3D model to make the classification. In [20], a parameterized framework is designed to represent a single vehicle with 12 shape parameters and 3 pose parameters. A local gradient-based method is applied to evaluate the goodness of fit between the vehicle projection and known data. However, similar to appearance-driven methods, the appearance and dimensions of vehicles can be disturbed and degraded by poor data collection and complex weather conditions [21]. Thus, in this study, we mainly discuss the deep learning based methods, which have the capability to capture more information. For the deep learning based methods, the Convolutional Neural Network (CNN) and its variants play a significant role in existing image processing methods [22]. By stacking convolutional and pooling layers in the classical CNN structure, a CNN can automatically learn multi-stage invariant features for specific objects via trainable kernels [23]. A CNN takes the vehicle images as input and generates a probability for each vehicle type. However, the pooling operation makes a CNN ignore some valuable information, since the correlation between parts and the whole is not screened carefully [24]. Thus, there has also been a lot of interest in combining convolutional layers with attention mechanisms for image classification tasks, due to the unbalanced distribution of importance over an image [25]. To increase the interpretability of CNNs, some research adopts a semi-supervised manner by feeding unlabeled data into the pre-training process and learning the output parameters in a supervised way [26]. The Transformer was proposed in 2017 for Natural Language Processing (NLP) tasks. The attention mechanism leads to a quadratic computational cost when a Transformer is applied directly to image processing, because each pixel needs to attend to every other pixel. Therefore, to adopt Transformer-like structures for image processing, adaptive adjustments are made. In [27], self-attention is applied in local neighborhoods to save operations and replace convolutions [28]. In another work, the Sparse Transformer [29] applies scalable approximations to global self-attention before processing images. Recently, in [30], the authors replaced the attention mechanism with a simple token mixer while retaining the general Transformer architecture, which they call MetaFormer.
Even when only a pooling layer is used inside the token mixer, the model is found to realize superior performance. Even though CNNs are the fundamental model in vision applications, transformers have great potential to replace CNNs. The Vision Transformer (ViT) has been widely used and verified to be efficient in many scenarios, such as object detection, segmentation, pose estimation, image enhancement and video captioning [31]. The canonical ViT structure divides one image into a sequence of patches and treats each patch as one input element to do the classification. Due to the inherent characteristics of the transformer, ViT is good at extracting long-range relationships but poor at capturing local features, since the 2-D patches are compressed into a 1-D vector.

Fig. 1: Deep learning algorithms help to capture a huge amount of vehicle information in a specific region based on remote sensing data.

Thus, some work tries to improve the local modeling ability [32][33][34] by introducing extra architecture to model the inner correlation patch-by-patch and layer-by-layer [35][36]. In [34], the authors propose a hybrid token generation mechanism to obtain local and global information from regional tokens and local tokens. In addition to the effort on enhancing the local information extraction capability of ViT, other directions have also been explored, such as improving the self-attention calculation [37], the positional encoding [38][39] and the normalization strategy [40]. In [30], the authors achieve competitive performance by simplifying the structure, even without an attention mechanism. Because satellite imagery is a low-resolution data source and complex real-world noise exists, both local and global information extraction are significant for accurate vehicle classification. Instead of embedding another nesting structure within the transformer, which would add computational complexity, in this study we stack suitable CNN and ViT variants to construct a novel, efficient architecture for the vehicle classification task. It is expected to achieve satisfying recognition performance even under complex conditions. If realized, this technology could even be deployed on nano-satellites for other earth or celestial body recognition tasks [41].

## III Methodology

### _Problem Analysis_

In this paper, the main purpose is to build a novel end-to-end vehicle classification model combining selected CNN and ViT variants that takes satellite remote sensing images of various vehicle types, from different regions and under different weather conditions, as inputs and generates vehicle type classification results after image processing. The principle and detailed architecture of the proposed model are illustrated in this section.

### _Transformer Layer: TNT_

ViT has been successfully applied in a wide range of scenarios and proven to be efficient due to its ability to extract global long-sequence dependencies; however, local information aggregation performance is still the gap between ViT and CNNs. Even though some works propose variants that enhance the locality extraction ability, the combination of CNNs and ViT is a more direct method to equip the transformer architecture with locality capture. After careful comparison of the literature, TNT [32] is selected as the transformer variant in our proposed hybrid model. In the canonical ViT structure, input images are divided into a long sequence of patches with no local correlation information retained, which makes it difficult for the transformer to capture such relationships simply from the 2D patch sequence.
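As a concrete illustration of the patch flattening just described, the short sketch below splits a square image into non-overlapping p x p patches and flattens each one into a vector, which is the form in which a canonical ViT consumes an image. It is an illustrative sketch only; the 64x64 input size and p=16 are assumed example values, not the configuration used in this paper.

```python
import torch

def to_patches(img, p=16):
    """Flatten a (C, H, W) image into a sequence of p x p patch vectors (ViT-style).

    The 2-D layout inside each patch is lost in the final reshape, which is exactly
    the locality problem discussed in the text. H and W are assumed divisible by p.
    """
    C, H, W = img.shape
    x = img.reshape(C, H // p, p, W // p, p)   # split height and width into a patch grid
    x = x.permute(1, 3, 0, 2, 4)               # (H/p, W/p, C, p, p)
    return x.reshape(-1, C * p * p)            # (num_patches, C*p*p)

patches = to_patches(torch.rand(3, 64, 64), p=16)   # -> shape (16, 768)
```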
Compared with ViT, the main advantage of TNT for this study is that it introduces a further fragmentation mechanism to create sub-patches within every patch. As the name TNT suggests, the architecture contains one internal transformer modeling the correlation between sub-patches and one external transformer to propagate information among patches. If the length-\\(n\\) patch sequence \\(\\mathcal{X}=\\left[X^{1},X^{2},\\ldots,X^{n}\\right]\\) is regarded as a set of visual sentences, each sentence is further divided into \\(m\\) visual words for embedding:

\\[X^{i}\\xrightarrow{\\text{embedding}}Y^{i}=\\left[y^{i,1},y^{i,2},\\ldots,y^{i,m}\\right] \\tag{1}\\]

For the internal transformer, the data flow can be expressed as:

\\[Y^{\\prime i}_{l}=Y^{i}_{l-1}+\\mathrm{MSA}(\\mathrm{LN}(Y^{i}_{l-1})) \\tag{2}\\]

\\[Y^{i}_{l}=Y^{\\prime i}_{l}+\\mathrm{MLP}(\\mathrm{LN}(Y^{\\prime i}_{l})) \\tag{3}\\]

where \\(l\\) is the index of the block (layer), MSA denotes Multi-head Self-Attention, MLP denotes Multi-Layer Perceptron and LN represents Layer Normalization. Thus, the overall internal transformation is:

\\[\\mathcal{Y}_{l}=\\left[Y^{1}_{l},Y^{2}_{l},\\ldots,Y^{n}_{l}\\right] \\tag{4}\\]

Further, the sentence embeddings are updated from the word embeddings:

\\[Z^{i}_{l-1}=Z^{i}_{l-1}+\\mathrm{FC}(\\mathrm{Vec}(Y^{i}_{l})) \\tag{5}\\]

where FC represents a fully-connected layer. The entire sentence embedding sequence is represented as \\(\\mathcal{Z}_{0}=\\left[Z_{class},Z^{1}_{0},Z^{2}_{0},\\ldots,Z^{n}_{0}\\right]\\), where \\(Z_{class}\\) is the class token. Finally, the data flow of the external transformer is formulated as:

\\[\\mathcal{Z}^{\\prime}_{l}=\\mathcal{Z}_{l-1}+\\mathrm{MSA}(\\mathrm{LN}(\\mathcal{Z}_{l-1})) \\tag{6}\\]

\\[\\mathcal{Z}_{l}=\\mathcal{Z}^{\\prime}_{l}+\\mathrm{MLP}(\\mathrm{LN}(\\mathcal{Z}^{\\prime}_{l})) \\tag{7}\\]

In the original paper, the authors show better classification performance than other baselines, including ViT. The TNT architecture is shown in Fig. 2.

Fig. 2: Illustration of TNT model details.

### _Convolutional Layer: DenseNet_

For the local information extraction part, as introduced in previous sections, CNNs usually show a better ability to extract local, fixed information due to the kernel structure and the convolutional operation [42]. Thus, a convolutional layer is retained when designing the efficient image recognition model. In this research, DenseNet is chosen to serve as the locality information extractor. Compared with other commonly used CNN models, such as ResNet and GoogLeNet, DenseNet connects each convolutional layer to every other layer in a feed-forward fashion, instead of the sequential connections between layers used in ResNet and other models. This manner is called dense connectivity in [43], and the \\(i\\)-th layer is formulated as Equation (8):

\\[\\mathcal{Z}_{i}^{*}=H_{i}([\\mathcal{Z}_{0},\\mathcal{Z}_{1},\\ldots,\\mathcal{Z}_{i-1}]) \\tag{8}\\]

where \\([\\mathcal{Z}_{0},\\mathcal{Z}_{1},\\ldots,\\mathcal{Z}_{i-1}]\\) is the concatenation of the feature maps from all preceding layers, and \\(H_{i}(\\cdot)\\) is the composite function combining batch normalization, rectified linear unit and convolution operations. In this way, every layer takes the feature maps from all preceding convolutional layers as input, the information propagation capability is enhanced, serious loss of dependencies from distant stages is avoided, and the vanishing-gradient problem is alleviated. In this paper, due to the complex environment of vehicle remote sensing imagery, such as hazy weather and shadowy regions, this enhanced feature extraction capability is exactly what we need during training.
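To make the two building blocks concrete, the following PyTorch-style sketch mirrors the inner/outer update of Eqs. (2)-(7) and the dense connectivity of Eq. (8). It is an illustrative sketch rather than the paper's implementation: the dimensions (word_dim, sent_dim, words_per_patch, number of heads) are assumed example values, and the sketch assumes each visual sentence contains exactly words_per_patch words.

```python
import torch
import torch.nn as nn

class TNTBlock(nn.Module):
    """One Transformer-in-Transformer block following Eqs. (2)-(7) (illustrative sizes)."""
    def __init__(self, word_dim=24, sent_dim=384, words_per_patch=16, heads=4):
        super().__init__()
        # inner (word-level) transformer, Eqs. (2)-(3)
        self.ln_in1 = nn.LayerNorm(word_dim)
        self.msa_in = nn.MultiheadAttention(word_dim, heads, batch_first=True)
        self.ln_in2 = nn.LayerNorm(word_dim)
        self.mlp_in = nn.Sequential(nn.Linear(word_dim, 4 * word_dim), nn.GELU(),
                                    nn.Linear(4 * word_dim, word_dim))
        # word -> sentence projection, Eq. (5)
        self.ln_vec = nn.LayerNorm(words_per_patch * word_dim)
        self.fc = nn.Linear(words_per_patch * word_dim, sent_dim)
        # outer (sentence-level) transformer, Eqs. (6)-(7)
        self.ln_out1 = nn.LayerNorm(sent_dim)
        self.msa_out = nn.MultiheadAttention(sent_dim, heads, batch_first=True)
        self.ln_out2 = nn.LayerNorm(sent_dim)
        self.mlp_out = nn.Sequential(nn.Linear(sent_dim, 4 * sent_dim), nn.GELU(),
                                     nn.Linear(4 * sent_dim, sent_dim))

    def forward(self, words, sentences):
        # words:     (B*n, m, word_dim)  word embeddings of the n visual sentences
        # sentences: (B, n+1, sent_dim)  sentence embeddings, index 0 is the class token
        y = self.ln_in1(words)
        words = words + self.msa_in(y, y, y)[0]                        # Eq. (2)
        words = words + self.mlp_in(self.ln_in2(words))                # Eq. (3)
        B, n_plus_1, _ = sentences.shape
        vec = words.reshape(B, n_plus_1 - 1, -1)                       # Vec(Y_l^i) per sentence
        sentences = torch.cat([sentences[:, :1],                       # class token unchanged
                               sentences[:, 1:] + self.fc(self.ln_vec(vec))], dim=1)  # Eq. (5)
        z = self.ln_out1(sentences)
        sentences = sentences + self.msa_out(z, z, z)[0]               # Eq. (6)
        sentences = sentences + self.mlp_out(self.ln_out2(sentences))  # Eq. (7)
        return words, sentences

def dense_block(x, layers):
    """DenseNet-style dense connectivity, Eq. (8): layer i sees all earlier feature maps."""
    features = [x]
    for layer in layers:            # each layer is the composite H_i (BN -> ReLU -> Conv)
        features.append(layer(torch.cat(features, dim=1)))
    return torch.cat(features, dim=1)
```

In a full model, the dense_block layers would be convolutions whose input channel count grows with each concatenation, and several TNT blocks would be stacked before the classifier layer described next.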
The illustration of the DenseNet layout is shown in Fig. 3.

Fig. 3: DenseNet model structure. The figure shows a 4-layer dense block: each layer takes all preceding feature maps as input, and the convolutional layers between two adjacent blocks adjust the feature-map sizes.

### _Classifier Layer_

To complete the vehicle classification task, the probability of each type is expected to be calculated based on the output feature maps from the previous layers. Thus, a softmax classifier layer is added as the final part of our proposed model; it takes the output feature vector from the TNT layer and generates a vehicle type probability vector, from which the type with the highest probability is chosen. The learnable linear function modeling the relationship can be expressed as:

\\[v=W^{T}\\mathcal{Z}^{*}+b \\tag{9}\\]

where \\(\\mathcal{Z}^{*}\\in R^{D\\times 1}\\) is the output feature of dimension \\(D\\) from the TNT layer, \\(W\\) and \\(b\\) are the parameters to be learned, \\(v\\in R^{C\\times 1}\\) is the vehicle type variable, and \\(C\\) is the number of vehicle types. To emphasize the vehicle type with the highest probability, softmax is applied to obtain the final normalized output \\(O=[O_{1},O_{2},\\ldots,O_{C}]^{T}\\):

\\[V=\\sum_{i=1}^{C}e^{v_{i}} \\tag{10}\\]

\\[O_{i}=\\frac{1}{V}e^{v_{i}} \\tag{11}\\]

### _Dense-TNT Overview Model_

In summary, the novel Dense-TNT model is designed based on DenseNet and TNT, as shown in Fig. 4. It contains two parts: 1) the transformer-based layer, which guarantees a reasonable baseline performance, and 2) the convolutional layer, which captures local, fixed features. Benefiting from the kernel and convolutional operation, DenseNet is widely used for image recognition and has a deeper locality extraction capability than other CNN variants. Further, TNT is adept at global information capture and has a better understanding of image structure than the canonical ViT. We believe that Dense-TNT can process the information propagated through the hybrid structure by extracting specific local features, and further improve the recognition capability even in complex environments such as hazy and foggy conditions.

Fig. 4: The architecture of the Dense-TNT neural network. The network contains TNT and DenseNet stages; the classifier layer serves as the recognition layer to compute the type probability of the input vehicle.

## IV Experiments

To evaluate our Dense-TNT model, we compare the classification performance of the proposed Dense-TNT with several advanced baselines, including PoolFormer and ViT. The main task is to classify sedans and pickups in pictures taken by remote sensors over three different areas. Meanwhile, we also evaluate the classification ability of Dense-TNT when the input pictures are affected by fog.

### _Classification in Normal Weather Condition_

#### Iv-A1 Data Description

We use Cars Overhead With Context (COWC, http://gdo-dataset.ucllnl.org/cowc/) [44], a remote sensing target detection dataset with a resolution of 15 cm per pixel and an image size of 64x64 pixels, for the classification task. Remote sensing pictures from three different areas are selected: Toronto, Canada; Selwyn, New Zealand; and Columbus, Ohio. Details of the datasets for the different areas are described in Table I, and example pictures of sedans and pickups are shown in Table II.

#### Iv-A2 Baseline Settings

The baseline models include PoolFormer and ViT. PoolFormer is the specific framework proposed in [30], where it achieves the best recognition performance. ViT has been widely applied to image processing problems recently, as stated in Section Two.
#### Iv-A3 Training

The model is trained for 50 epochs on an RTX 3060 GPU with a maximum learning rate of \\(2\\times 10^{-3}\\). The AdamW optimizer is used with weight decay 0.05. The batch size is set to 0.01 of the size of the training data. The training data comprise 0.8 of the whole dataset and the test data the remaining 0.2; all training and test data are randomly selected from the whole set. Considering the balance between the numbers of sedans and pickups, we randomly select the same number of sedan and pickup pictures.

#### Iv-A4 Evaluation Criteria

To evaluate the classification outcomes of all models, four criteria are applied to indicate the performance. We first define the possible classification results in this study as:

* True Positive (TP): a sedan is correctly recognized as a sedan.
* True Negative (TN): a pickup is correctly recognized as a pickup.
* False Positive (FP): a pickup is incorrectly recognized as a sedan.
* False Negative (FN): a sedan is incorrectly recognized as a pickup.

Thus, the four criteria are formulated as:

\\[Accuracy=\\frac{TP+TN}{TP+TN+FP+FN} \\tag{12}\\]

\\[Precision=\\frac{TP}{TP+FP} \\tag{13}\\]

\\[Recall=\\frac{TP}{TP+FN} \\tag{14}\\]

\\[F1\\text{-}score=\\frac{2\\times Precision\\times Recall}{Precision+Recall} \\tag{15}\\]

The higher the evaluation score, the better the recognition performance.

#### Iv-A5 Experiment Result and Analysis

We evaluate Dense-TNT with two parameter sizes (s12 and s24), PoolFormer with two parameter sizes (s12 and s24), and ViT with two different numbers of layers (2 and 12). Among all the models, the 12-layer ViT has the largest number of parameters, about 86M. Dense-TNT s24 and PoolFormer s24 both have about 21M parameters, and Dense-TNT s12 and PoolFormer s12 both have 12M. Compared with them, the 2-layer ViT has the smallest number of parameters. Table III shows the results of the experiments. Fig. 5 shows the classification results with probabilities after processing under the normal weather condition. Among the larger models (Dense-TNT s24, PoolFormer s24 and ViT l12), Dense-TNT achieves the best overall performance; among Dense-TNT s12, PoolFormer s12 and ViT l2, Dense-TNT also performs better than the other two models. With a larger number of parameters, Dense-TNT s24 performs somewhat better than Dense-TNT s12. The F1-score is the harmonic mean of precision and recall, which reflects the robustness of the model's recognition capability. Fig. 6 also shows the F1-scores of the experiments as a histogram, and it can be observed that Dense-TNT has superior performance.

TABLE III: Experiment results. The table shows the classification performance of all six models on the three datasets under the normal weather condition (best value in each column in bold).

| Model | Selwyn Acc. | Selwyn Prec. | Selwyn Rec. | Selwyn F1 | Columbus Acc. | Columbus Prec. | Columbus Rec. | Columbus F1 | Toronto Acc. | Toronto Prec. | Toronto Rec. | Toronto F1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Dense-TNT s24 | **0.8065** | **0.8211** | 0.9558 | **0.8810** | **0.7685** | **0.7876** | 0.9516 | 0.8582 | **0.8009** | 0.8205 | **0.9365** | **0.8734** |
| Dense-TNT s12 | 0.7971 | 0.8183 | 0.9399 | 0.8722 | 0.7459 | 0.7855 | 0.9109 | 0.8377 | 0.7968 | **0.8389** | 0.9062 | 0.8706 |
| PoolFormer s24 | 0.7819 | 0.7956 | **0.9559** | 0.8672 | 0.7675 | 0.7835 | **0.9691** | **0.8634** | 0.7584 | 0.7871 | 0.9183 | 0.8469 |
| PoolFormer s12 | 0.7724 | 0.7977 | 0.9424 | 0.8619 | 0.7507 | 0.7812 | 0.9661 | 0.8431 | 0.7456 | 0.7509 | 0.9254 | 0.8441 |
| ViT l12 | 0.7462 | 0.7462 | 0.9256 | 0.8252 | 0.7392 | 0.7421 | 0.9543 | 0.8455 | 0.7300 | 0.7349 | 0.9326 | 0.8401 |
| ViT l2 | 0.7624 | 0.7659 | 0.9435 | 0.8623 | 0.7460 | 0.7486 | 0.9339 | 0.8504 | 0.7510 | 0.7559 | 0.9273 | 0.8560 |

Fig. 5: Classification results with corresponding probabilities under the normal weather condition.

Fig. 6: F1-scores of experimental results for the three-region datasets under the normal weather condition.

### _Classification in Foggy Condition_

#### Iv-B1 Data Preprocessing

To obtain vehicle images under different fog conditions, we process the grayscale value of every pixel in the vehicle image. Based on the grayscale value \\(V_{ij}\\) of the pixel in the \\(i\\)-th row and \\(j\\)-th column of the original image, the new grayscale value \\(V_{ij}^{\\prime}\\) is obtained using the parameter \\(\\beta\\):

\\[d=-0.04\\times\\sqrt{(i-i_{0})^{2}+(j-j_{0})^{2}}+\\sqrt{max(N_{R},N_{C})} \\tag{16}\\]

\\[t_{d}=e^{-\\beta\\times d} \\tag{17}\\]

\\[V_{ij}^{\\prime}=V_{ij}\\times t_{d}+0.5\\times(1-t_{d}) \\tag{18}\\]

where \\(i_{0}\\) and \\(j_{0}\\) refer to the centre of the rows and columns respectively, and \\(N_{R}\\) and \\(N_{C}\\) refer to the number of pixels in each row and column respectively. In the experiment, the parameter \\(\\beta\\) is chosen as 0.08, 0.16 and 0.24 to realize three fog conditions: light, medium and heavy fog.
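The two pieces of machinery just defined, the four evaluation criteria of Eqs. (12)-(15) and the synthetic fog of Eqs. (16)-(18), are small enough to sketch directly. The snippet below is an illustrative NumPy rendering rather than the authors' code, and it assumes the grayscale values are normalised to [0, 1], consistent with the 0.5 offset in Eq. (18).

```python
import numpy as np

def classification_metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall and F1-score as in Eqs. (12)-(15)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

def add_fog(img, beta):
    """Apply the synthetic fog of Eqs. (16)-(18) to a grayscale image in [0, 1]."""
    n_r, n_c = img.shape
    i0, j0 = n_r / 2.0, n_c / 2.0                  # image centre
    i, j = np.mgrid[0:n_r, 0:n_c]
    d = -0.04 * np.sqrt((i - i0) ** 2 + (j - j0) ** 2) + np.sqrt(max(n_r, n_c))  # Eq. (16)
    t_d = np.exp(-beta * d)                        # Eq. (17)
    return img * t_d + 0.5 * (1.0 - t_d)           # Eq. (18)

# beta = 0.08, 0.16 and 0.24 correspond to the light, medium and heavy fog settings.
```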
The dataset collected in Selwyn is randomly chosen for these experiments. Table IV shows example pictures under different levels of weather impact.

#### Iv-B2 Baseline Settings

We keep the model sizes in this experiment the same as the Dense-TNT model sizes used in the three-area classification under the normal weather condition; the models include Dense-TNT s24 and Dense-TNT s12. The other baselines are also kept the same as in the previous experiment.

#### Iv-B3 Training

The training method is also kept unchanged, while the input data for model training are replaced with fog-affected pictures. The total number of training epochs is 50 on an RTX 3060 GPU. The maximum learning rate is \\(2\\times 10^{-3}\\) and the optimizer is AdamW. The batch size is set to 0.01 of the size of the training data.

#### Iv-B4 Results

In this experiment, we keep the evaluation criteria the same as in Part A and use all six models, Dense-TNT s24, Dense-TNT s12, PoolFormer s24, PoolFormer s12, ViT l12 and ViT l2, to evaluate the classification ability of Dense-TNT when the input data are affected by different weather conditions. Table V shows the results of the experiment. Fig. 7 shows the classification results with probabilities after processing under the foggy weather condition. Fig. 8 also shows the F1-scores of the experiment as a histogram. Similarly, Dense-TNT, especially Dense-TNT s24, has better performance than the other baselines, and the heavier the fog, the more the model leads the other baselines. With the input data affected by different levels of foggy weather, there is a certain level of decay in the accuracy of all six models. Despite the decrease in accuracy, Dense-TNT s24 still performs relatively better than PoolFormer s24 and ViT l12, and Dense-TNT s12 also performs generally better than PoolFormer s12 and ViT l2, which means that Dense-TNT can still be useful when dealing with degraded input data.
TABLE V: Experiment results on the Selwyn dataset under the light-foggy (fog=0.08), medium-foggy (fog=0.16) and heavy-foggy (fog=0.24) conditions (best value in each column in bold).

| Model | Light Acc. | Light Prec. | Light Rec. | Light F1 | Medium Acc. | Medium Prec. | Medium Rec. | Medium F1 | Heavy Acc. | Heavy Prec. | Heavy Rec. | Heavy F1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Dense-TNT s24 | **0.7941** | 0.8240 | 0.9215 | **0.8682** | **0.7961** | **0.7934** | 0.9352 | **0.8712** | **0.7692** | 0.7660 | 0.9440 | **0.8787** |
| Dense-TNT s12 | 0.7907 | **0.8244** | 0.9178 | 0.8671 | 0.7839 | 0.7815 | 0.9510 | 0.8382 | 0.7648 | **0.7748** | 0.9497 | 0.8715 |
| PoolFormer s24 | 0.7665 | 0.7912 | 0.9243 | 0.8490 | 0.7590 | 0.7630 | 0.9594 | 0.8608 | 0.7535 | 0.7663 | 0.9543 | 0.8635 |
| PoolFormer s12 | 0.7631 | 0.7630 | 0.9289 | 0.8641 | 0.7500 | 0.7469 | **0.9624** | 0.8543 | 0.7371 | 0.7370 | 0.9601 | 0.8469 |
| ViT l12 | 0.7533 | 0.7539 | **0.9310** | 0.8569 | 0.7456 | 0.7369 | 0.9449 | 0.8431 | 0.7428 | 0.7400 | 0.9627 | 0.8512 |
| ViT l2 | 0.7566 | 0.7495 | 0.7297 | 0.8585 | 0.7394 | 0.7402 | 0.9573 | 0.8482 | 0.7383 | 0.7369 | **0.9659** | 0.8471 |

TABLE IV: Experiment images under different weather conditions. The four columns respectively refer to images taken under the normal weather condition, light foggy condition, medium foggy condition and heavy foggy condition.

Fig. 7: Classification results with corresponding probabilities under the foggy weather condition.

Fig. 8: F1-scores of experimental results for the Selwyn dataset under different foggy conditions.

## V Conclusion

In this paper, a novel classification neural network called Dense-TNT is first proposed to recognize vehicle types from satellite remote sensing imagery. Dense-TNT combines a DenseNet layer and a TNT layer to capture both local and global information from the input images, and it is expected to achieve better recognition performance than other widely used methods, especially under complex weather conditions. Related experiments are designed to validate the feasibility of Dense-TNT based on real-world remote sensing datasets collected from three different regions under multiple environmental conditions. The necessary data preprocessing is executed on the same dataset to imitate foggy weather at three different levels: light, medium and heavy fog. The experimental results show that Dense-TNT achieves better recognition performance, with around 5%-10% accuracy improvement over baseline algorithms including PoolFormer and ViT; under foggy weather conditions, the improvement is even larger. To summarize, the vehicle type classification performance of the proposed Dense-TNT framework under comprehensive weather conditions using remote sensing imagery has been verified.

## Acknowledgment

This study is supported by the RIE2020 Industry Alignment Fund - Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contributions from the industry partner(s), and by A*STAR under its RIE2020 Advanced Manufacturing and Engineering (AME) Industry Alignment Fund - Pre-Positioning (IAF-PP) (Award A19D6a0053).

## References
* [1] R. Luo, Y. Zhang, Y. Zhou, H. Chen, L. Yang, J. Yang, and R. Su, "Deep learning approach for long-term prediction of electric vehicle (EV) charging station availability," in 2021 IEEE International Intelligent Transportation Systems Conference (ITSC), 2021, pp. 3334-3339.
* [2] R. Luo, Y. Song, L. Huang, Y. Zhang, and R. Su, "AST-GIN: Attribute-augmented spatial-temporal graph informer network for electric vehicle charging station availability forecasting," arXiv preprint arXiv:2209.03356, 2022.
* [3] R. Luo and R. Su, "Traffic signal transition time prediction based on aerial captures during peak hours," in 2020 16th International Conference on Control, Automation, Robotics and Vision (ICARCV), 2020, pp. 104-110.
* [5] R. Luo, L. Ciampi, F. Falchi, and C. Gennaro, "Counting vehicles with deep learning in onboard UAV imagery," in 2019 IEEE Symposium on Computers and Communications (ISCC), 2019, pp. 1-6.
* [21] R. Luo, L. Ciampi, and F. Falchi, "Learning interleaved cascade of shrinkage fields for joint image dehazing and denoising," IEEE Transactions on Image Processing, vol. 29, pp. 1788-1801, 2020.
* [22] J. Hsieh, S. Yu, Y. Chen, and W. Hu, "Automatic traffic surveillance system for vehicle tracking and classification," IEEE Transactions on Intelligent Transportation Systems, vol. 7, no. 2, pp. 175-187, 2006.
* [23] Y. Shan, H. S. Sawhney, and R. Kumar, "Unsupervised learning of discriminative edge measures for vehicle matching between nonoverlapping cameras," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 4, pp. 700-711, 2008.
* [24] Y. Song, H. Zhao, R. Luo, L. Huang, Y. Zhang, and R. Su, "A two-stage hierarchical network for vehicle charging station availability forecasting," arXiv preprint, 2022.
* [31] K. Han, Y. Wang, H. Chen, X. Chen, J. Guo, Z. Liu, Y. Tang, A. Xiao, C. Xu, Y. Xu et al., "A survey on vision transformer," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022.
* [32] K. Han, A. Xiao, E. Wu, J. Guo, C. Xu, and Y. Wang, "Transformer in transformer," Advances in Neural Information Processing Systems, vol. 34, pp. 15908-15919, 2021.
* [33] Z. Liu, Y. Lin, Y. Cao, H. Hu, Y.
Wei, Z. Zhang, S. Lin, and B. Guo, \"Swin transformer: Hierarchical vision transformer using shifted windows,\" in _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2021, pp. 10 1021-10 0222. * [34] C.-F. Chen, R. Pands, and G. Fan, \"Regionvrt: Regional-to-local attention for vision transformers,\" _arXiv preprint arXiv:2106.02689_, 2021. * [35] X. Chu, Z. Tian, Y. Wang, B. Zhang, H. Ren, X. Wei, H. Xia, and C. Shen, \"Twins: Revisiting the design of spatial attention in vision transformers,\" _Advances in Neural Information Processing Systems_, vol. 34, pp. 9355-9366, 2021. * [36] H. Lin, X. Cheng, X. Wu, F. Yang, D. Shen, Z. Wang, Q. Song, and W. Yuan, \"Cat: Cross attention in vision transformer,\" _arXiv preprint arXiv:2106.05786_, 2021. * [37] D. Zhou, B. Kang, X. Jin, L. Yang, X. Lian, Z. Jiang, Q. Hou, and J. Feng, \"DeepT: Towards deeper vision transformer,\" _arXiv preprint arXiv:2103.11886_, 2021. * [38] X. Chu, Z. Tian, B. Zhang, X. Wang, X. Wei, H. Xia, and C. Shen, \"Conditional positional encodings for vision transformers,\" _arXiv preprint arXiv:2102.10882_, 2021. * [39] K. Wu, H. Peng, M. Chen, J. Fu, and H. Chao, \"Rethinking and improving relative position encoding for vision transformer,\" in _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2021, pp. 10 033-10 041. * [40] H. Touvron, M. Cord, A. Sablayrolles, G. Synnaeve, and H. Jegou, \"Going deeper with image transformers,\" in _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2021, pp. 32-42. * [41] R. Luo, \"Mission design for the vel-lyon-1 (rebranded scoobi) student cubesat,\" 2019. * [42] X. She and D. Zhang, \"Text classification based on hybrid cnn-lstm hybrid model,\" in _2018 11th International Symposium on Computational Intelligence and Design (ISCID)_, vol. 2. IEEE, 2018, pp. 185-189. * [43] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, \"Densely connected convolutional networks,\" in _Proceedings of the IEEE conference on computer vision and pattern recognition_, 2017, pp. 4700-4708. * [44] T. N. Mundhenk, G. Konjevod, W. A. Sakla, and K. Boakve, \"A large contextual dataset for classification, detection and counting of cars with deep learning,\" 2016. [Online]. Available: [https://arxiv.org/abs/1609.04453](https://arxiv.org/abs/1609.04453) \\begin{tabular}{c c} & Ruikang Luo received the B.E. degree from the School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore. He is currently currently pursuing the Ph.D. degree with the School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore. His research interests include long-term traffic forecasting based on spatiotemporal data and artificial intelligence. \\\\ \\end{tabular} \\begin{tabular}{c c} & Yaofeng Song received the bachelor degree from the school of Automation Science and Engineering in South China University of Technology. Currently he is a MSc student in the school of Electrical and Electronic Engineering in Nanyang Technological University, Singapore. His research interests involve deep learning based traffic forecasting. \\\\ \\end{tabular} \\begin{tabular}{c c} & Han Zhao received the Bachelor degree in Electrical and Electronic Engineering from Nanyang Technological University, Singapore in 2018. He is currently working toward the Ph.D degree in Electrical and Electronic Engineering in Nanyang Technological University, Singapore. 
His research interests include intelligent transportation system (ITS), short-term traffic flow prediction and graph neural networks. \\\\ \\end{tabular} \\begin{tabular}{c c} & Yicheng Zhang Yicheng Zhang received the Bachelor of Engineering in Automation from Hefei University of Technology in 2011, the Master of Engineering degree in Pattern Recognition and Intelligent Systems from University of Science and Technology of China in 2014, and the PhD degree in Electrical and Electronic Engineering from Nanyang Technological University, Singapore in 2019. He is currently a research scientist at the Institute for Infocomm Research (I2R) in the Agency for Science, Technology and Research, Singapore (A*STAR). Before joining I2R, he was a research associate affiliated with Rolls-Roye \\# NTU Corp Lab. He has participated in many industrial and research projects funded by National Research Foundation Singapore, A*STAR, Land Transport Authority, and Civil Aviation Authority of Singapore. He published more than 70 research papers in journals and peer-reviewed conferences. He received the IEEE Intelligent Transportation Systems Society (ITSS) Young Professionals Traveling Scholarship in 2019 during IEEE ITSC, and as a team member, received Singapore Public Sector Transformation Award in 2020. \\\\ \\end{tabular} \\begin{tabular}{c c} & Yi Zhang received her Bachelor degree of Engineering from Shandong University, China in 2014, and the PhD degree in Electrical and Electronic Engineering from Nanyang Technological University, Singapore in 2020. She is currently a research scientist at the Institute for Infocomm Research (I2R) in the Agency for Science, Technology and Research, Singapore (A*STAR). Her research interests focus on intelligent transportation system, including urban traffic flow management, model-based traffic signal scheduling, lane change prediction and bus dispatching and operation management. \\\\ \\end{tabular} \\begin{tabular}{c c} & Nanbin Zhao received the Bachelor's Degree of Engineering from University of Electronic Science and Technology of China in 2019 and the Master's Degree of Science from the National University of Singapore in 2020. He is currently pursuing the Ph.D. degree at the School of Electrical and Electronic Engineering, Nanyang Technological University. His research interests include intelligent transportation systems, vehicle control, machine learning, and IOT. \\\\ \\end{tabular} \\\\ \\end{tabular} \\begin{tabular}{c c} & Liping Huang obtained her Ph. D, and Master of Computer Science from Jilin University in 2018 and 2014, respectively. She has been working as a research fellow at Nanyang Technological University since 2019 June. Dr. Huang's research interests include spatial and temporal data mining, mobility data pattern recognition, time series prediction, machine learning, and job shop scheduling. In the aforementioned areas, she has more than twenty publications and serves as the reviewer of multiple journals, such as IEEE T-Big Data, IEEE T-ETCI, et al. \\\\ \\end{tabular} \\begin{tabular}{c c} & Rong Su received the M.A.Sc. and Ph.D. degrees both in electrical engineering from the University of Toronto, Toronto, Canada, in 2000 and 2004 respectively. He is affiliated with the School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore. His research interests include modeling, fault diagnosis and supervisory control of discrete-event dynamic systems. Dr. 
Su has been a member of IFAC technical committee on discrete event and hybrid systems (TC 1.3) since 2005. \\\\ \\end{tabular}
Accurate vehicle type classification serves a significant role in the intelligent transportation system. It is critical for traffic administrators to understand road conditions, and it usually helps the traffic light control system respond correspondingly to alleviate traffic congestion. New technologies and comprehensive data sources, such as aerial photos and remote sensing data, provide richer, high-dimensional information. In addition, due to the rapid development of deep neural network technology, image-based vehicle classification methods can better extract underlying object features when processing data. Recently, several deep learning models have been proposed to solve the problem. However, traditional purely convolutional approaches have constraints on global information extraction, and complex environments, such as bad weather, seriously limit the recognition capability. To improve vehicle type classification capability in complex environments, this study proposes a novel Densely Connected Convolutional Transformer-in-Transformer Neural Network (Dense-TNT) framework for vehicle type classification by stacking Densely Connected Convolutional Network (DenseNet) and Transformer-in-Transformer (TNT) layers. Vehicle data from three regions and four different weather conditions are used to evaluate the recognition capability. The experimental findings validate the recognition ability of our proposed vehicle classification model, which shows little decay even under the heavy foggy weather condition.

Keywords: deep learning, transformer, remote sensing, vehicle classification.
Automated Production of Cloud-Free and Cloud Shadow-Free Image Mosaics from Cloudy Satellite Imagery Min LI Soo Chin LIEW and Leong Keong KWOH ## 1 Introduction Cloud cover is a big problem in optical remote sensing of the earth surfaces, especially over the humid tropical regions. This problem can usually be solved by producing a cloud-free mosaic from several multi-date images acquired over the same area of interest. In this method, an image containing the least cloud covers is taken as the base image. The cloudy areas in the image are masked out, and then filled in by cloud-free areas from other images acquired at different time. It is equivalent to the manual \"cut-and-paste\" method. The cloud-masking process can be automated by intensity-thresholding to discriminate the bright cloudy areas from cloud-free areas. However, simple thresholds cannot handle thin clouds and cloud shadows, and often confuse bright land surfaces as clouds. In this paper, we present an automated procedure for producing cloud-free and cloud shadow-free image mosaic from cloudy optical imagery, that is able to overcome the pitfalls encountered by the simple thresholding method. This method works for both multispectral and panchromatic images. In this procedure, the pixels are classified into clouds, vegetation, buildings or bare soil based on the pixel intensity, colour, size and shape features. Cloud shadows are automatically located from the knowledge of the imaging geometry and the intensity gradients at cloud edges. Each pixel/patch in each of the images is then ranked according to some predefined ranking criteria. The highest ranked pixels/patches are preferably used to compose the mosaic. ## 2 Description of the Algorithm Figure 1 shows a schematic diagram of the system for operational production of cloud-free and cloud shadow-free mosaics from optical satellite imagery. ### Input Images The inputs to the system are multispectral/panchromatic images of the same region acquired within a specified time interval. The images are co-registered before being fed into the system. ### Balancing of Grey Level The brightness of pixels at the same location from two different scenes will be slightly different due to the atmospheric effects, sun angles and sensor look angles during acquisition. This disparity is especially prominent in low-albedo vegetated areas. Therefore, it is necessary to balance the intensity of the patches so as to minimize the variation. An image from the set of input images is chosen as the reference image. The pixel values of all other images in the same set are adjusted according to \\[P\\ =\\ E\\ _{ref}\\ \\ +\\ (\\ S\\ -\\ E\\ ) \\tag{1}\\] where \\(P\\) is the output pixel value, \\(S\\) is the input pixel value, \\(E\\ _{ref}\\) is the mean pixel value of an overlap area around the mask patch being processed from the reference image, \\(E\\) is the mean pixel value of the same overlap area from the image to be balanced. Please note that the grey-level balancing procedure must be applied to each band. ### Cloud and Cloud-Shadow Masking Initial cloud and cloud shadow masks are produced using simple intensity thresholds. However, bright pixels of bare soil or building may be confused with cloud pixels. Such confusions are resolved by making use of size, shape and colour information of the bright pixel clusters. Clouds that need to be masked out are much larger than individual buildings. Man-made features such as buildings and bare soil normally have simple geometric shapes. 
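To make the two preprocessing steps above concrete, the following sketch (illustrative Python/NumPy, not the operational implementation) shows the per-band grey-level balancing of Eq. (1) and an initial bright-pixel cloud mask obtained by thresholding, with small bright patches removed by a connected-component size test and the mask dilated to pick up thin cloud edges; the threshold, minimum patch size and dilation amount are assumed values chosen only for illustration.

```python
import numpy as np
from scipy import ndimage

def balance_band(band, ref_band, overlap_mask):
    """Eq. (1): shift a band so its mean over the overlap area matches the reference image."""
    offset = ref_band[overlap_mask].mean() - band[overlap_mask].mean()
    return band + offset                       # P = E_ref + (S - E), applied pixel by pixel

def initial_cloud_mask(intensity, cloud_threshold, min_patch_size=500, dilate_iter=3):
    """Bright-pixel mask by thresholding, keeping only large patches and dilating the result."""
    mask = intensity > cloud_threshold
    labels, _ = ndimage.label(mask)            # connected bright patches
    sizes = np.bincount(labels.ravel())
    keep = sizes >= min_patch_size             # drop small bright patches (buildings, bare soil)
    keep[0] = False                            # label 0 is the background
    mask = keep[labels]
    return ndimage.binary_dilation(mask, iterations=dilate_iter)   # include thin cloud edges
```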
An automatic method has been developed not only to calculate the size of bright patches but also to detect the lines, simple shapes and colour of the bright land surfaces, in order to eliminate improper masking of these buildings and bare soil as clouds by the initial intensity thresholds. We employ a technique based on a geometric model, the solar illumination direction and the sensor viewing direction, as well as the intensity gradient, to automatically predict the approximate location of cloud shadows near the cloud edges. The initial cloud mask produced using a fixed intensity threshold usually excludes the thin clouds at cloud edges. To solve this problem, a morphological filter is applied to dilate the initial cloud mask patch so that the thin clouds at cloud edges are included in the cloud mask.

### Pixel/Patch Ranking

In the masking process, some bright patches on bare soil can still be mistaken for cloud patches due to their similar intensity level and colour. These patches can remain unfilled because no good data patches in the set of co-registered cloudy images can be used to mosaic the composite image. We therefore introduce a pixel/patch ranking procedure that employs the pixel/patch intensity and some suitably chosen band ratios to rank the pixels/patches in order of \"cloudiness\" and \"shadowiness\" according to the predefined ranking criteria described below. In this procedure, a shadow intensity threshold \\(T_{S}\\), a vegetation intensity threshold \\(T_{V}\\) and a cloud threshold \\(T_{C}\\) are determined from the intensity histogram. The pixel/patch ranking procedure uses these shadow and cloud thresholds to rank the pixels/patches in order of \"cloudiness\" and \"shadowiness\". Each of the non-cloud and non-shadow pixels/patches in the images is classified into one of three broad classes based on the band ratios: vegetation, open land and others. For each image \\(n\\) from the set of \\(N\\) acquired images, each pixel/patch at a location \\((i,j)\\) is assigned a rank \\(r_{n}(i,j)\\) based on the pixel/patch intensity \\(Y_{n}(i,j)\\) according to the following rules:

1. For \\(T_{S}\\leq Y_{m},Y_{n}\\leq T_{V}\\): if \\(Y_{m}<Y_{n}\\) and class = \"vegetation\", then \\(r_{m}<r_{n}\\);
2. For \\(T_{V}\\leq Y_{m},Y_{n}\\leq T_{C}\\): if \\(Y_{m}<Y_{n}\\) and class = \"open land\", then \\(r_{m}<r_{n}\\);
3. If \\(Y_{m}<T_{S}\\) and \\(Y_{n}>T_{C}\\), then \\(r_{m}<r_{n}\\);
4. For \\(Y_{m},Y_{n}<T_{S}\\): if \\(Y_{m}>Y_{n}\\), then \\(r_{m}<r_{n}\\);
5. For \\(Y_{m},Y_{n}>T_{C}\\): if \\(Y_{m}<Y_{n}\\), then \\(r_{m}<r_{n}\\).

In this scheme, pixels/patches with lower rank values of \\(r_{n}\\) are superior and are more likely to be selected. Pixels/patches with intensities falling between the shadow and cloud thresholds are the most superior and are regarded as the \"good pixels/patches\". The \"good pixels/patches\" are further classified into \"vegetation pixels/patches\" or \"open land pixels/patches\" depending on whether the pixel/patch intensity is below or above the vegetation threshold. As a rule of thumb, the darker \"good pixels/patches\" are preferred over the brighter \"good pixels/patches\" because the brighter ones may be contaminated by thin clouds. Where no good pixels/patches are available, the \"shadow pixels/patches\" are preferred over the \"cloud pixels/patches\". Where all pixels/patches at a given location are \"shadow pixels/patches\", the brightest shadow pixel/patch will be chosen. In locations where all pixels/patches have been classified as \"cloud pixels/patches\", the darkest cloud pixel/patch will be selected.
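One simple way to realise these preferences in code is as a sort key, sketched below. This is an illustrative reading of rules (1)-(5) only, not the system's actual implementation; for brevity it ignores the vegetation/open-land class tags attached to the good pixels (the vegetation threshold only affects that class split).

```python
def rank_key(y, T_S, T_C):
    """Sort key for a pixel/patch intensity y: a lower key means a lower (better) rank."""
    if T_S <= y <= T_C:   # "good" pixels/patches come first; darker good preferred (rules 1-2)
        return (0, y)
    if y < T_S:           # shadow pixels/patches next; the brightest shadow preferred (rules 3-4)
        return (1, -y)
    return (2, y)         # cloud pixels/patches last; the darkest cloud preferred (rules 3, 5)

# At each location (i, j), the image n minimising rank_key(Y_n(i, j), T_S, T_C) provides the
# rank-1 pixel/patch; the runner-up provides the rank-2 index map.
```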
After ranking the pixels/patches, the rank-\\(r\\) index map \\(n_{r}(i,j)\\), representing the index \\(n\\) of the image with rank \\(r\\) at the pixel/patch location \\((i,j)\\), can be generated. It is preferred that only the rank-1 and rank-2 index maps are generated and kept for use in generating the cloud-free and cloud shadow-free mosaics.

### Merging of Sub-images

The rank-1 and rank-2 index maps generated by the pixel/patch ranking procedure are used to merge the input multi-scenes that have been processed by the grey-level balancing. If the pixel at a given location has been classified as a \"vegetation pixel\", the pixels from the rank-1 image and the rank-2 image at that location may be averaged together in order to avoid sudden spatial discontinuities in the final mosaic image. Otherwise, the pixels from the rank-1 image are used.

### Mosaic Production

The basic idea of the cloud-free and cloud shadow-free mosaic algorithm is to mask the clouds and cloud shadows, select good image data from the different scenes in a selected set of co-registered cloudy images, and mosaic the \"clean\" data together. If the clean data pixels from different scenes are simply consolidated without additional processing, the final image will be very \"speckled\" and appear discontinuous. Therefore, a patch of pixels rather than individual pixels is chosen to form the final mosaic. The final mosaic is composed from the images with cloud and cloud-shadow masks and the merged image generated by the merging of sub-images procedure. To suppress the visibility of the seam line between adjacent patches, the residual intensity differences between the patches are first balanced using the intensity histograms of local patches. Secondly, the patches are made to overlap at their boundaries and the system blends the image intensity from one patch to another in these overlapping regions. Finally, the images resulting from the mosaic process are geo-referenced to a map; the mosaic production procedure then places the mosaicked image onto the map.

Figure 1: A schematic diagram of the cloud-free and cloud shadow-free mosaic generating system.

## 3 Results and Conclusions

An example of applying the cloud-free mosaicking algorithm to six cloudy SPOT panchromatic images is shown in Figure 2. Figure 3-(a) shows a mosaic of cloudy SPOT multispectral images over Singapore and the southern part of Peninsular Malaysia. The resulting cloud-free and cloud shadow-free mosaic is shown in Figure 3-(b). The mosaicking algorithm has also been tested on 1-m resolution IKONOS colour images. In this paper, we have presented the method for producing cloud-free and cloud shadow-free multi-scene mosaics from cloudy SPOT and IKONOS images. The system has been implemented successfully over a large area covered by about 50 SPOT scenes. The success of the cloud-free and cloud shadow-free mosaic depends on the choice of the shadow, vegetation and cloud intensity thresholds. Confusions arise when high-albedo open land surfaces or buildings are encountered. Such confusions can be resolved by making use of size and colour information to classify the pixels/patches into a few broad land cover classes.
In many cases the clouds that need to be masked out are much larger than individual buildings, so an automatic method was developed to calculate the size of the bright patches in order to eliminate improper masking of these buildings. As a result, this procedure allows a few small cloud patches to remain in the mosaic. A large, very bright and white patch of open land surface will still be considered as cloud. When such a bright and white patch of open land does not contain cloud-shadow, it is still possible for this patch of open land surface to be selected and used in forming the final mosaic. The approximate location of cloud shadow can be predicted based on the knowledge of the solar illumination direction, the sensor viewing direction and the cloud height.

Figure 2: An example of applying the cloud-free, cloud shadow-free mosaicking algorithm to six cloudy SPOT panchromatic images (a–f) and the cloud-free, cloud shadow-free composite (g) of the same area.

Figure 3-(b): Multi-scene cloud-free and cloud shadow-free mosaic of Singapore and Southern Peninsular Malaysia generated using 48 SPOT multispectral scenes. (SPOT images © CNES, acquired and processed by CRISP, reproduced under licence from SPOT IMAGE)
The humid tropical region is always under partial or complete cloud cover. As a result, optical remote sensing images of this region always encounter the problem of cloud cover and the associated shadows. In this paper, an operational system for producing cloud-free and cloud shadow-free image mosaics from cloudy optical satellite imagery is presented. The inputs are several cloudy images of the same area acquired by the IKONOS or SPOT satellites. By mosaicking the cloud-free and cloud shadow-free areas in the set of images, a reasonably cloud-free and cloud shadow-free composite scene can be made. This technique is especially valuable in tropical regions with persistent and extensive cloud cover.

Keywords: Mosaic, Feature, Detection, Algorithms, High Resolution, IKONOS, SPOT, Imagery
**Verification of Space Weather Forecasts issued by the Met Office Space Weather Operations Centre**

**M. A. Sharpe1, S. A. Murray2**

[MISSING_PAGE_POST]

## 1 Introduction

In recent decades there have been significant technological advances upon which governments, industries and organizations have become increasingly dependent. Many of these advances are vulnerable to space weather, to the extent that security and/or safety could be severely compromised when significant events occur. After severe space weather was added to the UK's National Risk Register of Civil Emergencies in 2011, the UK government sought to establish a 24/7 space weather forecasting centre, and the Met Office Space Weather Operations Centre (MOSWOC) was officially opened on 8th October 2014. Part of MOSWOC's remit is to issue a daily Space Weather Technical Forecast (SWTF) to help affected UK industries and infrastructure build resilience to space weather events; issued at midnight with a midday update, it contains a:

* space weather activity analysis;
* four-day solar activity summary;
* geo-magnetic storm forecast (GMSF);
* coronal mass ejection (CME) warning service;
* X-ray flare forecast (XRFF);
* solar radiation storm forecast; and
* high energy electron event forecast.

Verification of these products is crucially important for forecasters, users, modelers and stakeholders because it facilitates an understanding of the strengths and weaknesses of each forecast product. Ideally, verification should be performed in near-real time to enable instant forecaster feedback because this enables:

* necessary corrections to be made in a timely fashion; and
* operational forecasters to use the results to further develop their forecasting skills.

As a member of the International Space Environment Service ([http://www.spaceweather.org](http://www.spaceweather.org)), the Met Office is helping to coordinate verification efforts with:

1. the NASA Community Coordinated Modelling Center on the implementation of the Flare Scoreboard ([https://ccmc.gsfc.nasa.gov/challenges/flare.php](https://ccmc.gsfc.nasa.gov/challenges/flare.php)); a system to enable the automatic upload of flare predictions and provide immediate verification to intercompare the forecasts from participating organisations;
2. the EU on the development project Flare Likelihood And Region Eruption foreCASTing (FLARECAST; [http://www.flarecast.eu](http://www.flarecast.eu)) to automatically forecast and verify X-ray flares.

Some initial MOSWOC flare forecast verification has been undertaken as part of the FLARECAST project (_Murray et al._, 2017); however, the real-time operational verification system that has now been developed for use by the MOSWOC forecasters was not fully explored in that work. Operational verification of most SWTF products is planned, and progress to date includes investigations into the skill of GMSFs, XRFFs and Earthbound-CME warnings, with near-real-time verification of the former two products already operational using the Warnings Verification System (WVS) (_Sharpe_, 2015) and the Area Forecast Verification System (AFVS) (_Sharpe_, 2013). The methodologies used to verify GMSFs and XRFFs are outlined in Section 2 and results for the period between April 2015 and October 2016 are presented in Section 3. Section 4 contains brief conclusions and an outline of further work.
## 2 Verification Methodologies

### Geo-Magnetic Storm Forecasts

The GMSF is both probabilistic and multi-category, each category referring to a different geomagnetic activity level, measured using the K index at 13 observing sites stationed across the globe, from which a planetary K value (Kp) is evaluated. GMS levels G1/G2 (Minor/Moderate) denote Kp values of 5-, 5o, 5+, 6-, 6o or 6+; G3 (Strong) denotes Kp values of 7-, 7o or 7+; G4 (Severe) denotes Kp values of 8-, 8o or 8+; and G5 (Extreme) denotes Kp values of 9-, 9o or 9+. Additionally, G0 denotes Kp values of 4+ or below. Forecasters issue GMSFs by first analyzing images to identify CMEs and coronal holes and then using the Wang-Sheeley-Arge Enlil model (_Edmonds_, 2013) to identify high-speed solar wind streams and CMEs. However, the associated forecasts of GMSs are limited because values of the z-component of the sun's magnetic field are unknown (except as measured by the ACE/DSCOVR satellites). One further source of information is an Autoregressive Integrated Moving Average model of Kp values; however, there is no model which accurately predicts Kp fluctuations. Consequently, forecasting is essentially a subjective process which continues to rely heavily upon the experience of operational forecasters. Table 1 shows the GMSF within the 00Z issue of the SWTF on 1st October 2016. The columns from left to right display:

1. a single-word description of the GMS type;
2. the G-scale level associated with this type of GMS;
3. whether each type of GMS has been observed during the previous 24 h period;
4-7. forecast probabilities for the likelihood that each type of GMS will occur during four consecutive days into the future.

During day 1 (Table 1, column 4) the predicted probability of a GMS level \(\geq\) G1 was 55%, \(\geq\) G3 was 5%, \(\geq\) G4 was 1% and \(\geq\) G5 was 1%. However, the minimum forecast probability is stipulated at 1%; therefore, in this analysis a forecast value of 1% is interpreted as 0%. The probabilities in Table 1 refer to the chance that the GMS level will be reached or exceeded at least once during the 24 h period. Therefore, column 4 forecasts that the probability associated with G0 is 100% - 55% = 45%; G1/G2 is 55% - 5% = 50%; G3 is 5% - 0% = 5%; and G4 and G5 is 0%. GFZ Helmholtz Centre Potsdam Kp values are used as the truth data source for GMSF verification; however, these values have a one-month latency period which makes them unsuitable for near-real-time verification. Consequently, estimated Kp values from the Space Weather Prediction Center (SWPC) are used to provide valuable feedback to MOSWOC forecasters. During the first ten months of 2016, SWPC and GFZ Kp values differed on 81 days. The verification metric most commonly associated with multi-category probabilistic forecasts is the Ranked Probability Score (\(RPS\)) (_Epstein_, 1969; _Murphy_, 1971). The RPS is defined by

\[RPS=\sum_{n=0}^{5}(P(G_{n})-O(G_{n}))^{2}; \tag{1}\]

\begin{table}
\begin{tabular}{c c c c c c c}
GMS Probability (Exceedance) & Level & Past 24 Hours (Y/N) & Day 1 (00-24 UTC) (\%) & Day 2 (00-24 UTC) (\%) & Day 3 (00-24 UTC) (\%) & Day 4 (00-24 UTC) (\%) \\
Minor or Moderate & G1 to G2 & Y & 55 & 35 & 30 & 10 \\
Strong & G3 & N & 5 & 5 & 1 & 1 \\
Severe & G4 & N & 1 & 1 & 1 & 1 \\
Extreme & G5 & N & 1 & 1 & 1 & 1 \\
\end{tabular}
\end{table} Table 1: GMSF contained within the 00Z SWTF issued on 1st October 2016.
where in the present case \(P(G_{n})\) is the forecast probability that the maximum GMS level to be observed during the 24 h period is \(\leq G_{n}\) (where \(n=0,1/2,3,4\) or \(5\)) and \(O(G_{n})\) is \(1\) if the maximum observed level is \(\leq G_{n}\) and \(0\) otherwise. The \(RPS\) (which ranges from \(1\) to a perfect score of \(0\)) is calculated separately for every day of each forecast, and a mean value (\(\overline{RPS}\)) is obtained by simply averaging the \(RPS\) values calculated for a large number of forecasts; \(90\%\) confidence intervals are produced using simple bootstrapping with replacement. The \(RPS\) provides a very valuable approach to the problem of verifying multi-category probabilistic forecasts; however, a reference is required against which to benchmark the performance. Three common reference forecast choices are: random chance, persistence and climatology. Short-term climatology (subsequently referred to as a prediction period) has been chosen for the reference in the present study, and \(RPS_{ref}\) has been evaluated by replacing \(P(G_{n})\) in Equation (1) with the frequency of occurrence of GMSs over a prediction period encompassing the most recent 180 days. 180 days was chosen following an investigation (outlined in Section 3) which revealed it to be an accurate prediction period for GMSs. The use of a reference forecast enables the Ranked Probability Skill Score (\(RPSS\)), defined by

\[RPSS=1-\frac{\overline{RPS}}{\overline{RPS}_{ref}} \tag{2}\]

to be evaluated; this score ranges from \(-\infty\) to a perfect score of \(1\), with \(RPSS>0\) implying that the forecast is more skilful than the reference. Confidence intervals for this statistic (calculated using bootstrapping with replacement) indicate whether there is any statistically significant evidence to suggest that the forecast is more skilful than the reference. Verification of the GMSF has been performed for each forecast range by the AFVS (_Sharpe_, 2013) using daily maximum values of Kp. The AFVS was originally designed to verify a range of forecast categories against a truth data distribution (representing the conditions throughout an area); however, when presented with a truth data source containing only a daily maximum it can also be used to verify a daily maximum forecast like the GMSF. An alternative verification approach is to treat the GMSF as a probabilistic warning service, verifying the forecast probabilities associated with each GMS level separately. In practice, however, only categories \(\geq\) G1 may be evaluated, because the more severe levels occur so rarely that robust statistics cannot be obtained over the available time frame. In the present study the Warnings Verification System (WVS) (_Sharpe_, 2015) has been used to verify the GMSF as a service, using Relative Operating Characteristic (ROC) plots and reliability diagrams (_Jolliffe and Stephenson_, 2012). The WVS is a flexible system originally developed to verify terrestrial weather; this system allows the analysis of near-hits by way of flexing thresholds in terms of space, time, intensity and confidence. However, in the present study flexing has only been applied in terms of intensity, so that Kp values of 4-, 4o and 4+ are each categorized as a 'low-miss' except when they occur during a warning, when they are categorized as a 'low-hit'. The only other flex that could be applied in this analysis is time, because confidence flexing cannot be applied (there is only one definitive Kp value) and spatial flexing cannot be applied (no near-Earth Kp values are available against which to assess the forecast).
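As a concrete illustration of Equations (1) and (2), the short Python sketch below computes the RPS for a single forecast day and the RPSS from mean scores. It is a minimal sketch only, not the AFVS implementation: the category ordering and the cumulative convention follow the definitions above, and the example numbers are the day-1 category probabilities derived from Table 1.

```python
import numpy as np

LEVELS = ["G0", "G1/2", "G3", "G4", "G5"]   # ordered GMS categories

def rps(category_probs, observed_level):
    """Ranked Probability Score, Eq. (1): squared differences between the
    cumulative forecast and cumulative observed distributions."""
    p = np.asarray(category_probs, dtype=float)   # per-category probabilities (sum to 1)
    o = np.zeros_like(p)
    o[LEVELS.index(observed_level)] = 1.0         # observed maximum category
    return float(np.sum((np.cumsum(p) - np.cumsum(o)) ** 2))

def rpss(mean_rps_forecast, mean_rps_reference):
    """Ranked Probability Skill Score, Eq. (2)."""
    return 1.0 - mean_rps_forecast / mean_rps_reference

# Example: day-1 probabilities from Table 1 (G0 45%, G1/2 50%, G3 5%, G4 0%, G5 0%)
# verified against an observed daily maximum of G1/2.
print(rps([0.45, 0.50, 0.05, 0.0, 0.0], "G1/2"))   # 0.205
```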
### X-ray Flare Forecasts

The second aspect of the SWTF considered in the present study is the X-ray Flare Forecast (XRFF), a sample of which is shown in Table 2. This forecast is displayed similarly to the GMSF shown in Table 1; from left to right, column(s):

1. contains a description of the type of XRF;
2. identifies the class associated with each type of flare;
3. identifies whether any XRFs have been observed during the previous 24 h period;
4-7. contain forecast probabilities that each type of XRF will occur during four consecutive days into the future.

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|}
X-Ray Flares Probability & Level & Past 24 Hours (Y/N) & Day 1 (00-24 UTC) (\%) & Day 2 (00-24 UTC) (\%) & Day 3 (00-24 UTC) (\%) & Day 4 (00-24 UTC) (\%) \\
\hline
Active & M Class & Y & 28 & 28 & 28 & 28 \\
Very Active & X Class & N & 2 & 2 & 2 & 2 \\
\hline
\end{tabular}
\end{table} Table 2: XRFF contained within the 00Z SWTF on 21st July 2016.

Forecast probabilities for each active region are calculated using a Poisson-statistics technique (_Gallagher et al._, 2002) based on historical flare rates for forecaster-assigned McIntosh classifications (_McIntosh_, 1990). These active region probabilities are combined to give a full-disk forecast, i.e., the chance of a flare occurring somewhere on the solar disk in the next 24 hours. The resulting model probability can be edited by the MOSWOC forecaster using their expertise before being issued as the Day 1 forecast, as shown in Table 2. The Day 2-4 forecasts are purely based on forecaster expertise. More details about the forecasting method can be found in _Murray et al._ (2017). The XRF classes below M-class are A-class, B-class and C-class; however, these types of flare are not included in the forecast. In the soft X-ray range, flares are classified as A-, B-, C-, M-, or X-class according to the peak flux measured near Earth by the GOES spacecraft over 1-8 Å (in W m\(^{-2}\)). Each class has a peak flux ten times greater than the preceding one, with X-class flares having a peak flux of order 10\(^{-4}\) W m\(^{-2}\). During each 24 h period of Table 2 an M-class flare is predicted to occur with a probability of 28% and an X-class flare with a probability of 2%. There is a subtle, yet important difference between the values contained within Table 1 and Table 2; in the former the probabilities denote the chance of exceeding each level, whereas in the latter the probabilities indicate the chance that each class will be observed at least once during the 24 h period. Consequently, using \(P(X)\) and \(P(M)\) to denote the probabilities associated with X-class and M-class flares (as they appear in Table 2), it is theoretically possible for \(P(X)\) to be greater than \(P(M)\); whereas \(P(G5)>P(G4)\) is impossible. Although the values in Table 2 denote the probabilities of observation (rather than exceedance), user interest will lie mainly in the maximum flux class to occur during each 24 h period; therefore, some manipulation of the values displayed in Table 2 is required. The following paragraph derives expressions for \(P(\mbox{maximum flux is A, B or C-class})\), \(P(\mbox{maximum flux is M-class})\) and \(P(\mbox{maximum flux is X-class})\) from the probabilities that appear in Table 2 (denoted by \(P(M)\) and \(P(X)\)).
\\(P(\\mbox{maximum flux is A,B or C-class}),P(\\mbox{maximum flux is M-class})\\) and \\(P(\\mbox{maximum flux is X-class})\\) from the probabilities that appear in Table 2 (denoted by \\(P(\\mbox{M})\\) and \\(P(\\mbox{X})\\)). Evaluating the probability that the maximum flux is X-class is simple; because X is the \\[P(\\mbox{maximum flux is X-class})\\ =\\ P(X). \\tag{1}\\] To calculate the probability that the maximum flux is M-class it is first necessary to observe that \\[P(\\bar{M})=P(\\mbox{minimum flux is X-class})\\ +\\ P(\\mbox{maximum flux is A, B or C-class}) \\tag{2}\\] where \\(P(\\bar{M})=1-P(\\mbox{M})\\) is the probability that M-class will not occur. The XRF truth data source is long wave radiation observations reported by the Geo-Orbiting Earth Satellite (GOES-15) which takes measurements every 60-seconds. X-class flares are very rare, for example Wheatland et al (2005) noted that out of 10,226 days from 1975 to 2003, M-class flares occurred on \\(\\sim\\)25% of those days whereas X-class events occurred on only \\(\\sim\\)4% of those days. Analysis for the present study reveals that X-class flares occurred on just over 2% of days between 2010 and 2015. Therefore, it is relatively safe to assume that the first term on the right hand side of Equation (2) is zero since it is virtually impossible for the minimum 60-second observation during a 24 h period to be X-class and consequently, \\[P(\\mbox{maximum flux is A, B or C-class})\\ =\\ 1\\ -\\ P(M). \\tag{3}\\] The final expression to obtain is \\(P(\\mbox{maximum flux is M-class})\\); since \\[P(\\mbox{maximum flux is M-class})\\ =\\ 1-\\ P(\\mbox{maximum flux is not M-class})\\] it follows that \\[P(\\mbox{maximum flux is M-class})=\\ 1-(P(X)+\\ P(\\mbox{Maximum flux is A, B or C-class}))\\] which, on application of Equation (3), gives \\[P(\\mbox{maximum flux is M-class})\\ =\\ P(M)-P(X). \\tag{4}\\] Equations (1), (3) and (4) are used to calculate maximum XRFF probabilities which are used by the AFVS to verify the skill using the RPS and the RPSS. For the latter, a reference forecast is necessary and (as with the GMSF) the frequency of XRF class activity during a rolling prediction period is used for this purpose. The analysis outlined in Section 3 suggest that a prediction period of 120 days is a suitable choice. The XRFF is also assessed by the WVS, verifying separately the forecast probabilities associated with M-class and X-class flares; in practice however, only M-class flares can be evaluated because during the trial period X-class flares occurred too rarely to facilitate the calculation of robust statistics. As was the case for GMS verification, only intensity flexing is applied, for which a low-hit threshold of 10\\({}^{\\mbox{-}6}\\)Wm\\({}^{\\mbox{-}2}\\) (C-class) is used. ## 3 Results Two 4-day SWTFs are issued each day; a main forecast is issued at 00Z and an update at 12Z. It is inappropriate to analyze the performance on day 1 by amalgamating the update with the main forecast because the day 1 component to this update is a 12 h (rather than a 24 h) forecast; therefore, only 00Z forecasts are considered in the present study. ### Geo-Magnetic Storms The RPS is used to assess the skill of MOSWOC forecasts; however, as discussed in Section 2, a reference forecast is required against which to benchmark the score by calculating the RPSS. Arguably the simplest (and most basic) choice is random chance since its production requires no prior knowledge or information. 
## 3 Results

Two 4-day SWTFs are issued each day; a main forecast is issued at 00Z and an update at 12Z. It is inappropriate to analyze the performance on day 1 by amalgamating the update with the main forecast because the day 1 component of this update is a 12 h (rather than a 24 h) forecast; therefore, only 00Z forecasts are considered in the present study.

### Geo-Magnetic Storms

The RPS is used to assess the skill of MOSWOC forecasts; however, as discussed in Section 2, a reference forecast is required against which to benchmark the score by calculating the RPSS. Arguably the simplest (and most basic) choice is random chance, since its production requires no prior knowledge or information. The skill associated with a forecast generated by random chance is usually low and consequently it does not predict events well. However, despite this, random chance is used (implicitly) in a number of popular verification statistics such as the Equitable Threat Score and the Peirce and Heidke Skill Scores (_Jolliffe and Stephenson_, 2012). Persistence is another popular reference choice, again (no doubt) because it requires little prior knowledge or information; persistence forecasts usually predict that tomorrow will be the same as today. When events are rare or conditions are benign, persistence can produce a very favorable score; however, it is a completely ineffective predictor of the onset of severe events. The third most popular choice for a reference forecast is climatology. This option is less common because it requires prior knowledge of the conditions over a long time frame; indeed, in meteorology it is common to calculate climatology over a 30-year period. This reference sets the probability of each forecast category to its climatological frequency of occurrence. Usually the climatological period is fixed in advance; for example, in meteorology it is common to compare the latest season against the distribution formed by accumulating each corresponding season over the 30-year period between 1981 and 2010 (_National Climate Information Centre_, 2017). The climatology of solar activity is dissimilar to meteorological climatology, so it is not valid to follow this methodology exactly; however, it is unclear which prediction period is most appropriate. Consequently, different period lengths (from 30 to 360 days) have been analyzed to obtain an accurate predictor. For each prediction period, definitive GFZ data was used to calculate the frequency of occurrence of each GMS level; however, GFZ Kp values are only available following an (approximately) one-month latency period (although SWPC produce real-time estimates). The near-real-time nature of GMS forecast verification precludes the use of truth data which is unavailable to the forecaster; therefore, a one-month latency period has been built into this analysis. Extensive checking confirmed that each prediction period length appeared to produce a similar 12-month rolling mean RPS value. Therefore, the minimum RPS value was calculated on only the first day of every month throughout the 16-year trial period. This method identified 180 days as the best performing prediction period length. Figure 1 (which uses definitive Kp values from GFZ) displays the rolling 180-day frequency of occurrence of G0, G1-2, G3, G4 and G5 for this 16-year trial period (beginning in January 2001), which also includes the period of MOSWOC forecasts analyzed in the present study. Figure 1 clearly reveals the extent to which G0 dominates, although there has been a decrease in its frequency in recent months; in the 180-day period to February 2016 G0 was the maximum GMS level to occur on only 78.4% of occasions, down from a previous high of 96.0% in the 180-day period to October 2014.

Figure 1: Rolling 180-day frequency of occurrence of daily maximum GMS level.

During the period of SWTF analysis (April 2015 to October 2016) the maximum GMS level to occur in the 180-day prediction period was:

* G0 on between 77.8% and 88.9% of days;
* G1-2 on between 11.1% and 19.4% of days;
* G3 on between 0% and 2.8% of days;
* G4 on between 0% and 1.7% of days; and
* G5 was not observed.

Although it is inappropriate to calculate the RPSS for individual forecasts, monthly or annual values are available via Equation (2) following an evaluation of \(\overline{RPS}\) and \(\overline{RPS}_{ref}\) - the latter being calculated by substitution of the PDFs in Figure 1 for the forecast. Figure 2 displays rolling 12-monthly RPSS estimates (\(\times\)) and 90% confidence intervals for each day of the GMSF throughout the April 2015 to October 2016 period of analysis. Each point in this figure is plotted against the final month of the 12-month period to which it corresponds; consequently, March 2016 represents the period from April 2015 to March 2016, and October 2016 represents the period from November 2015 to October 2016. Although the RPSS is greatest on day 1, the majority of the confidence intervals associated with it cross the green no-skill line. The lower tail of the intervals for October 2016 does not intersect with this line, implying evidence at the 90% level to indicate that the skill during this 12-month period is greater than that associated with a rolling 180-day prediction period of GMS activity. However, all remaining confidence intervals intersect the green line, indicating that there is currently no evidence to suggest that these forecast days are more skilful than a rolling 180-day prediction period at identifying the correct maximum daily GMS level.

Figure 2: Rolling 12-monthly RPSS values (\(\times\)) with 90% bootstrapped confidence intervals for each day of the GMSF. Days 1 to 4 are indicated by solid, long dashed, short dashed and dotted lines, respectively.

Figure 3 displays Relative Operating Characteristic (ROC) plots calculated using GMSFs issued between April 2015 and October 2016; these plots describe the skill associated with each day of the forecast at correctly discriminating the days on which Kp reached or exceeded 5- (G1). A ROC curve is simply a plot of the Hit Rate (the proportion of events that were forecast) against the False Alarm Rate (the proportion of non-events that were incorrectly forecast), both of which range between 0 and 1. The Hit Rate is positively orientated whereas the False Alarm Rate is negatively orientated. Each point on a ROC curve represents the value of these two statistics at a different probability level. If action is taken when the forecast probability of an event is low, the Hit Rate will be relatively large because events are forecast more frequently; however, the False Alarm Rate will also be relatively large because many of these forecasts will be false alarms. As the action/no-action forecast probability threshold is increased, the values of both statistics reduce (tending to zero when action is never taken) and a ROC curve is formed by drawing a line through all these points. The further this curve resides above and to the left of the leading diagonal, the more skill the forecast has at correctly distinguishing events from non-events; however, a curve that resides close to the diagonal indicates that the forecast cannot distinguish events from non-events. Although the WVS assesses the performance of all levels identified in the GMSF, only G1/2 is considered in the present study because the performance statistics associated with more severe levels are insufficiently robust for detailed analysis due to their low base rates.
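Before examining the individual curves, the ROC construction just described can be summarised in a few lines. The sketch below is illustrative only (the names and the decile threshold sweep are our own choices, not the WVS implementation): it computes the Hit Rate and False Alarm Rate at each action/no-action probability threshold.

```python
import numpy as np

def roc_points(forecast_probs, observed_events, thresholds=None):
    """Hit Rate and False Alarm Rate for a sweep of action/no-action thresholds.
    forecast_probs: issued probabilities (0-1), one per day.
    observed_events: boolean array, True where the event (e.g. Kp >= 5-) occurred."""
    p = np.asarray(forecast_probs, dtype=float)
    o = np.asarray(observed_events, dtype=bool)
    if thresholds is None:
        thresholds = np.arange(0.0, 1.01, 0.1)      # decile thresholds
    points = []
    for t in thresholds:
        warn = p >= t
        hits = np.sum(warn & o)
        misses = np.sum(~warn & o)
        false_alarms = np.sum(warn & ~o)
        correct_rejections = np.sum(~warn & ~o)
        hit_rate = hits / max(hits + misses, 1)
        false_alarm_rate = false_alarms / max(false_alarms + correct_rejections, 1)
        points.append((false_alarm_rate, hit_rate))
    return points
```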
There are three ROC-curves in each sub-plot: the green curve represents the standard (un-flexed) verification methodology, whereas the black lines apply flexing using low-hit and low-miss categories. All the points within each sub-plot of Figure 3 reside above the grey diagonal no-skill line, indicating that each day of the GMSF has skill at discriminating events of G1 or above. The black line formed by + points awards a hit to any warning during which the maximum Kp value is at least 4-, but only registers a missed event when G1 is not forecast and the maximum Kp value is at least 5-. The black line formed by \(\Box\) points also awards a miss when Kp is at least 4- and no warning is issued. The purpose of the exclusive-flexed curve (+ points) is to give the forecaster the benefit of the doubt when Kp values occur which are almost classified as a G1, whilst not penalizing near-G1 events; whereas the inclusive-flexed curve (\(\Box\) points) assesses whether the forecasts are inadvertently using a different threshold to Kp = 5-.

Figure 3: ROC-plots generated for GMS level events \(\geq\) G1 using the (\(\times\)) un-flexed, the flexed including low-misses (\(\Box\)) and the flexed excluding low-misses (\(+\)) technique for GMSFs on (a) day 1; (b) day 2; (c) day 3 and (d) day 4, issued between April 2015 and October 2016.

Comparing each pair of \(+\) and \(\Box\) points at every probability threshold reveals that the Hit Rate values increase significantly whereas the False Alarm Rate remains virtually unchanged - this is a consequence of the low base rate. The curve formed by the \(+\) points indicates that during a significant proportion of the days on which G1 was predicted the maximum Kp value was either 4-, 4o or 4+. In each plot the exclusive-flexed curve (\(+\)) shows better discrimination than the green un-flexed curve. This clearly indicates that maximum daily Kp values of 4-, 4o or 4+ often occur when G1 is forecast with a non-zero probability. The inclusive-flexed curve (\(\Box\)) amounts to simply reducing the Kp event threshold to 4- (from 5-); it is interesting to observe that the resulting ROC-curves virtually coincide with the green un-flexed curves. A comparison of each point in sub-figure (a) reveals that inclusive-flexed values of the Hit Rate and False Alarm Rate are smaller than their green un-flexed counterparts. The fact that the curves are virtually coincident is an indication that the decrease in the proportion of correctly warned-for events is matched by an increase in the proportion of forecasts that were false alarms; consequently, the ability with which events are correctly identified is almost unchanged. In other words, the GMSF is equally skilled at identifying days on which Kp \(\geq\) 4- as it is at identifying days on which Kp \(\geq\) 5-. The same conclusion also applies to sub-figures (c) and (d) (forecast days 3 and 4); however, sub-figure (b) appears to indicate that day 2 (identified as the worst performer in Figure 4) has (slightly) more skill at identifying Kp = 4-. The areas under the un-flexed (\(\times\)), inclusive-flexed (\(\Box\)) and exclusive-flexed (\(+\)) ROC-curves on forecast day:

1. are 0.764, 0.742 and 0.878;
2. are 0.667, 0.699 and 0.822;
3. are 0.688, 0.680 and 0.827; and
4. are 0.664, 0.666 and 0.809 respectively.
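Such areas can be obtained from the ROC points by trapezoidal integration. A minimal sketch, building on the illustrative `roc_points` helper above (and not a description of how the quoted values were actually computed), is:

```python
import numpy as np

def roc_area(points):
    """Area under a ROC curve by the trapezoidal rule.
    points: iterable of (false_alarm_rate, hit_rate) pairs.
    In practice the end points (0, 0) and (1, 1) are appended before integrating."""
    far, hr = zip(*sorted(points))    # sort by False Alarm Rate
    return float(np.trapz(hr, far))
```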
Figure 4 displays reliability diagrams for the period between April 2015 and October 2016, to assess the accuracy of the probabilities with which the GMSF predicts days during which the maximum Kp value is at least 5- (G1). In both sub-figures: the grey dotted diagonal indicates perfect reliability; the region between the grey dashed lines denotes skill (in the Brier Score sense); the horizontal dot-dashed line of no-resolution denotes the frequency of occurrence of G1; and the solid, long-dashed, short-dashed and dotted black lines show the reliability of forecast days 1 through 4 respectively. The dark grey, mid grey, light grey and pale grey histograms denote the frequency with which events were predicted on days 1 through 4 respectively. In sub-figure (a) the verifying Kp event threshold is 5-, whereas in (b) it is 4-; consequently, these plots are the counterpart to the un-flexed (\(\times\)) and inclusive-flexed (\(\Box\)) ROC-curves in Figure 3. The horizontal dot-dashed lines reveal that the maximum daily Kp value was \(\geq\) 5- on 18% of occasions and \(\geq\) 4- on 39% of occasions. The histograms (which are identical in both figures because the forecast is identical) reveal that G1 was rarely forecast with a high probability, especially at longer range. On days 1, 2, 3 and 4, probabilities \(\geq\) 50% were issued on 17%, 13%, 9% and 7% of occasions and probabilities \(\geq\) 90% on only 15, 3, 1 and 1 occasions respectively. Consequently, there is low confidence associated with the points in these figures that represent forecasts of higher confidence; nevertheless (with the exception of the 0-10% probability bin) almost all remaining points in sub-figure (a) lie below the no-skill region - a clear indication that G1 was over-predicted. The equivalent lines in sub-figure (b) lie above the dotted-diagonal (perfect skill) line because the event threshold used in this plot is a Kp value of 4- (rather than 5-); however, although the majority of these points indicate under-forecasting, many of them lie in the region between the two grey off-diagonal dashed lines (the forecast-skill region). This appears to indicate that the forecast is more reliable at correctly identifying Kp values \(\geq\) 4- than those \(\geq\) 5-.

Figure 4: Reliability diagrams for GMSFs of G1 issued between April 2015 and October 2016 on: day 1 (solid/dark grey), day 2 (long-dashed/mid-dark grey), day 3 (short-dashed/mid-grey) and day 4 (dotted/light grey); when verified against daily maximum Kp values of at least (a) 5- and (b) 4-.

The Brier score can be decomposed into three components (_Jolliffe and Stephenson_, 2012); one of these is a negatively orientated measure (between 0 and 1) of the reliability (\(REL\)), given by

\[REL=\sum_{k=1}^{K}\frac{n_{k}}{N}\Big{(}\frac{o_{k}}{n_{k}}-P_{k}\Big{)}^{2}. \tag{5}\]

In this expression \(K\) denotes the number of probability bins, \(N\) is the total number of forecast days, \(n_{k}\) is the number of times a geo-magnetic storm was forecast with a probability \(P_{k}\) and \(o_{k}\) is the total number of times a geo-magnetic storm was observed given that it was forecast with a probability \(P_{k}\). \(REL\) for forecast day:

1. is 0.024 in (a) and 0.009 in (b);
2. is 0.025 in (a) and 0.014 in (b);
3. is 0.019 in (a) and 0.018 in (b);
4. is 0.013 in (a) and 0.023 in (b).

These scores (being negatively orientated) confirm that for forecast days 1 and 2 the GMSF appears to provide a slightly more reliable forecast for lower Kp (4- to 4+) events.
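A small, purely illustrative Python sketch of the reliability term in Equation (5) groups the issued probabilities into bins and accumulates the weighted squared differences between each bin's observed frequency and its probability; the decile binning is an assumption of this sketch, not a statement about how the paper's values were produced.

```python
import numpy as np

def reliability(forecast_probs, observed_events, bin_edges=None):
    """Reliability component of the Brier score, Eq. (5)."""
    p = np.asarray(forecast_probs, dtype=float)
    o = np.asarray(observed_events, dtype=float)    # 1.0 if the event occurred, else 0.0
    if bin_edges is None:
        bin_edges = np.linspace(0.0, 1.0, 11)       # ten decile bins
    n_total = len(p)
    rel = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (p >= lo) & (p < hi) if hi < 1.0 else (p >= lo) & (p <= hi)
        n_k = np.sum(in_bin)
        if n_k == 0:
            continue
        p_k = np.mean(p[in_bin])                    # representative bin probability P_k
        o_k = np.sum(o[in_bin])                     # observed occurrences o_k in the bin
        rel += (n_k / n_total) * (o_k / n_k - p_k) ** 2
    return rel
```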
### X-ray Flares

A similar analysis to that described in Section 3.1 was also undertaken to evaluate the most suitable rolling prediction period for the evaluation of a reference for XRF forecasts. Rolling mean RPS values were again used to identify 120 days as the best prediction period. Examination of flare occurrence over the solar cycle (see, e.g., the histograms of Figure 5 in _Wheatland_, 2005) confirms that a relatively short prediction period is a sensible choice, since periods longer than 12 months would prove problematic during the sharp rising and declining phases; therefore, daily prediction period lengths between 30 and 360 days were considered. When undertaking this analysis for GMSFs the truth data were only available following an (approximately) one-month latency period; however, the truth data source for XRFFs is GOES long-wave radiation flux, and minute-by-minute values for these are available instantly. Therefore, no such latency period is appropriate because the truth data are immediately available to the forecaster. As was the case for GMSFs, extensive checking of every considered prediction period length appeared to produce similar 12-month rolling mean RPS values; however, in the present case, an examination of minimum mean RPS values on the first day of each month throughout the 16-year trial period did not reveal any optimal prediction period length. Therefore, again taking into consideration the solar cycle variation highlighted in Figure 5 of _Wheatland_ (2005), a 120-day prediction period was chosen. Figure 5 displays the rolling 120-day frequency of occurrence of ABC-class, M-class and X-class flares from January 2001, including the period of MOSWOC forecasts analyzed in the present study; it clearly reveals the extent to which ABC-class flares dominate, and their frequency of occurrence has noticeably increased in recent months. During the past five years the minimum 120-day frequency of occurrence of ABC-class flares was 56.8% - observed in March 2014 - and the maximum of 99.3% was observed in July 2016. During the period of SWTF analysis the maximum XRF class to occur in the 120-day prediction period was:

* ABC on between 70.0% and 99.2% of days;
* M on between 0.8% and 28.3% of days;
* X on between 0% and 1.7% of days.

Rolling 12-monthly RPSSs for each day of the XRFF (together with 90% confidence intervals, calculated using bootstrapping with replacement) are displayed in Figure 6. These scores have been evaluated via Equation (2), using the PDFs in Figure 5 to calculate \(\overline{RPS}_{ref}\); forecast days 1 to 4 are shown as solid, long-dashed, short-dashed and dotted lines respectively. All point estimates of the RPSSs on days 1 and 2 lie above the green no-skill line, as do the majority of estimates on days 3 and 4; however, all their accompanying confidence intervals cross this line. Therefore, there is little evidence to suggest that the skill of the forecast at correctly identifying the maximum daily XRF class exceeds that obtained by using a rolling 120-day prediction period.
Similarly, the confidence intervals provide little evidence to suggest that any one forecast day is more skilful than another; however, the estimates alone suggest that day 1 tends to be more accurate than subsequent forecast days. What is obvious from this figure is the increasing size of the confidence intervals - those associated with the RPSS calculated for the 12-month period to March 2016 are much smaller than the equivalent values calculated in October 2016. One probable cause for this increase is the rapid decrease in the base rate of M-class flares over this time frame - as indicated by the green line in Figure 5.

Figure 5: Rolling 120-day frequency of occurrence of daily maximum long wave radiative flux class.

Figure 6: Rolling 12-monthly RPSS values with 90% bootstrapped confidence intervals for each day of the XRFF.

Figure 7 displays ROC-plots calculated from XRFFs issued between April 2015 and October 2016; these plots describe the skill associated with each forecast day at correctly discriminating when the maximum daily flux is at least M-class. Although the WVS assesses the performance of both M and X class flares, only M is considered in the present study because the performance statistics associated with X are insufficiently robust for detailed analysis, due to their low base rate. The three curves that are displayed in each sub-figure are as described in relation to Figure 3, except that in the present case the low-hit threshold corresponds to a C-class flare. The points on each curve of each sub-figure lie above the grey diagonal no-skill line, indicating that each day of the XRFF has skill at discriminating fluxes corresponding to M-class (or C-class) flares or above. In each plot the exclusive-flexed curve (+) displays better discrimination than the un-flexed curve (\(\times\)), clearly indicating that a C-class flare often occurs when an M-class is forecast with a non-zero probability. The inclusive-flexed curve (\(\Box\)) amounts to simply reducing the event threshold to a C-class flare, and the resulting curves indicate less discriminatory skill compared with the un-flexed curves. It is likely that a reduction in the event threshold will increase the base rate, the number of hits and the number of missed events; it is also likely to decrease the number of false alarms and correct rejections. In Figure 7 the Hit Rates on each inclusive-flexed curve are smaller than their un-flexed counterparts, indicating that, as a proportion, the number of missed events has increased more than the number of hits. Inclusive-flexed False Alarm Rates have also reduced compared with their un-flexed counterparts, indicating that (as a proportion) the number of false alarms has reduced more than the number of correct rejections; however, this decrease is not large enough to offset the reduction in Hit Rate and consequently the area underneath the inclusive-flexed ROC-curve (a summary indicator of discriminatory skill) has reduced. The areas under the un-flexed, inclusive-flexed and exclusive-flexed ROC-curves on day:

1. are 0.874, 0.784 and 0.989;
2. are 0.871, 0.786 and 0.989;
3. are 0.847, 0.777 and 0.984; and
4. are 0.822, 0.775 and 0.980 respectively.

For each type of flexing the area under the ROC-curve decreases monotonically with increasing forecast range. This trend was also found in the _Murray et al._ (2017) work, with the day 1 forecast generally being more skilful than subsequent days.
Figure 7: ROC-plots generated using the (\(\times\)) un-flexed, the flexed including low-misses (\(\Box\)) and the flexed excluding low-misses (\(+\)) technique for (a) day 1; (b) day 2; (c) day 3 and (d) day 4 XRFFs for M-class flares, issued between April 2015 and October 2016.

The fact that the area beneath each inclusive-flexed ROC-curve is smaller than the corresponding area beneath each un-flexed ROC-curve indicates that either the C-class flare threshold (1.0E-6 W m\(^{-2}\)) provides a low-hit threshold that is too small, or that the discriminatory skill of the XRFF service is optimized by the M-class flare threshold (1.0E-5 W m\(^{-2}\)). The \(+\) points only register missed events when an M-class flare is not forecast and an M-class flare occurs, whereas a hit is awarded to any forecast during which the maximum XRF class is at least C; the purpose being to give the forecaster the benefit of the doubt when XRFs occur which are almost classified as M-class, whilst not penalizing flare events that were nearly M-class. Comparing each pair of \(+\) and \(\Box\) points at every probability threshold reveals that the change in Hit Rates is significantly greater than the change in False Alarm Rates - this is a consequence of the low base rate associated with XRFs. Figure 8 displays reliability diagrams for the period between April 2015 and October 2016; these are used to assess the accuracy with which the XRFF predicted (a) M-class and (b) C-class flares. The format of these plots is identical to Figure 4. The horizontal dot-dashed lines indicate that XRFs of at least M-class and C-class occurred on 9% and 53% of occasions respectively. The histograms reveal that M-class was rarely forecast with high probability; however, there is very little difference between the different shaded bars, indicating that the frequency with which M-class flares are forecast with n% probability (where n is a decile) is similar on each day of the forecast. In sub-figure (a) the majority of points lie below the no-skill region - a clear indication that M-class flares are over-forecast, as also found in the _Murray et al._ (2017) study. However, the curve in sub-figure (b) is significantly above the diagonal, indicating under-forecasting of C-class flares. Therefore, it appears that instead of using a long-wave flux threshold of 1.0E-5 W m\(^{-2}\) (M-class), the actual (unintentional) threshold was between 1.0E-6 W m\(^{-2}\) and 1.0E-5 W m\(^{-2}\). The reliability component to the Brier Score for forecast day:

1. is 0.036 in (a) and 0.031 in (b);
2. is 0.032 in (a) and 0.031 in (b);
3. is 0.028 in (a) and 0.030 in (b);
4. is 0.028 in (a) and 0.029 in (b).

These negatively orientated scores are very similar; they appear to indicate that the XRFF predicts M-class and C-class flares with similar reliability.

Figure 8: Reliability diagrams for XRFFs issued between April 2015 and October 2016 on: day 1 (solid/dark grey), day 2 (long-dashed/mid-dark grey), day 3 (short-dashed/mid-grey) and day 4 (dotted/light grey); when verified against daily maximum long-wave fluxes of at least (a) 1.0E-5 W m\(^{-2}\) (M-class) and (b) 1.0E-6 W m\(^{-2}\) (C-class).

## 4 Conclusions

The present study contains the results of analyzing GMSFs and XRFFs contained within daily 00Z SWTFs issued by MOSWOC over the 19-month period between April 2015 and October 2016. Two approaches have been adopted:
1. a ROC and reliability analysis is used to assess the ability with which G1 GMSs and M-class XRFs are predicted; and
2. an RPSS analysis is performed to analyse the skill of the GMSFs and XRFFs against the skill demonstrated by simply forecasting the frequency of occurrence over the most recent 180-day and 120-day prediction periods respectively (chosen to optimise \(\overline{RPS}_{ref}\)).

For the GMSF:

* The ROC analysis revealed that each day of the forecast had skill at discriminating days on which the maximum Kp value was greater than or equal to 5- (G1); however, the forecast displayed a virtually identical level of skill at identifying days on which the maximum Kp value was greater than or equal to 4-.
* The reliability analysis revealed that G1 storms were over-forecast, whereas Kp values \(\geq\) 4- were slightly under-forecast; consequently, the GMSF was found to more reliably predict maximum Kp values \(\geq\) 4- than maximum Kp values \(\geq\) 5-.
* The RPSS analysis presented little statistically significant evidence that day 1 of the GMSF was a better predictor of maximum GMS level than the frequency of occurrence over the preceding 180 days.

For the XRFF:

* The ROC analysis revealed that each day of the forecast had more skill at correctly identifying M-class flares than C-class flares.
* The reliability analysis confirmed that although M-class flares are over-forecast, C-class flares are greatly under-forecast; therefore, it is likely that the most appropriate event threshold was between 1.0E-6 W m\(^{-2}\) (C-class) and 1.0E-5 W m\(^{-2}\) (M-class).
* The RPSS analysis indicated that the XRFF struggled to outperform a forecast comprised of only the frequency of occurrence over the preceding 120 days, with the confidence intervals associated with these estimates providing no statistically significant evidence of additional skill.

In the future our goals are to continue the analysis of the GMSF and XRFF components of the SWTF, as this provides valuable feedback and guidance to MOSWOC forecasters. Plans also exist to compare the performance of these services against equivalent services provided by other space weather centres and to expand the verification to include other SWTF components (the next area of study being coronal mass ejection forecasts).

### Acknowledgments, Samples, and Data

The authors would like to thank the MOSWOC forecasting team and David Jackson and Suzy Bingham from the Space Weather Research Team for their assistance. S. A. Murray was partly supported by the European Union Horizon 2020 research and innovation programme under grant agreement No. 640216 (FLARECAST project).
All data necessary to reproduce the findings in this study are freely available:

* observed planetary Kp values via FTP download from the GFZ Helmholtz Centre website [http://www.gfz-potsdam.de/en/home/](http://www.gfz-potsdam.de/en/home/);
* observed long wave radiation flux reported from GOES via FTP download from the SWPC website [http://www.swpc.noaa.gov/](http://www.swpc.noaa.gov/);
* MOSWOC Space Weather Technical Forecasts via the Met Office [http://www.metoffice.gov.uk/](http://www.metoffice.gov.uk/) using a Freedom of Information request.

### Glossary of Terms

AFVS: Area Forecast Verification System
CME: Coronal Mass Ejection
FLARECAST: Flare Likelihood And Region Eruption foreCASTing
GOES: Geostationary Operational Environmental Satellite
GMSF: Geo-magnetic Storm Forecast
MOSWOC: Met Office Space Weather Operations Centre
PDF: Probability Density Function
ROC: Relative Operating Characteristic
RPS: Ranked Probability Score
RPSS: Ranked Probability Skill Score
SWPC: Space Weather Prediction Center
SWTF: Space Weather Technical Forecast
XRFF: X-ray Flare Forecast
WVS: Warnings Verification System

## References

* 106, doi:10.1002/swe.20019
* Epstein, E. S. (1969), A scoring system for probability forecasts of ranked categories, Journal of Applied Meteorology, 8, 985-987.
* Gallagher, P. T., Y.-J. Moon, and H. Wang (2002), Active-Region Monitoring and Flare Forecasting I. Data Processing and First Results, Solar Physics, 209, 171-183, doi:10.1023/A:1020950221179.
* Jolliffe, I. T., and D. B. Stephenson (2012), Forecast Verification: A Practitioner's Guide in Atmospheric Science, John Wiley and Sons, Chichester.
* McIntosh, P. S. (1990), The classification of sunspot groups, Solar Physics, 125, 251-267, doi:10.1007/BF00158405.
* Murphy, A. H. (1971), A note on the ranked probability score, Journal of Applied Meteorology, 10, 155-156.
* Murray, S. A., S. Bingham, M. A. Sharpe, and D. R. Jackson (2017), Flare forecasting at the Met Office Space Weather Operations Centre, Space Weather, 15, 577-588, doi:10.1002/2016SW001579.
* Sharpe, M. A. (2013), Verification of marine forecasts using an objective area forecast verification system, Meteorological Applications, 20(2), 224-235.
* Sharpe, M. A. (2015), A flexible approach to the objective verification of warnings, Meteorological Applications, 23(1), 65-7.
* Wheatland, M. (2005), A statistical solar flare forecast method, Space Weather, 3, S07003, doi:10.1029/2004SW000131.
* National Climate Information Centre (2017), UK seasonal weather summary Autumn 2016, Weather, 72(1), 15.
The Met Office Space Weather Operations Centre was founded in 2014, and part of its remit is a daily Space Weather Technical Forecast to help the UK build resilience to space weather impacts; guidance includes four-day geo-magnetic storm forecasts (GMSFs) and X-ray flare forecasts (XRFFs). It is crucial for forecasters, users, modelers and stakeholders to understand the strengths and weaknesses of these forecasts; therefore, it is important to verify against the most reliable truth data source available. The present study contains verification results for XRFFs using GOES-15 satellite data and for GMSFs using planetary K-index (Kp) values from the GFZ Helmholtz Centre. To assess the value of the verification results it is helpful to compare them against a reference forecast, and the frequency of occurrence during a rolling prediction period is used for this purpose. Analysis of the rolling 12-month performance over a 19-month period suggests that both the XRFF and GMSF struggle to provide a better prediction than the reference. However, a relative operating characteristic and reliability analysis of the full 19-month period reveals that although the GMSF and XRFF possess discriminatory skill, events tend to be over-forecast.
# Crowdsourced bi-directional disaster reporting and alerting on smartphones in Lao PDR

Lutz Frommberger\({}^{1}\) and Falko Schmid\({}^{1}\)

\({}^{1}\)Lutz Frommberger and Falko Schmid are with the International Lab for Local Capacity Building (Capacity Lab) at the Faculty for Mathematics and Informatics at the University of Bremen, Germany. Capacity Lab, Enrique-Schmid-Str. 5, 28359 Bremen, Germany. Email: {lutz,schmid}@capacitylab.org, WWW: www.capacitylab.org

## I Introduction

Natural disasters are a threat to people in any country in the world, but especially in developing countries they can have severe consequences that affect people's lives. It is widely recognized that natural disasters are a main reason for poverty as they "reduce or eliminate equal access to opportunities and, therefore, to development" [1]. Due to climatic change and increasing populations, the effects of natural disasters and the damage caused appear to rise dramatically. In Lao People's Democratic Republic (Lao PDR), the Mekong river and its confluences are of critical importance for many of the inhabitants, and a large fraction of the population lives near those rivers. Thus, tropical storms and the resulting floods can have a severe impact on the whole country. As an example, the typhoon Ketsana that struck the southern provinces of Lao PDR in late September 2009 resulted in 180,000 people being directly affected and an estimated damage of 58 Million US-$ [14]. The agricultural sector was hit the hardest, which impaired rice production and, thus, food security. But significant damage was also done to the transport sector by destroying roads and bridges. The Lao government estimated the loss of GDP caused by this single event at 0.4%, that is, about 20 Million US-$ [14]. Large-scale disasters like this are a great challenge for the people affected. Equally, they are a great challenge for the governmental administrative units (GAUs) that are in charge of disaster response, which is a difficult task under large-scale disaster conditions. But in developing countries, people are also often confronted with problems on a smaller scale, e.g., local outbreaks of human, plant, or animal diseases. While initially not having a large impact on the overall country, these smaller incidents can have severe consequences for affected individuals. It may also easily occur that, e.g., diseases spread and affect others, and, thus, become larger-scale problems. In any disaster case, the flow of information is a critical issue. This applies both for communication from the local level towards administration, and vice versa. On the one hand, detailed information from affected regions is essential for the appropriate GAUs to organize disaster response and provide needed support, and on the other hand, information on upcoming disasters or updates on the situation is needed for local people to take the right actions. A seamless flow of information between different administrative levels (e.g., from district to province level) is also essential for efficient disaster response. To account for this, we report on Mobile4D, a bi-directional location-based disaster alerting and reporting system based on smartphones that, on the one hand, allows for sending out emergency warnings from the administration to affected people and, on the other hand, allows for reporting disasters at the local level as a crowdsourcing effort.
This paper is organized as follows: First, we describe related work on disaster management systems, especially in developing countries. Then we give a detailed description of the Mobile4D system in Sect. III, highlighting the situation in Lao PDR, the goals, and the architecture and features of the system. Section IV reports on a first test in Lao PDR before the paper closes with a conclusion.

Fig. 1: Flood in Lao PDR after typhoon Ketsana in September 2009. It directly affected 180,000 people.

## II Related Work

Several ICT frameworks and systems related to disasters are in use, most of them targeting the developed world. In developing countries, additional issues have to be faced. As [17] point out, effective warning systems require "not only the use of ICTs, but also the existence of institutions that allow for the effective mobilization of their potential", so the effective inclusion of administrative units plays a critical role. Sahana [5] is a complex modular Open Source disaster management toolkit targeting large-scale disasters, especially for organizing and coordinating disaster response. It has been successfully applied in many lesser developed countries. A review of Geohazard Warning Systems is given in [2]. Mobile devices gain increasing importance in disaster cases. Disaster alert systems based on SMS show a good impact in developing countries [12]. [6] present an Android-smartphone-based disaster alerting system which focuses mostly on routing issues in the disaster response phase. In general, the use of smartphones can show great impact in developing countries. This has especially been shown in health care related cases [3], e.g., by providing the opportunity for remote diagnosis based on photos [13]. _Crowdsourcing_ is an increasingly popular way to collect data provided by people at the local level and build larger information bases. The Web 2.0 based platform Ushahidi is one of the most popular examples of the impact of crowdsourcing crisis information [16]. In the context of natural disasters, crowdsourcing techniques were used after the 2010 Haiti earthquake to organize help [20]. Especially crowdsourcing of geographical information (Volunteered Geographic Information - VGI) can have a strong impact in developing countries, for example for monitoring development [9]. The impact of VGI was also explored in the case of natural disasters [7]. The use of VGI in disaster cases is still subject to further extended research [10].

## III Mobile4D System Overview

In this section, we introduce Mobile4D, a mobile crowdsourcing disaster alerting and reporting system implemented in Laos. Mobile4D brings together the power of local knowledge about, e.g., places, people, livestock, or crops with GAUs responsible for coordination and support. Mobile4D enables affected people to report disasters directly to GAUs and enables GAUs to use direct communication channels to coordinate action and advice.

### _Situation and information flow in Lao PDR_

The Mobile4D system was planned in close cooperation with the Ministry of Agriculture and Forestry (MAF) in Lao PDR. It is designed to be embedded in PRAM KSN, a WWW-based knowledge sharing platform among agricultural extension workers [19]. In PRAM KSN, extension workers can report on their work, ask questions (that will be answered by teachers and experts in administration), access tutoring material, and, most importantly, get in touch with each other and share experiences and advice directly.
Within PRAM KSN, a disaster warning and reporting component was the first enhancement wished for by local users of PRAM KSN, as natural hazards play an important role in the extension workers' work in the villages. Mobile4D was then designed to meet the goals and workflows of PRAM KSN. In disaster cases, several GAUs in Lao PDR are in charge. For Mobile4D, we concentrate on the administrative units under the Ministry of Agriculture and Forestry (MAF). In Lao PDR, there are 17 provinces, with each province containing a number of districts. MAF has administrative offices in every province (Provincial Agriculture and Forestry Office - PAFO) and every district (District Agriculture and Forestry Office - DAFO). All institutions have their specific role in disaster cases. Villages are organized in village clusters, so-called _kumbans_. Also, International Non-Governmental Organizations (INGOs) can play a role in disaster cases. Usually, communication between DAFO and PAFO is conducted on paper. Telephone and fax are only used in urgent cases. Communication between PAFO and the ministry is usually conducted by telephone or fax. Email or other internet services are usually not used, although every province capital has internet access. However, 3G or 2G mobile internet is accessible in large parts of Laos, provided by several telephone providers. With mobile internet connections it is possible to reach even remote locations via TCP/IP services.

### _Main Goals_

ICT systems for disaster management, alerting, and response help, among other things, to overcome shortcomings in communication and information flow. However, those systems (but also any ICT system in general) often neglect the specific affordances of developing countries. Under conditions of limited resources, workflows and systems have to be adapted to local circumstances and cultural background. Thus, the Mobile4D system particularly aims at two goals:

1. to provide a _bi-directional_ flow of information, that is, both from administration to the local level and vice-versa, and
2. to institutionalize the flow of communication, that is, to directly embed the reporting and alerting system into administrative workflows.

These two aims tackle problems that have been identified as being amongst the larger challenges for current crowdsourcing-based disaster management systems [15]. Furthermore, Mobile4D aims at

3. fully integrating small-scale disasters into the usual disaster management workflow by allowing local disaster reports.

### _System Architecture_

Taking Lao PDR's widespread mobile internet coverage into account, Mobile4D is designed as an internet- and smartphone-based system for the free and widespread Android platform. The costs for Android smartphones have dropped drastically, and even older devices offer the full range of sensors and interaction possibilities necessary, making the second-hand market an attractive source for mobile devices. Mobile4D basically consists of three components:

1. an _Android app_ which allows people in the villages to report disasters, receive warnings, and make contact with people in GAUs to get help,
2. a _WWW frontend_ which allows different GAUs to receive and manage reports, send out warnings and information material, and contact people affected, and
3. the _disaster management server_ handling the communication traffic.

The design of Mobile4D is completely devoted to low-cost technology, real-time communication, and reliability. The web client runs on several-years-old PCs with up-to-date web browsers.
The web client renders functionality with JavaScript, i.e., all code is loaded from the internet when the page is first accessed. This results in some waiting time at initial startup, but after that, no network communication other than transmitting the efficiently encoded disaster data is needed. This ensures full functionality also under weak network conditions, which is an issue because especially district offices rely on mobile internet connections. First, it is planned to equip the people responsible for disaster response at the district level with a smartphone, as these are the people that take over disaster reporting duties anyway (that is, one smartphone for every district would suffice). But in the longer perspective and with increased use of smartphones, every individual with an Android smartphone is a potential contributor to the Mobile4D system and is also able to receive disaster warnings.

### _Information Flow and Sharing_

Mobile4D supports direct top-down and bottom-up communication to exchange information. Information can be disaster alerts, information material, media, or ongoing correspondence about situations between GAUs and people affected. Figure 2 shows how information can flow _top-down_ from the ministry level (MAF) directly to specific affected villages and back. Additionally, the information is distributed to the correct subordinate units on province (PAFO) and district (DAFO) level, and to non-governmental organizations (INGO). These direct channels make it possible to shortcut slow information distribution and make information available immediately where it is required. All alerts are sent out as push messages. This ensures reliable real-time communication while being extremely efficient in terms of bandwidth. Most importantly, our crowdsourcing approach enables information to flow in the same way _bottom-up_: when people are affected by disasters, they can report on the situation to the GAUs and INGOs. This is done via the Android app, which allows reports to be sent directly from the place where the disaster has occurred (see Fig. 3 for screenshots). Mobile4D sends the reports to all the responsible GAUs in the hierarchy, but directs the information to the GAU responsible for taking action (e.g., infrastructural problems can be resolved on district level, while severe disease outbreaks are handled on province level). Internal protocols ensure that information gets reviewed and answered. Also, reporters are always automatically notified when their report is processed by staff in the GAU. Whenever a disaster is reported at the local level, the information about it is immediately sent out to all neighboring villages without administrative review. People get informed when situations are reported and can guard against potential threats to protect health and belongings at a very early stage. All information can be shared with everybody by forwarding received information via SMS or establishing a voice call. Phone numbers of local reporters and GAU staff are always prominently displayed and can be used for direct communication by everyone. Furthermore, Mobile4D supports interfacing with social network platforms like Twitter, where detailed information about disasters and their states can be made available along with their geographic location.

Fig. 2: Top-down and bottom-up information flow in Mobile4D: information exchange and communication is flexible and can be initiated from GAUs as well as from individuals at the village level.
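To make the bottom-up flow described above more concrete, the following sketch outlines how an incoming report could be distributed to the hierarchy and to neighboring villages. This is purely illustrative: the function and data names (`route_report`, `NEIGHBOURS`, the severity levels) are assumptions for this example and are not taken from the actual Mobile4D code base.

```python
# Illustrative routing of a bottom-up disaster report (hypothetical names).
# The report is pushed to the whole administrative hierarchy, marked for
# action at the level in charge, and forwarded to neighboring villages
# immediately, without administrative review.
NEIGHBOURS = {"Sangkalok": ["Village A", "Village B"]}   # made-up adjacency data

def route_report(report, push):
    """report: dict with 'village' and 'severity'; push: callable(recipient, report, action_required)."""
    # e.g., infrastructural problems handled on district level, severe outbreaks on province level
    acting_level = "DAFO" if report["severity"] == "local" else "PAFO"
    for level in ("DAFO", "PAFO", "MAF", "INGO"):
        push(level, report, action_required=(level == acting_level))
    # alert nearby villages right away so people can guard against potential threats
    for village in NEIGHBOURS.get(report["village"], []):
        push(village, report, action_required=False)
```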
In particular, Mobile4D also spreads disaster reports at the local level to other affected users without the need to pass all administrative layers.

### _Information Generation and Processing_

In contrast to social network based platforms, Mobile4D has a particular focus on the data gathering process. It offers structured assembly of information to create specific disaster alerts (floods, bush fires, infrastructural problems, and diseases of humans, animals, and plants). In both the mobile and the web client, users are guided stepwise through an intuitive disaster reporting process. This step-by-step procedure ensures that important data is at least asked for and helps the reporters to provide structured information even in stressful situations. As it has been shown that text-free widgets can usually be fully understood [4], Mobile4D tries to avoid textual interfaces wherever possible (see Fig. 4 for an example).

### _Administrative Integration_

Staff of GAUs have a role-based web browser interface (see Fig. 5) to process incoming information from local reporters or other GAUs. They can establish direct communication, send out information and support to places and people in their area of responsibility. They are able to add tutorial documents (e.g., in PDF format) to disaster reports that will directly be spread to the smartphones of affected users. Mobile4D provides tools to perform any kind of administrative work related to disasters: reviewing information, getting in touch with reporters, assigning issues to other administrative layers, sending out information material, updating, merging, resolving disasters, etc. At the current level, Mobile4D does not distinguish different administrative layers beyond providing a specific role. As we believe in the power of local solutions, each GAU is a fully autonomous participant within the system. All layers can access the same data. It is consistently monitored which GAU performed which action, and all other GAUs are notified when a disaster they feel responsible for is edited. That being said, Mobile4D does not enforce new workflows within the administration, but it offers full transparency for all administrative layers to inspect and monitor actions taken by others. In any step, contact information is provided, and the possibility to get in direct contact, through the system or any other communication channel, is encouraged. To assure the quality of data, Mobile4D provides a multi-level verification system based on the different administrative layers: Each GAU (MAF, PAFO, DAFO) has the opportunity to _verify_ any report, e.g., after checking back via a phone call or personal visit, thus giving the report an "official" stamp. In addition, single users can also verify the report. That is, the crowd is used for quality assurance itself, as a large number of user verifications is a good indicator of a report's reliability [8]. For the user, this verification system can be a valuable help to assess the reliability of data on their own. In the future, this verification mechanism can also be used by the system to automatically decide when an alert is distributed without any administrative interference. At the moment, this is not yet implemented.
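The multi-level verification just described could be aggregated into a single reliability estimate roughly as in the sketch below. This is an assumption made for illustration only, not the actual Mobile4D implementation; the weights and thresholds are made-up values.

```python
# Hypothetical aggregation of GAU and user verifications into a reliability score.
def reliability_score(gau_verifications, user_verifications, user_weight=0.1):
    """Return a score in [0, 1]; an official GAU verification dominates, but many
    user verifications also count as an indicator of reliability."""
    if gau_verifications > 0:           # verified by MAF, PAFO, or DAFO staff
        return 1.0
    return min(1.0, user_weight * user_verifications)

# A possible rule for the (not yet implemented) automatic distribution decision:
def distribute_without_review(score, threshold=0.5):
    return score >= threshold
```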
### _Interoperability_

#### III-G1 Compliance with the Common Alerting Protocol

Mobile4D does not aim at providing all the features a full-fledged disaster management system can offer. Especially in the response phase, powerful management systems are available. To be interoperable with such systems, Mobile4D is fully compliant with the Common Alerting Protocol (CAP) [11]. CAP is an XML-based protocol for exchanging warnings and alerts. CAP became an OASIS1 standard in 2004.

Footnote 1: OASIS: Organization for the Advancement of Structured Information Standards

Mobile4D fully maps a disaster's attributes (such as sender, urgency, status) to the corresponding fields in CAP. Attributes supported in Mobile4D but not in CAP (such as information specific to the location, for example, the name of the village cluster (kumban) where the disaster occurred) are stored in the CAP parameter field so as not to lose important information. By that, Mobile4D allows for export of any alert into CAP, and alerts specified in CAP can be imported into the Mobile4D database. It is easily possible to adapt the Mobile4D API to automatically accept CAP-specified alerts.

Fig. 4: Intuitive text-free interface for entering the water level in a flooding by sliding the water up and down.

Fig. 3: Screens of the Mobile4D Android app. Left: Start screen with direct access to all important functions. Right: Map overview of nearby disaster alerts.

The following code snippet shows the result of a Mobile4D disaster report exported in CAP:

```
<?xml version="1.0" encoding="UTF-8"?>
<alert xmlns="urn:oasis:names:tc:emergency:cap:1.1">
  <identifier></identifier>
  <sender>89</sender>
  <sent>2013-09-25T07:05:02.917-05:00</sent>
  <status>Actual</status>
  <msgType>Alert</msgType>
  <source>Markoffice; +856 1234567; MAF</source>
  <scope>Public</scope>
  <info>
    <language>en-US</language>
    <category>Health</category>
    <event>I have seen the same thing in another village nearby last year</event>
    <responseType>None</responseType>
    <urgency>Future</urgency>
    <severity>Extreme</severity>
    <certainty>Possible</certainty>
    <effective>2013-09-24T19:00:00-05:00</effective>
    <parameter>
      <valueName>location</valueName>
      <value>19.845519,102.078652</value>
    </parameter>
    <parameter>
      <valueName>disasterType</valueName>
      <value>PlantDiseaseInfo</value>
    </parameter>
    <parameter>
      <valueName>province</valueName>
      <value>Louangphabang</value>
    </parameter>
    <parameter>
      <valueName>district</valueName>
      <value>proxhang</value>
    </parameter>
    <parameter>
      <valueName>kumban</valueName>
      <value>Sangkalok</value>
    </parameter>
  </info>
</alert>
```
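For illustration, an alert such as the one above can be read back with the Python standard library alone; the element names follow the CAP specification, while the helper function itself is only a sketch and not part of Mobile4D.

```python
# Minimal sketch of parsing a CAP 1.1 alert, including the Mobile4D-specific
# attributes that travel in CAP <parameter> blocks (see the export above).
import xml.etree.ElementTree as ET

CAP_NS = "urn:oasis:names:tc:emergency:cap:1.1"

def parse_cap(xml_string):
    alert = ET.fromstring(xml_string)
    info = alert.find(f"{{{CAP_NS}}}info")
    data = {
        "sender": alert.findtext(f"{{{CAP_NS}}}sender"),
        "status": alert.findtext(f"{{{CAP_NS}}}status"),
        "event": info.findtext(f"{{{CAP_NS}}}event"),
        "severity": info.findtext(f"{{{CAP_NS}}}severity"),
    }
    # location, disasterType, province, district, kumban, ...
    for param in info.findall(f"{{{CAP_NS}}}parameter"):
        name = param.findtext(f"{{{CAP_NS}}}valueName")
        value = param.findtext(f"{{{CAP_NS}}}value")
        data[name] = value
    return data
```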
#### III-G2 Geocoding with MapIT

Mobile4D allows geocoding pictures with MapIT [18]. MapIT is a tool to generate geometric geographic data directly from photos taken with smartphones. This feature is helpful to locate, e.g., agricultural lots with their precise geometry. MapIT allows the user to directly mark the object of interest on the smartphone picture, and the resulting geographical object is directly integrated in Mobile4D disaster alerts as the area being affected. This allows for very exact localization of alerts.

## IV Mobile4D Tests in Lao PDR

Mobile4D was extensively tested in April 2013 in the province of Luang Prabang, Lao PDR (see Figure 6). The test involved staff members of MAF, staff members of the province office of Luang Prabang, and district officers of districts in the province. The system was set up with locally available technical infrastructure, that is, laptops used at work and privately owned smartphones. We used mobile internet provided by three different phone companies. This resulted in a highly heterogeneous technical ecosystem. The main purpose of the tests was to gather feedback from the people directly involved with disaster alerting and management at the administrative levels, as they are the prospective users of the system. The system proved to work reliably and efficiently, also under very weak network conditions. During extensive feedback sessions with all participants we identified points of improvement. Most prominently, those were in the area of information visualization and usability. Further requested features were integrating more reasoning and forecasting capabilities and the possibility to edit and add geographical information such as place names. All participants were very positive about the Mobile4D system and hoped to be able to use it as a part of their daily routines. In particular, they pointed out the efficiency of direct communication channels between affected people and GAUs, which allows quick actions to be taken and important information to be provided directly where it is needed. As a result of the successful tests, it was agreed with MAF to set up a pilot installation in one province in Laos to evaluate the system's impact over a longer period of time.

Fig. 5: Screenshot of the administrative WWW interface. It opens a communication channel that allows to ask further questions or send information to the reporter (and anyone affected), but also offers contact information like telephone numbers for direct contact. An overview of reported disaster cases is shown as icons on a map.

## V Conclusions

We presented Mobile4D, a crowdsourced system for disaster reporting and alerting with smartphones. Along with larger natural disasters, Mobile4D also targets small-scale hazards at a local level. It allows for bi-directional communication, from the local level towards administration and vice-versa. Governmental administrative units are directly involved in the flow of data, and local communication structures are strengthened. In an extended field test, Mobile4D performed reliably and efficiently, proving its suitability for developing countries such as Lao PDR.

## Acknowledgment

Part of this work is supported by the German Research Foundation (DFG) through the Collaborative Research Center SFB/TR 8 "Spatial Cognition". Further funding was provided by the German Ministry for Research and Education (BMBF). We thank the Lao Ministry for Agriculture and Forestry for substantial support, in particular Savanh Hanephom, Thatheva Saphangthong, Soudchay Nhouyvanisvong, and Alounxay Onta, as well as our partners at UNU-IIST Macau, Peter Haddawy, Han Ei Chew, and Borort Sort. We also want to give credit to the students of the Mobile4D student project at the University of Bremen for their dedicated work on the system: Timo Bonanaty, Christian Czotscher, Nathalie Gabor, Satia Herfert, Helmar Hutschenreuter, Andreas Kastner, Pascal Knuppel, Daniel Langerenken, Carsten Pfeffer, Thorben Schiller, Arne Schlamann, Urs-Bjorn Schmidt, Nadine Schomaker, Denis Szadkowski, Denny Teuchert, Thomas Weber, Malte Wellmann, Michal Wladysiak, and Daniela Zimmermann.

## References

* [1] I. Alcántara-Ayala. Geomorphology, natural hazards, vulnerability and prevention of natural disasters in developing countries. _Geomorphology_, 47(2):107-124, 2002.
* [2] D. Bhattacharya, J. Ghosh, and N. Samadhya. Review of geohazard warning systems toward development of a popular usage geohazard warning communication system. _Natural Hazards Review_, 13(4):260-271, 2012.
* [3] M. N. Boulos, S. Wheeler, C. Tavares, and R. Jones. How smartphones are changing the face of mobile and participatory healthcare: an overview, with example from eCAALYX.
_Biomedical engineering online_, 10(1):24, 2011. Fig. 6: Mobile4D field tests in Laos: training and data acquisition in the districts Luang Prabang, Chompet und Pak-Ou. * [4] B. M. Chaudry, K. H. Connelly, K. A. Siek, and J. L. Welch. Mobile interface design for low-literacy populations. In _Proceedings of the 2nd ACM SIGHIT International Health Informatics Symposium_, pages 91-100. ACM, 2012. * [5] P. Currion, C. d. Silva, and B. Van de Walle. Open source software for disaster management. _Communications of the ACM_, 50(3):61-65, 2007. * [6] J. T. B. Fajardo and C. M. Oppus. A mobile disaster management system using the android technology. _WSEAS Transactions on Communications_, 9(6):343-353, 2010. * [7] D. W. Farthing and J. M. Ware. When it comes to mapping developing countries, disaster preparedness is better than disaster response. In _AGI Geocommunity '10: Opportunities in a Changing World_, 2010. * [8] C. C. Freifield, R. Chunara, S. R. Mekaru, E. H. Chan, T. Kass-Hout, A. A. Iacucci, and J. S. Brownstein. Participatory epidemiology: use of mobile phones for community-based health reporting. _PLoS medicine_, 7(12):e1000376, 2010. * [9] L. Frommberger, F. Schmid, and C. Cai. Micro-mapping with smartphones for monitoring agricultural development. In _Proceedings of the ACM Symposium on Computing for Development (DEV 2013)_, Bangalore, India, 2013. * [10] M. F. Goodchild and J. A. Glennon. Crowdsourcing geographic information for disaster response: a research frontier. _International Journal of Digital Earth_, 3(3):231-241, 2010. * [11] E. Jones. Common alerting protocol, v. 1. 2, 2005. * [12] I. Mahmud, J. Akter, and S. Rawshon. SMS based disaster alert system in developing countries: A usability analysis. _Internation Journal of Multidisciplinary Management Studies_, 2(4), 2012. * [13] A. W. Martinez, S. T. Phillips, E. Carrilho, S. W. Thomas III, H. Sindi, and G. M. Whitesides. Simple telemedicine for developing regions: camera phones and paper-based microfluidic devices for real-time, off-site diagnosis. _Analytical Chemistry_, 80(10):3699-3707, 2008. * [14] Mekong River Commission. Annual Mekong flood report 2009, July 2010. * [15] R. W. M. Narvaez. Crowdsourcing for disaster preparedness: Realities and opportunities. Master's thesis, Graduate Institute of International and Development Studies, Geneva, 2012. * [16] O. Okolloh. Ushahidi, or 'testimony': Web 2.0 tools for crowdsourcing crisis information. _Participatory learning and action_, 59(1):65-70, 2009. * [17] R. Samarajiva. Mobilizing information and communications technologies for effective disaster warning: lessons from the 2004 tsunami. _New Media & Society_, 7(6):731-747, 2005. * [18] F. Schmid, L. Frommberger, C. Cai, and C. Freksa. What you see is what you map: Geometry-preserving micro-mapping for smaller geographic objects with mapf. In D. Vandenbroucke, B. Boucher, and J. Crompovets, editors, _Geographic Information Science at the Heart of Europe_, pages 3-19. Springer, 2013. * [19] B. Sort, P. Haddawy, and H. E. Chew. Challenges in ICT-enabled knowledge sharing among agricultural extension workers in Lao People's Democratic Republic. In _Proceedings of the 63rd Annual International Communication Association Conference_, London, UK, 2013. * [20] M. Zook, M. Graham, T. Shelton, and S. Gorman. Volunteered geographic information and crowdsourcing disaster relief: A case study of the Hatitian earthquake. _World Medical & Health Policy_, 2(2):7-33, 2010.
Natural disasters are a large threat to people, especially in developing countries such as Laos. ICT-based disaster management systems aim at supporting disaster warning and response efforts. However, the ability to directly communicate in both directions between the local and administrative levels is often not supported, and a tight integration into administrative workflows is missing. In this paper, we present the smartphone-based disaster reporting and alerting system Mobile4D. It allows for bi-directional communication while being fully involved in administrative processes. We present the system setup and discuss integration into administrative structures in Lao PDR.
# Localizing Grouped Instances for Efficient Detection in Low-Resource Scenarios Amelie Royer IST Austria [email protected] Christoph H. Lampert IST Austria [email protected] ## 1 Introduction As a core component of natural scene understanding, object detection in natural images has made remarkable progress in recent years through the adoption of deep convolutional networks. A driving force in this growth was the rise of large public benchmarks, such as PASCAL VOC [5] and MS COCO [15], which provide extensive bounding box annotations for objects in natural images across a large diversity of semantic categories and appearances. However, many real-life detection problems exhibit drastically different data distributions and computational requirements, for which state-of-the-art detection systems are not well suited, as summarized in Figure 1. For example, object detection in aerial or satellite imagery often requires localizing objects of a _single class_, _e.g._, cars [37], houses [19] or swimming pools [31]. Similarly, in biomedical applications, only some specific objects are relevant, _e.g._ certain types of cells [35]. Moreover, input images in practical detection tasks are often of much higher resolution, yet contain small and sparsely distributed objects of interest, such that only a very limited fraction of pixels is actually relevant, while most academic benchmarks often contain more salient objects and cluttered scenes. Last but not least, detection speed is often at least as important as detection accuracy for practical applications. This is particularly apparent when models are meant to run on embedded devices, such as autonomous drones, which have limited computational resources and battery capacity. In this work, we propose **ODGI** (**O**bject **D**etection with **G**rouped **I**nstances), a _top-down_ detection scheme specifically designed for efficiently handling inhomogeneous object distributions, while preserving detection performance. Its key benefits and components are summarized as follows: 1. a _multi-stage pipeline_, in which each stage selects only _a few promising regions_ to be analyzed by the next stage, while discarding irrelevant image regions. Figure 1: Recent benchmarks and challenges highlight the task of detecting small objects in aerial views, in particular for real-life low-resource scenarios [21, 26, 38, 34, 26, 1]. The data distribution and computational constraints for such tasks often vastly differ from state-of-the-art benchmarks, for instance MS COCO [15]. 2. Fast single-shot detectors augmented with the ability to identify _groups of objects_ rather than just individual objects, thereby substantially reducing the number of regions that have to be considered. 3. ODGI reaches similar accuracies than ordinary single-shot detectors while operating at _lower resolution_ because groups of objects are generally larger and easier to detect than individual objects. This allows for a further reduction of computational requirements. We present the proposed method, ODGI, and its training procedure in Section3. We then report main quantitative results as well as several ablation experiments in Section4. ## 2 Related work **Cascaded object detection.** A popular approach to object detection consists in extracting numerous region _proposals_ and then classifying them as one of the object categories of interest. This includes models such as RFCN [3], RCNN and variants [7, 8, 2], or SPPNet [9]. 
Proposal-based methods are very effective and can handle inhomogeneously distributed objects, but are usually too slow for real-time usage, due to the large amount of proposals generated. Furthermore, with the exception of [25], the proposals are generally class-independent, which makes these methods more suitable for general scene understanding tasks, where one is interested in a wide variety of classes. When targetting a specific object category, class-independent proposals are wasteful, as most proposal regions are irrelevant to the task. **Single-shot object detection and Multi-scale pyramids.** In contrast, single-shot detectors, such as SSD [17], or YOLO [22, 23, 24], split the image into a regular grid of regions and predict object bounding boxes in each grid cell. These single-shot detectors are efficient and can be made fast enough for real-time operation, but only provide a good speed-versus-accuracy trade-off when the objects of interest are distributed homogeneously on the grid. In fact, the grid size has to be chosen with worst case scenarios in mind: in order to identify all objects, the grid resolution has to be fine enough to capture all objects even in image regions with high object density, which might rarely occur, leading to numerous empty cells. Furthermore, the number of operations scales quadratically with the grid size, hence precise detection of individual small objects in dense clusters is often mutually exclusive with fast operation. Recent work [16, 30, 20, 17, 18, 36] proposes to additionally exploit multi-scale feature pyramids to better detect objects across varying scales. This helps mitigate the aforementioned problem but does not suppress it, and, in fact, these models are still better tailored for dense object detection. Orthogonal to this, ODGI focuses on making the best of the given input resolution and resources and instead resort to grouping objects when individual small instances are too hard to detect, following the paradigms that \"coarse predictions are better than none\". These groups are then refined in subsequent stages if necessary for the task at hand. **Speed versus accuracy trade-off.** Both designs involve intrinsic speed-versus-accuracy trade-offs, see for instance [10] for a deeper discussion, that make neither of them entirely satisfactory for real-world challenges, such as controlling an autonomous drone [38], localizing all objects of a certain type in aerial imagery [1] or efficiently detecting spatial arrangements of many small objects [32]. Our proposed method, ODGI, falls into neither of these two designs, but rather combines the strength of both in a flexible multi-stage pipeline: It identifies a small number of specific regions of interest, which can also be interpreted as a form of proposals, thereby concentrating most of its computations on important regions. Despite the sequential nature of the pipeline, each individual prediction stage is based on a coarse, low resolution, grid, and thus very efficient. ODGI's design resembles classical detection cascades [14, 27, 33], but differs from them in that it does not sequentially refine classification decisions for individual boxes but rather refines the actual region coordinates. As such, it is conceptually similar to techniques based on branch-and-bound [12, 13], or on region selection by reinforcement learning [6]. 
Nonetheless, it strongly differs from these on a technical level as it only requires minor modifications of existing object detectors and can be trained with standard backpropagation instead of discrete optimization or reinforcement learning. Additionally, ODGI generates meaningful groups of objects as intermediate representations, which can potentially be useful for other visual tasks. For example, it was argued in [28] that recurring group structures can facilitate the detection of individual objects in complex scenes. Currently, however, we only make use of the fact that groups are visually more salient and easier to detect than individuals, especially at low image resolution. ## 3 ODGI: Detection with Grouped Instances In Section3.1 we introduce the proposed multi-stage architecture and the notion of group of objects. We then detail the training and evaluation procedures in Section3.2. ### Proposed architecture We design ODGI as a multi-stage detection architecture \\(\\phi_{S}\\circ\\cdots\\circ\\phi_{1}\\), \\(S>1\\). Each stage \\(\\phi_{s}\\) is a detection network, whose outputs can either be _individual objects_ or _groups of objects_. In the latter case, the predicted bounding box defines a relevant image subregion, for which detections can be refined by feeding it as input to the next stage. To compare the model with standard detection systems, we also constrain the last stage to only output individual objects. **Grouped instances for detection.** We design each stage as a lightweight neural network that performs fast object detection. In our experiments, we build on standard single-shot detectors such as YOLO [22] or SSD [17]. More precisely, \\(\\phi_{s}\\) consists of a _fully-convolutional_ network with output map \\([I,J]\\) directly proportional to the input image resolution. For each of the \\(I\\times J\\) cells in this uniform grid, the model predicts bounding boxes characterized by four coordinates - the box center \\((x,y)\\), its width \\(w\\) and height \\(h\\), and a predicted confidence score \\(c\\in[0,1]\\). Following common practice [22, 23, 17], we express the width and height as a fraction of the total image width and height, while the coordinates of the center are parameterized relatively to the cell it is linked to. The confidence score \\(c\\) is used for ranking the bounding boxes at inference time. For intermediate stages \\(s\\leq S-1\\), we further incorporate the two following characteristics: _First_, we augment each predicted box with a binary _group flag_, \\(g\\), as well as two real-valued _offset values_\\((o_{w},o_{h})\\): The flag indicates whether the detector considers the prediction to be a single object, \\(g=0\\), or a group of objects, \\(g=1\\). The offset values are used to appropriately rescale the stage outputs which are then passed on to subsequent stages. _Second_, we design the intermediate stages to predict _one bounding box_ per cell. This choice provides us with an _intuitive definition of groups, which automatically adapts itself to the input image resolution_ without introducing additional hyperparameters: If the model resolution \\([I,J]\\) is fine enough, there is at most one individual object per cell, in which case the problem reduces to standard object detection. Otherwise, if a cell is densely occupied, then the model resorts to predicting one group enclosing the relevant objects. We provide further details on the group training process in Section3.2. 
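The per-cell prediction format described above (box center relative to the cell, width and height relative to the image, confidence, group flag, and offsets) can be made concrete with a small decoding sketch. The exact channel layout of the output map is an assumption made for illustration; the ODGI implementation may order or parameterize these quantities differently.

```python
# Schematic decoding of one intermediate ODGI stage output, assuming an
# (I, J, 8) map whose channels per cell are: x, y (centre relative to the
# cell), w, h (relative to the full image), confidence c, group flag g,
# and the two offsets o_w, o_h.
import numpy as np

def decode_stage_output(pred):                       # pred: (I, J, 8) array
    I, J, _ = pred.shape
    boxes = []
    for i in range(I):
        for j in range(J):
            x_cell, y_cell, w, h, c, g, o_w, o_h = pred[i, j]
            # convert the cell-relative centre to absolute image coordinates in [0, 1]
            x = (j + x_cell) / J
            y = (i + y_cell) / I
            boxes.append({"box": (x, y, w, h), "conf": float(c),
                          "is_group": g > 0.5,       # 0.5 cut-off is an assumption
                          "offsets": (o_w, o_h)})
    return boxes
```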
**Multi-stage pipeline.** An overview of ODGI's multi-stage prediction pipeline is given in Figure 2. Each intermediate stage takes as inputs the outputs of the previous stage, which are processed to produce image regions in the following way: Let \(B\) be a bounding box predicted at stage \(\phi_{s}\), with confidence \(c\) and binary group flag \(g\). We distinguish three possibilities: (i) the box can be discarded, (ii) it can be accepted as an individual object prediction, or (iii) it can be passed on to the next stage for further refinement. This decision is made based on two confidence thresholds, \(\tau_{\text{low}}\) and \(\tau_{\text{high}}\), leading to one of the three following actions:

1. if \(c\leq\tau_{\text{low}}\): The box \(B\) is discarded.
2. if \(c>\tau_{\text{high}}\) and \(g=0\): The box \(B\) is considered a strong individual object candidate: we make it "exit" the pipeline and directly propagate it to the last stage's output as it is. We denote the set of such boxes as \(\mathcal{B}_{s}\).
3. if (\(c>\tau_{\text{low}}\) and \(g=1\)) or (\(\tau_{\text{high}}\geq c>\tau_{\text{low}}\) and \(g=0\)): The box \(B\) is either a group or an individual with medium confidence and is a candidate for refinement.

After this filtering step, we apply non-maximum suppression (NMS) with threshold \(\tau_{\text{nms}}\) to the set of refinement candidates, in order to obtain (at most) \(\gamma_{s}\) boxes with high confidence and little overlap. The resulting \(\gamma_{s}\) bounding boxes are then processed to build the image regions that will be passed on to the next stage by multiplying each box's width and height by \(1/o_{w}\) and \(1/o_{h}\), respectively, where \(o_{w}\) and \(o_{h}\) are the offset values learned by the detector. This rescaling step ensures that the extracted patches cover the relevant region well enough, and compensates for the fact that the detectors are trained to _exactly_ predict ground-truth coordinates, rather than fully enclose them, hence sometimes underestimate the extent of the relevant region. The resulting rescaled rectangular regions are extracted from the input image and passed on as inputs to the next stage. The final output of ODGI is the combination of object boxes predicted in the last stage, \(\phi_{S}\), as well as the kept-back outputs from previous stages: \(\mathcal{B}_{1}\dots\mathcal{B}_{S-1}\). The above patch extraction procedure can be tuned via four hyperparameters: \(\tau_{\text{low}}\), \(\tau_{\text{high}}\), \(\tau_{\text{nms}}\), \(\gamma_{s}\). At training time, we allow as many boxes to pass as the memory budget allows. For our experiments, this was \(\gamma_{s}^{\text{train}}=10\). We also do not use any of the aforementioned filtering during training: neither thresholding (\(\tau_{\text{low}}^{\text{train}}=0\), \(\tau_{\text{high}}^{\text{train}}=1\)) nor NMS (\(\tau_{\text{nms}}^{\text{train}}=1\)), because both negative and positive patches can be useful for training subsequent stages. For test-time prediction we use a held-out validation set to determine their optimal values, as described in Section 4.2.

Figure 2: Overview of ODGI: Each stage consists of a single-shot detector that detects groups and individual objects, which are further processed to produce a few relevant image regions to be fed to subsequent stages and refine detections as needed.
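The three-case filtering and the offset rescaling above can be summarized in a few lines. The sketch below is illustrative only; it reuses the box dictionaries from the decoding sketch earlier and assumes that `nms` is any standard non-maximum suppression routine returning boxes sorted by confidence.

```python
# Sketch of ODGI's test-time patch extraction: discard, early-exit, or refine,
# followed by NMS and rescaling by the learned offsets.
def extract_patches(boxes, tau_low, tau_high, tau_nms, gamma, nms):
    kept, candidates = [], []
    for b in boxes:
        if b["conf"] <= tau_low:                         # case 1: discard
            continue
        if b["conf"] > tau_high and not b["is_group"]:   # case 2: strong individual
            kept.append(b)                               # exits the pipeline directly
        else:                                            # case 3: refine in next stage
            candidates.append(b)
    # keep at most gamma non-overlapping candidates with the highest confidence
    candidates = nms(candidates, tau_nms)[:gamma]
    patches = []
    for b in candidates:
        x, y, w, h = b["box"]
        o_w, o_h = b["offsets"]
        # enlarge the region by 1/o_w and 1/o_h before cropping it for the next stage
        patches.append((x, y, w / o_w, h / o_h))
    return kept, patches
```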
Moreover, these hyperparameters can be easily changed on the fly, without retraining. This allows the model to easily adapt to changes of the input data characteristics, or to make better use of an increased or reduced computational budget for instance.

**Number of stages.** Appending an additional refinement stage benefits the speed-vs-accuracy trade-off when the following two criteria are met: _First_, a low number of non-empty cells; this correlates with the number of extracted crops, thus with the number of feed-forward passes of subsequent stages. _Second_, a small average group size: Smaller extracted regions lead to increased resolution once rescaled to the input size of the next stage, making the detection task which is fed to subsequent stages effectively easier. From the statistics reported in Table 1, we observe that for classical benchmarks such as MS-COCO, using only one stage suffices as groups are often dense and cover large portions of the image: In that case, ODGI collapses to using a single-shot detector, such as [22, 17]. In contrast, datasets of aerial views such as VEDAI [21] or SDD [26] contain small-sized group structures in large sparse areas. This is a typical scenario where the proposed refinement stages on groups improve the speed-accuracy trade-off. We find that for the datasets used in our experiments \(S=2\) is sufficient, as regions extracted by the first stage typically exhibit a dense distribution of large objects. We expect the case \(S>2\) to be beneficial for very large, _e.g._, gigapixel images, but leave its study for future work. Nonetheless, extending the model to this case should be straightforward: This would introduce additional hyperparameters as we have to tune the number of boxes \(\gamma_{s}\) for each stage; however, as we will see in the next section, these parameters have little impact on training and can be easily tuned at test time.

### Training the model

We train each ODGI stage independently, using a combination of three loss terms that we optimize with standard backpropagation (note that in the last stage of the pipeline, only the second term is active, as no groups are predicted):

\[\mathcal{L}_{ODGI}=\mathcal{L}_{\text{groups}}+\mathcal{L}_{\text{coords}}+\mathcal{L}_{\text{offsets}} \tag{1}\]

\(\mathcal{L}_{\text{coords}}\) is a standard mean squared regression loss on the predicted coordinates and confidence scores, as described for instance in [22, 17]. The additional two terms are part of our contribution: The _group loss_, \(\mathcal{L}_{\text{groups}}\), drives the model to classify outputs as individuals or groups, and the _offsets loss_, \(\mathcal{L}_{\text{offsets}}\), encourages better coverage of the extracted regions. The rest of this section is dedicated to formally defining each loss term as well as explaining how we obtain ground-truth coordinates for group bounding boxes.

**Group loss.** Let \(\mathbf{b}=b_{n=1\dots N}\) be the original ground-truth individual bounding boxes. We define \(A^{ij}(n)\) as an indicator which takes value 1 _iff_ ground-truth box \(b_{n}\) is assigned to output cell \((i,j)\) and 0 otherwise:

\[A^{ij}(n)=\llbracket\,|b_{n}\cap\text{cell}_{ij}|>0\,\rrbracket,\quad\text{with }\llbracket x\rrbracket=1\text{ if }x,\text{ else }0 \tag{2}\]

For the model to predict groups of objects, we should in principle consider all the unions of subsets of \(\mathbf{b}\) as potential targets.
However, we defined our intermediate detectors to predict only one bounding box per cell by design, which allows us to avoid this combinatorial problem. Formally, let \(B^{ij}\) be the predictor associated with cell \((i,j)\). We define its target ground-truth coordinates \(\bar{B}^{ij}\) and group flag \(\bar{g}^{ij}\) as:

\[\bar{B}^{ij}=\bigcup_{n|A^{ij}(n)=1}b_{n} \tag{3}\]
\[\bar{g}^{ij}=\llbracket\#\{n|A^{ij}(n)=1\}>1\rrbracket, \tag{4}\]

with \(\cup\) denoting the minimum enclosing bounding box of a set. We define \(\mathcal{L}_{\text{groups}}\) as a binary classification objective:

\[\mathcal{L}_{\text{groups}}=-\sum_{i,j}A^{ij}\Big(\bar{g}^{ij}\log(g^{ij})+(1-\bar{g}^{ij})\log(1-g^{ij})\Big), \tag{5}\]

where \(A^{ij}=\llbracket\sum_{n}A^{ij}(n)>0\rrbracket\) denotes whether cell \((i,j)\) is empty or not. In summary, we build ground-truth \(\bar{B}^{ij}\) and \(\bar{g}^{ij}\) as follows: For each cell \((i,j)\), we build the set \(G^{ij}\) of ground-truth boxes \(b_{n}\) that intersect the cell. If the set is non-empty and only a single object box, \(b\), falls into this cell, we set \(\bar{B}^{ij}=b\) and \(\bar{g}^{ij}=0\). Otherwise, \(|G^{ij}|>1\) and we define \(\bar{B}^{ij}\) as the union of bounding boxes in \(G^{ij}\) and set \(\bar{g}^{ij}=1\). In particular, this procedure automatically adapts to the resolution \([I,J]\) in a data-driven way, and can be implemented as a pre-processing step, thus does not produce any overhead at training time.

**Coordinates loss.** Following the definition of target bounding boxes \(\bar{B}^{ij}\) in (3), we define the coordinates loss as a standard regression objective on the box coordinates and confidences, similarly to existing detectors [8, 7, 17, 4, 22].

\[\mathcal{L}_{\text{coords}}=\sum_{i,j}A^{ij}\Big(\|B^{ij}-\bar{B}^{ij}\|^{2}+\omega_{\text{conf}}\,\|c^{ij}-\bar{c}^{ij}\|^{2}\Big)+\omega_{\text{no-obj}}\sum_{i,j}(1-A^{ij})\left(c^{ij}\right)^{2} \tag{6}\]
\[\bar{c}^{ij}=\texttt{IoU}(B^{ij},\bar{B}^{ij})=\frac{|B^{ij}\cap\bar{B}^{ij}|}{|B^{ij}\cup\bar{B}^{ij}|} \tag{7}\]

The first two terms are ordinary least-squares regression objectives between the predicted coordinates and confidence scores and their respective assigned ground-truth. The ground-truth for the confidence score is defined as the intersection over union (IoU) between the corresponding prediction and its assigned target. Finally, the last term is a weighted _penalty term_ to push confidence scores for empty cells towards zero. In practice, we use the same weights as in [22], i.e. \(\omega_{\text{conf}}=5\) and \(\omega_{\text{no-obj}}=1\).
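The target construction of Eqs. (2)-(4) is a simple pre-processing step; a minimal sketch is given below, assuming boxes are given as (x_min, y_min, x_max, y_max) tuples in normalized image coordinates (the exact box parameterization used by ODGI differs, as described in Section 3.1).

```python
# Ground-truth construction per cell: assign each ground-truth box to every
# cell it intersects (Eq. 2), take the minimum enclosing box of the assigned
# objects as target (Eq. 3), and set the group flag when more than one object
# falls into the cell (Eq. 4).
def intersects(a, b):
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def build_group_targets(gt_boxes, I, J):
    targets = {}                                   # (i, j) -> (target_box, group_flag)
    for i in range(I):
        for j in range(J):
            cell = (j / J, i / I, (j + 1) / J, (i + 1) / I)
            assigned = [b for b in gt_boxes if intersects(b, cell)]
            if not assigned:
                continue
            union = (min(b[0] for b in assigned), min(b[1] for b in assigned),
                     max(b[2] for b in assigned), max(b[3] for b in assigned))
            targets[(i, j)] = (union, len(assigned) > 1)
    return targets
```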
\\tag{8}\\] The target values, \\(\\bar{o}_{h}(B^{ij},\\bar{B}^{ij})\\) and \\(\\bar{o}_{w}(B^{ij},\\bar{B}^{ij})\\), for vertical and horizontal offsets, are determined as follows: First, let \\(\\alpha\\) denote the center y-coordinate and \\(h\\) the height. Ideally, the vertical offset should cause the rescaled version of \\(B^{ij}\\) to encompass both the original \\(B^{ij}\\) and its assigned ground-truth box \\(\\bar{B}^{ij}\\) with a certain margin \\(\\delta\\), which we set to half the average object size (\\(\\delta=0.0025\\)). Formally: \\[h^{\\text{scaled}}(B,\\bar{B})=\\max(|(\\alpha(\\bar{B})\\!+\\!h(\\bar{B}) /2+\\delta)\\!-\\!\\alpha(B)|,\\] \\[|(\\alpha(\\bar{B})\\!-\\!h(\\bar{B})/2-\\delta)\\!-\\!\\alpha(B)|)\\] \\[\\bar{o}_{h}(B^{ij},\\bar{B}^{ij})=\\max(1,h(B^{ij})/h^{\\text{scaled }}(B^{ij},\\bar{B}^{ij})) \\tag{9}\\] For the horizontal offset, we do the analogous construction using the \\(B^{ij}\\)'s center x-coordinate and its width instead. **Evaluation metrics.** We quantitatively evaluate the ODGI pipeline as a standard object detector: Following the common protocol from PASCAL VOC 2010 and later challenges [5], we sort the list of predicted boxes in decreasing order of confidence score and compute the _average precision (MAP)_ respectively to the ground-truth, at the IoU cut-offs of 0.5 (_standard_) and 0.75 (_more precise_). In line with our target scenario of single-class object detection, we ignore class information in experiments and focus on raw detection. Class labels could easily be added, either on the level of individual box detections, or as a post-processing classification operation, which we leave for future work. **Multi-stage training.** By design, the inputs of stage \\(s\\) are obtained from the outputs of stage \\(s-1\\). However it is cumbersome to wait for each stage to be fully trained before starting to train the next one. In practice we notice that even after only a few epochs, the top-scoring predictions of intermediate detectors often detect image regions that can be useful for the subsequent stages, thus we propose the following training procedure: After \\(n_{e}=3\\) epochs of training the first stage, we start training the second, querying new inputs from a queue fed by the outputs of the first stage. This allows us to jointly and efficiently train the two stages, and this delayed training scheme works well in practice. ## 4 Experiments We report experiments on two aerial views datasets: VEDAI [21] contains 1268 aerial views of countryside and city roads for vehicle detection. Images are 1024x1024 pixels and contain on average 2.96 objects of interest. We perform 10-fold cross validation, as in [21]. For each run, we use 8 folds for training, one for validation and one for testing. All reported metrics are averaged over the 10 runs. Our second benchmark, SDD [26], contains drone videos taken at different locations with bounding box annotations of road users. To reduce redundancy, we extract still images every 40 frames, which we then pad and resize to 1024x1024 pixels to compensate for different aspect ratios. For each location, we perform a random train/val/test split with ratios 70%/5%/25%, resulting in total in 9163, 651 and 3281 images respectively. On average, the training set contains 12.07 annotated objects per image. sdd is overall much more challenging than vedai: at full resolution, objects are small and hard to detect, even to the human eye. 
We consider three common backbone networks for ODGI and baselines: tiny, a simple 7-layer fully convolutional network based on the tiny-YOLO architecture, yolo, a VGG-like network similar to the one used in YOLOv2 [22] and finally MobileNet [29], which is for instance used in SSD Lite [17]. More specifically, on the vedai dataset, we train a standard tiny-yolov2 detector as baseline and compare it to ODGI-_teeny-tiny_ (ODGI-tt), which refers to two-stage ODGI with tiny backbones. For sdd, objects are much harder to detect, thus we use a stronger _YOLO V2_ model as baseline. We compare this to ODGI-_teeny-tiny_ as above as well a stronger variant, ODGI-_yolo-tiny_ (ODGI-yt), in which \\(\\phi_{1}\\) is based on the yolo backbones and \\(\\phi_{2}\\) on tiny. Finally we also experiment with the lightweight MobileNet architecture as baseline and backbones, with depth multipliers 1 and 0.35. The corresponding ODGI models are denoted as ODGI-_100-35_ and ODGI-_35-35_. All models are trained and evaluated at various resolutions to investigate different grouping scenarios. In all cases, the detector grid size scales linearly with the image resolution, because of the fully convolutional network structures, ranging from a \\(32\\times 32\\) grid for 1024px inputs to \\(2\\times 2\\) for 64px. We implement all models in Tensorflow and train with the Adam optimizer [11] and learning rate 1e-3. To facilitate reproducibility, we make our code publicly available 1. ### Main results To benchmark detection accuracy, we evaluate the average precision (MAP) for the proposed ODGI and baselines. As is often done, we also apply non-maximum suppression to the final predictions, with IoU threshold of 0.5 and no limit on the number of outputs, to remove near duplicates for all methods. Besides retrieval performance, we assess the computational and memory resource requirements of the different methods: We record the number of boxes predicted by each model, and measure the average runtime of our implementation for one forward pass on a single image. As reference hardware, we use a server with _2.2 GHz Intel Xeon processor (short: CPU)_ in single-threaded mode. Additional timing experiments on weaker and stronger hardware, as well as a description of how we pick ODGI's test-time hyperparameters can be found in Section4.2. We report experiment results in Figure3 (see Table1 for exact numbers). We find that the proposed method improves over standard single-shot detectors in two ways: _First_, when comparing models with similar accuracies, ODGI generally requires fewer evaluated boxes and shorter runtimes, and often lower input image resolution. In fact, only a few relevant regions are passed to the second stage, at a smaller input resolution, hence they incur a small computational cost, yet the ability to selectively refine the boxes can substantially improve detection. _Second_, for any given input resolution, ODGI's refinement cascade generally improves detection retrieval, in particular at lower resolutions, _e.g._ 256px: In fact, ODGI's first stage can be kept efficient and operate at low resolution, because the regions it extracts do not have to be very precise. Nonetheless, the regions selected in the first stage form an easy-to-solve detection task for the second stage (see for instance Figure4 (d)), which leads to more precise detections after refinement. This also motivates our choice of mixing backbones, _e.g._ using ODGI-_yolo-tiny_, as detection in stage 2 is usually much easier. 
| **VEDAI** | [email protected] | [email protected] | CPU [s] | #boxes |
|---|---|---|---|---|
| ODGI-tt 512-256 | 0.646 | 0.422 | 0.83 | ≤ 448 |
| ODGI-tt 512-64 | 0.562 | 0.264 | 0.58 | ≤ 268 |
| ODGI-tt 256-128 | 0.470 | 0.197 | 0.22 | ≤ 96 |
| ODGI-tt 256-64 | 0.386 | 0.131 | 0.16 | ≤ 72 |
| ODGI-tt 128-64 | 0.143 | 0.025 | 0.08 | ≤ 24 |
| tiny-yolo 1024 | 0.684 | 0.252 | 1.9 | 1024 |
| tiny-yolo 512 | 0.383 | 0.057 | 0.47 | 256 |
| tiny-yolo 256 | 0.102 | 0.009 | 0.13 | 64 |

| **SDD** | [email protected] | [email protected] | CPU [s] | #boxes |
|---|---|---|---|---|
| ODGI-yt 512-256 | 0.463 | 0.069 | 2.4 | ≤ 640 |
| ODGI-tt 512-256 | 0.429 | 0.061 | 1.2 | ≤ 640 |
| ODGI-yt 256-128 | 0.305 | 0.035 | 0.60 | ≤ 160 |
| ODGI-tt 256-128 | 0.307 | 0.044 | 0.31 | ≤ 160 |
| yolo 1024 | 0.470 | 0.087 | 6.6 | 1024 |
| yolo 512 | 0.309 | 0.041 | 1.7 | 256 |
| yolo 256 | 0.160 | 0.020 | 0.46 | 64 |

| **SDD** | [email protected] | [email protected] | CPU [s] | #boxes |
|---|---|---|---|---|
| ODGI-100-35 512-256 | 0.434 | 0.061 | 0.76 | ≤ 640 |
| ODGI-100-35 256-128 | 0.294 | 0.036 | 0.19 | ≤ 160 |
| mobile-100 1024 | 0.415 | 0.061 | 1.9 | 1024 |
| mobile-100 512 | 0.266 | 0.028 | 0.46 | 256 |
| mobile-100 256 | 0.100 | 0.009 | 0.12 | 64 |

| **SDD** | [email protected] | [email protected] | CPU [s] | #boxes |
|---|---|---|---|---|
| ODGI-35-35 512-256 | 0.425 | 0.055 | 0.50 | ≤ 640 |
| ODGI-35-35 256-128 | 0.250 | 0.029 | 0.13 | ≤ 160 |
| mobile-35 1024 | 0.411 | 0.054 | 0.84 | 1024 |
| mobile-35 512 | 0.237 | 0.026 | 0.19 | 256 |
| mobile-35 256 | 0.067 | 0.007 | 0.050 | 64 |

Table 1: MAP and timing results on the VEDAI and SDD datasets for the models described in Section 4. The results for ODGI models are reported with \(\gamma_{1}^{\text{test}}\) chosen as described in Section 4.2.

Figure 3: Plots of [email protected] versus runtime (CPU) for the VEDAI and SDD datasets on three different backbone architectures. The metrics are reported as percentages relative to the baseline run at full resolution. Each marker corresponds to a different input resolution, which the marker size is proportional to. The black line represents the baseline model, while each colored line corresponds to a specific number of extracted crops, \(\gamma_{1}\). For readability, we only report results for a subset of \(\gamma_{1}\) values, and provide full plots in the supplemental material.

### Additional Experiments

**Runtime.** Absolute runtime values always depend on several factors, in particular the software implementation and hardware. In our case, software-related differences are not an issue, as all models rely on the same core backbone implementations. To analyze the effect of hardware, we performed additional experiments on weaker hardware, a _Raspberry Pi 3 Model B with 1.2 GHz ARMv7 CPU (Raspi)_, as well as stronger hardware, an _Nvidia GTX 1080Ti graphics card (GPU)_. Table 2 shows the resulting runtimes of one feed-forward pass for the same models and baselines as in Table 1. We also report the total number of pixels processed by each method, _i.e._
that have to be stored in memory during one feed-forward pass, as well as the number of parameters. The main observations of the previous section again hold: On the Raspberry Pi, timing ratios are roughly the same as on the Intel CPU, only the absolute scale changes. The differences are smaller on GPU, but ODGI is still faster than the baselines in most cases at similar accuracy levels. Note that for the application scenario we target, the GPU timings are the least representative, as systems operating under resource constraints typically cannot afford the usage of a 250W graphics card (for comparison, the Raspberry Pi has a power consumption of approximately 1.2W).

**Hyperparameters.** As can be seen in Figure 3, a higher number of crops, \(\gamma_{1}^{\text{test}}\), improves detection, but comes at a higher computational cost. Nonetheless, ODGI appears to have a better accuracy-speed ratio for most values of \(\gamma_{1}^{\text{test}}\). For practical purposes, we suggest to choose \(\gamma_{1}^{\text{test}}\) based on how many patches are effectively used for detection. We define the _occupancy rate_ of a crop as the sum of the intersection ratios of ground-truth boxes that appear in this crop. We then say a crop is _relevant_ if it has a non-zero occupancy rate, _i.e._ it contains objects of interest: For instance, at input resolution 512px on VEDAI's validation set, we obtain an average of 2.33 relevant crops, hence we set \(\gamma_{1}^{\text{test}}=3\). The same analysis on SDD yields \(\gamma_{1}^{\text{test}}=6\). Three additional hyperparameters influence ODGI's behavior: \(\tau_{\text{low}}^{\text{test}}\), \(\tau_{\text{high}}^{\text{test}}\), and \(\tau_{\text{nms}}^{\text{test}}\), all of which appear in the patch extraction pipeline. For a range of \(\gamma_{1}\in[1,10]\), and for each input resolution, we perform a parameter sweep on the held-out validation set over the ranges \(\tau_{\text{low}}\in\{0.,0.1,0.2,0.3,0.4\}\), \(\tau_{\text{high}}\in\{0.6,0.7,0.8,0.9,1.0\}\), and \(\tau_{\text{nms}}\in\{0.25,0.5,0.75\}\). Note that network training is independent from these parameters as discussed in Section 3.1. Therefore the sweep can be done efficiently using pretrained \(\phi_{1}\) and \(\phi_{2}\), changing only the patch extraction process. We report full results of this validation process in the supplemental material. The main observations are as follows: (i) \(\tau_{\text{low}}^{\text{test}}\) is usually in \(\{0,0.1\}\). This indicates that the low confidence patches are generally true negatives that need not be filtered out. (ii) \(\tau_{\text{high}}\in\{0.8,0.9\}\) for VEDAI and \(\tau_{\text{high}}\in\{0.6,0.7\}\) for SDD. This reflects intrinsic properties of each dataset: VEDAI images contain only few objects which are easily covered by the extracted crops. It is always beneficial to refine these predictions, even when they are individuals with high confidence, hence a high value of \(\tau_{\text{high}}\). In contrast, on the more challenging SDD, ODGI more often uses the shortcut for confident individuals in stage 1, in order to focus the refinement stage on groups and lower-confidence individuals which can benefit more. (iii) \(\tau_{\text{nms}}^{\text{test}}\) is usually equal to 0.25, which encourages non-overlapping patches and reduces the number of redundant predictions.
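The occupancy-rate heuristic above for choosing \(\gamma_{1}^{\text{test}}\) is straightforward to compute; a minimal sketch is given below, assuming crops and ground-truth boxes are (x_min, y_min, x_max, y_max) tuples in the same coordinate frame. The helper names are illustrative only.

```python
# Occupancy rate of a crop: summed fraction of each ground-truth box that falls
# inside the crop. A crop is "relevant" if its occupancy rate is non-zero, and
# gamma_1 is chosen as the average number of relevant crops on validation data.
def area(b):
    return max(0.0, b[2] - b[0]) * max(0.0, b[3] - b[1])

def intersection_area(a, b):
    w = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    h = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    return w * h

def occupancy_rate(crop, gt_boxes):
    return sum(intersection_area(crop, b) / area(b) for b in gt_boxes)

def choose_gamma(validation_set):
    """validation_set: iterable of (crops, gt_boxes) pairs, one per image."""
    counts = [sum(occupancy_rate(c, gt) > 0 for c in crops)
              for crops, gt in validation_set]
    return round(sum(counts) / max(len(counts), 1))
```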
| **VEDAI** | ODGI-tt 512-256 | ODGI-tt 256-128 | ODGI-tt 256-64 | ODGI-tt 128-64 | tiny-yolo 1024 | tiny-yolo 512 | tiny-yolo 256 |
|---|---|---|---|---|---|---|---|
| [email protected] | 0.65 | 0.47 | 0.39 | 0.14 | 0.68 | 0.38 | 0.10 |
| Raspi [s] | 4.9 | 1.2 | 0.87 | 0.44 | 10.5 | 2.6 | 0.70 |
| GPU [ms] | 13.9 | 11.7 | 11.7 | 11.7 | 14.3 | 8.2 | 7.0 |
| #parameters | 22M | 22M | 22M | 22M | 11M | 11M | 11M |
| #pixels | 458k | 98k | 73k | 25k | 1M | 262k | 65k |

| **SDD** | ODGI-100-35 512-256 | ODGI-100-35 256-128 | mobile-100 1024 | mobile-100 512 | mobile-100 256 |
|---|---|---|---|---|---|
| [email protected] | 0.43 | 0.29 | 0.42 | 0.27 | 0.10 |
| Raspi [s] | 6.6 | 1.6 | 17.3 | 4.0 | 0.92 |
| GPU [ms] | 19.9 | 17.6 | 23.1 | 11.0 | 9.5 |
| #parameters | 2.6M | 2.6M | 2.2M | 2.2M | 2.2M |
| #pixels | 655k | 164k | 1M | 260k | 65k |

Table 2: Additional timing results. Time is indicated in seconds for a _Raspberry Pi (Raspi)_, and in milliseconds for an _Nvidia GTX 1080Ti graphics card (GPU)_. #pixels is the total number of pixels processed and #parameters the number of model parameters.

| **SDD** | \(\gamma_{1}=1\) | \(\gamma_{1}=3\) | \(\gamma_{1}=5\) | \(\gamma_{1}=10\) |
|---|---|---|---|---|
| ODGI-tt 512-256 | **0.245** | **0.361** | **0.415** | **0.457** |
| no groups | 0.225 | 0.321 | 0.380 | 0.438 |
| fixed offsets | 0.199 | 0.136 | 0.246 | 0.244 |
| no offsets | 0.127 | 0.127 | 0.125 | 0.122 |
| ODGI-tt 256-128 | **0.128** | **0.243** | **0.293** | **0.331** |
| no groups | 0.122 | 0.229 | 0.282 | 0.326 |
| fixed offsets | 0.088 | 0.136 | 0.150 | 0.154 |
| no offsets | 0.030 | 0.040 | 0.040 | 0.040 |

Table 3: [email protected] results comparing ODGI with three ablation variants, _no groups_, _fixed offsets_ and _no offsets_ (see text).

### Ablation study

In this section we briefly report on ablation experiments that highlight the influence of the proposed contributions. Detailed results are provided in the supplemental material.

**Memory requirements.** ODGI stages are applied consecutively, hence only one network needs to live in memory at a time. However, having independent networks for each stage can still be prohibitive when working with very large backbones, hence we also study a variant of ODGI where weights are shared across stages. While this reduces the number of model parameters, we find that it can significantly hurt detection accuracy in our settings. A likely explanation is that the data distributions in stage 1 and stage 2 are drastically different in terms of object resolution and distribution, effectively causing a _domain shift_.

**Groups.** We compare ODGI with a variant without group information: we drop the loss term \(\mathcal{L}_{\text{groups}}\) in (1) and ignore group flags in the transition between stages. Table 3 (row _no groups_) shows that this variant is never as good as ODGI, even for larger numbers of crops, confirming that the idea of grouped detections provides a consistent advantage.
**Offsets.** We perform two ablation experiments to analyze the influence of the region rescaling step introduced in Section 3.2. First, instead of using learned offsets we test the model with offset values fixed to \\(\\frac{2}{3}\\), i.e. a 50% expansion of the bounding boxes, which corresponds to the value of the target offset margin \\(\\delta\\) we chose for standard ODGI. Our experiments in Table 3 show that this variant is inferior to ODGI, confirming that the model benefits from learning offsets tailored to its predictions. Second, we entirely ignore the rescaling step during patch extraction (row _no offsets_). This affects the mAP even more negatively: extracted crops are generally localized close to the relevant objects, but do not fully enclose them. Consequently, the second stage retrieves partial objects, but with very high confidence, resulting in strong false-positive predictions. In this case, most correct detections emerge from stage 1's early-exit predictions, hence increasing \\(\\gamma_{1}\\), i.e. passing forward more crops, does not improve the mAP in this scenario. ## 5 Conclusions We introduce ODGI, a novel cascaded scheme for object detection that identifies _groups of objects_ in early stages, and refines them in later stages _as needed_: Consequently, (i) empty image regions are discarded, thus saving computations especially in situations with heterogeneous object density, such as aerial imagery, and (ii) groups are typically larger structures than individuals and easier to detect at lower resolutions. Furthermore, ODGI can be easily added to off-the-shelf backbone networks commonly used for single-shot object detection: In extensive experiments, we show that the proposed method offers substantial computational savings without sacrificing accuracy. The effect is particularly striking on devices with limited computational or energy resources, such as embedded platforms. Figure 4: Qualitative results for ODGI. No filtering step was applied here, but for readability we only display boxes predicted with confidence at least 0.5. Best seen on PDF with zoom. Additional figures are provided in the supplemental material.
State-of-the-art detection systems are generally evaluated on their ability to **exhaustively** retrieve objects **densely** distributed in the image, across a wide variety of appearances and semantic categories. Orthogonal to this, many real-life object detection applications, for example in remote sensing, instead require dealing with large images that contain only a few small objects of a single class, scattered **heterogeneously** across the space. In addition, they are often subject to strict **computational constraints**, such as limited battery capacity and computing power. To tackle these more practical scenarios, we propose a novel flexible detection scheme that efficiently adapts to variable object sizes and densities: We rely on a sequence of detection stages, each of which has the ability to predict **groups of objects as well as individuals**. Similar to a detection cascade, this multi-stage architecture spares computational effort by discarding large irrelevant regions of the image early during the detection process. The ability to group objects provides further computational and memory savings, as it allows working with lower image resolutions in early stages, where groups are more easily detected than individuals, as they are more salient. We report experimental results on two aerial image datasets, and show that the proposed method is as accurate yet computationally more efficient than standard single-shot detectors, consistently across three different backbone architectures.
# Modelling the impact of repeat asymptomatic testing policies for staff on SARS-CoV-2 transmission potential

Carl A. Whitfield (corresponding author: [email protected]), University of Manchester COVID-19 Modelling Group (this recognises the equal contributions of the following authors: Jacob Curran-Sebastian, Rajenki Das, Elizabeth Fearon, Martyn Fyles, Yang Han, Thomas A. House, Hugo Lewkowicz, Christopher E. Overton, Xiaoxi Pang, Lorenzo Pellis, Heather Riley, Francesca Scarabel, Helena B. Stage, Bindu Vekaria, Feng Xu, Jingsi Xu, Luke Webb), and Ian Hall. Department of Mathematics, University of Manchester, United Kingdom.

**Highlights**

* Model of SARS-CoV-2 test sensitivity and infectiousness based on data freely available in the literature
* Simple, efficient algorithm for simulating testing in a heterogeneous population
* Regular lateral flow tests have a similar impact on transmission to PCR tests
* Adherence behaviour is crucial to actual testing impact
* Regular testing reduces the size and likelihood of outbreaks in closed populations.

## 1 Introduction

In the early stages of the COVID-19 pandemic, many countries had limited capacity for SARS-CoV-2 diagnostic testing, and so a large proportion of asymptomatic or 'mild' infections were not being detected. At this stage in the UK, testing was primarily being used in hospitals for patient triage and quarantine measures. By late 2020, many western countries had greatly increased their capacity to perform polymerase chain reaction (PCR) tests, which sensitively detect SARS-CoV-2 RNA from swab samples taken of the nose and/or throat. Importantly, several types of antigen test in the form of lateral flow devices (LFDs) came to market, promising to detect infection rapidly (within 30 mins of the swab) and inexpensively in comparison to PCR. In the UK, many of these devices underwent extensive evaluation of their sensitivity and specificity in lab and real-world settings [1], and several were made freely available to the general public. Nonetheless, there was significant concern surrounding the sensitivity of LFD tests [2; 3]: their sensitivity was observed to be low relative to PCR in early pilot studies [4; 5; 6; 7], and so they were more likely to miss true positives. Furthermore, in some cases, extremely low values for their specificity were reported [8; 9; 10], suggesting that false positives may be common, although this later appeared not to be the case for the devices that were systematically tested and rolled out to the wider public in the UK [1; 11]. Models of SARS-CoV-2 testing and observational studies predicted that regular asymptomatic testing and contact-tracing could significantly reduce transmission rates in the population [12; 13; 14; 15].
Furthermore, specific studies on hospitals [16], care-homes [17], and schools [18; 19] have demonstrated the impact that regular LFD testing can have and has had on reducing transmission in these vital settings. However, studies have also highlighted the potential pitfalls and inefficiencies of such policies [20], in particular around the factors affecting adherence to these policies. Testing can only reduce transmission if it results in some contact reduction or mitigation behaviour in those who are infectious. Therefore, studies have shown the importance of ensuring that isolation policies in workplaces are coupled with measures to support isolation, such as paid sick-leave [21; 22; 23]. The focus of this paper is modelling the potential impact of testing in workplaces and in particular its effect on reducing the transmission of SARS-CoV-2 in these settings. We consider several of the confounding factors already discussed, including test sensitivity and adherence to policy, as well as highlighting some other important features including the impact of population heterogeneity. These questions are addressed using data on the within-host dynamics of SARS-CoV-2 viral load as well as data on test sensitivity and infectiousness that is available in the literature. The model we present is generic, and similar to those used in [12; 24; 13; 14], in order to be applicable to a wide range of settings and scenarios. However, we also extend some of these findings to the care home setting, to understand its implications for evaluating and comparing potential testing policies in care homes. To give context, the results presented in section 3.3, were used to inform policy advice for staff testing in social care from the Social Care Working Group (a sub-committee of the Scientific Advisory Group for Emergencies - SAGE) which was reported to Department for Health and Social Care (DHSC) and SAGE in the UK [25]. This paper takes a more detailed look into the predictions of this model and its implications for implementing testing policies in workplaces in general. ## 2 Methods We focus on quantifying the effects of testing and isolation on a single individual, for which we use the concept of an infected person's \"infectious potential\". This was then extended to investigate the impact of testing in a generic workplace setting on transmission of SARS-CoV-2 making some basic assumptions regarding contact and shift patterns. Table 1 provides a list of all the symbols we use to represent the parameters and variables in this section, alongside a description of each. \\begin{tabular}{|c|p{142.3pt}|} \\hline Symbol & Description \\\\ \\hline \\multicolumn{3}{|c|}{**(a) Transmission modelling**} \\\\ \\hline \\(R_{0}\\), \\(R_{\\text{ind}}^{k}\\) & Basic reproduction number and expected reproduction number of individual \\(k\\) respectively. \\\\ \\hline \\(c_{0}\\), \\(c_{k}(t)\\) & Basic contact rate and contact rate of individual \\(k\\) respectively. \\\\ \\hline \\end{tabular} \\begin{tabular}{|c|p{284.5pt}|} \\hline \\(p_{k}(t)\\) & Probability of a contact between infectious individual \\(k\\) and a susceptible individual at time \\(t\\) resulting in an infection. \\\\ \\hline \\(\\beta_{0}\\), \\(J_{k}(t)\\) & Baseline probability of a contact resulting in an infection and relative infectiousness of individual \\(k\\) at time \\(t\\). \\\\ \\hline \\(X\\) & Fractional reduction in contact rate due to isolation (\\(X=1\\) used throughout). 
\\\\ \\hline \\(t_{\\text{isol}}^{(k)}\\), \\(\\tau_{\\text{isol}}\\) & The time since infection when the individual begins isolation and the duration of the isolation period respectively. \\\\ \\hline \\(\\tau_{\\text{inf}}^{(k)}\\) & Duration of infection of individual \\(k\\), defined as infectious when nasal viral load is detectable by PCR. \\\\ \\hline \\multicolumn{3}{|c|}{**(b) Viral load models**} \\\\ \\hline \\(V_{k}(t)\\) & Nasal viral load of individual \\(k\\) at time \\(t\\) since infection. \\\\ \\hline \\(V_{p}^{(k)}\\), \\(t_{p}^{(k)}\\) & Peak viral load value of individual \\(k\\) and the time since infection it occurs respectively. \\\\ \\hline \\(r_{k}\\), \\(d_{k}\\) & Exponential rate of viral growth and decay respectively in individual \\(k\\). \\\\ \\hline \\(V_{\\text{lod}}\\), \\(\\Delta\\)Ct, \\(\\omega_{p}\\) & Viral load parameters in reference [26] \\\\ \\(\\omega_{r}\\) & \\\\ \\hline \\multicolumn{3}{|c|}{**(c) Infectiousness model**} \\\\ \\hline \\(J_{p}^{(k)}\\), \\(h_{k}\\) & Theoretical maximum infectiousness (at high viral load) and steepness of Hill function relating infectiousness to viral load respectively for individual \\(k\\). \\\\ \\hline \\end{tabular} \\begin{tabular}{|p{142.3pt}|p{142.3pt}|} \\hline \\(K_{m}\\) & Threshold parameter for Hill function of viral load vs. infectiousness (same for all individuals). \\\\ \\hline IP\\({}_{k}\\) & Infectious potential of individual \\(k\\), a values of 1 indicates the same overall infectiousness as the population mean without any isolation. \\\\ \\hline \\(\\Delta\\)IP & Relative reduction in IP compared to some baseline case. \\\\ \\hline \\multicolumn{3}{|c|}{**(d) Testing models**} \\\\ \\hline \\(P_{\\rm PCR}(V)\\), & Probability of positive test result given a viral load of \\(V\\) \\\\ \\(P_{\\rm LFDh}(V)\\), & for PCR, high-sensitivity LFD, or low-sensitivity LFD \\\\ \\(P_{\\rm LFDl}(V)\\) & respectively. \\\\ \\hline \\(P_{\\rm max}\\), \\(s_{p}\\), \\(V_{50}^{(p)}\\) & PCR sensitivity parameters: maximum sensitivity, slope of logistic function, and threshold viral load for logistic function respectively. \\\\ \\hline \\(\\lambda\\), \\(s_{l}\\), \\(V_{50}^{(l)}\\) & LFD \"high sensitivity\" parameters (relative to PCR sensitivity): maximum sensitivity, slope of logistic function, and threshold viral load for logistic function respectively. \\\\ \\hline \\(P_{\\rm not}\\)\\(P_{\\rm miss}\\) & Parameters of adherence: the proportion of people who do no tests and the proportion of tests missed by those who do test respectively. \\\\ \\hline \\(A\\) & Composite adherence parameter (i.e. the proportion of tests that get performed, \\(A=(1-P_{\\rm not})(1-P_{\\rm miss})\\)). \\\\ \\hline \\end{tabular} \\begin{table} \\begin{tabular}{|c|p{284.5pt}|} \\hline \\(Z\\), \\(p\\), \\(\\tau_{\\rm pos}\\) & Parameters of simple testing model: fractional impact of isolation on infectiousness, test sensitivity in window of opportunity, and duration of window of opportunity respectively. \\\\ \\hline \\(\\tau\\) & Time between tests for a given regular mass testing policy. \\\\ \\hline \\multicolumn{3}{|c|}{**(e) Workplace models**} \\\\ \\hline \\(W_{k}(t)\\) & Shift pattern indicator (= 1 when individual \\(k\\) is at work, and = 0 otherwise). \\\\ \\hline \\(N_{s}\\), \\(N_{r}\\) & Number of employees in a model workplace and the number of residents in the care-home model respectively. 
\\\\ \\hline \\(f_{s}\\), \\(p_{c}\\) & Fraction of days staff spend on-shift (9/14 used here) and probability of them making a contact during a shift with a particular co-worker who is also in work that day respectively. \\\\ \\hline \\(a\\), \\(b\\) & Fractional contact probabilities (relative to \\(p_{c}\\)) in model care-home between staff and between residents respectively. \\\\ \\hline \\end{tabular} \\end{table} Table 1: Symbols used in this paper to represent various mathematical variables and parameters and their interpretation, broken down by category. ### Infectious and Transmission potential Without loss of generality, we define the time an individual \\(k\\) gets infected as \\(t=0\\). Assuming non-repeating contacts at rate \\(c_{k}(t)\\) then the expected number of transmission events from that infected individual (or their _Transmission potential_[24]) is \\[R_{\\text{ind}}^{(k)}=\\int_{0}^{\\infty}c_{k}(t)p_{k}(t)\\mathrm{d}t. \\tag{1}\\] Initially, we aim to gain generic insights into how testing can impact on transmission, and so we consider one other simplifying assumption, that the contact rate \\(c_{k}(t)\\) can be described by a simple step function, such that the contact rate is reduced by a factor \\(X\\) when an individual self-isolates \\[c_{k}(t)=\\begin{cases}c_{0}&\\text{if }t<t_{\\text{isol}}^{(k)}\\text{ or }t\\geq t_{\\text{isol}}^{(k)}+\\tau_{\\text{isol}}\\\\ (1-X)c_{0}&\\text{if }t_{\\text{isol}}^{(k)}\\leq t<t_{\\text{isol}}^{(k)}+\\tau_{ \\text{isol}},\\end{cases} \\tag{2}\\] where the parameters are as defined in table 1(a). Finally, we suppose that the probability of transmission per contact event is \\(p_{k}(t)=\\beta_{0}J_{k}(t)\\) where \\(\\beta_{0}\\) is a constant. We define the (arbitrary) scaling of the infectiousness \\(J(t)\\) by setting \\(\\langle\\int_{0}^{\\infty}J(t)\\mathrm{d}t\\rangle=\\langle\\tau_{\\text{inf}}\\rangle\\), where \\(\\langle\\cdot\\rangle\\) denotes a population average such that \\(\\langle x\\rangle\\equiv\\sum_{k=1}^{N}x_{k}/N\\). The parameter \\(\\langle\\tau_{\\text{inf}}\\rangle\\) is the average period for which a person can test positive via PCR (i.e. how long they have a detectable COVID infection). Therefore, ignoring isolation, if the contact rate \\(c_{0}\\) is the same for all individuals (note that this is not a necessary assumption, variations in contact rate between individuals can be absorbed into the infectiousness by a simple scaling factor) then the population baseline reproduction number without any isolation will be \\(R_{0}=\\beta_{0}c_{0}\\langle\\tau_{\\text{inf}}\\rangle\\). Then wecan rewrite equation (1), the individual reproduction number for individual \\(k\\), as \\[R_{\\rm ind}^{(k)}=\\frac{R_{0}}{\\langle\\tau_{\\rm inf}\\rangle}\\int_{0}^{\\infty}J_{ k}(t)\\left[1-XI(t;t_{\\rm isol}^{(k)},t_{\\rm isol}^{(k)}+\\tau_{\\rm isol}^{(k)}) \\right]\\,{\\rm d}t\\equiv R_{0}{\\rm IP}_{k} \\tag{3}\\] where \\(I(x;a,b)=H(x-x_{1})-H(x-x_{2})\\) is an indicator function for the range \\(x_{1}<x<x_{2}\\) such that \\(H(x)\\) is the Heaviside step function. The quantity \\({\\rm IP}_{k}\\) we define as the individual's _infectious potential_ and under these simplifying assumptions is proportional to the individual's reproduction number in a fully susceptible population. More generally IP is the relative infectiousness of an individual (omitting any isolation) integrated over the infectious period, and so still has epidemiological relevance beyond the case of a fully susceptible population. 
Throughout this paper, we will use the relative reduction in IP vs. some baseline scenario (generally a scenario with no testing) to measure the impact of testing regimes, such that \\[\\Delta{\\rm IP}=1-\\frac{\\langle{\\rm IP}\\rangle}{\\langle{\\rm IP}_{0}\\rangle}, \\tag{4}\\] where \\(\\langle{\\rm IP}_{0}\\rangle\\) is the population average IP for the baseline scenario. ### Model of viral-load, infectiousness and test positive probability We use an RNA viral-load based model of infectiousness and test positive probability, similar to those used in [24; 13], to calculate the reduction in IP for different testing and isolation behaviour. Individual viral-load trajectories \\(V_{k}(t)\\) are generated to represent the concentration of RNA (in copies/ml) that should be measured in PCR testing of a swab of the nasal cavity. In turn these are used to calculate an infected individual's infectiousness \\(J(V(t))\\) and probability of testing positive \\(P(V(t))\\) over time. We assume, as in [26], that the viral load trajectory can be described by the following piecewise exponential (PE) model \\[V_{k}(t)=\\begin{cases}V_{p}\\exp[-r(t_{p}-t)]&\\text{if }t\\leq t_{p}\\\\ V_{p}\\exp[-d(t-t_{p})]&\\text{if }t>t_{p}\\end{cases}, \\tag{5}\\] where the parameters are as defined in table 1(b). We use two different datasets to parameterise the PE model, which are laid out in the following sections. #### 2.2.1 Parameterisation of RNA viral-load model **Ke et al. (2021) data:** The PCR-measured viral-load trajectories (in RNA copies/ml) are generated at random based on the mechanistic model fits in [27]. That dataset consists of the results of daily PCR and virus culture tests to quantify RNA viral-load (in RNA copies/ml) and _infectious_ viral load (in arbitrary units akin to plaque-forming units, PFUs) respectively. There were 56 participants who had been infected by different variants of SARS-CoV-2 (up to the delta strain, as this dataset precedes the emergence of omicron). In order to use this data, we first simulated the \"refractory cell model\" (RCM) described in [27] for all 56 individual parameter sets given in the supplementary information of that article. Note that in [27] these were based on nasal swabs; we did not use the data from throat swabs. We found that, at long times, the RCM would show an (unrealistic) second growth stage of the virus. To remove this spurious behaviour from these trajectories, we fitted the parameters of the PE model (outlined in the previous section) to the data around the peak viral load. To do this, we truncated the data to consist only of viral load values around the first peak that were above a threshold of \\(V_{\\rm thresh}=(\\max V)^{0.5}\\) (i.e. half of the maximum viral load on a log-scale, measured empirically from the generated trajectory). The data for each trajectory was then truncated between two points to avoid fitting this spurious behaviour. The first point was when the viral load first surpassed \\(V_{\\rm thresh}\\). The second point was either when the viral load fell below \\(V_{\\rm thresh}\\), or when the data reached a second turning point (a minimum), whichever occurred first. We then used a simple least-squares fit on the log-scale to fit the PE model to this truncated data. In order to set realistic initial values for the non-linear least-squares fitting method, we estimated the peak viral load and time by taking the first maximum of the viral load trajectory and the time it occurred.
Then estimated the growth and decay rates by simply taking the slope of straight lines connecting the viral load at the start and end of the truncated data to this peak value. Figure 1 shows all of the PE fits against the original RCM model fits. We can see that the PE model captures the dynamics around the peak well (which, for our purposes, is the most important part of the trajectory as it is when individuals are most likely to be infectious and test positive). However, this fitting comes at the expense of losing information about the changing decay rate at longer times (which has a much smaller effect on predicting testing efficacy). Supplementary table S1 summarises the mean and covariance of the maximum likelihood multivariate lognormal distribution of the PE model param eters. We use this distribution to generate random parameter sets for the PE model which are used to simulate different individuals. **Kissler et al. (2021) data:** The data from [26] consists of 46 individuals identified to have \"acute\" SARS-CoV-2 infections while partaking in regular PCR tests. To simulate the data from this paper, we sample the individual-level posteriors (available at [28]) directly. First, we converted the parameters contained in that dataset (\\(\\Delta Ct\\), difference between minimum Ct and the limit of detection (LoD); \\(\\omega_{p}\\), length of growth period from LoD to peak viral-load; and \\(\\omega_{r}\\) length of decay Figure 1: Left: Spaghetti plot of the 56 RCM model paramterisations given in [27]. Right: the same plot but showing the PE model parameterisations fitted in this paper. Note that the point-wise median and mean profiles shown here were computed on the scale of \\(\\log_{10}\\)copies/ml. period from peak viral-load to LoD) as follows \\[\\log_{10}(V_{p}) =\\log_{10}(V_{\\rm{lod}})+\\frac{\\Delta Ct}{3.60971}, \\tag{6}\\] \\[r =\\log(10)\\left[\\frac{\\log_{10}(V_{p})-\\log_{10}(V_{\\rm{lod}})}{ \\omega_{p}}\\right],\\] (7) \\[d =\\log(10)\\left[\\frac{\\log_{10}(V_{p})-\\log_{10}(V_{\\rm{lod}})}{ \\omega_{r}}\\right]. \\tag{8}\\] The parameter \\(V_{\\rm{lod}}=10^{2.65761}\\)copies/ml is the viral load corresponding to a Ct-value of 40 and 3.60971 is the fitted slope between Ct-value and RNA viral-load in \\(log_{10}\\)(copies/ml) in that study. Finally, in order to determine the final parameter \\(t_{p}\\), we used the result from [13] based on the same dataset that the viral load at time of infection is \\(V_{0}=10^{0.5255}\\)copies/ml, such that \\[t_{p}=\\frac{\\log V_{p}-\\log V_{0}}{r}. \\tag{9}\\] We separated the converted datasets into those individuals who were labelled as \"symptomatic\" and those who were not, as there was shown to be statistically significant differences in these populations in [26]. Then, to generate a new viral load trajectory a single parameter set is sampled from one of these two datasets, depending on whether the trajectory corresponds to a simulated individual who would develop symptoms or not. Example trajectories for the two cases are shown in figure 2. #### 2.2.2 Parameterisation of infectiousness as a function of RNA viral load In order to model infectiousness, we use \"infectious virus shed\" as fitted in [27] as a proxy. In [27] infectious virus shed is a Hill function of viral load \\[J_{k}(V_{k})=\\frac{J_{p}V_{k}^{h}}{V_{k}^{h}+K_{m}^{h}}. 
\\tag{10}\\]We found that neither of the random parameters \\(J_{p}\\) nor \\(h\\) (as given in [27]) were significantly correlated to any of the PE model parameters fitted for the same individuals. Therefore, we used the maximum likelihood bivariate lognormal distribution of the random parameters of this model \\(\\{J_{p},h\\}\\), given in Supplementary table S1. These infectiousness parameters are generated independently of the individual's RNA viral-load parameters \\(\\{V_{p},t_{p},r,d\\}\\). Examples of \\(J_{k}(x)\\) are shown in figure 3(a) as well as the mean relationship. Note that, in [27], the magnitude of \\(J_{p}\\) is given in arbitrary units and so the IP measure we use here is also in arbitrary units. Thus, we present results in terms of a relative reduction in IP (\\(\\Delta\\)IP), which is independent of the choice of infectiousness units. Figure 2: Spaghetti plot of trajectories generated using random samples of the posterior distribution of PE parameters in [26]. The point-wise mean and median lines are computed using 10,000 samples. Note that, for the purposes of this plot, the trajectories are truncated so that any values below \\(V_{\\text{lod}}\\) are set to \\(V_{\\text{lod}}\\) (so that spuriously small values on the log-scale do not affect the averages. The left graph shows samples from the symptomatic population in that study and right the asymptomatic. #### 2.2.3 Parameterisation of test sensitivity as a function of RNA viral load The probability of testing positive is also assumed to be deterministically linked to RNA viral load. Note that we assume this because of the data available on LFD test sensitivity as a function of RNA viral load measured by PCR, however several studies have suggested that LFD test sensitivity is actually more closely linked to infectious viral load or culture positive probability [29; 30; 31; 32]. Furthermore, the sensitivity relationships used here imply that the outcomes of subsequent tests are independent. This is likely to be an unrealistic assumption in practice since factors other than viral load may also influence test outcome, and these may vary between individuals. Figure 3: (a) The deterministic relationship between infectiousness and RNA viral load used here. Grey lines show individual samples as each individual is assumed to have a different random value of \\(J_{p}\\) and \\(h\\). The red line shows the population mean. (b) The different relationships between test-positive probability and RNA viral load used in this paper. The blue line shows the assumed PCR test senstivity, while orange shows the ‘high’ sensitivity (HS) LFD case and green shows the ‘low’ sensitivity (LS) case (note that the peak sensitivity is actually higher in the LS case, but overall sensitivity is lower). To model PCR testing, we use a logistic function with a hard-cutoff to account for the cycle-threshold (Ct) cutoff \\[P_{\\text{PCR}}(V_{k})=\\begin{cases}0&\\text{if }V_{k}<V_{\\text{cut}}\\\\ P_{\\text{max}}\\left[1+e^{-s_{p}(\\log_{10}V_{k}-\\log_{10}V_{50}^{(p)}}\\right]^{-1 }&\\text{if }V_{k}\\geq V_{\\text{cut}}\\end{cases}. \\tag{11}\\] The parameters \\(P_{\\text{max}}\\), \\(V_{50}\\) and \\(s_{t}\\) are extracted by a maximum-likelihood fit of the data on the \"BioFire defense\" PCR test given in [33], and are given in Supplementary table S1. Note that we fitted to logistic, normal cumulative distribution function (CDF), and log-logistic functions and chose maximum likelihood fit. 
To model LFD testing, we use two data sources to establish a 'low' and 'high' sensitivity scenario. In the 'high' sensitivity case we use the phase 3b data collected in [1]. We fitted logistic, normal CDF and log-logistic models to the data using a maximum likelihood method and chose the most likely fit (which was the logistic model). The phase 3b results in this study came from community testing in people who simultaneously tested positive by PCR. Therefore, the overall LFD sensitivity is given by the logistic function multiplied by the fraction who would test positive by PCR, i.e. \\[P_{\\text{LFDh}}(V_{k})=\\lambda P_{\\text{PCR}}(V_{k})\\left[1+e^{-s_{l}(\\log_{10 }V_{k}-\\log_{10}V_{50}^{(l)})}\\right]^{-1}. \\tag{12}\\] The parameters \\(s_{l}\\) and \\(V_{50}^{(l)}\\) are determined by maximum likelihood fit, which the relative sensitivity \\(\\lambda\\) is included to account the difference in sensitivity between self-testing and testing performed by lab-trained staff [1]. In the 'low' sensitivity LFD case, we use data from regular LFD and PCR testing in the social care sector in the UK [34]. Since this data is based on positive PCR results, we assume again \\(P_{\\text{LFDl}}(V_{k})=\\theta(V_{k})P_{\\text{PCR}}(V_{k})\\). The function \\(\\theta(V_{k})\\) is a stepwise function, and is parameterised in Supplementary table S1. All of the test-positive probability relationships are shown in figure 3(b). ### Adherence to policy In this paper we consider a number of testing policies which take the form of instructions to employees in the workplace regarding the number of tests to carry out per week. In general, we assume that PCR tests are carried out with 100% adherence, as these are assumed to be 'enforced' (our reference example is PCR testing of hospital and care home staff in the UK, which are carried out at the workplace by trained staff). LFD tests on the other hand are assumed voluntary, as these are carried out at home and reported online. We consider two behavioural parameters to model how individuals may choose to adhere with LFD testing policies, \\(P_{\\mathrm{not}}\\) and \\(P_{\\mathrm{miss}}\\) (see table 1(d)). In sections 3.2 we explicitly model two behavioural extremes, * \"All-or-nothing\": \\(P_{\\mathrm{not}}=1-A\\) and \\(P_{\\mathrm{miss}}=0\\), i.e. a fixed fraction of people complete all tests, while the rest complete none. * \"Leaky\": \\(P_{\\mathrm{miss}}=1-A\\) and \\(P_{\\mathrm{not}}=0\\), i.e. all people miss tests at random with the same probability. These cases demonstrate how the same overall adherence to testing (\\(0\\leq A\\leq 1\\)) can lead to different testing outcomes, depending on behaviour. The difference between these two behaviours can be captured by a simple model of \\(\\Delta\\mathrm{IP}\\) (relative to the case with no isolation) by making the following assumptions. Suppose each individual is supposed to test every \\(\\tau\\) days and there is some window \\(\\tau_{\\mathrm{pos}}\\) when they can test positive with probability \\(p\\). Ifthey do test positive, it will reduce their overall infectiousness by a factor \\(Z\\). Then, for the average individual, the relative reduction in IP will be \\[\\Delta\\text{IP}_{\\text{\\,AoN}} \\approx AZ\\left(1-(1-p)^{\\tau_{\\text{pos}}/\\tau}\\right), \\tag{13}\\] \\[\\Delta\\text{IP}_{\\text{\\,leaky}} \\approx Z\\left(1-(1-p)^{A\\tau_{\\text{pos}}/\\tau}\\right). 
\\tag{14}\\] Thus, \\(\\Delta\\text{IP}\\) is expected to scale linearly with adherence for the 'all-or-nothing' case, since testing will only impact a fixed fraction of the population, while in the 'leaky' case \\(\\Delta\\text{IP}\\) will scale non-linearly with \\(A\\) as it will change the expected number of tests a person will take during the period when they can test positive (\\(A\\tau_{\\text{pos}}/\\tau\\)). ### Workplace contact, shift and testing patterns To simulate the impact of testing in a workplace setting we model some simplified examples of contact and work patterns. #### 2.4.1 Modification of Infectious Potential to account for shift patterns Equation (2) supposes that the contact rate is constant over time unless the individual is isolating. When modelling a workplace intervention, we are generally interested in the effect on workplace transmission. Therefore, to generate the results in section 3.3, we consider a modified contact pattern that is proportional to work hours \\(c^{(w)}(t)=W(t)c(t)\\) where \\(W(t)=1\\) during scheduled work hours and 0 otherwise. Note, we also take \\(X=1\\), as we assume work contacts are completely removed by isolation. This parameterisation therefore ignores potential contacts with colleagues outside of work hours, which may also be relevant. For simplicity, we assume that all workers do the same fortnightly shift pattern, shown in table 2, so that all employees work, on average, 4.5 days per week, based on average working hours in the UK social care sector. #### 2.4.2 Definition of testing regimes simulated Table 2 defines the shift and testing patterns that we consider. Note that tests are assumed to take place only on work days. The numerical implementation of calculations of \\(\\Delta\\)IP is detailed in Ap \\begin{table} \\begin{tabular}{|l|c|} \\hline Name & Pattern \\\\ \\hline Shift & & \\\\ pattern & & \\\\ \\hline Daily & & \\\\ testing & & \\\\ \\hline 3 LFDs & & \\\\ per week & & \\\\ \\hline 2 LFDs & & \\\\ per week & & \\\\ \\hline 1 PCR & & \\\\ per week & & \\\\ \\hline 1 PCR per & & \\\\ fortnight & & \\\\ \\hline \\end{tabular} \\end{table} Table 2: Visual representation of the shift and testing patterns considered in sections 3.3, 3.4, and 3.5. Within each row, squares from left to right indicate days of the week, upper squares indicate the first week of the pattern, and the lower squares indicate the second week. A red square means there is a shift/test scheduled for that day, while a white square indicates there is not. pendix A.2. ### Models of transmission in a closed workplace To demonstrate the impact of testing interventions in a closed population, we consider two simple model workplaces. The algorithm to simulate transmission in these workplaces is detailed in A.3. For ease of comparison, we measure the impact of testing in these workplaces by the final outbreak size resulting from a single index case in a fully susceptible population. However in reality, mass asymptomatic testing is more useful when there is high community prevalence, and so we would expect repeated introductions over any prolonged period. We do not consider the case of repeated introductions here, nor immunity in the population, but these methods can readily be extended to that case. #### 2.5.1 A single-component workplace model We consider a workplace of \\(N_{s}\\) fixed staff, all of whom work the same shift pattern given by table 2. 
Each individual's shift pattern starts on a random day (from 1 to 14) so it is assumed that approximately the same number of workers are working each day. Contacts for each infectious individual on shift each day are drawn at random from the rest of the population on shift that day with fixed probability \\(p_{c}\\). Each contact is assumed to have probability of infection \\[p_{k,k^{\\prime}}(t)=1-\\exp\\left[-\\beta_{0}J_{k}(t)s_{k^{\\prime}}(t)\\right] \\tag{15}\\] where \\(\\beta_{0}\\) is the (average) transmission rate for the contact, \\(J_{k}(t)\\) is the infectiousness of the infectious individual (\\(k\\in\\{1,\\ldots,N_{s}\\}\\)), \\(s_{k}\\in\\{0,1\\}\\) is the susceptibility of the contact (\\(k^{\\prime}\\in\\{1,\\ldots,N_{s}\\}\ eq k\\)). An upper bound for the approximate reproduction number in this workplace can be calculated as follows \\[R_{\\mathrm{wp}}\\lesssim f_{s}^{2}\\beta_{0}p_{c}(N_{s}-1)\\langle\\tau_{\\mathrm{ inf}}\\rangle(1-\\Delta\\mathrm{IP}) \\tag{16}\\] where \\(f_{s}\\) is the fraction of days on-shift (9/14 here) and \\(1-\\Delta\\mathrm{IP}\\) is the relative infectious potential after taking into account any testing and isolation measures. #### 2.5.2 A two-component model to represent transmission in a care-home We also extend the model in the previous section to a basic model of contacts in a care home consisting of two populations: staff and residents. We assume the same shift patterns apply for the staff as in the previous section, but that residents are present in the care home on all days. As in the previous section, we assume that contacts are drawn at random each day for infectious individuals from the pool of other individuals at work that day. However, the contact probability is different for resident-resident, staff-resident, and staff-staff contacts such that it can be expressed by the following matrix \\[\\mathbf{P}_{c}=p_{c}\\begin{pmatrix}a&1\\\\ 1&b\\end{pmatrix}, \\tag{17}\\] such that \\(p_{c}\\), \\(ap_{c}\\), and \\(bp_{c}\\) are the staff-resident, staff-staff, and resident-resident contact probabilities respectively. We assume that transmission dynamics are the same for all groups but that testing and isolation measures can be applied separately. It is important to note that here we only consider the case of a closed population with a single (staff) index case, which is most similar to the early pandemic (i.e. a fully susceptible population, low incidence, and staff ingress is more likely than patient ingress due to limits on visitors). The situation becomes complex in more realistic scenarios (e.g. [17]) however many of the lessons we can learn from this simple case are transferable (at least qualitatively). In section 3.5 we consider two cases to measure the impact of mass asymptomatic testing of staff. First, to mimic the impact of social distancing policies for staff we vary \\(a\\) while keeping \\(p_{c}\\) and \\(b\\) constant (i.e. minimising disruption for residents) and compare the absolute and relative impacts of testing interventions. Second, to show the effect of the relative contact rates, which may vary widely between individual care-homes, we vary \\(a\\) and \\(b\\) such that the average number of contacts an infectious person will make is fixed, meaning that we impose the following constraint \\[b=1-\\frac{N_{s}f_{s}(N_{s}f_{s}-1)}{N_{r}(N_{r}-1)}(a-1) \\tag{18}\\] where \\(N_{r}\\) is the size of the fixed resident population, and \\(N_{s}\\) is the size of the fixed staff population. 
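For the two-component care-home model, the contact structure of Eq. (17) and the constraint of Eq. (18) are straightforward to encode. The Python sketch below does so; the staff and resident population sizes and the value of \\(p_{c}\\) used in the example are purely illustrative.

```python
import numpy as np

def contact_matrix(p_c, a, b):
    """Contact probability matrix of Eq. (17), rows/columns ordered (staff, residents)."""
    return p_c * np.array([[a, 1.0],
                           [1.0, b]])

def resident_scale(a, N_s, N_r, f_s=9.0 / 14.0):
    """Resident-resident scaling b implied by Eq. (18): chosen so that the expected
    number of contacts made by an infectious individual stays fixed as the
    staff-staff scaling a is varied."""
    return 1.0 - (N_s * f_s) * (N_s * f_s - 1.0) / (N_r * (N_r - 1.0)) * (a - 1.0)

# Illustrative sizes: 50 staff and 30 residents, with staff-staff contacts halved.
a = 0.5
b = resident_scale(a, N_s=50, N_r=30)
P_c = contact_matrix(p_c=0.3, a=a, b=b)   # p_c value is illustrative only
```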
We will use this to investigate how testing policies can have different impacts in different care-homes even if they have similar overall transmission rates or numbers of cases. ## 3 Results ### Role of population heterogeneity in testing efficacy The individual viral-load based model introduced here accounts for correlations between infectiousness and the probability of testing positive both between individuals and over time. For example, people with a higher peak viral load are more likely to be positive and more likely to be (more) infectious. In this section we demonstrate how the model assumptions around heterogeneity in, and correlations between, peak viral load and peak infectiousness affect predictions of testing efficacy. To do this, we first calculate a population average model, which uses the time-point average of the population of \\(N\\) individuals for the infectiousness and test-positive probability \\[\\langle J(t)\\rangle =\\frac{1}{N}\\sum_{k=1}^{N}J(V_{k}(t)), \\tag{19}\\] \\[\\langle P_{\\text{LFD}}(t)\\rangle =\\frac{1}{N}\\sum_{k=1}^{N}P_{\\text{LFD}}(V_{k}(t)). \\tag{20}\\] These profiles are then assigned to all individuals in a parallel population of \\(N\\) individuals so that the impact of population heterogeneity can be compared directly. Figure 4 compares the relative overall change in infectious potential (\\(\\Delta\\)IP) for a population of infected individuals performing LFDs at varying frequency for the heterogeneous models vs their homogeneous (population average) counterparts. This is shown for several cases; in figure 4(a) the Ke et al. based model is used, which has low population heterogeneity in the PE parameters. Therefore, we see little difference between the full model prediction and the population average case. The 'high' sensitivity testing model outperforms the 'low' sensitivity model, as expected. However, the difference is proportionally smaller for high testing frequencies because frequent testing can compensate for low sensitivity to some extent. Furthermore, in the 'high' sensitivity model subjects are likely to test positive for longer periods of time and so retain efficacy better at longer periods between tests. Comparing this to figure 4(b) we see that there is a much larger discrepancy between the Kissler et al. model and its population average. This is because there is much more significant population heterogeneity in this model, and the correlation between infectiousness and test positive probability means that individuals who are more infectious are more likely to test positive and isolate. Therefore the effect of testing is significantly larger than is predicted by the population average model. This effect is also much larger in the 'low' sensitivity model, because in this model the test-positive probability has a very similar RNA viral load dependence to the infectiousness (with approximately \\(10^{6}\\) copies/ml being the threshold between low/high test-positive probability and infectiousness). This means there is greater correlation between these two properties and so this effect is amplified. The between-individual relationship of infectiousness and viral load for SARS-CoV-2 is still largely unknown. While studies have shown a correlation between viral load and secondary cases [35; 15], this could also be affected by the timing of tests (which may also be correlated to symptom onset) and therefore the within-individual variability in viral load over time.
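The population-average comparison of Eqs. (19)-(20) above can be reproduced with a few lines of code. The sketch below is a generic Python outline in which the per-individual infectiousness and LFD-positivity curves are assumed to be supplied as callables (for example, built from the functions sketched in Section 2.2); variable and function names are ours.

```python
import numpy as np

def population_average_profiles(t_grid, individuals, infectiousness_fn, p_lfd_fn):
    """Time-point averages <J(t)> and <P_LFD(t)> over a sampled population,
    as in Eqs. (19) and (20). `individuals` is a list of per-person parameter
    sets; the two callables map (t_grid, params) to J and P_LFD arrays."""
    n = len(individuals)
    J_avg = np.zeros_like(t_grid, dtype=float)
    P_avg = np.zeros_like(t_grid, dtype=float)
    for params in individuals:
        J_avg += infectiousness_fn(t_grid, params) / n
        P_avg += p_lfd_fn(t_grid, params) / n
    return J_avg, P_avg
```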
We found in this section that even though the two models of RNA viral load we use predict very similar testing efficacy, they highlight important factors to consider when modelling repeat asymptomatic testing: 1. Population heterogeneity: greater heterogeneity means poorer agreement between the individual-level and population-average model predictions. 2. Between individual correlation of infectiousness and test-positive probability: the greater this correlation is the more important the hetero geneity is to predictions of test efficacy. Thus, quantifying population heterogeneity in infectiousness (i.e. super-spreading) and likelihood of testing positive while infectious and the correlation between the two can significantly affect predictions of efficacy of repeat asymptomatic testing. Figure 4: Plots of \\(\\Delta\\)IP, the relative change in IP due to regular asymptomatic testing with LFDs vs. the time between tests. (a) Results using the Ke et al. based model of RNA viral load and (b) using the Kissler et al. based model. Blue lines show the results using the ‘high’ sensitivity model for LFD testing and orange the ‘low sensitivity model’. Dashed lines show the results using a population average model, based on using the population mean infectiousness and test-positive probability for all individuals. In all cases the reduction is calculated as in equation (4) relative to the baseline case with no testing and no isolation at symptom onset (\\(P_{\\text{isol}}=0\\)). Total adherence to all testing regimes is assumed in this case, and each point is calculated using the same population of 10,000 generated individuals. The shaded areas show the 95% confidence intervals in the mean of \\(\\Delta\\)IP, approximated by 1000 bootstrapping samples. ### Modelling the impact of non-adherence In figure 5 we calculate the two extremes of adherence behaviour considered here, comparing the \"all-or-nothing\" and \"leaky\" adherence models. We see that these different ways of achieving the same overall adherence only differ noticeably when testing at high frequency (as shown by the results for daily testing in figure 5). At high frequency, 'leaky' adherence results in a greater reduction in IP, because even though tests are being missed at the same rate, all individuals are still testing at a high-rate and so have a high chance of recording a positive test. Conversely, in the 'all-or-nothing' case, obstinate non-testers can never isolate, so the changing test frequency can only impact that sub-population who do test. We also see from figure 5 that the relative reduction in IP (\\(\\Delta\\)IP) is well approximated by fitting the reduced model of testing in equations (13) and (14) to the data for \\(\\Delta\\)IP. The fitted parameters for the two viral load models are given in table. We fit the models for the 'all-or-nothing' and 'leaky' cases separately to the \\(\\Delta\\)IP data using a least-squares method. Table 3 shows that the two cases give very similar testing parameters, suggesting that the simple model captures the behaviour well. The solid lines in figure 5 show the simple model results (equations (13) and (14)) using the mean fitted parameters from the final column of table 3. To summarise, a single adherence parameter may be sufficient to capture how adherence affects the impact that regular testing has on infectious potential, but only when the testing is not very frequent (e.g. every 3 or more days). 
When testing is frequent, the very simple model of equations (13) and (14) can be used to estimate the potential impact of testing at different frequencies on infectious potential for two extreme models of behaviour. Namely, when adherence is 'leaky' or 'all-or-nothing'. ### Comparison of staff testing policies for high-risk settings In the previous sections we have considered simple testing strategies consisting of repeated LFDs at a fixed frequency. In this section we consider scenarios more relevant to workplaces, outlined in table 2. Due to the mix of test types, it is less clear _a priori_ how the regimes will compare in efficacy. A key question we consider is whether substituting a weekly PCR test with an extra LFD test results in the better, worse, or similar \\(\\Delta\\) IP, depending on the underlying assumptions. Figure 6 shows the main results comparing the various regimes. At 100% adherence and high LFD sensitivity (figure 6(a)), we find some interesting results, primarily that \"2 LFDs + 1 PCR\" and \"3 LFDs\" perform similarly, as do \"3 LFDs + 1 PCR\" and \"Daily LFDs\", suggesting that, in theory, \\begin{table} \\begin{tabular}{|l|l|c|c|c|c|c|c|} \\hline RNA viral & Adherence & \\multicolumn{4}{c|}{Fitted parameters} & \\multicolumn{2}{c|}{Mean of fitted parameters} \\\\ \\cline{3-8} load model & behaviour & Z & p & \\(\\tau_{\\text{pos}}\\) (days) & Z & p & \\(\\tau_{\\text{pos}}\\) (days) \\\\ \\hline \\multirow{3}{*}{Ke et al.} & ‘Lally-or-nothing’ & 1.0 & 0.562 & 4.47 & \\multirow{3}{*}{1.0} & \\multirow{3}{*}{0.558} & \\multirow{3}{*}{4.37} \\\\ \\cline{2-3} \\cline{5-8} & ‘Leaky’ & 1.0 & 0.555 & 4.26 & & & \\\\ \\hline \\multirow{3}{*}{Kissler et al.} & ‘All-or-nothing’ & 0.958 & 0.561 & 4.38 & \\multirow{3}{*}{0.945} & \\multirow{3}{*}{0.562} & \\multirow{3}{*}{4.28} \\\\ \\cline{2-3} \\cline{5-8} & ‘Leaky’ & 0.932 & 0.563 & 4.19 & & & \\\\ \\hline \\end{tabular} \\end{table} Table 3: Parameters of the simplified models of \\(\\Delta\\)IP given in equation (13) and (14) fitted to the scatter plot data in figure 5. The parameters were fitted separately for each viral load model and each model of adherence behaviour. The final column shows the mean parameters from the ‘all-or-nothing’ and ‘leaky’ model fits, which were used to generate the line data in figure 5. substituting PCR tests for LFD tests does not have a large impact on transmission reduction. This is because, even though PCR tests are more sensitive than LFDs, the turnaround time from taking a PCR test to receiving the result limits the potential reduction in IP that can be achieved by these tests. This is demonstrated in figure 7 by the change in \\(\\Delta\\)IP as we change the mean PCR turnaround time. In the low sensitivity case (figure 6(b)), there is a larger difference between \"2 LFDs + 1 PCR\" and \"3 LFDs\", but again \"Daily LFDs\" outperform \"3 LFDs + 1 PCR\" (at 100% adherence) since the \"3 LFDs\" are not sensitive to the \"3 LFDs\". Figure 5: Relative reduction in IP (\\(\\Delta\\)IP) vs. adherence calculated for the (a) Ke et al. based model of RNA viral load and (b) Kissler et al. based model. The circles show the results when adherence is ‘all-or-nothing’ while squares show the case when it is ‘leaky’. Error bars show the 95% confidence intervals of the mean approximated by 1000 bootstrapping samples. Additionally, the solid lines show equation (13) while the dashed lines show equation (14) with parameters given by the final column of table 3. 
The dot and line colours correspond to different testing frequencies, as labelled in the captions. In all cases the reduction was calculated as in equation (4) relative to the baseline case with no testing and no isolation at symptom onset (\\(P_{\\text{isol}}=0\\)). Each point was calculated using the same population of 10,000 generated individuals and the ‘high’ sensitivity model of LFDs was used. the high-frequency of testing counteracts the low sensitivity by providing multiple chances to test positive. Another important implication of figure 6 is the effect of varying LFD adherence (in this case assuming 'leaky' adherence behaviour). Naturally, this impacts much more strongly on the LFD-only regimes demonstrating the usefulness of the PCR tests as a less-frequent but mandatory and highly sensitive test as a buffer in case LFD adherence is low or falling. Another factor to consider when changing or between testing regimes is how this will Figure 6: Reduction in population infectious potential expressed as a percentage relative to a baseline case with no testing and symptom isolation with probability \\(P_{\\text{isol}}=1.0\\). Testing regimes simulated are from left to right in order of their effectiveness at 100% adherence. (a) and (b) only differ in the model of LFD sensitivity used, as labelled. Each bar is calculated using 10,000 samples, lighter coloured bars are used to show the extra \\(\\Delta\\)IP gained by increasing LFD adherence from a baseline of 40%. A mean PCR turnaround time of 45h is assumed. Error bars indicate 95% confidence intervals of the mean, approximated using 1000 bootstrapping samples. affect adherence levels. For example, if the workforce is performing 60% of the LFD tests that are set out by the testing regime, but then the regime is changed from '2 LFDs + 1 PCR' to 'Daily LFDs only', the adherence rates are likely to fall. In this case, the results in figure 6 can be used to estimate how much they would need to fall for \\(\\Delta\\)IP to go down (for this example, only in the region of approx 5-10%, assuming high LFD sensitivity and 'leaky' adherence). Note that, if 'all-or-nothing' adherence was used instead, the impact of non-adherence for the 'LFD only' regimes would be even greater than shown in figure 6, due to the arguments outlined in section 3.2. Finally, figure 8 shows what happens to the population average infectious Figure 7: The same plot as figure 6 except the lighter bars show the effect of reducing PCR turnaround time (TaT) from a baseline of 72h. A ‘leaky’ adherence of 70% is assumed. Error bars indicate 95% confidence intervals of the mean, approximated using 1000 bootstrapping samples. ness under different testing regimes. As testing frequency is increased, the bulk of infectiousness is pushed earlier in the infection period, as individuals are much more likely to be isolated later in the period. This is an example of how testing and isolation interventions can not only reduce the reproduction number, but also the generation time, as any infections that do occur are more likely to occur earlier in the infectious period. To summarise, we find that the effect of LFD and PCR tests are comparable when PCR tests have a \\(\\sim\\) 2-day turnaround time, in line with other studies [12, 24, 36]. However, differential adherence is likely to be the key determinant of efficacy when switching a PCR for LFD. 
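Because adherence behaviour matters so much, it is useful to have the simple model of Eqs. (13) and (14) available as code. The sketch below evaluates both behavioural extremes using the mean Ke et al. parameters fitted in Table 3 (\\(Z=1.0\\), \\(p=0.558\\), \\(\\tau_{\\text{pos}}=4.37\\) days); the example values of adherence and test interval are arbitrary.

```python
def delta_ip_all_or_nothing(A, tau, Z=1.0, p=0.558, tau_pos=4.37):
    """Eq. (13): a fraction A of the population does every test, the rest do none."""
    return A * Z * (1.0 - (1.0 - p) ** (tau_pos / tau))

def delta_ip_leaky(A, tau, Z=1.0, p=0.558, tau_pos=4.37):
    """Eq. (14): everyone tests, but each scheduled test is missed with probability 1 - A."""
    return Z * (1.0 - (1.0 - p) ** (A * tau_pos / tau))

# Daily testing (tau = 1 day) at 70% overall adherence under the two behaviours.
print(delta_ip_all_or_nothing(A=0.7, tau=1.0))  # scales linearly in A
print(delta_ip_leaky(A=0.7, tau=1.0))           # saturates more quickly with A
```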
Observed rates of adherence to workplace testing programmes will likely depend on numer Figure 8: Population mean values \(\langle J_{t}\rangle\) under different testing and isolation regimes. The blue line shows the baseline infectiousness, with no isolation, and the orange line shows the case with symptomatic isolation with perfect adherence \(P_{\text{isol}}=1.0\). The testing regimes simulated are labelled in the caption and are all simulated assuming ‘leaky’ adherence at 70%. (a) Shows the results using the Ke et al. model of RNA viral load and (b) the Kissler et al. model. All curves are calculated using 10,000 samples and the 95% confidence intervals on the mean are given by the shaded area (when this is thicker than the line). ous factors including how the programme is implemented, the measures in place to support self-isolation and the broader epidemiological context (i.e. prevalence and awareness). There is uncertainty in the parameters used to make these predictions, so we quantify the impact of parameter uncertainty on \(\Delta\)IP by performing a sensitivity analysis, which is presented in B. This shows that certain parameters are less important, such as the coupled timing of peak viral load and symptom onset, or the viral load growth rate. Unsurprisingly, the \(\Delta\)IP predictions for 2 LFDs per week are most sensitive to the LFD sensitivity parameters \(\lambda\) and \(\mu_{l}\). However, the daily LFDs case is most sensitive to the infectiousness parameter \(h\). This is because the sensitivity of individual (independent) tests becomes less important as they are repeated regularly, and a key determinant of IP then is the proportion of infectiousness that occurs in the early stages of infection, before isolation can feasibly be triggered, which increases with smaller \(h\). Interestingly however, the infectiousness threshold parameter \(K_{m}\) does not seem to have as large an effect. In part this is because it has less uncertainty associated with it, but \(h\) also has a more profound effect because decreasing it not only increases pre-symptomatic infectiousness (as does decreasing \(K_{m}\)), it also reduces the relative infectiousness around peak viral load, thereby decreasing the value of LFD-triggered isolation (which is most likely to start near to peak viral load). PCR tests for SARS-CoV-2 generally come at a much higher financial cost than LFD tests because of the associated lab costs, and so are not feasible for sustained deployment by employers or governments. In this context, the potential impact of regular LFD testing is clear and sizeable, so long as policy adherence can be maintained. The two models of LFD test sensitivity change the results, but qualitatively we see that if all people perform 2 LFDs per week (either the '2 LFDs' regime at 100% adherence or '3 LFDs' at \(\sim 70\%\) adherence), then the reproduction number can be halved (at least) and potentially reduced by up to 60-70%. This is a sizeable effect and so regular asymptomatic testing with LFDs is potentially a cost-effective option for reducing transmission in workplaces. ### Infectious Potential as a predictor of transmission in a simple workplace model It is shown in equation (3) how IP is related to the reproduction number under some simplifying assumptions about transmission. Figure 9 demonstrates that this relationship is well approximated even in the stochastic model outlined in section 2.5.1.
It compares the probability distributions of outbreak sizes (resulting from a single index case) in a closed workplace of 100 employees under the different LFD-only testing regimes. These are presented next to the same results for a model with no testing, but with a reduced contact rate \(p_{c}\rightarrow(1-\Delta\text{IP})p_{c}\) where \(\Delta\text{IP}\) is the reduction in IP predicted for the corresponding testing regime. In other words, the baseline \(R_{0}\) value of the workplace is adjusted to match what would be expected if a particular testing regime was in place. We see that the final outbreak sizes are fairly well predicted, even though temporal information about infectiousness (shown in figure 8) is not captured by the simpler model. The key difference between explicit simulations of the testing regimes (in blue) and the approximated versions (in orange) is the heterogeneity in outcomes. Testing is a random process and leads to greater heterogeneity in infectious potential; by simply scaling transmission by the population-level \(\Delta\)IP, that heterogeneity is lost. Nonetheless, these results demonstrate the usefulness of \(\Delta\)IP as a measure. All of the testing regimes simulated in figure 9 reduce the workplace reproduction number to less than the critical value for this stochastic model with \(N=100\) employees, and this is matched by the predictions given by Figure 9: Violin plots showing the distribution of the outbreak size in a workplace of \(N=100\) people given a single index case. Blue violins show the cases where testing is modelled explicitly (except in the ‘no testing’ case), while orange violins show cases with no testing but where the contact rate \(p_{c}\) is reduced by a factor \(\Delta\)IP to mimic the testing regime in question. Each violin consists of 10,000 simulations and the white dot shows the median of the distribution. A ‘leaky’ adherence to testing at 70% was assumed. The case shown uses the Ke et al. model of RNA viral load and the ‘high’ sensitivity model of LFD testing. The transmission parameters used were \(p_{c}=0.296\) and \(\beta_{0}=0.0265\) giving an approximate \(R_{wp}\) value of 3 (with symptom isolation). Note that the ‘no testing’ case still includes symptomatic isolation with \(P_{\text{isol}}=1.0\), and so the baseline \(R_{0}\) is not realised. \(\Delta\)IP. Therefore, given some data or model regarding the baseline transmission rate in the setting of interest, calculating \(\Delta\)IP is an efficient way of approximating the impact of potential testing interventions and predicting how frequent testing will have to be to reduce the reproduction number below a threshold value (generally \(\gtrsim 1\) for finite-population models [37]). False-positives are also an important factor when considering the costs of any repeat testing policies as even tests with relatively high specificity performed frequently enough will produce false-positive results. Figure 10 shows a direct calculation of the number of false positives per actual new infection in the population for the testing regimes considered here. The figure demonstrates how a small change in specificity greatly changes the picture. The case in figure 10(b) is close to more recent estimates of LFD specificity [11], suggesting that the rate of false-positives will only become comparable to the number of new infections at very low incidence.
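The order of magnitude in figure 10 can also be understood with a much simpler calculation than the full model. The sketch below is a back-of-the-envelope illustration only (it is not the calculation used to produce the figure): it assumes that at low prevalence essentially every scheduled LFD is taken by an uninfected person, so the false-positive rate is simply the scheduled testing rate multiplied by one minus the specificity; the regime names, specificity values and example incidence are illustrative.

```python
# Back-of-the-envelope estimate of false positives per new infection
# under repeat LFD testing (illustrative only, not the model calculation).

def false_positives_per_infection(tests_per_week, specificity, weekly_incidence):
    """Expected false-positive results per new infection in the tested population.

    tests_per_week   : scheduled LFDs per person per week (e.g. 2, 3 or 7)
    specificity      : probability that a test on an uninfected person is negative
    weekly_incidence : new infections per person per week
    """
    false_positive_rate = tests_per_week * (1.0 - specificity)   # per person per week
    return false_positive_rate / weekly_incidence

if __name__ == "__main__":
    for specificity in (0.995, 0.9995):   # the two specificity values compared in figure 10
        for regime, n_tests in (("2 LFDs", 2), ("3 LFDs", 3), ("Daily LFDs", 7)):
            # Example incidence: 1 new infection per 1,000 people per week.
            ratio = false_positives_per_infection(n_tests, specificity, weekly_incidence=0.001)
            print(f"specificity {specificity:.2%}, {regime}: "
                  f"{ratio:.1f} false positives per new infection")
```

At 99.5% specificity even two LFDs per week generate around ten false positives per new infection at this example incidence, while at 99.95% the two rates are comparable, consistent with the qualitative contrast between panels (a) and (b).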
Imposing some threshold on this quantity is a measure of how many false positives (and impacts thereof) the policy-maker is willing to accept in search of each infected person. ### Testing to protect vulnerable groups in a two-component work-setting The simple picture of IP\\(\\propto R_{\\rm wp}\\) becomes less straight-forward as we consider workplaces of increasing complexity. In this section we consider the reduced model of a care-home outlined in section 2.5.2. We model the case where the index case is a staff member (as generally residents are more isolated from the wider community [17]) and testing policy only applies to staff. Figure 11 shows the effect of varying the staff-staff contact rate \\(a\\). As expected, reducing \\(a\\) reduces the reproduction number and therefore the final outbreak size, demonstrating how social distancing of staff alone would reduce both staff and resident infections, but have a larger effect on staff infections (figure 11(a)). Regular asymptomatic staff testing is predicted to have a sizeable effect on resident infections, reducing them by 50-60% across the whole range of \\(a\\). Interestingly, in the presence of this effective staff testing intervention, staff social distancing is predicted to have a minimal effect on resident infections. When \\(a\\) is very small (i.e. staff don't interact) most transmission chains will have to involve residents to be successful, hence we see similar infection rates for both groups. At large \\(a\\), staff outbreaks become more common than resident outbreaks (figure 11(b)), however by both reducing staff-staff transmission, and screening residents from infectious staff, staff testing has a larger relative effect on infections in both groups. In figure 12 we focus on the dependence of resident infections on the underlying contact structure, by varying \\(a\\) and \\(b\\) simultaneously to maintain a Figure 10: Average number of false positives per new infection in the population at different rates of incidence. (a) 99.5% specificity, similar to that reported in [1]. (b) 99.95% specificity, similar to that reported in [11]. constant reproduction number in the care-home (see equation (18)). Interestingly, under these constraints, when staff undergo regular testing (in this case the '2 LFDs' regime), there is a much stronger (negative) dependence of resident infections on \\(a\\) observed. This is because infectious staff are more likely to isolate due to the testing and so then resident-resident contacts become the key route for resident infections. Therefore, with staff testing, higher resident-resident contact rates (decreasing \\(a\\)) increases resident cases Figure 11: Summary of staff and resident infections in the two-component model of care-home contacts while varying the relative staff-staff contact rate \\(a\\) and fixing the relative resident-resident contact rate \\(b=1\\). The index case for the outbreak was assumed to be a staff member. (a) The mean number of residents and staff infected in simulations given the staff-staff contact rate \\(a\\), with and without staff testing of 2 LFDs per week (as labelled). The shaded area indicates the 95% confidence intervals in the mean. (b) Violin plots of the resident and staff infections in the same scenarios, divided by the total number of residents and staff respectively, for select values of \\(a\\). 
The parameters used to generate these plots were: total number of residents \(N_{r}=30\) and staff \(N_{s}=50\), contact probability \(p_{c}=0.296\), and transmission rate \(\beta_{0}=0.0265\). The Ke et al. model of RNA viral load and the ‘high’ sensitivity model of LFD testing were also used. 10,000 simulations were realised to generate these results and a ‘leaky’ adherence to testing at 70% was assumed. because outbreaks can occur in this population relatively unchecked. Therefore, while \(\Delta\)IP is a useful measure for comparing different testing regimes, in the more complex setting of a care home an understanding of the underlying transmission rates between and within staff and resident populations is required to understand how this affects the probability of an outbreak. Nonetheless, given knowledge of these underlying contact/transmission rates, \(\Delta\)IP can still be very useful as a measure of efficacy. In the case presented here, only staff undergo testing and so only staff\(\rightarrow\)staff and staff\(\rightarrow\)resident transmission are reduced by a factor of approximately \(\Delta\)IP. This means that while staff testing will reduce the number of resident infections resulting from Figure 12: Summary of resident infections in the two-component model of care-home contacts where the staff-staff and resident-resident contact rates (\(a\) and \(b\) respectively) are varied simultaneously as shown in equation (18). The index case for the outbreak was assumed to be a staff member. (a) The mean number of residents infected in simulations given the staff-staff contact rate \(a\), with (blue) and without (orange) staff testing of 2 LFDs per week. The shaded area indicates the 95% confidence intervals in the mean. (b) Violin plots of the resident infections in the same scenarios, for select values of \(a\). All parameters other than \(a\) and \(b\) are the same as in figure 11. a new staff introduction of SARS-CoV-2 into the workplace, the relationship between \(\Delta\)IP and resident infections is more complex, limiting the scope of policy advice that can be given for this setting based on \(\Delta\)IP alone. ## 4 Discussion This paper presents a simple viral load-based model of the impact of asymptomatic testing on transmission of SARS-CoV-2, particularly for workplace settings, using repeat dual-testing data from the literature. The results here highlight several important aspects for both modelling testing interventions and making policy decisions regarding such interventions. In terms of modelling implications, in section 3.1, we highlighted that a combination of population heterogeneity and correlation between test-positive probability and infectiousness will increase the overall predicted effect of testing interventions. In short, this is because if people who are more infectious are also more likely to test positive then testing interventions become more efficient at reducing transmission. In section 3.2 we also showed that model predictions can be affected by assumptions around adherence behaviour. In an analogy to models of vaccine effectiveness, we consider two extremes of adherence behaviour, "all-or-nothing" and "leaky" adherence. Testing is always more effective in a population with leaky adherence (assuming the same overall adherence rate) but the difference between the two cases is only predicted to be significant when testing very frequently (every 1-2 days).
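The direction of this difference can be seen in a toy calculation that strips out the viral-load model entirely. In the sketch below, the number of test opportunities \(n\) while an individual could test positive, the per-test sensitivity \(s\) and the overall adherence rate \(A\) are illustrative stand-ins: under 'leaky' adherence each scheduled test is performed independently with probability \(A\), while under 'all-or-nothing' adherence a fraction \(A\) of people perform every test and the rest perform none.

```python
# Toy comparison of 'leaky' vs 'all-or-nothing' adherence at the same overall
# adherence rate A (illustrative only; constant per-test sensitivity s assumed).

def detection_probability_leaky(n, s, A):
    # Each of the n scheduled tests is performed independently with probability A.
    return 1.0 - (1.0 - A * s) ** n

def detection_probability_all_or_nothing(n, s, A):
    # A fraction A of people perform all n tests; the remainder perform none.
    return A * (1.0 - (1.0 - s) ** n)

if __name__ == "__main__":
    s, A = 0.8, 0.7
    for n in (1, 2, 4, 7):
        leaky = detection_probability_leaky(n, s, A)
        all_or_nothing = detection_probability_all_or_nothing(n, s, A)
        print(f"n = {n}: leaky = {leaky:.3f}, all-or-nothing = {all_or_nothing:.3f}")
```

With a single test opportunity the two behaviours coincide, and the gap in the probability of detecting (and hence isolating) an infectious individual widens as the number of test opportunities grows, which mirrors the prediction that the two adherence models only diverge appreciably at high testing frequencies.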
Real behaviour is more nuanced than these extremes, and in a population at any one time there will likely be a continuum of rates of adherence. Nonetheless, highlighting these extremes is important for giving realistic uncertainty bounds for cases when only an overall adherence rate is reported, and for understanding the impact of assumptions that are implicit in models of testing. As for policy implications, in section 3.3 we demonstrate that regular testing can be highly effective at reducing transmission assuming that adherence rates are high. This work suggests that regular testing with good adherence could control outbreaks in workplaces with a baseline \(R_{wp}\sim 3\) (sections 3.4 and 3.5). Estimates of the basic reproduction number for SARS-CoV-2 are in the range 2-4 for the original strain [38] and up to 10 for Omicron variants [39]. Of course the effective reproduction number in specific work-settings is likely to be lower, depending on the frequency and duration of contacts and symptom isolation behaviour. This paper also highlights that the level of adherence to testing interventions is crucial to their success and also one of the most difficult factors to predict in advance. Numerous factors determine how people engage with testing and self-isolation policies including the cost of isolation (e.g. direct loss of earnings) [40] and perceived social costs of a positive test (e.g. testing positive may require co-habitants to isolate too) [41, 42]. In studies of mass asymptomatic testing of care-home staff it was found that increasing testing frequency reduced adherence [20], and also added to the burden of stress felt by a workforce already overstretched by the pandemic [43]. Therefore the results of modelling studies such as this paper need to be considered in the wider context of their application by decision makers, and balanced against all costs, even when these are difficult to quantify. Comparing our results to other literature, we see that estimates of the effectiveness of LFD testing vary widely, and are context dependent. In large populations (e.g. whole nations or regions), regular mass testing for prolonged periods is likely prohibitively expensive and so test, trace and isolate (TTI) strategies are more feasible. In studies of TTI, timing and fast turnaround of results are key [14, 44]; overall efficacy is lower than predicted here due to the targeted nature of testing (and the inherent 'leakiness' of tracing contacts), but it is much more efficient than mass testing, particularly when incidence is low. Even without contact tracing, other 'targeted' testing strategies, while not as effective as mass testing, can reduce incidence significantly [45] for a lower cost. Similarly, surveillance testing of a combination of symptomatic and non-symptomatic individuals is an efficient way to reduce the importation of new cases and local outbreaks [46]. Focusing on mass LFD testing, as studied here, we predict a greater impact than the model in [12], and one more similar to the model in [13] (although measured differently here), as we also use a viral-load-based model. Models fitted to real data in secondary schools suggest that twice weekly LFD testing would have reduced the school reproduction number by \(\sim 40\%\) (if adherence reached \(100\%\)), which is less effective than the \(60\%\)-\(80\%\) (figure 3.3) estimate provided here. As shown in table B.2, uncertainty in a number of parameters could explain this difference.
The simplifying assumptions in this model are also likely to result in an over-estimate of effectiveness. For example, testing behaviour could be correlated with contact behaviour [47] and could provide false reassurance to those who are 'paucisymptomatic', which would greatly reduce its benefit [48] for the population as a whole. Similarly, infectiousness or testing behaviour may be correlated with symptoms, which could also skew these predictions depending on symptomatic isolation behaviours. Therefore, there is a need for integration of models of behaviour and engagement with testing policies into testing models to better predict their impact. There are other limitations of the models used in this study which need to be highlighted in order to interpret the results. First, the RNA viral load, testing and infectiousness data all pre-date the emergence of the omicron variant (BA.1 lineages), which are characterised by higher reproduction numbers, shorter serial intervals, and less severe outcomes [49, 50, 51]. The shorter incubation period means repeat asymptomatic testing for omicron is likely to be less effective than predicted here, especially for PCR testing with a high turnaround time. On the other hand, if more people asymptomatically carry omicron [52], then this will increase testing impact. Second, the relationships used to relate RNA viral load, infectiousness, and test-positive probability are not representative of the mechanistic relationships between these quantities. Therefore, the test sensitivity relationship used will likely marginally overestimate the impact of very frequent testing (e.g. daily testing) since it does not take into account possible interdependence of subsequent test results (except for the correlation with RNA viral load). Other determinants can affect sensitivity and some studies suggest culture positive probability is a better indicator of LFD positive-probability than RNA viral load [29, 30, 31, 32]. Similarly, while "infectious virus shed" is undoubtedly a factor in infectiousness, it is not the only determinant (as is assumed here). Other determinants of infectiousness (independent of contact rate) such as symptomatology, mode of contact, etc. mean that the relationship between viral load and infectiousness measured in contact tracing and household transmission studies can be much less sharp than used here (e.g. [15]), although as discussed in [13] both sharp and shallow relationships are plausible depending on the dataset used and different infectiousness profiles can change the relative impact of testing and symptomatic isolation [53]. The sensitivity analysis presented in B shows that decreasing the parameter \(h\) (which results in a less sharp relationship between viral load and infectiousness as well as a broader infectiousness profile, see figure B.1) significantly decreases the impact of testing. This change essentially increases the proportion of infectiousness that occurs before an individual is likely to test positive and isolate. Therefore, it is important to compare multiple different models starting with different sets of reasonable assumptions to generate predictions that inform policy, and so models based on empirical measures of infectiousness or different within-host models will be a useful area of future research.
Finally, we have not carried through the results on testing policies to their implications for epidemiological outcomes, such as hospitalisations and deaths, which would be required to perform a full cost-benefit analysis of different testing outcomes. In conclusion, repeat asymptomatic testing with LFDs appears to be an effective way to control transmission of SARS-CoV-2 in the workplace, with the important caveat that high levels of adherence to testing policy are likely more important than the exact testing regime implemented. Specificity of the particular tests being used must be taken into consideration for these policies, as even tests with high specificity can result in the same number of false positives as true positives when prevalence is low. The code used for the calculation of \(\Delta\)IP [54] and the workplace simulations [55] is available open-source. As we have shown, the detailed model of \(\Delta\)IP developed here can be used to simulate both the population-level change in effective infectiousness due to a change in testing policy and the individual-level effect. Direct interpretations of \(\Delta\)IP should be made with caution because they only quantify the personal reduction in transmission risk. While testing can reduce both ingress into and transmission within the workplace, repeated ingress and internal transmission could still result in a high proportion of individuals becoming infected (albeit with a slower growth rate than the no-testing case) even with testing interventions present and functional, depending on community prevalence and the length of time for which this prevalence is sustained. Nonetheless, calculations of \(\Delta\)IP have the potential to be used in existing epidemiological simulations to project the impact of testing policies without having to simulate the testing and quarantine explicitly, simply as a scale factor on the individual-level or population-level infectiousness parameters, depending on the model being used. **Funding sources** This work was supported by the JUNIPER modelling consortium (grant number MR/V038613/1) and the UK Research and Innovation (UKRI) and National Institute for Health Research (NIHR) COVID-19 Rapid Response call, Grant Ref: MC_PC_19083. ## Acknowledgements We would like to acknowledge the help and support of colleagues Hua Wei, Sarah Daniels, Yang Han, Martie van Tongeren, David Denning, Martyn Regan, and Arpana Verma at the University of Manchester. Additionally we would like to thank the Social Care Working Group (a subcommittee of SAGE) for their valuable feedback on this work. In particular we acknowledge the insights gained from fruitful discussions with Nick Warren as part of his work for the Health and Safety Executive. ## Author Contributions CAW and IH contributed to the model development, interpretation of the results and drafting and revising of the manuscript. CAW performed the data analysis and simulation design. Authors named as part of the "University of Manchester COVID-19 Modelling Group" contributed to the generation and discussion of ideas that informed this manuscript, but were not directly involved in the writing of it. ## References * (1) T. Peto, COVID-19: Rapid Antigen detection for SARS-CoV-2 by lateral flow assay: a national systematic evaluation for mass-testing, medRxiv [Preprint] (2021) 2021.01.13.21249563. URL [https://doi.org/10.1101/2021.01.13.21249563](https://doi.org/10.1101/2021.01.13.21249563) * (2) J. J. Deeks, A. E.
Raffle, Lateral flow tests cannot rule out SARS-CoV-2 infection, BMJ 371 (2020). URL [https://doi.org/10.1136/bmj.m4787](https://doi.org/10.1136/bmj.m4787)* (3) J. Wise, Covid-19: Lateral flow tests miss over half of cases, Liverpool pilot data show, BMJ 371 (2020). URL [https://doi.org/10.1136/bmj.m4848](https://doi.org/10.1136/bmj.m4848) * (4) J. Dinnes, J. J. Deeks, A. Adriano, S. Berhane, C. Davenport, S. Dittrich, D. Emperador, Y. Takwoingi, J. Cunningham, S. Beese, J. Dretzke, L. Ferrante di Ruffano, I. M. Harris, M. J. Price, S. Taylor-Phillips, L. Hooft, M. M. Leeflang, R. Spijker, A. Van den Bruel, Rapid, point-of-care antigen and molecular-based tests for diagnosis of SARS-CoV-2 infection, Cochrane Database Syst. Rev. (2020). URL [http://doi.org/10.1002/14651858.CD013705](http://doi.org/10.1002/14651858.CD013705) * (5) J. Ferguson, S. Dunn, A. Best, J. Mirza, B. Percival, M. Mayhew, O. Megram, F. Ashford, T. White, E. Moles-Garcia, L. Crawford, T. Plant, A. Bosworth, M. Kidd, A. Richter, J. Deeks, A. McNally, Validation testing to determine the effectiveness of lateral flow testing for asymptomatic SARS-CoV-2 detection in low prevalence settings, medRxiv [Preprint] (2020) 2020.12.01.20237784. URL [https://doi.org/10.1101/2020.12.01.20237784](https://doi.org/10.1101/2020.12.01.20237784) * (6) M. Garcia-Finana, D. M. Hughes, C. P. Cheyne, G. Burnside, M. Stockbridge, T. A. Fowler, V. L. Fowler, M. H. Wilcox, M. G. Semple, I. Buchan, Performance of the innova sars-cov-2 antigen rapid lateral flow test in the liverpool asymptomatic testing pilot: population based cohort study, BMJ 374 (2021). URL [https://doi.org/10.1136/bmj.n1637](https://doi.org/10.1136/bmj.n1637) * (7) M. J. Mina, T. E. Peto, M. Garcia-Finana, M. G. Semple, I. E. Buchan,Clarifying the evidence on SARS-CoV-2 antigen rapid tests in public health responses to COVID-19, The Lancet 397 (2021) 1425-1427. URL [https://doi.org/10.1016/S0140-6736](https://doi.org/10.1016/S0140-6736)(21)00425-6 * Armstrong [2020] S. Armstrong, Covid-19: Tests on students are highly inaccurate, early findings show, BMJ 371 (2020). URL [https://doi.org/10.1136/bmj.m4941](https://doi.org/10.1136/bmj.m4941) * Kanji et al. [2021] J. N. Kanji, D. T. Proctor, W. Stokes, B. M. Berenger, J. Silvius, G. Tipples, A. M. Joffe, A. A. Venner, Multicenter Postimplementation Assessment of the Positive Predictive Value of SARS-CoV-2 Antigen-Based Point-of-Care Tests Used for Screening of Asymptomatic Continuing Care Staff, Journal of Clinical Microbiology (2021). URL [https://doi.org/10.1128/JCM.01411-21](https://doi.org/10.1128/JCM.01411-21) * Gans et al. [2022] J. S. Gans, A. Goldfarb, A. K. Agrawal, S. Sennik, J. Stein, L. Rosella, False-Positive Results in Rapid Antigen Tests for SARS-CoV-2, JAMA 327 (5) (2022) 485-486. URL [https://doi.org/10.1001/jama.2021.24355](https://doi.org/10.1001/jama.2021.24355) * Wolf et al. [2021] A. Wolf, J. Hulmes, S. Hopkins, Lateral flow device specificity in phase 4 (post-marketing) surveillance, Tech. rep. (2021). URL [https://www.gov.uk/government/publications/lateral-flow-device-specificity-in-phase-4-post-marketing-surveillance](https://www.gov.uk/government/publications/lateral-flow-device-specificity-in-phase-4-post-marketing-surveillance) * Hellewell and Russell [2020] J. Hellewell, T. W. Russell, The SAFER Investigators and Field Study Team, The Crick COVID-19 Consortium, CMMID COVID-19 working group, R. Beale, G. Kelly, C. Houlihan, E. Nastouli, A. J. 
Kucharski,Estimating the effectiveness of routine asymptomatic PCR testing at different frequencies for the detection of SARS-CoV-2 infections, BMC Medicine 19 (2021) 106. URL [https://doi.org/10.1186/s12916-021-01982-x](https://doi.org/10.1186/s12916-021-01982-x) * Ferretti et al. (2021) L. Ferretti, C. Wymant, A. Nurtay, L. Zhao, R. Hinch, D. Bonsall, M. Kendall, J. Masel, J. Bell, S. Hopkins, A. M. Kilpatrick, T. Peto, L. Abeler-Dorner, C. Fraser, Modelling the effectiveness and social costs of daily lateral flow antigen tests versus quarantine in preventing onward transmission of COVID-19 from traced contacts, Tech. rep. (2021). URL [https://doi.org/10.1101/2021.08.06.21261725](https://doi.org/10.1101/2021.08.06.21261725) * Fyles et al. (2021) M. Fyles, E. Fearon, C. Overton, University of Manchester COVID-19 Modelling Group, T. Wingfield, G. F. Medley, I. Hall, L. Pellis, T. House, Using a household-structured branching process to analyse contact tracing in the SARS-CoV-2 pandemic, Philosophical Transactions of the Royal Society B: Biological Sciences 376 (2021) 20200267. URL [https://doi.org/10.1098/rstb.2020.0267](https://doi.org/10.1098/rstb.2020.0267) * Lee et al. (2021) L. Y. W. Lee, S. Rozmanowski, M. Pang, A. Charlett, C. Anderson, G. J. Hughes, M. Barnard, L. Peto, R. Vipond, A. Sienkiewicz, S. Hopkins, J. Bell, D. W. Crook, N. Gent, A. S. Walker, T. E. A. Peto, D. W. Eyre, Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) Infectivity by Viral Load, S Gene Variants and Demographic Factors, and the Utility of Lateral Flow Devices to Prevent Transmission, Clinical Infectious Diseases 74 (3) (2021) 407-415. URL [https://doi.org/10.1093/cid/ciab421](https://doi.org/10.1093/cid/ciab421)* Evans et al. (2021) S. Evans, E. Agnew, E. Vynnycky, J. Stimson, A. Bhattacharya, C. Rooney, B. Warne, J. Robotham, The impact of testing and infection prevention and control strategies on within-hospital transmission dynamics of COVID-19 in English hospitals, Philosophical Transactions of the Royal Society B: Biological Sciences 376 (1829) (2021) 20200268. URL [https://doi.org/10.1098/rstb.2020.0268](https://doi.org/10.1098/rstb.2020.0268) * Rosello et al. (2022) A. Rosello, R. C. Barnard, D. R. M. Smith, S. Evans, F. Grimm, N. G. Davies, S. R. Deeny, G. M. Knight, W. J. Edmunds, Centre for Mathematical Modelling of Infectious Diseases COVID-19 Modelling Working Group, Impact of non-pharmaceutical interventions on SARS-CoV-2 outbreaks in English care homes: a modelling study, BMC Infectious Diseases 22 (2022) 324. URL [https://doi.org/10.1186/s12879-022-07268-8](https://doi.org/10.1186/s12879-022-07268-8) * Asgary et al. (2021) A. Asgary, M. G. Cojocaru, M. M. Najafabadi, J. Wu, Simulating preventative testing of SARS-CoV-2 in schools: policy implications, BMC Public Health 21 (2021) 125. URL [https://doi.org/10.1186/s12889-020-10153-1](https://doi.org/10.1186/s12889-020-10153-1) * Leng et al. (2022) T. Leng, E. M. Hill, A. Holmes, E. Southall, R. N. Thompson, M. J. Tildesley, M. J. Keeling, L. Dyson, Quantifying pupil-to-pupil SARS-CoV-2 transmission and the impact of lateral flow testing in English secondary schools, Nature Communications 13 (2022) 1106. URL [https://doi.org/10.1038/s41467-022-28731-9](https://doi.org/10.1038/s41467-022-28731-9) * Tulloch et al. (2021) J. S. P. Tulloch, M. Micocci, P. Buckle, K. Lawrenson, P. Kierkegaard, A. McLister, A. L. Gordon, M. Garcia-Finana, S. Peddie, M. Ashton,I. Buchan, P. 
Parvulescu, Enhanced lateral flow testing strategies in care homes are associated with poor adherence and were insufficient to prevent COVID-19 outbreaks: results from a mixed methods implementation study, Age and Ageing 50 (6) (2021) 1868-1875. URL [https://doi.org/10.1093/ageing/afab162](https://doi.org/10.1093/ageing/afab162) * Ahmed et al. (2020) F. Ahmed, S. Kim, M. P. Nowalk, J. P. King, J. J. VanWormer, M. Gaglani, R. K. Zimmerman, T. Bear, M. L. Jackson, L. A. Jackson, E. Martin, C. Cheng, B. Flannery, J. R. Chung, A. Uzicanin, Paid Leave and Access to Telework as Work Attendance Determinants during Acute Respiratory Illness, United States, 2017-2018, Emerging Infectious Diseases 26 (2020) 26-33. URL [https://doi.org/10.3201/eid2601.190743](https://doi.org/10.3201/eid2601.190743) * Patel et al. (2021) J. Patel, G. Fernandes, D. Sridhar, How can we improve self-isolation and quarantine for covid-19?, BMJ 372 (2021). URL [https://doi.org/10.1136/bmj.n625](https://doi.org/10.1136/bmj.n625) * Daniels et al. (2021) S. Daniels, H. Wei, Y. Han, H. Catt, D. W. Denning, I. Hall, M. Regan, A. Verma, C. A. Whitfield, M. van Tongeren, Risk factors associated with respiratory infectious disease-related presenteeism: a rapid review, BMC Public Health 21 (2021). URL [https://doi.org/10.1186/s12889-021-12008-9](https://doi.org/10.1186/s12889-021-12008-9) * Quilty et al. (2020) B. J. Quilty, S. Clifford, J. Hellewell, T. W. Russell, A. J. Kucharski, S. Flasche, W. J. Edmunds, CMMID COVID-19 working group, Quarantine and testing strategies in contact tracing for SARS-CoV-2: a modelling study, The Lancet Public Health 6 (3) (2021) e175-e183. URL [https://doi.org/10.1016/S2468-2667](https://doi.org/10.1016/S2468-2667)(20)30308-X * (25) Social Care Working Group, SCWG chairs: Summary of role of shielding, 20 december 2021, Tech. rep. (2021). URL [https://www.gov.uk/government/publications/scwg-chairs-summary-of-role-of-shielding-20-december-2021](https://www.gov.uk/government/publications/scwg-chairs-summary-of-role-of-shielding-20-december-2021) * (26) S. M. Kissler, J. R. Fauver, C. Mack, S. W. Olesen, C. Tai, K. Y. Shiue, C. C. Kalinich, S. Jednak, I. M. Ott, C. B. F. Vogels, J. Wohlgemuth, J. Weisberger, J. DiFiori, D. J. Anderson, J. Mancell, D. D. Ho, N. D. Grubaugh, Y. H. Grad, Viral dynamics of acute SARS-CoV-2 infection and applications to diagnostic and public health strategies, PLOS Biology 19 (7) (2021) e3001333. URL [https://doi.org/10.1371/journal.pbio.3001333](https://doi.org/10.1371/journal.pbio.3001333) * (27) R. Ke, P. P. Martinez, R. L. Smith, L. L. Gibson, A. Mirza, M. Conte, N. Gallagher, C. H. Luo, J. Jarrett, A. Conte, T. Liu, M. Farjo, K. K. O. Walden, G. Rendon, C. J. Fields, L. Wang, R. Fredrickson, D. C. Edmonson, M. E. Baughman, K. K. Chiu, H. Choi, K. R. Scardina, S. Bradley, S. L. Gloss, C. Reinhart, J. Yedetore, J. Quicksall, A. N. Owens, J. Broach, B. Barton, P. Lazar, W. J. Heetderks, M. L. Robinson, H. H. Mostafa, Y. C. Manabe, A. Pekosz, D. D. McManus, C. B. Brooke, Daily sampling of early SARS-CoV-2 infection reveals substantial heterogeneity in infectiousness, Tech. rep. (2021). URL [https://doi.org/10.1101/2021.07.12.21260208](https://doi.org/10.1101/2021.07.12.21260208) * (28) S. M. Kissler, Supporting data for \"viral dynamics of acute SARS-CoV2 infection and applications to diagnostic and public health strategies\". 
URL [https://github.com/gradlab/CtTrajectories/blob/main/output/params_df_split.csv](https://github.com/gradlab/CtTrajectories/blob/main/output/params_df_split.csv) * (29) J. E. Kirby, S. Riedel, S. Dutta, R. Arnaud, A. Cheng, S. Ditelberg, D. J. Hamel, C. A. Chang, P. J. Kanki, SARS-CoV-2 Antigen Tests Predict Infectivity Based on Viral Culture: Comparison of Antigen, PCR Viral Load, and Viral Culture Testing on a Large Sample Cohort, medRxiv [Preprint] (2021) 2021.12.22.21268274. URL [https://doi.org/10.1101/2021.12.22.21268274](https://doi.org/10.1101/2021.12.22.21268274) * (30) A. Pekosz, V. Parvu, M. Li, J. C. Andrews, Y. C. Manabe, S. Kodsi, D. S. Gary, C. Roger-Dalbert, J. Leitch, C. K. Cooper, Antigen-Based Testing but Not Real-Time Polymerase Chain Reaction Correlates With Severe Acute Respiratory Syndrome Coronavirus 2 Viral Culture, Clinical Infectious Diseases 73 (9) (2021) e2861-e2866. URL [https://doi.org/10.1093/cid/ciaa1706](https://doi.org/10.1093/cid/ciaa1706) * (31) S. Pickering, R. Batra, L. B. Snell, B. Merrick, G. Nebbia, S. Douthwaite, A. Patel, M. T. K. Ik, B. Patel, T. Charalampous, A. Alcolea-Medina, M. J. Lista, P. R. Cliff, E. Cunningham, J. Mullen, K. J. Doores, J. D. Edgeworth, M. H. Malim, S. J. Neil, R. P. Galao, Comparative performance of sars cov-2 lateral flow antigen tests demonstrates their utility for high sensitivity detection of infectious virus in clinical specimens, medRxiv [Preprint] (2021) 2021.02.27.21252427. URL [https://doi.org/10.1101/2021.02.27.21252427](https://doi.org/10.1101/2021.02.27.21252427) * (32) B. Killingley, A. Mann, M. Kalinova, A. Boyers, N. Goonawardane,J. Zhou, K. Lindsell, S. S. Hare, J. Brown, R. Frise, E. Smith, C. Hopkins, N. Noulin, B. Londt, T. Wilkinson, S. Harden, H. McShane, M. Baillet, A. Gilbert, M. Jacobs, C. Charman, P. Mande, J. S. Nguyen-Van-Tam, M. G. Semple, R. C. Read, N. M. Ferguson, P. J. Openshaw, G. Rapeport, W. S. Barclay, A. P. Catchpole, C. Chiu, Safety, tolerability and viral kinetics during SARS-CoV-2 human challenge, Tech. rep. (2022). URL https://doi/org/10.21203/rs.3.rs-1121993/v1 * (33) E. Smith, W. Zhen, R. Manji, D. Schron, S. Duong, G. J. Berry, Analytical and Clinical Comparison of Three Nucleic Acid Amplification Tests for SARS-CoV-2 Detection, Journal of Clinical Microbiology 58 (9) (2020). URL [https://doi.org/10.1128/JCM.01134-20](https://doi.org/10.1128/JCM.01134-20) * (34) NHS Test and Trace, Dual-technology TESTING ANALYSIS High-Risk Settings, Tech. rep. (Nov. 2021). * (35) M. Marks, P. Millat-Martinez, D. Ouchi, C. h. Roberts, A. Alemany, M. Corbacho-Monne, M. Ubals, A. Tobias, C. Tebe, E. Ballana, Q. Bassat, B. Baro, M. Vall-Mayans, C. G-Beiras, N. Prat, J. Ara, B. Clotet, O. Mitja, Transmission of COVID-19 in 282 clusters in Catalonia, Spain: a cohort study, Lancet Infect. Dis. 3099 (20) (2021). URL [https://doi.org/10.1016/S1473-3099](https://doi.org/10.1016/S1473-3099)(20)30985-3 * (36) D. B. Larremore, B. Wilder, E. Lester, S. Shehata, J. M. Burke, J. A. Hay, M. Tambe, M. J. Mina, R. Parker, Test sensitivity is secondary to frequency and turnaround time for COVID-19 screening, Science Advances 7 (2021) eabd5393. URL [https://doi.org/10.1126/sciadv.abd5393](https://doi.org/10.1126/sciadv.abd5393) * (37) F. Ball, I. N. Sell, The shape of the size distribution of an epidemic in a finite population, Mathematical Biosciences 123 (2) (1994) 167-181. URL [https://doi.org/10.1016/0025-5564](https://doi.org/10.1016/0025-5564)(94)90010-8 * (38) Z. Du, C. Liu, C. Wang, L. Xu, M. Xu, L. 
Wang, Y. Bai, X. Xu, E. H. Y. Lau, P. Wu, B. J. Cowling, Reproduction Numbers of Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) Variants: A Systematic Review and Meta-analysis, Clinical Infectious Diseases (2022) ciac137. URL [https://doi.org/10.1093/cid/ciac137](https://doi.org/10.1093/cid/ciac137) * (39) Y. Liu, J. Rocklov, The effective reproductive number of the Omicron variant of SARS-CoV-2 is several times relative to Delta, Journal of Travel Medicine 29 (3) (2022) taac037. URL [https://doi.org/10.1093/jtm/taac037](https://doi.org/10.1093/jtm/taac037) * (40) L. E. Smith, H. W. W. Potts, R. Amlot, N. T. Fear, S. Michie, G. J. Rubin, Adherence to the test, trace, and isolate system in the uk: results from 37 nationally representative surveys, BMJ 372 (2021). URL [https://doi.org/10.1136/bmj.n608](https://doi.org/10.1136/bmj.n608) * (41) S. Michie, R. West, M. B. Rogers, C. Bonell, G. J. Rubin, R. Amlot, Reducing SARS-CoV-2 transmission in the UK: A behavioural science approach to identifying options for increasing adherence to social dis tancing and shielding vulnerable people, British Journal of Health Psychology 25 (4) (2020) 945-956. URL [https://doi.org/10.1111/bjhp.12428](https://doi.org/10.1111/bjhp.12428) * Blake et al. (2021) H. Blake, H. Knight, R. Jia, J. Corner, J. R. Morling, C. Denning, J. K. Ball, K. Bolton, G. Figueredo, D. E. Morris, P. Tighe, A. M. Villalon, K. Ayling, K. Vedhara, Students' Views towards Sars-Cov-2 Mass Asymptomatic Testing, Social Distancing and Self-Isolation in a University Setting during the COVID-19 Pandemic: A Qualitative Study, International Journal of Environmental Research and Public Health 18 (8) (2021). URL [https://doi.org/10.3390/ijerph18084182](https://doi.org/10.3390/ijerph18084182) * Kierkegaard et al. (2021) P. Kierkegaard, M. Micocci, A. McLister, J. S. P. Tulloch, P. Parvulescu, A. L. Gordon, P. Buckle, Implementing lateral flow devices in long-term care facilities: experiences from the Liverpool COVID-19 community testing pilot in care homes: a qualitative study, BMC Health Services Research 21 (2021) 1153. doi:10.1186/s12913-021-07191-9. URL [https://doi.org/10.1186/s12913-021-07191-9](https://doi.org/10.1186/s12913-021-07191-9) * Grassly et al. (2019) N. C. Grassly, M. Pons-Salort, E. P. K. Parker, P. J. White, N. M. Ferguson, K. Ainslie, M. Baguelin, S. Bhatt, A. Boonyasiri, N. Brazeau, L. Cattarino, H. Coupland, Z. Cucunuba, G. Cuomo-Dannenburg, A. Dighe, C. Donnelly, S. L. van Elsland, R. FitzJohn, S. Flaxman, K. Fraser, K. Gaythorpe, W. Green, A. Hamlet, W. Hinsley, N. Imai, E. Knock, D. Laydon, T. Mellan, S. Mishra, G. Nedjati-Gilani, P. Nouvellet, L. Okell, M. Ragonnet-Cronin, H. A. Thompson, H. J. T. Unwin,M. Vollmer, E. Volz, C. Walters, Y. Wang, O. J. Watson, C. Whittaker, L. Whittles, X. Xi, Comparison of molecular testing strategies for COVID-19 control: a mathematical modelling study, The Lancet Infectious Diseases 20 (2020) 1381-1389. doi:10.1016/S1473-3099(20)30630-7. URL [https://doi.org/10.1016/S1473-3099](https://doi.org/10.1016/S1473-3099)(20)30630-7 * (45) A. Gharouni, F. M. Abdelmalek, D. J. D. Earn, J. Dushoff, B. M. Bolker, Testing and Isolation Efficacy: Insights from a Simple Epidemic Model, Bulletin of Mathematical Biology 84 (2022) 66. doi:10.1007/s11538-022-01018-2. * (46) F. A. Lovell-Read, S. Funk, U. Obolski, C. A. Donnelly, R. N. 
Thompson, Interventions targeting non-symptomatic cases can be important to prevent local outbreaks: SARS-CoV-2 as a case study, Journal of The Royal Society Interface 18 (178) (2021) 20201014. doi:10.1098/rsif.2020.1014. URL [https://doi.org/10.1098/rsif.2020.1014](https://doi.org/10.1098/rsif.2020.1014) * (47) C. Berrig, V. Andreasen, B. F. Nielsen, Heterogeneity in testing for infectious diseases, Tech. rep., medRxiv [Preprint] (2022). URL [https://doi.org/10.1101/2022.01.11.22269086](https://doi.org/10.1101/2022.01.11.22269086) * (48) J. P. Skittrall, SARS-CoV-2 screening: effectiveness and risk of increasing transmission, Journal of The Royal Society Interface 18 (2021) 20210164. doi:10.1098/rsif.2021.0164. URL [https://doi.org/10.1098/rsif.2021.0164](https://doi.org/10.1098/rsif.2021.0164)* (49) H. Tanaka, T. Ogata, T. Shibata, H. Nagai, Y. Takahashi, M. Kinoshita, K. Matsubayashi, S. Hattori, C. Taniguchi, Shorter Incubation Period among COVID-19 Cases with the BA.1 Omicron Variant, International Journal of Environmental Research and Public Health 19 (10) (2022). URL [https://doi.org/10.3390/ijerph19106330](https://doi.org/10.3390/ijerph19106330) * (50) J. A. Backer, D. Eggink, S. P. Andeweg, I. K. Veldhuijzen, N. v. Maarseveen, K. Vermaas, B. Vlaemynck, R. Schepers, S. v. d. Hof, C. B. Reusken, J. Wallinga, Shorter serial intervals in SARS-CoV-2 cases with Omicron BA.1 variant compared with Delta variant, the Netherlands, 13 to 26 December 2021, Eurosurveillance 27 (6) (2022) 2200042. URL [https://doi.org/10.2807/1560-7917.ES.2022.27.6.2200042](https://doi.org/10.2807/1560-7917.ES.2022.27.6.2200042) * (51) J. Del Aguila Mejia, R. Wallmann, J. Calvo-Montes, J. Rodriguez-Lozano, T. Valle-Madrazo, A. Aginagalde-Llorente, Secondary Attack Rate, Transmission and Incubation Periods, and Serial Interval of SARS-CoV-2 Omicron Variant, Spain, Emerging Infectious Diseases 28 (6) (2022) 1224-1228. URL [https://doi.org/10.3201/eid2806.220158](https://doi.org/10.3201/eid2806.220158) * (52) N. Garrett, A. Tapley, J. Andriesen, I. Seocharan, L. H. Fisher, L. Bunts, N. Espy, C. L. Wallis, A. K. Randhawa, M. D. Miner, N. Ketter, M. Yacovone, A. Goga, Y. Huang, J. Hural, P. Kotze, L.-G. Bekker, G. E. Gray, L. Corey, Ubuntu Study Team, High Asymptomatic Carriage With the Omicron Variant in South Africa, Clinical Infectious Diseases (2022) ciac237. URL [https://doi.org/10.1093/cid/ciac237](https://doi.org/10.1093/cid/ciac237)* (53) W. S. Hart, P. K. Maini, R. N. Thompson, High infectiousness immediately before COVID-19 symptom onset highlights the importance of continued contact tracing, eLife 10 (2021) e65534. doi:10.7554/eLife.65534. URL [https://doi.org/10.7554/eLife.65534](https://doi.org/10.7554/eLife.65534) * (54) C. A. Whitfield, Model of SARS-CoV-2 viral load dynamics, infectivity profile, and test-positivity. (2022). URL [https://github.com/CarlWhitfield/Viral_load_testing_COV19_model](https://github.com/CarlWhitfield/Viral_load_testing_COV19_model) * (55) C. A. Whitfield, Model of SARS-CoV-2 transmission in delivery workplaces (2022). URL [https://github.com/CarlWhitfield/Workplace_delivery_transmission](https://github.com/CarlWhitfield/Workplace_delivery_transmission) * (56) C. E. Overton, H. B. Stage, S. Ahmad, J. Curran-Sebastian, P. Dark, R. Das, E. Fearon, T. Felton, M. Fyles, N. Gent, I. Hall, T. House, H. Lewkowicz, X. Pang, L. Pellis, R. Sawko, A. Ustianowski, B. Vekaria, L. 
Webb, Using statistics and mathematical modelling to understand infectious disease outbreaks: Covid-19 as an example, Infectious Disease Modelling 5 (2020) 409-441. URL [https://doi.org/10.1016/j.idm.2020.06.008](https://doi.org/10.1016/j.idm.2020.06.008) * (57) K. B. Pouwels, T. House, E. Pritchard, J. V. Robotham, P. J. Birrell, A. Gelman, K.-D. Vihta, N. Bowers, I. Boreham, H. Thomas, J. Lewis, I. Bell, J. I. Bell, J. N. Newton, J. Farrar, I. Diamond, P. Benton, A. S. Walker, COVID-19 Infection Survey Team, Community prevalence of SARS-CoV-2 in England from April to November, 2020: results from the ONS Coronavirus Infection Survey, The Lancet Public Health 6 (2021) e30-e38. URL [https://doi.org/10.1016/S2468-2667](https://doi.org/10.1016/S2468-2667)(20)30282-6 * Saltelli et al. (2007) A. Saltelli, M. Ratto, T. Andres, F. Campolongo, J. Cariboni, D. Gatelli, M. Saisana, S. Tarantola, Elementary Effects Method, in: Global Sensitivity Analysis. The Primer, John Wiley & Sons, Ltd, 2007, pp. 109-154, chapter 3. doi:10.1002/9780470725184.ch3. URL [https://doi.org/10.1002/9780470725184.ch3](https://doi.org/10.1002/9780470725184.ch3) * Singanayagam et al. (2017) A. Singanayagam, S. Hakki, J. Dunning, K. J. Madon, M. A. Crone, A. Koycheva, N. Derqui-Fernandez, J. L. Barnett, M. G. Whitfield, R. Varro, A. Charlett, R. Kundu, J. Fenn, J. Cutajar, V. Quinn, E. Conibear, W. Barclay, P. S. Freemont, G. P. Taylor, S. Ahmad, M. Zambon, N. M. Ferguson, A. Lalvani, A. Badhan, S. Dustan, C. Tejpal, A. V. Ketkar, J. S. Narean, S. Hammett, E. McDermott, T. Pillay, H. Houston, C. Luca, J. Samuel, S. Bremang, S. Evetts, J. Poh, C. Anderson, D. Jackson, S. Miah, J. Ellis, A. Lackenby, Community transmission and viral load kinetics of the SARS-CoV-2 delta (B.1.617.2) variant in vaccinated and unvaccinated individuals in the UK: a prospective, longitudinal, cohort study, The Lancet Infectious Diseases 22 (2) (2022) 183-195. doi:10.1016/S1473-3099(21)00648-4. URL [https://doi.org/10.1016/S1473-3099](https://doi.org/10.1016/S1473-3099)(21)00648-4 * He et al. (2017) X. He, E. H. Y. Lau, P. Wu, X. Deng, J. Wang, X. Hao, Y. C. Lau, J. Y. Wong, Y. Guan, X. Tan, X. Mo, Y. Chen, B. Liao, W. Chen, F. Hu,Q. Zhang, M. Zhong, Y. Wu, L. Zhao, F. Zhang, B. J. Cowling, F. Li, G. M. Leung, Temporal dynamics in viral shedding and transmissibility of COVID-19, Nature Medicine 26 (5) (2020) 672-675. doi:10.1038/s41591-020-0869-5. URL [http://doi.org/10.1038/s41591-020-0869-5](http://doi.org/10.1038/s41591-020-0869-5) * (61) M. Cevik, M. Tate, O. Lloyd, A. E. Maraolo, J. Schafers, A. Ho, SARS-CoV-2, SARS-CoV, and MERS-CoV viral load dynamics, duration of viral shedding, and infectiousness: a systematic review and meta-analysis, The Lancet Microbe 2 (2021) e13-e22. doi:10.1016/S2666-5247(20)30172-5. URL [https://doi.org/10.1016/S2666-5247](https://doi.org/10.1016/S2666-5247)(20)30172-5 * (62) A. Marc, M. Kerioui, F. Blanquart, J. Bertrand, O. Mitja, M. Corbacho-Monne, M. Marks, J. Guedj, Quantifying the relationship between SARS-CoV-2 viral load and infectiousness, eLife 10 (2021) e69302. doi:10.7554/eLife.69302. URL [https://doi.org/10.7554/eLife.69302](https://doi.org/10.7554/eLife.69302) * (63) A. Goyal, D. B. Reeves, E. F. Cardozo-Ojeda, J. T. Schiffer, B. T. Mayer, Viral load and contact heterogeneity predict SARS-CoV-2 transmission and super-spreading events, eLife 10 (2021) e63537. doi:10.7554/eLife.63537. URL [https://doi.org/10.7554/eLife.63537](https://doi.org/10.7554/eLife.63537) * (64) K. A. Walsh, K. Jordan, B. 
Clyne, D. Rohde, L. Drummond, P. Byrne, S. Ahern, P. G. Carty, K. K. O'Brien, E. O'Murchu, M. O'Neill, S. M. Smith, M. Ryan, P. Harrington, SARS-CoV-2 detection, viral load and infectivity over the course of an infection, J. Infect. 81 (3) (2020) 357-371. doi:10.1016/j.jinf.2020.06.067. URL [https://doi.org/10.1016/j.jinf.2020.06.067](https://doi.org/10.1016/j.jinf.2020.06.067) * (65) A. E. Benefield, L. A. Skrip, A. Clement, R. A. Althouse, S. Chang, B. M. Althouse, SARS-CoV-2 viral load peaks prior to symptom onset: A systematic review and individual-pooled analysis of coronavirus viral load from 66 studies, medRxiv (2020). doi:10.1101/2020.09.28.20202028. URL [https://doi.org/10.1101/2020.09.28.20202028](https://doi.org/10.1101/2020.09.28.20202028) ## Appendix A Data and Simulation methods ### Parameter values Supplementary table S1 gives the parameter values used in the models of viral load, infectiousness and test sensitivity as described in sections 2.2.1, 2.2.2, and 2.2.3, derived from sources [1, 27, 33, 34, 56, 57]. ### Calculation of Infectious Potential To calculate IP for each individual we discretise equations (3), (5) and (10). We choose a time-step of 1 day for computational efficiency and because this is the shortest time between tests that we consider. In practice, this means we assume that the viral load on day \(t\in\mathbb{Z}\) is given by \(V(t)\) for the whole day. Since the viral load can actually vary quickly, and therefore the infectiousness can vary between the start and end of a day, we account for this by discretising equation (10) as follows \[J_{t}^{(k)}=\int_{t-0.5}^{t+0.5}J_{k}[V_{k}(t)]\mathrm{d}t \tag{A.1}\] where the integral is computed analytically using equations (5) and (10). Thus, the infectiousness on day \(t\) is given by the average infectiousness over the 24-hour period. This means that the integral in the calculation of IP (in equation (3)) is discretised as \[\text{IP}_{k}\approx\frac{1}{\langle\tau_{\text{inf}}\rangle}\sum_{t=0}^{\tau_{\text{inf}}^{(k)}}\left(1-I_{t}^{(k)}\right)J_{t}^{(k)}\] (A.2) where \(I_{t}^{(k)}=0\) if individual \(k\) is at work that day, and \(I_{t}^{(k)}=1\) if not. The day \(t_{\text{max}}^{(k)}\) is the last day for which an individual has a viral load exceeding \(V_{\text{cut}}\). To model isolation, test results and symptom isolation are drawn with the relevant probabilities for individual \(k\) and if any trigger an isolation, the earliest isolation day becomes \(t_{\text{isol}}^{(k)}\). Note that symptomatic isolation is assumed to begin on the nearest whole number day to the randomly drawn symptom onset time. Similarly, for positive PCRs, isolation begins on the nearest whole number day from the test result (see figure A.1 for a summary of the turnaround times used). For positive LFDs, people isolate on the day they perform their test (so it is assumed to be taken at the start of the day, before any workplace exposure). Once \(t_{\text{isol}}^{(k)}\) has been determined for an individual, we set \(I_{t}=1\) for \(t_{\text{isol}}\leq t<t_{\text{isol}}+\tau_{\text{isol}}\) and re-calculate their IP. To calculate \(\langle\tau_{\text{inf}}\rangle\) we generated \(5\times 10^{6}\) trajectories and calculated the average number of days for which individuals had a viral load \(V_{t}>V_{\text{cut}}\), i.e. the period of time they could test positive via PCR.
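A minimal sketch of this discretised IP calculation is given below. It assumes that a daily infectiousness trajectory \(J_t\) and an isolation start day have already been generated by the viral-load, testing and symptom models described above; work shift patterns are ignored and the numerical values are placeholders rather than model output.

```python
import numpy as np

def infectious_potential(J, isolation_day, tau_isol, mean_tau_inf):
    """Discretised IP for one individual, following equation (A.2).

    J             : daily infectiousness values J_t (day 0 up to the last day above V_cut)
    isolation_day : first day of isolation, or None if no isolation is triggered
    tau_isol      : length of the isolation period in days
    mean_tau_inf  : population mean number of days with viral load above V_cut
    """
    I = np.zeros(len(J))                     # I_t = 1 on days spent isolating
    if isolation_day is not None:
        I[isolation_day:isolation_day + tau_isol] = 1.0
    return float(np.sum((1.0 - I) * J) / mean_tau_inf)

if __name__ == "__main__":
    # Placeholder trajectory: infectiousness rising to a peak and then decaying over 10 days.
    J = np.array([0.0, 0.1, 0.5, 1.0, 0.8, 0.5, 0.3, 0.15, 0.05, 0.0])
    print("no isolation:        ", infectious_potential(J, None, 10, mean_tau_inf=8.0))
    print("isolation from day 3:", infectious_potential(J, 3, 10, mean_tau_inf=8.0))
```

Broadly, \(\Delta\)IP for a testing regime then follows by averaging IP over many simulated individuals with and without the regime and taking the relative reduction, as in equation (4).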
These \(\langle\tau_{\text{inf}}\rangle\) values are therefore different for the two viral load models, and are given in Supplementary table S1. The code used to perform all of these calculations is available at [54]. ### Workplace outbreak simulations We use the same Julia program to simulate both workplace transmission scenarios outlined in section 2.5 [55]. The simulation proceeds as follows. At initialisation, the following model features are generated: 1. Agents are assigned roles and shift patterns * All staff have the same role and boolean shift pattern. Each is assigned a random permutation number from 1-14 to determine when their shift pattern starts. * In the two-component model, all patients are also assigned a nominal "shift pattern"; however, this has the value 'true' for every day. 2. If there is a testing regime for staff: Figure A.1: Probability distribution of PCR turnaround times used in this paper in hours. Vertical lines show the median, mean and 95% central interval of this distribution, as labelled. This distribution was created to imitate data collected by NHS Test-and-Trace in October 2021, based on weekly PCR testing of staff working in high-risk public sector jobs. The times quoted are measured from the time the test was taken (at work) until the result was received (electronically) by the member of staff. * Staff are selected at random with probability \(P_{\rm not}\) to be 'non-testers'. * Testing staff are assigned a boolean testing pattern which has the same start day as their shift pattern. * For all days labelled as a testing day, each is changed to a non-testing day with probability \(P_{\rm miss}\). 3. An index case is chosen at random and infected. Upon infection, an agent is assigned the following: * Viral load and infectiousness trajectories (equations (5) and (10)). * Symptom onset time. * Boolean adherence to symptomatic isolation (true with probability \(p_{\rm isol}\)). * If testing: a test-positive probability trajectory (equation (12) or Supplementary table S1). The main simulation loop is executed for each day of the simulation, and proceeds as follows: 1. Update the infectious state of all individuals, moving any who have reached the end of their infectious period to 'Recovered' status. 2. Perform testing for all agents testing that day. For all positive tests generate an isolation time from the current day as \(\lfloor\tau_{d}+u_{01}\rceil\) where \(u_{01}\sim U(0,1)\) is a number uniformly distributed between 0 and 1, and \(\lfloor.\rceil\) indicates rounding to the nearest integer (to simulate tests being performed before or after shifts, at random). 3. Update isolation status for any who are due to isolate on this day. 4. Identify all agents 'on shift' on this day. 5. Generate all workplace contacts of infectious agents: * For each infectious agent with role \(k\), generate all contacts with each job role \(m\) by selecting from those on shift with probability \(\mathbf{P}_{k,m}^{(c)}\). * Calculate the probability that each contact results in infection using the expression in equation (15). 6. Generate all successful workplace infection events at random with the assigned probabilities. 7. For any infectees that are subject to more than one successful infection event, select the recorded infection event at random. 8. Record all infection events, and for every individual infected change their status to 'infected' and their infection time to the current day. Their susceptibility is set to 0. 9.
Increment the day and return to step 1 unless the maximum number of days has been simulated or if no infectious agents remain in the simulation. ## Appendix B Sensitivity Analysis ### Method To estimate the sensitivity of the testing model to various parameter assumptions, we use an \"Elementary Effects\" approach [58] for the main 11 model parameters used for the LFD testing model. The prior distributionsfor these parameters have not been possible to estimate, given that most of them only come from a single source and only some of them have been the subject of meta-analyses. Therefore, using the information available we have set plausible ranges for the parameters we test in table B.1, and visualised their effects on model inputs in figure B.1. \\begin{tabular}{|c|c|c|c|} \\hline Parameter & Distribution & Range & Literature values \\\\ \\hline Median peak & & \\(10^{6.4}\\) - & \\(10^{5.6}\\) - \\(10^{7.0}\\)[13], \\(\\sim 10^{7.5}\\) \\\\ VL \\(V_{p}\\) & U[\\(\\log(V_{p})\\)] & \\(10^{8.8}\\) & [26] \\(\\sim 10^{7.6}\\)[27], \\(\\sim 10^{8.0}\\) \\\\ & & copies/ml & [59], \\(\\sim 10^{8.9}\\)[32]. \\\\ \\hline Median peak & U[\\(t_{p}\\)] & 3.0 - 5.0 & \\\\ VL time \\(t_{p}\\) & & days & 3.2 [26], 4.0 [27], 5 [32] \\\\ \\hline Median VL inv. & & 0.25 - 0.35 & 0.17 - 0.23 [13], \\(\\sim\\)0.25 \\\\ growth \\(1/r\\) & U(\\(1/r\\)) & days & based on [32], 0.29 [26], 0.3 \\\\ & & days & [27] \\\\ \\hline Median VL inv. & & 0.41 - 1.0 & Biased towards longer \\\\ decay \\(1/d\\) & U(\\(1/d\\)) & 0.43 & shedding durations than \\\\ & & days & used here [60, 59, 61, 32] \\\\ \\hline Median inf. & & & Lower (not quantified) \\\\ sigmoidal slope & U(\\(h\\)) & 0.27 - 3.0 & [15, 62, 35]. Similar/higher \\\\ \\(h\\) & & & (not quantified) [63, 13] \\\\ \\hline Inf. scale & & \\(10^{5.4}\\) - & \\\\ param. \\(K_{m}\\) & U[\\(\\log(K_{m})\\)] & \\(10^{7.8}\\) & Lower [15, 62], Higher [63] \\\\ & & copies/ml & \\\\ \\hline \\end{tabular} \\begin{table} \\begin{tabular}{|c|c|c|c|} \\hline & & & Varying between two \\\\ LFD max. sens. & U(\\(\\lambda\\)) & 0.54 – 0.84 & sources used and \\\\ \\(\\lambda\\) & & & incorporating lower values \\\\ \\hline & & \\(10^{2.4}\\) – & \\\\ LFD sens. & U[\\(\\log_{10}(V_{50}^{(l)})\\)] & \\(10^{5.4}\\) & Varying between two \\\\ cutoff \\(V_{50}^{(l)}\\) & & copies/ml & sources used. \\\\ \\hline & & & \\\\ LFD sigmoidal & U[\\(\\log(s_{l})\\)] & 0.67 – 2.2 & \\begin{tabular}{c} chosen to vary between 2 \\\\ sources used. \\\\ \\end{tabular} \\\\ \\hline & & & Dependent on symptomatic \\\\ Symp. prob. & U(\\(P_{\\rm symp}\\)) & 0.20 – 0.80 & \\begin{tabular}{c} isolation criteria \\\\ vaccination status. \\\\ \\end{tabular} \\\\ \\hline & & & Near to peak viral load \\\\ & & & [64, 65]. \\(\\sim\\) 5 [56] \\\\ \\hline \\end{tabular} \\end{table} Table B.1: List of parameters varies in the elementary effects sensitivity analysis. The ‘distribution’ column shows the assumed parameter distribution that the parameters are evenly sampled across, where U denotes a uniform distribution. The ‘range’ column gives the maximum and minimum values of these distributions used in the sensitivity analysis. The final column provides some justification for the ranges used. Note that only studies where nasal viral load data was collected in the incubation period was used to inform the ranges for peak viral load, timing and growth rate parameters. The Elementary Effects method was performed as follows. 
The chosen (uniform) prior for each parameter \(k\in\{1,\ldots,11\}\) was split into \(p=8\) equal quantiles. We will denote these quantiles for parameter \(k\) by the vector \(\mathbf{q}_{k}=[0,1/7,\ldots,1]\). Then, \(r=50\) paths were drawn to sample the parameter quantile space. This was performed by first randomly drawing a starting point \(\mathbf{q}^{(0)}\) (i.e. drawing a number from \(\mathbf{q}_{k}\) for each of the 11 parameters) from the \(8^{11}\) possible starting points in parameter-value space. Each path then consists of 12 points in this parameter space, obtained by taking steps of size \(\Delta q\) in each of the 11 parameter dimensions in a random order. A step size of \(\Delta q=p/(2(p-1))=4/7\) was chosen to give equal-probability sampling across the plausible parameter ranges. The step direction (positive or negative) was determined by the starting point (since \(\Delta q>1/2\), if \(q_{k}^{(0)}>0.5\) the step for parameter \(k\) has to be negative). The path forms the 12\(\times\)11 matrix \(\mathsf{Q}\). To improve the spread of these \(r=50\) paths (i.e. ensure they are well separated in parameter space) we iteratively replaced paths with new random paths as follows:

1. Calculate the distance \(d_{ij}=\sum_{n_{1}=1}^{12}\sum_{n_{2}=1}^{12}\sqrt{\sum_{k=1}^{11}(Q_{n_{1},k}^{(i)}-Q_{n_{2},k}^{(j)})^{2}}\) for each pair of paths in the \(r=50\) generated.
2. Calculate the path spread squared \(D_{1,\ldots,r}^{2}=\sum_{i=1}^{r}\sum_{j=i+1}^{r}d_{ij}^{2}\).
3. Generate a new random path \(\mathsf{Q}^{(r+1)}\).
4. For each path \(i=1,\ldots,r\), replace the path \(i\) with the path \(r+1\) and recalculate \(D_{1,\ldots,i-1,i+1,\ldots,r+1}^{2}\).
5. If any \(D_{1,\ldots,i-1,i+1,\ldots,r+1}^{2}>D_{1,\ldots,r}^{2}\) for \(i\in\{1,\ldots,50\}\), replace the path \(i\) corresponding to the maximum value of \(D_{1,\ldots,i-1,i+1,\ldots,r+1}^{2}\) with the path \(r+1\) and return to step 3.

We repeated this process 5000 times, at which point paths were being replaced infrequently (approx. every 50 iterations), suggesting that the paths were reasonably well spread. For each step on each of the final \(r=40\) paths, we ran two of the simulated scenarios considered in figure 6, namely the case with 2 LFDs per week and the case with daily LFDs (with 100% adherence to testing), using 1000 simulations per scenario. The outcome measure we use is \(\Delta\)IP, which was calculated once again by simulating \(10^{5}\) realisations. The elementary effects for each parameter \(k\) and path \(i\) are then calculated as \[\text{EE}_{k}(Q^{(i)})=\frac{\Delta\text{IP}(\mathbf{Q}_{n+1}^{(i)})-\Delta\text{IP}(\mathbf{Q}_{n}^{(i)})}{Q_{n+1,k}^{(i)}-Q_{n,k}^{(i)}},\] (B.1) where \(n+1\) is the step in path \(i\) at which the parameter \(k\) changes (i.e. \(Q_{n+1,k}^{(i)}-Q_{n,k}^{(i)}=\pm\Delta q\)). The summary statistics of the elementary effects for each parameter are then defined as \[\mu_{k}^{*}=\frac{1}{r}\sum_{i=1}^{r}\left|\text{EE}_{k}(Q^{(i)})\right|\] (B.2) \[\mu_{k}=\frac{1}{r}\sum_{i=1}^{r}\text{EE}_{k}(Q^{(i)})\] (B.3) \[\sigma_{k}^{2}=\frac{1}{r-1}\sum_{i=1}^{r}[\text{EE}_{k}(Q^{(i)})-\mu_{k}]^{2}\] (B.4) Finally, we repeated this whole process 10 times in total to estimate the uncertainty in \(\mu_{k}^{*}\), \(\mu_{k}\) and \(\sigma_{k}\).
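The workplace scenarios used in this analysis were run with the Julia program described above [55]. Purely as an illustration of the sampling scheme and of equations (B.1)-(B.4), a compact Python sketch is given below. The path-spread optimisation (steps 1-5 above) is omitted, and the \(\Delta\)IP evaluation is replaced by a trivial placeholder function, so the numbers it produces are meaningless; only the structure of the calculation is intended to match the description above.

```python
import numpy as np

rng = np.random.default_rng(0)
K, P, R = 11, 8, 50                      # parameters, quantile levels, paths
DQ = P / (2 * (P - 1))                   # step size 4/7, as in the text

def random_path():
    """One elementary-effects path: a 12 x 11 array of quantile values in [0, 1]."""
    levels = np.arange(P) / (P - 1)      # 0, 1/7, ..., 1
    q = rng.choice(levels, size=K)       # random starting point
    path = [q.copy()]
    for k in rng.permutation(K):         # perturb each parameter once, random order
        q = q.copy()
        q[k] += DQ if q[k] <= 0.5 else -DQ   # step direction fixed by starting point
        path.append(q.copy())
    return np.array(path)

def delta_ip(q):
    """Placeholder for the simulated outcome ΔIP at quantile point q.
    In the paper this is estimated from repeated workplace simulations."""
    return float(np.sum(q))              # illustrative only

paths = [random_path() for _ in range(R)]
EE = np.full((R, K), np.nan)             # elementary effects, eq. (B.1)
for i, Q in enumerate(paths):
    y = np.array([delta_ip(q) for q in Q])
    for n in range(K):
        k = int(np.argmax(np.abs(Q[n + 1] - Q[n])))   # parameter changed at this step
        EE[i, k] = (y[n + 1] - y[n]) / (Q[n + 1, k] - Q[n, k])

mu_star = np.abs(EE).mean(axis=0)        # eq. (B.2)
mu = EE.mean(axis=0)                     # eq. (B.3)
sigma = EE.std(axis=0, ddof=1)           # eq. (B.4)
```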
### Results

The summary statistics for the sensitivity analysis are shown in tables B.2 and B.3. For reference, from figure 6, we see that the baseline values of \(\Delta\)IP for these cases are in the range 0.6-0.9. Therefore, values of \(\mu^{*}<0.03\) correspond to a \(<5\%\) change in the result and can be treated as not having a significant effect on the predictions. In both cases this includes the inverse growth rate (\(1/r\)), peak viral load time \(t_{p}\), mean symptom onset time \(\mu_{s}\), and the infectiousness scale parameter \(K_{m}\). Note that we do not change the stipulation within the model that symptom onset time must occur within 2 days either side of peak viral load time, which may explain why neither of these parameters has a large effect (it has been highlighted elsewhere that the relative timing of onset of infectiousness and symptoms has important implications for testing efficacy [53]).

\begin{table} \begin{tabular}{|c|c|c|c|} \hline \multirow{2}{*}{Parameter} & \multicolumn{3}{c|}{\(\Delta\)IP (2 LFDs per week)} \\ \cline{2-4} & \(\mu^{*}\) & \(\mu\) & \(\sigma\) \\ \hline LFD max. sens. \(\lambda\) & 0.200 \(\pm\) 0.004 & 0.200 \(\pm\) 0.004 & 0.073 \(\pm\) 0.003 \\ \hline LFD sens. cutoff \(V_{50}^{(l)}\) & 0.191 \(\pm\) 0.004 & -0.188 \(\pm\) 0.005 & 0.110 \(\pm\) 0.004 \\ \hline Median peak VL \(V_{p}\) & 0.136 \(\pm\) 0.003 & 0.121 \(\pm\) 0.004 & 0.111 \(\pm\) 0.004 \\ \hline Median inf. sigmoidal slope \(h\) & 0.125 \(\pm\) 0.006 & 0.113 \(\pm\) 0.006 & 0.116 \(\pm\) 0.006 \\ \hline Symp. prob. \(P_{\rm symp}\) & 0.096 \(\pm\) 0.004 & -0.078 \(\pm\) 0.004 & 0.091 \(\pm\) 0.007 \\ \hline Median VL inv. decay \(1/d\) & 0.084 \(\pm\) 0.005 & 0.017 \(\pm\) 0.004 & 0.108 \(\pm\) 0.007 \\ \hline Inf. scale param. \(K_{m}\) & 0.067 \(\pm\) 0.004 & 0.039 \(\pm\) 0.004 & 0.082 \(\pm\) 0.007 \\ \hline LFD sigmoidal slope \(s_{l}\) & 0.067 \(\pm\) 0.001 & 0.049 \(\pm\) 0.004 & 0.069 \(\pm\) 0.002 \\ \hline Median VL inv. growth \(1/r\) & 0.056 \(\pm\) 0.002 & 0.032 \(\pm\) 0.003 & 0.073 \(\pm\) 0.005 \\ \hline Median peak VL time \(t_{p}\) & 0.047 \(\pm\) 0.002 & -0.002 \(\pm\) 0.003 & 0.069 \(\pm\) 0.005 \\ \hline Symp. onset \(\mu_{s}\) & 0.044 \(\pm\) 0.002 & -0.005 \(\pm\) 0.002 & 0.061 \(\pm\) 0.003 \\ \hline \end{tabular} \end{table} Table B.2: Sensitivity of the \(\Delta\)IP measure to various model parameters in the case of testing with 2 LFDs per week (with 100% adherence) vs. no testing. Results are sorted in descending order of \(\mu^{*}\) value. Values given are the mean of 10 repeated sensitivity analyses \(\pm\) the sample standard deviation (estimated by 100 bootstrap samples).

The same 4 parameters also have the largest effect on both cases simulated, namely the LFD maximum sensitivity \(\lambda\), the LFD sensitivity threshold \(V_{50}^{(l)}\), the peak viral load \(V_{p}\), and the slope parameter for the sigmoidal relationship between infectiousness and viral load, \(h\). In all of these cases, the effects appear to essentially always act in the same direction (i.e. \(|\mu_{k}|\approx\mu_{k}^{*}\)). The most obvious effects are those of the LFD sensitivity parameters: increasing \(\lambda\) and decreasing \(V_{50}^{(l)}\) improve the sensitivity of the LFD tests, and so these have a very large impact on \(\Delta\)IP. The increase in \(\Delta\)IP with peak viral load \(V_{p}\) is a similar effect: a higher peak viral load essentially improves the sensitivity of the LFD tests, since these are defined as a function of viral load. The most interesting effect is perhaps that of the parameter \(h\). As shown in figure B.1(b), increasing \(h\) makes the infectiousness vs. viral load relationship sharper; in effect this concentrates the infectious period around the time when testing is most sensitive, and thus increases \(\Delta\)IP. In the opposite case, where \(h\) decreases, the infectiousness is more spread out, and people are more likely to be infectious before they test positive.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline \multirow{2}{*}{Parameter} & \multicolumn{3}{c|}{\(\Delta\)IP (Daily LFDs)} \\ \cline{2-4} & \(\mu^{*}\) & \(\mu\) & \(\sigma\) \\ \hline Median inf. sigmoidal slope \(h\) & 0.170 \(\pm\) 0.007 & 0.169 \(\pm\) 0.007 & 0.132 \(\pm\) 0.006 \\ \hline LFD sens. cutoff \(V_{50}^{(l)}\) & 0.151 \(\pm\) 0.003 & -0.148 \(\pm\) 0.003 & 0.111 \(\pm\) 0.004 \\ \hline LFD max. sens. \(\lambda\) & 0.137 \(\pm\) 0.003 & 0.137 \(\pm\) 0.003 & 0.076 \(\pm\) 0.009 \\ \hline Median peak VL \(V_{p}\) & 0.112 \(\pm\) 0.003 & 0.068 \(\pm\) 0.005 & 0.124 \(\pm\) 0.004 \\ \hline Symp. prob. \(P_{\rm symp}\) & 0.089 \(\pm\) 0.004 & -0.079 \(\pm\) 0.004 & 0.087 \(\pm\) 0.009 \\ \hline Median VL inv. decay \(1/d\) & & -0.035 \(\pm\) 0.004 & 0.118 \(\pm\) 0.006 \\ \hline Inf. scale param. \(K_{m}\) & 0.069 \(\pm\) 0.002 & 0.056 \(\pm\) 0.003 & 0.080 \(\pm\) 0.006 \\ \hline LFD sigmoidal slope \(s_{l}\) & 0.047 \(\pm\) 0.002 & 0.034 \(\pm\) 0.002 & 0.054 \(\pm\) 0.005 \\ \hline Median peak VL time \(t_{p}\) & 0.037 \(\pm\) 0.002 & -0.010 \(\pm\) 0.002 & 0.053 \(\pm\) 0.005 \\ \hline Median VL inv. growth \(1/r\) & 0.035 \(\pm\) 0.001 & 0.016 \(\pm\) 0.002 & 0.047 \(\pm\) 0.002 \\ \hline Symp. onset \(\mu_{s}\) & 0.032 \(\pm\) 0.001 & 0.006 \(\pm\) 0.002 & 0.046 \(\pm\) 0.003 \\ \hline \end{tabular} \end{table} Table B.3: Sensitivity of the \(\Delta\)IP measure to various model parameters in the case of daily testing with LFDs (with 100% adherence) vs. no testing. Results are sorted in descending order of \(\mu^{*}\) value. Values given are the mean of 10 repeated sensitivity analyses \(\pm\) the sample standard deviation (estimated by 100 bootstrap samples).

Figure B.1: Visualisation of the parameter ranges used in the sensitivity analysis. In these figures, parameters are varied independently to show their individual influence across all of the values they take in the sensitivity analysis. Darker shading indicates higher values and the colour indicates the parameter that has been varied, as labelled. The shaded area around the curves shows 95% confidence intervals in the mean, estimated using 1000 bootstrapping samples. (a) Population mean viral load trajectories (Ke et al. model - black line) while varying the peak viral load \(V_{p}\), peak time \(t_{p}\), inverse growth rate \(1/r\), and inverse decay rate \(1/d\) parameters. (b) Population mean infectiousness relationships (Ke et al. model - black line) while varying the slope \(h\) and threshold viral load \(K_{m}\) parameters.
(c) Test-positive probability relationships (‘high’ and ‘low’ sensitivity models shown by the solid and dashed black lines respectively) while varying the maximum sensitivity \\(\\lambda\\), the threshold viral load \\(V_{50}^{(l)}\\), and slope \\(s_{l}\\) parameters.
Repeat asymptomatic testing in order to identify and quarantine infectious individuals has become a widely-used intervention to control SARS-CoV-2 transmission. In some workplaces, and in particular health and social care settings with vulnerable patients, regular asymptomatic testing has been deployed to staff to reduce the likelihood of workplace outbreaks. We have developed a model based on data available in the literature to predict the potential impact of repeat asymptomatic testing on SARS-CoV-2 transmission. The results highlight features that are important to consider when modelling testing interventions, including population heterogeneity of infectiousness and correlation with test-positive probability, as well as adherence behaviours in response to policy. Furthermore, the model based on the reduction in transmission potential presented here can be used to parameterise existing epidemiological models without them having to explicitly simulate the testing process. Overall, we find that even with different model parameterisations, in theory, regular asymptomatic testing is likely to be a highly effective measure to reduce transmission in workplaces, subject to adherence. This manuscript was submitted as part of a theme issue on \"Modelling COVID-19 and Preparedness for Future Pandemics\".
# Discovery of the compact X-ray source inside the Cygnus Loop Emi Miyata1 Hiroshi Tsunemi1 Ken'ichi Torii1 Kiyoshi Hashimotodani Department of Earth and Space Science, Graduate School of Science, Osaka University 1-1, Machikaneyama, Toyonaka, Osaka 560-0043 E-mail(EM) : miyata @ ess.sci.osaka-u.ac.jp Takeshi Tsuru1 and Katsuji Koyama1 Cosmic Ray Group, Department of Physics, Kyoto University Kitashirakawa-Oiwake-Cho, Sakyo, Kyoto 606-8502 Kazuya Ayani Bisei Astronomical Observatory Ohkura, Bisei, Okayama 714-1411 Kouji Ohta Department of Astronomy, Faculty of Science, Kyoto University Sakyo-ku, Kyoto 606-8502 Michitoshi Yoshida Okayama Astrophysical Observatory Kamogata-cho, Asakuchi-gun, Okayama 719-02 Footnote 1: email: [email protected] Footnote 2: email: [email protected] ## 1 Introduction The supernova (SN) explosion is a source of heavy elements in the galaxy. After the explosion, they are left as supernova remnants (SNRs) which are bright in various wavelengths. The X-ray spectrum of the SNR is useful to perform the diagnostic of the plasma which contains heavy elements. The young SNRs, like Cas-A (Holt et al. 1996), Tycho (Hwang & Gotthelf 1997), and Kepler (Decourchelle et al. 1997), show various emission lines in their spectra. They are mainly originated from the ejecta rather than the interstellar matter (ISM). Whereas, no associated compact source is reported for those young SNRs showing thin thermal emission. There are three historical SNe containing an associated compact source inside: Crab nebula (SN 1054), G11.2-0.3 (SN386), and 3C58 (SN 1181) (Clark & Stephenson 1977). The young SNRs associated with compact sources usually show non-thermal emission. Some of the middle aged SNRs containing compact sources show thermal emission: the Vela SNR (Kahn et al. 1983; Bocchino, Maggio, & Sciortino 1994), and Puppis-A (Petre, Becker, & Winkler 1996). The Cygnus Loop is one of the well studied middle aged SNRs in various wavelengths. There is a high temperature component in its center showing center filled structure above 2 keV (Hatsukade & Tsunemi 1990). Miyata et al. (1998) detected strong emission lines of Si, S, and Fe from the central part of the Cygnus Loop showing high metal abundance. Whereas, Miyata et al. (1994) reported the sub-solar abundance from the North-East limb of the Loop which must correspond to the abundance of the ISM. Taking into account the projection effect, Miyata et al. (1998) concluded that there were Si, S, and Fe rich plasmas in the center of the Loop. The metal rich plasma must have originated from the ejecta of the SN with the progenitor mass of 25 \\(\\rm\\,M_{\\odot}\\), strongly suggesting the presence of a stellar remnant. Thorsett et al. (1994) reported that they discovered a pulsar (PSR J2043\\(+\\) 2740) at 430 MHz which was 21 pc away from the center. This location is out of the shell. The distance inferred from the dispersion measure is consistent with that of the Cygnus Loop, whereas the characteristic age is 1.21 \\(\\times 10^{6}\\) yr (Ray et al., 1996), which is two orders of magnitude longer than that of the Cygnus Loop (1.8\\(\\times 10^{4}\\) yr; Ku et al., 1984). The transverse velocity (3,000 km s\\({}^{-1}\\)) is also too high compared with other pulsars (Lyne & Lorimer, 1994). Therefore, they ruled out the association between PSR J2043\\(+\\) 2740 and the Cygnus Loop. So far, no compact source associated with the Cygnus Loop is established. 
We have performed the observation project of the whole Cygnus Loop using the ASCA GIS (Tanaka, Inoue, & Holt, 1994). We present here the discovery of an X-ray compact source which is inside the Cygnus Loop.

## 2 Observation and data analysis

We started the observation project to cover the whole Cygnus Loop with the ASCA GIS. It was started in the PV phase observation in 1993 and was completed in the AO5 observation in 1997. So far we have performed 28 pointing observations, and each observation time was about 10 ks. In the observation performed on June 3, 1997, we found a compact source which was hardly seen in the ROSAT all-sky survey (Aschenbach, 1994).

### Observation

The source was detected both in the GISs (GIS-2 & GIS-3) and in the SIS (SIS-1). The SISs were operated in 1-CCD Faint mode, and each sensor observed a different sky area. The source was detected at the edge of the SIS-1 FOV, whereas it was not detected in the SIS-0. The GISs were operated in PH mode with the nominal bit assignment (10-8-8-5-0-0). Figure 1(a) shows the X-ray surface brightness map obtained with the ROSAT all-sky survey (Aschenbach 1994), and the black circle shows the GIS FOV. The X-ray image obtained with the GIS is shown in figure 1(b). We found two compact sources within the GIS FOV. Source-1 shown in figure 1(b) has already been detected both with the Einstein Observatory (Ku et al. 1984) and with the ROSAT, as shown in figure 1(a). It has been identified as a K5 star in the Hipparcos catalogue. On the other hand, the brightest source in the GIS FOV has not yet been reported. We tentatively call this source AX J2049.6+2939. We confirmed it as a compact source within the ASCA imaging capability. The 90 % upper limit on the spatial extent was 14\({}^{\prime\prime}\). The source location (J2000) is \(\alpha=20^{\rm h}49^{\rm m}3\)%\(\,7\) and \(\delta=29^{\rm d}38^{\prime}\,57^{\prime\prime}\) with an error radius of 70\({}^{\prime\prime}\) (90 % confidence level). It is about 87\({}^{\prime}\) away from the center.

### X-ray spectra

The total exposure time was 11 ks after the standard screening criteria. As shown in figure 1(b), emission from the shell region of the Cygnus Loop extends over AX J2049.6+2939. We estimated the contamination from the shell region using the ROSAT image, assuming the spatial distribution of the X-ray emission from the shell region to be similar between the GIS and the PSPC. We retrieved the ROSAT PSPC archival data through the HEASARC/GSFC Online Service. The sequence number of the data set is 500255. Figure 1(c) shows the PSPC image with the GIS contour map. We should note that Source-1 was the only source detected inside the GIS FOV with the PSPC. The shell emission in the upper half of the GIS FOV is generally stronger than that in the lower half. The shell emission in the region of AX J2049.6+2939 shown in the PSPC image is relatively weak. We selected the background region for the spectral analysis, shown as a rectangular region in figure 1(b), considering the intensity of the shell region and the statistics. The source intensity is about 0.11 c s\({}^{-1}\)/GIS and 0.15 c s\({}^{-1}\)/SIS after subtracting the background. We found no intensity variation during our observation period. Figure 2 shows the spectra extracted from a radius of 3\({}^{\prime}\) centered on AX J2049.6+2939. There is no clear emission line in the spectrum, in marked contrast to the spectra of other regions of the Cygnus Loop (Miyata 1996).
We fitted the data using three models: a thermal bremsstrahlung model, the Raymond & Smith model (Raymond & Smith 1977), and a power law model. The energy bands we adopted were 0.8-9 keV for the GISs and 0.6-9 keV for the SIS. All models gave us acceptable fits folded with the interstellar absorption feature. Results are summarized in Table 1. ### Short-term variation We searched short-term intensity variation using the GIS data. We sacrificed the timing information in the telemetry (timing bit was set to be zero) since the observation was initially intended to obtain the spectrum in detail. When the timing bit is not assigned, the nominal timing resolutions for high and medium bit rate are 62.5 and 500 ms (Ohashi et al. 1996). However, if the observed target is weak enough not to fill the memory capacity which stores event data before sending to telemetry at the constant rate, the photon arrival time can be estimated with the telemetry rate of 3.91 and 31.25 ms for high and medium bit rate as studied by Hirayama et al. (1996). They clearly showed that the photon arrival time could be determined with the telemetry output rate with offset time of 10.6 ms using the data of PSR B0540-69. Since the count rate of our target is much lower than that of PSR B0540-69, we can safely assume the actual timing resolution to be 3.91 and 31.25 ms for high and medium bit rate. We used the data obtained both with high and medium bit rate data for the temporal analysis, resulting the timing resolution of 31.25 ms (32 Hz). After applying the barycentric correction on photon arrival times, we performed the FFT. We found no coherent pulsation. The upper limits (99% confidence level) on the pulsed fraction for the sinusoidal pulse shape are 21 % (0.8-9 keV) and 31 % (2-9 keV), respectively. ### Other X-ray Observations We obtained the Einstein/IPC data set in the vicinity of AX J2049.6+2939 through the HEASARC/GSFC Online Service as the observation ID of 3784. Observation was performed in December 1979. AX J2049.6+2939 was detected and intensity was \\(0.11\\pm 0.01\\) c s\\({}^{-1}\\). This value was slightly less than that expected (\\(0.16\\pm 0.02\\) c s\\({}^{-1}\\)) from the spectral analysis described in section 2.1. The deep mapping on the Cygnus Loop using the ROSAT HRI is under way (Levenson et al. 1997). The mapping data containing AX J2049.6+2939 is now in the archival data set which can be publicly accessed through the HEASARC/GSFC Online Service as the sequence number of 500462. Observationwas performed in November 1995. We noticed three compact sources in the HRI FOV shown in figure 1(c) with the black crosses. One of them coincides with the position of AX J2049.6+2939 determined with ASCA. Therefore, we conclude that this HRI source corresponds to AX J2049.6+2939. Judging from the X-ray spectrum obtained with ASCA, the expected count rate was (8 \\(\\pm\\) 2)\\(\\times\\)10\\({}^{-2}\\) c s\\({}^{-1}\\) whereas the observed value was (1.03 \\(\\pm\\) 0.07)\\(\\times\\)10\\({}^{-2}\\) c s\\({}^{-1}\\). The improved source position is \\(\\alpha\\) = 20\\({}^{\\rm h}\\)49\\({}^{\\rm m}\\)35.5 and \\(\\delta\\) = 29\\({}^{\\rm d}\\)38\\({}^{\\prime}\\)47\\({}^{\\prime\\prime}\\) with error radius of 10\\({}^{\\prime\\prime}\\) (90 % confidence level). We found no short-term variation in the ROSAT HRI data set mainly due to its poor statistics (total number of photon is \\(\\simeq\\)190). 
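The pulsation search described under "Short-term variation" above amounts to binning the barycentre-corrected arrival times at the 31.25 ms telemetry resolution and inspecting the Fourier power spectrum for a coherent peak. The Python sketch below illustrates that standard procedure with simulated arrival times; it is not the authors' analysis code, and the pulsed-fraction relation used at the end is only the usual rough rule of thumb for a sinusoidal signal.

```python
import numpy as np

dt = 0.03125                       # 31.25 ms time bins (32 Hz sampling)
exposure = 11_000.0                # ~11 ks of screened data

# Placeholder: barycentre-corrected photon arrival times in seconds,
# drawn at roughly the observed ~0.11 c/s rate.
rng = np.random.default_rng(1)
arrival_times = np.sort(rng.uniform(0.0, exposure, size=1200))

# Bin the events into a light curve at the telemetry resolution.
nbins = int(exposure / dt)
counts, _ = np.histogram(arrival_times, bins=nbins, range=(0.0, exposure))

# Fourier power spectrum with Leahy-style normalisation by the total counts.
power = 2.0 * np.abs(np.fft.rfft(counts)) ** 2 / counts.sum()
freqs = np.fft.rfftfreq(nbins, d=dt)

# For a sinusoidal modulation with pulsed fraction p, the expected Leahy power
# is roughly N * p**2 / 2, so the highest observed peak gives a rough limit on p.
imax = 1 + int(np.argmax(power[1:]))       # skip the zero-frequency term
p_upper = np.sqrt(2.0 * power[imax] / counts.sum())
print(f"highest peak {power[imax]:.1f} at {freqs[imax]:.3f} Hz "
      f"-> rough sinusoidal pulsed-fraction limit {p_upper:.2f}")
```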
### Counterpart of AX J2049.6+2939 We searched a counterpart in other wavelengths using _Skyview_ supported by HEA SARC/GSFC. There is no radio source in our error box to a limit of 25 mJy at 4850 MHz (NRA O 48 50MHz survey; Condon et al. 1994). There is also no radio sources in any other catalogues available in _W3Browse_ supported by HEA SARC/GSFC. The closest radio source is 7\\(\\farcm\\)7 away from AX J2049.6+2939 in the north-east direction. Looking at the X-ray image in figure 1(b), there is an extended structure in the north-east direction, which might correspond to another X-ray source associated with the radio source. We retrieved the digitized sky survey (Lasker et al. 1990) image through the _Skyview_. There are a few stellar objects in our error region as shown in figure 3. We carried out the low-dispersion spectroscopy for the brightest source (V=+12.6; HST Guide Star Catalogue) with the CCD spectrograph mounted on the 1.0-m telescope of Bisei Astronomical Observatory on September 10 1997. The spectral resolution derived from the instrumental profile is \\(\\lambda/\\Delta\\lambda\\)\\(\\simeq\\) 1200 at 6000-6500 A. The usable wavelength ranges for the two wavelength setting are 3900-7100 A and 5500-8500 A, respectively. The medium dispersion spectroscopic observations were also carried out with the Cassegrain spectrograph (Kosugi et al. 1995) attached to the 188 cm telescope of Okayama Astrophysical Observatory on September 9 and 10 in 1997. The detector used was the SITe \\(512\\times 512\\) CCD with 20 \\(\\mu\\)m pixels. The spectral resolving power was \\(\\approx\\)2000 at 6000 A. The wavelength regions covered were 4600-5400 A and 620-6900 A. The obtained spectra show the presence of clear absorption lines; Ca H, Ca K, G-band, H\\(\\beta\\), Mg b, Na D, and H\\(\\alpha\\) with a recession velocity of \\(\\sim\\) 0 km s\\({}^{-1}\\) which indicate that it is a G star. Taking into account the X-ray intensity and its spectral shape, we conclude that this star has nothing to do with AX J2049.6+2939. We need a deeper observation for much fainter sources within the error region. We assume the upper limit as V\\(\\gtrsim\\) 20 based on the digitized sky survey image. An X-ray / optical (V-band) flux ratio \\(f_{X\\,(0.1-2.4{\\rm keV})}/f_{V}\\) is \\(\\gtrsim\\) 100 where we assume the power law model for the X-ray flux. Correcting for the interstellar extinction to AX J2049.6+2939 gives \\(f_{X\\,(0.1-2.4{\\rm keV})}/f_{V}\\gtrsim\\) 25. ## 3 Discussion and conclusion Through the observation project of the whole Cygnus Loop with the ASCA GIS, two compact sources were detected above the 5 \\(\\sigma\\) level within the X-ray shell. The brightest compact source is AX J2049.6+2939. The other source was identified to be a K5 star. We estimated the chance probability of finding an unrelated X-ray source with \\(\\gtrsim\\) 5.3 \\(\\times\\) 10\\({}^{-13}\\) erg s\\({}^{-1}\\) in 2-10keV, mainly AGN, inside the Cygnus Loop. Based on the Log\\(N\\)-Log\\(S\\) relation studied by Hayashida (1989), the probability is 26 %. This high chance probability cannot exclude the hypothesis of the AGN. Possible long-term variability between ROSAT and ASCA also supports the hypothesis of the AGN. If AX J2049.6+2939 is an AGN, the expected B magnitude is \\(\\sim\\)16-17 based on the study of the correlation between \\(f_{B}\\) and \\(f_{X}\\) (Zamorani et al. 1981). Considering the neutral H column we obtained in the power law model, the B magnitude can be estimated to be \\(\\sim\\)18-17. 
Furthermore, it should show emission lines in optical region. The optical spectroscopic study for fainter sources is encouraged. Normal stars have \\(f_{X}/f_{V}\\lesssim\\) 1 (Stocke et al. 1991), which is much lower than that of AX J2049.6+2939. In the case of X-ray binaries, the color index, \\(\\xi=B_{0}+2.5{\\rm log}f_{X\\,(2-10{\\rm keV})}\\), is usually introduced to characterize their properties, where \\(B_{0}\\) is the reddening-corrected B magnitude and \\(f_{X\\,(2-10{\\rm keV})}\\) is the 2-10keV X-ray flux. Assuming the power law model and B magnitude of 20, \\(\\xi\\) is \\(\\simeq\\) 18. This value is much higher than the average value of high-mass X-ray binaries (7-15; van Paradijs, & McClintock 1995). On the other hand, obtained \\(\\xi\\) is lower than the average value of low-mass X-ray binaries (\\(\\simeq\\) 22). Thus, AX J2049.6+2939 is not likely an X-ray binary. Flare stars show substantial variability in the X-ray region on the time scale of tens of minutes to hoursin their high states (Pallavicini, Tagliaferri, & Stella 1990). On the other hand, no activities have been detected in the case of AX J2049.6+2939. In quiescent states of flare stars, the optical luminosity was well correlated with the X-ray luminosity (Agrawal, Rao, & Sreekantan 1986). Optical luminosities of all of their sample were 3 or 4 orders of magnitude larger than those of X-ray luminosity whereas the optical luminosity of AX J2049.6+2939 was 4 orders of magnitude smaller than that of the X-ray luminosity. Therefore, the hypothesis of flare star can be ruled out. CVs show a kTe of 10keV or higher (Ishida & Fujimoto 1995) which is much higher than that we obtained. On the contrary, kTe of dwarf novae is generally lower than that of magnetic CVs (Mukai & Shiokawa 1993). Furthermore, they usually show Fe-K emission line with an equivalent width of 500 eV or more. We found no emission line of Fe-K (90 % upper limit is 200 eV). Taking into account these facts, AX J2049.6+2939 is not a CV. Since the spectra are well fitted by a power law model with a photon index of \\(-2.1\\pm 0.1\\), a rotating neutron star can be a plausible candidate for AX J2049.6+2939 (e.g. Makishima et al. 1996; Becker & Trumper 1997). If AX J2049.6+2939 is a neutron star produced in the SN explosion which left the Cygnus Loop, the transverse velocity is estimated to be \\(\\approx\\)950 D\\({}_{770\\rm{pc}}\\)t\\({}^{-1}_{20,000\\rm{yr}}\\) km s\\({}^{-1}\\). In this calculation, we assumed the explosion center is (\\(\\alpha\\) = 20\\({}^{\\rm{h}}\\)49\\({}^{\\rm{m}}\\)15\\({}^{\\rm{s}}\\), \\(\\delta\\) = 30\\({}^{\\rm{d}}\\)51'30'' (19 50); Ku et al. 1984) and the shock wave expanded in spherical symmetry. This assumption is plausible because the X-ray emission at the shell region is originated in the ISM and the proper motion of the ISM is small. Lyne & Lorimer (1994) studied the transverse velocity for 99 pulsars and obtained the average value as 450 \\(\\pm\\) 90 km s\\({}^{-1}\\). However, the transverse velocity of the PSR 224+65 in the Guitar Nebula is estimated to be 986 km s\\({}^{-1}\\) (Harrison et al. 1993), which is larger than our case. Therefore, the hypothesis that AX J2049.6+2939 is a neutron star produced in the SN explosion which left the Cygnus Loop is an acceptable idea. However, the ROSAT observation suggests the possibility of a long-term variability of AX J2049.6+2939. 
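The transverse-velocity estimate quoted above follows from simple geometry: the roughly 87\({}^{\prime}\) offset of AX J2049.6+2939 from the adopted center, the 770 pc distance, and the \(\sim\)20,000 yr age of the Cygnus Loop. The short sketch below merely reproduces that arithmetic and is illustrative only.

```python
import math

SEC_PER_YR = 3.156e7          # seconds per year
M_PER_PC = 3.086e16           # metres per parsec

offset_arcmin = 87.0          # angular offset of AX J2049.6+2939 from the centre
distance_pc = 770.0           # adopted distance to the Cygnus Loop
age_yr = 2.0e4                # adopted age of the remnant

theta = math.radians(offset_arcmin / 60.0)          # offset in radians
transverse_m = theta * distance_pc * M_PER_PC       # projected separation in metres
v_kms = transverse_m / (age_yr * SEC_PER_YR) / 1e3  # mean transverse velocity

print(f"projected offset ~{transverse_m / M_PER_PC:.0f} pc, "
      f"transverse velocity ~{v_kms:.0f} km/s")     # ~19 pc and ~950 km/s
```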
Since the apparent difference of source flux between the ASCA and ROSAT observations might be due to the fact that the source spectrum is not a simple power-law function, we tried another model to characterize the source spectrum. Apparent larger flux in the ASCA band suggests an additional component to the power-law function that dominates at above 2-3 keV. Thus we employed a blackbody component in addition to the power-law function. We found it difficult to independently constrain both the blackbody temperature and the power-law index. Therefore, we assumed the blackbody temperature between 0.1 and 1 keV and estimated the counting rate as observed by the ROSAT HRI. We found that an addition of a blackbody component of temperature \\(kT\\simeq 0.3\\)keV and emitting radius of \\(R=0.1d_{0.7\\rm kpc}\\)km slightly decreased the predicted counting rate for ROSAT HRI compared to a simple power-law model. Since the inferred radius is much smaller than that for a neutron star radius and the inferred temperature is higher than canonical cooling model, if such a blackbody component really exists, it is naturally interpreted as a heated polar cap (e.g., Greiveldinger et al. 1996). There was still about a factor of 3 difference between the predicted and observed counting rate for ROSAT HRI and we concluded that the source was variable. This suggests that the source, AX J 2049.6 + 2939 is not an ordinary rotation powered pulsar such as one in the Crab Nebula. It is well known that an X-ray pulsar 1E 2259 + 586 in the supernova remnant G 109.1-1.0 shows a factor of a few flux variation on timescales of a few years (Corbet et al. 1995). Therefore, if the source AX J 2049.6 + 2939 is indeed a young neutron star associated with the Cygnus Loop, we suggest that it might be an anomalous object such as one in CTB 109. We should note, however, that an anomalous pulsar 1E 2259 + 586 has much softer spectral shape compared to that of AX J 2049.6 + 2939. We reported here the discovery of a compact source within the Cygnus Loop. Based only on the ASCA observational results and optical studies of our error region, we cannot identify AX J2049.6 + 2939 whether it is an AGN or a neutron star. The obtained X-ray spectrum suggests both possibilities whereas the possible long-term variability supports the hypothesis of an AGN. However, optical studies of our error region prefers the hypothesis of a neutron star. It is strongly required to perform the follow-up observation in various wavelengths in order to reveal its nature. The deeper optical observations for other objects inside the error region will clarify whether they are AGNs or not. In the X-ray regions, we expect the pulsation if it is a rotating neutron star. Based on the work done by Seward & Wang (1988), we can estimate the rate of the rotational energy loss from the X-ray luminosity to be \\(10^{35}\\)erg s\\({}^{-1}\\). If we assume the characteristic age of AX J 2049.6 + 2939 to be the same as that of the Cygnus Loop, we expect the pulse period to be \\(\\sim\\)400 ms. Further observation with the ASCA GIS will strongly constrain the pulsedfraction. We thank Profs. S. Kitamoto and K. Hayashida for valuable comments and suggestions. Dr. B. Aschenbach kindly gave us the whole X-ray image of the Cygnus Loop obtained with the ROSAT all-sky survey. We are grateful to all the members of ASCA team for their contributions to the fabrication of the apparatus, the operation of ASCA, and the data acquisition. 
Part of this research has made use of data obtained through the High Energy Astrophysics Science Archive Research Center Online Service, provided by the NASA/Goddard Space Flight Center. ## References * [1] Agrawal, P.C., Rao, A.R., & Sreekantan, B.V. 1986, MNRAS, 219, 225 * [2] Aschenbach, B. 1994, New Horizon of X-ray Astronomy, eds. F. Makino & T. Ohashi (Universal Academy Press), p103 * [3] Becker, W. & Trumper, J. 1997, A&A, 326, 682 * [4] Bocchino, F., Maggio, A., & Sciortino, S. 1994, ApJ, 437, 209 * [5] Charles, P.A., Kahn, S. M., & McKee, C. F. 1985, ApJ, 295, 456 * [6] Clark, D.H., & Stephenson, F.R. 1977, The Historical Supernovae (Oxford: Pergamon) * [7] Condon, J.J., et al. 1994, AJ, 107, 1829 * [8] Corbet, R.H.D., Smale, A.P., Ozaki, M., Koyama, K. & Iwasawa, K. 1995, ApJ, 443, 789 * [9] Decourchelle, A. et al. 1997, X-ray Imaging and Spectroscopy of Cosmic Hot Plasma, p367 * [10] Greiveldinger, C. et al. 1996, ApJ, 465, L35 * [11] Harrison, P.A., Lyne, A.G., & Anderson, B. 1993, MNRAS, 261, 113 * [12] Hatsukade, I. & Tsunemi, H. 1990, ApJ, 362, 556 * [13] Hayashida, K. 1989, Ph. D. thesis, University of Tokyo * [14] Hirayama, M., Nagase, F., Gunji, S., Sekimoto, Y., & Saito, Y. 1996, A SCA News Letter, 4, 18 * [15] Hwang, U., & Gotthelf, E.V. 1997, ApJ, 475, 665 * [16] Holt, S.S., Gotthelf, E.V., Tsunemi, H., & Negoro, H. 1994, PASJ, 46, L151 * [17] Ishida, M., & Fujimoto, R. 1995, Cataclysmic Variables, eds. A. Bianchini, M. Della Valle and M. Orio (Kluwer:Dordrecht), p93 * [18] Kahn, S.M., Brodie, J., Bowyer, S., & Charles, P.A. 1983, ApJ, 269, 212 * [19] Kosugi, G., Ohtani, H., Sasaki, T., Koyano, H., Shimizu, Y., Yoshida, M., Sasaki, M., Aoki, K., & Baba, A. 1995, PASP, 107, 474 * [20] Ku, W H.-M., Kahn, S.M., Pisarski, R., & Long, K.S. 1984, ApJ, 278, 615 * [21] Lasker, B.M., Sturch, C.R., McLean, B.J., Russell, J.L., Jenkner, H., & Shara, M.M. 1990, AJ, 99, 2019 * [22] Levenson, N.A., et al. 1997, ApJ, 484, L304 * [23] Lyne, A.G., & Lorimer, D.R. 1994, Nature, 369, 127 * [24]* [] Makishima, K. et al. 1996, PASJ, 48, 171 * [] Miyata, E., Tsunemi, H., Pisarski, R., & Kissel, S. E. 1994, PASJ, 46, L101 * [] Miyata, E. 1996, Ph. D. thesis, Osaka university * [] Miyata, E., Tsunemi, H., Kohmura, T., Suzuki, S. & Kumagai, S. 1998, PASJ, 50, 257 * [] Mukai, K., & Shiokawa, K. 1993, ApJ, 418, 863 * [] Ohashi, T. et al. 1996, PASJ, 48, 157 * [] Pallavicini, R., Tagliaferri, G., & Stella, L. 1990, A&A, 228, 403 * [] Petre, R., Becker, C.M., & Winkler, P.F. 1996, ApJ, 465, L43 * [] Ray, P.S., et al. 1996, ApJ, 470, 1103 * [] Raymond, J.C., & Smith, B.W. 1977, ApJS, 35, 419 * [] Seward, F.D. & Wang, Z. 1988, ApJ, 332, 199 * [] Stocke, J.T. et al. 1991, ApJS, 76, 813 * [] Tanaka, Y., Inoue, H., & Holt, S.S. 1994, PASJ, 46, L37 * [] Thorsett, S.E., et al. 1994, IAU circular No.6012 * [] van Paradijs, J., & McClintock, J.E. 1995, in X-ray Binaries, ed W.H.G. Lewin, J. van Paradijs, & E.P.J. van den Heuvel (Cambridge: Cambridge Univ. Press ), p58 * [] Zamorani, G., et al. 1981, ApJ, 245, 357Figure 1: (a) X-ray surface brightness map of the Cygnus Loop obtained with the ROSAT all-sky survey (Aschenbach 1994). The GIS FOV is shown by a black circle. (b) X-ray image obtained with the ASCA GIS. The GIS image was smoothed with a Gaussian of \\(\\sigma\\) = 1\\({}^{\\prime}\\). 2-100 % of the peak brightness is logarithmically divided into 15 levels. The rectangular region shows the area accumulated the background in the spectral analysis. (c) X-ray image obtained with the ROSAT PSP C. 
The contour map is same as (b). Three black crosses correspond to the sources detected with the ROSAT HRI. Figure 2: X-ray spectra obtained with GIS-2, GIS-3, and SIS-1. Solid lines show the best fit curves of a power law model. Lower panel shows the residuals of the fits. Figure 3: The ROSAT HRI contour map superimposed on the digitized sky survey image. The HRI image was smoothed with a Gaussian of \\(\\sigma\\) = 16\\({}^{\\prime\\prime}\\). AX J 2049.6+2939 is at the center of this image and innermost contour level roughly corresponds to the error circle determined with the HRI image. \\begin{table} \\begin{tabular}{l c c c} \\hline \\hline Model & \\(\\chi^{2}\\)/d.o.f & Parameters & \\(N_{\\rm H}\\) [\\(10^{21}\\)cm\\({}^{-2}\\)] \\\\ \\hline Bremsstrahlung & 207.1/194 & kTe = 4.6 \\(\\pm\\) 0.5 & 1.3 \\(\\pm\\) 0.5 \\\\ Raymond \\& Smith & 201.7/193 & kTe = 4.3 \\(\\pm\\) 0.5, Z\\({}^{\\rm n}\\) = 0.2 \\(\\pm\\) 0.1 & 1.5 \\(\\pm\\) 0.5 \\\\ Power law & 191.1/194 & \\(\\Gamma\\) = \\(-\\) 2.1 \\(\\pm\\) 0.1 & 3.1 \\(\\pm\\) 0.6 \\\\ \\hline \\end{tabular} * Noted – Quoted errors are at 90% confidence level. * Abundance of heavy elements relative to the cosmic value \\end{table} Table 1: Fitting results
We detected an X-ray compact source inside the Cygnus Loop during the observation project of the whole Cygnus Loop with the ASCA GIS. The source intensity is 0.11 c s\\({}^{-1}\\) for GIS and 0.15 c s\\({}^{-1}\\) for SIS, which is the strongest in the ASCA band. The X-ray spectra are well fitted by a power law spectrum of a photon index of \\(-2.1\\pm 0.1\\) with neutral H column of\\((3.1\\pm 0.6)\\times 10^{21}\\)cm\\({}^{-2}\\). Taking into account the interstellar absorption feature, this source is X-ray bright mainly above 1 keV suggesting either an AGN or a rotating neutron star. So far, we did not detect intensity variation nor coherent pulsation mainly due to the limited observation time. There are several optical bright stellar objects within the error region of the X-ray image. We carried out the optical spectroscopy for the brightest source (V=+12.6) and found it to be a G star. The follow up deep observation both in optical and in X-ray wavelengths are strongly required. ISM: individual (Cygnus Loop) Supernova Remnants X-ray: stars 1 Footnote 1: affiliation: Present address: NASA TKKSC SURP, 2-1-1 Seng, Tsukuba, Ibaraki, 305-8505 Japan
# Frequency-specific, valveless flow control in insect-mimetic microfluidic devices

Krishnashis Chatterjee, Philip M. Graybill, John J. Socha, Rafael V. Davalos, and Anne E. Staples

## 1 Introduction

Microfluidic technology is expected to play a critical role in the cooling of integrated circuits, allowing Moore's law to persist past 2021 [1, 2], and in other vitally important applications such as lab-on-a-chip interventions in global health and environmental monitoring [3, 4, 5, 6, 7, 8, 9]. Fundamental topics in microfluidics, such as efficient strategies for mixing and flow control at the microscale, are still current areas of investigation, and their solutions are necessary for achieving such applications. One issue is that microfluidic technology suffers from an actuation overhead problem, in which microfluidic chips are tethered to extensive off-chip hardware. Such hardware incurs monetary costs and requires physical space, which can be especially problematic for lab-on-a-chip systems where space is at a premium, for example in the scientific payload of planetary probes [61, 62]. State-of-the-art microfluidic large-scale integration (mLSI) and microfluidic very-large-scale integration (mVLSI) chips contain thousands of flow channels that each require three separate actuations to control the rate and direction of the flow within them [10, 11]. Significant progress has been made both in scaling up microfluidic chips to vLSI dimensions and in reducing the amount of peripheral actuation machinery associated with microfluidic devices. The largest mVLSI chips now pack millions of valves per square centimeter [11, 12, 13]. Actuation strategies have been designed that reduce the required actuation load from three actuations per flow channel to one actuation per chip, when combined with check valves [14, 15, 16], enabling advances in pneumatically actuated, passive elastomeric microfluidic devices for mLSI and vLSI chips. In contrast to these engineering efforts, insects can be viewed as nature's testbed for the active handling of fluids at the microscale. The honeybee, as an example, expertly manipulates air, water, nectar, honey, wax, and hemolymph at the microscale. Insect flight is the most energetically demanding activity known, and the aerobic scope of insects is unrivaled in the animal kingdom [17]. The ratio of maximum to basal rate of respiration in many species of locusts, bees, and flies is in the range 70-100 [17, 18], whereas in humans this ratio approaches 20 at most, and other small mammals and birds attain only about a 7- to 14-fold increase in metabolic rate during maximum exertion [17, 19]. Among the many reasons for their superior performance, such as effective coupling of adenosine triphosphate (ATP) hydrolysis and regeneration in the working flight muscles [17], insects generally do not use blood as an intermediate oxygen carrier [20]. Instead, they transport freshly oxygenated air from a series of spiracular openings directly to the tissues through a complex network of thousands of respiratory tracts called tracheae, which ramify and decrease in size as they approach the cells [21]. Although microfluidic device flow channel densities have approached those of insects, actuation efficiency and device performance lag far behind.
Here, we sought to benefit from evolutionary advances made by insects in handling fluids at the microscale by incorporating some of the fundamental features of their unique respiratory systems into the design of a series of biomimetic microfluidic devices. Additionally, our devices serve as microfluidic models that can provide new insight into the mechanisms that insects employ to control airflows in the tracheal system. With more than one million described species, insects represent the most diverse group of animals on earth [50]. Correspondingly, their respiratory systems exhibit a diverse array of morphologies and kinematics used for transport of gases to and from the tissues. Their tracheal systems comprise thousands of short sections of tracheal tubes and junctions that connect the ramifying and anastomosing network. Species that employ rhythmic tracheal compression to produce advective flows are able to modulate both actuation frequency and degree of collapse in the system [22, 24, 29, 51, 52, 53, 54]. Furthermore, some species can produce one-directional flows through the network [55], whereby ambient air enters the tracheal system through one spiracle and exits the body through a different spiracle, a physiologically effective mechanism of gas exchange [56]. The discovery that flow direction can be controlled by frequency in a model tracheal network suggests a new hypothesis for flow control in the insect respiratory system. Although visualizing tracheal wall displacement is possible using synchrotron x-rays, visualizing airflow patterns within these small channels in the insect has so far proven to be intractable [52]. Microfluidic models such as the ones presented here therefore provide a powerful new tool for studying advective flow production in insects, akin to the recent microfluidic platforms used to understand alveolar dynamics in human systems [57, 58]. In turn, these models can lead to new principles of device design, demonstrating that insect-inspired microfluidics can provide a platform for'mining' the biodiversity of transport solutions provided by evolution [59]. ## 2 Background and Results ### Single-channel devices Inspired by insect respiratory mechanics (see Figure 1), we designed, fabricated, and tested a total of eleven single-channel devices (as shown in Figure 2(a)) using current state-of-the-art multilayer soft lithography techniques (see Figure 2(c) and supplementary materials). The positive flow is in the \"+\" direction. Devices S2 and S4-9 incorporate tapered flow channels to reproduce directional collapse, as observed in some insects. Devices S1 and S3-11 reproduce the discrete collapse phenomenon by incorporating two discrete collapse locations. Devices S1 and S6-11 incorporate a u-shaped actuation channel in order to produce a time lag between collapses. The specific geometries and representative dimensions of the eleven devices are provided in Supplementary Table 2 and Supplementary Figure 6, respectively. These single-channel devices were meant to capture the fundamental kinematic and actuation strategies occurring in a single insect tracheal pathway. Tracheal collapse, while generally pathological in vertebrates, occurs during a cyclical form of active respiration known as rhythmic tracheal compression (RTC), found in some insects [22]. 
The collapse is hypothesized to occur in response to the rhythmic abdominal contractions that pressurize the hemolymph in the animal's body cavity, which surrounds the tracheae and causes them to buckle in localized regions [23, 24, 25, 26, 27] (see also Figures 1(a) and 1(c)-(e)). Within a body segment, the hemolymph pressure is a single scalar actuation input that appears to largely control the complex, passive dynamics of the respiratory network [28]. We were motivated by this efficient method of fluid handling and hoped to fabricate microfluidic devices that could simplify complex flow actuation methods at the microscale. To do this, we extended current three-layer PDMS technology by connecting the overlying actuation channels in the top layer to a single, global actuation chamber, so that they are all actuated simultaneously by the same source, at an actuation frequency \\(f\\) and a differential pressure across the elastomeric membrane, \\(\\Delta p\\) (as shown in Figure 2(b)). In addition to using a single pressurized actuation chamber, we incorporated both the directional and discrete collapse phenomena that have been observed [23, 24, 25, 26, 29] and modeled [30, 31, 32, 33] in insects. Directional collapse (Figure 1(c)) is hypothesized to occur because of either a variation in material or structural properties along the axis of the respiratory tract, or because of pressure waves propagating through the hemolymph, or a combination of both phenomena. Here, we added directionality to the collapse of the channel's ceiling in some of the devices by fabricating tapered flow channels (devices S2 and S4-9, shown in Figure (2a)). To produce discrete collapses (Figure 1(d)-(e)), we fabricated devices with two separate sections of elastomeric membrane (devices S1 and S3-11, Figure 2(a)). Some of the discrete collapse devices (S1 and S6-11, Figure 2(a)) exhibited a time lag between the occurrence of the first and second collapses in a contraction cycle. This time lag was accomplished by incorporating u-shaped actuation channels in these devices so that the pressurized gas (air or nitrogen) in the actuation channel would reach one collapse site slightly before the other, owing to the finite time required for the pressure wave to propagate through the gas. Additionally, there is an inherent lag in the timing of the membrane collapses in devices S3-S11 (Figure 2(a)) resulting from the different response times of the elastomeric membranes of different size. We estimated this difference to be \\(\\tau_{\\text{target}}/\\tau_{\\text{small}}\\sim 25\\) in the devices by approximating the deflecting portions of the membranes as rectangular in shape [34]. This estimate was confirmed from imaging in our experiments. The maximum deflections of the larger and the smaller membranes for different actuation pressures are plotted in Supplementary Figure 7. The lighter areas in the image series in Figure 3(d) show the collapsed regions of the ceiling of the microfluidic channel (made up of a thin PDMS membrane), while the darker areas are the uncollapsed regions. We observed that, for the larger collapse sites in the tapered-channel devices, the membrane collapsed at the endsof the membrane first, and the collapse then propagated inward toward the membrane's center during the collapse part of the cycle (Figure 3(d), \\(t\\) = 0-0.022 s). During the re-expansion part of the cycle, however, the membrane re-expanded uniformly (Figure 3(d), \\(t\\) = 0.028-0.072 s). 
All eleven single-channel devices acted as pumps, producing a unidirectional flow. For a given actuation pressure (both in magnitude and duty cycle, held constant at 0.50 for all experiments), the flow rate in the devices depended on actuation frequency alone (Figure 2(b)). In one device (S4, shown in Figure 2(a)), we were also able to control the flow direction solely by actuation frequency. At an actuation pressure of \\(10.0\\pm 1.0\\) psi and actuation frequencies below a critical actuation frequency of about 4 Hz, device S4 produced forward flow. However, for actuation frequencies above that critical frequency, it produced flow in the reverse direction (Figures 3(a) and 2(e)), thereby acting as a valveless, reversible microfluidic pump. In one device (S11, as shown in Figure 2(a)), we held the actuation frequency and duty cycle constant and varied the actuation pressure, and found that the flow rate could be controlled continuously with actuation pressure (Figure 3(c)). ### Multichannel devices Four multichannel devices were designed and fabricated (Figure 4) inspired by the basic geometric structure of the main thoracic tracheal network found in some beetles (here we use _Plaryuns decentis_, shown in Figure 1(a)), whose specific flow channels were designed after analyzing the results of the single-channel device flow rate experiments. Our aim was to switch flow off and on in an individual branch of the network by varying the global actuation frequency alone. To accomplish this frequency-based flow switching, the designs of many of the individual channels in the network devices were based on the single-channel device S4, the reversible pump. Specifically, the parent channels in devices M2-M4 (paired channels labeled \"C\" in Figure 4(b)-(d)) used the design of device S9, oriented to provide positive flow for all global actuation frequencies (see the caption of Figure 4 for flow direction convention). The daughter channels (pairs labeled \"A\" and \"B\" in Figure 4(b)-(d)) in devices M2-M4 used the design of the single-channel device S4. In device M2, the orientation of the device S4 design in the inner daughter channels (\"B\") mirrored the orientation of the device S4 design in the outer daughter channels (\"A\"). The reasoning for this design choice was that the inner channels would presumably attempt to pump in the negative flow direction at low global actuation frequencies, but meet the positive flow from the parent channels (\"C\"), and the net result would be no flow through channels \"B.\" Then, at frequencies greater than the critical reversal frequency of the \"B\" channels, positive flow would develop in channel \"B.\" Similar considerations were made for the designs of devices M3 and M4. In order to double the number of experiments that could be performed, experiments were performed in only half of the left-right symmetric devices. The devices were divided by etching away the connecting bridge between the two channels labeled \"B\" in Figure 4, resulting in half devices (Figure 5). These network devices were then subjected to the same single (global) square-wave periodic actuation pressure signal as were the single channel devices, and the resulting flow rates in the constitutive channels were measured. Two of the multichannel designs (M1 and M3) produced flow in a single branch of the network only above a critical frequency, demonstrating valveless, frequency-based flow switching. 
Below the critical frequency (around 0.7 Hz for device M1 and 0.5 Hz for device M3), there was no measurable flow through the channel in question (channel C for device M1 and channel B for device M3). The presence or absence of flow through channel B in device M3 is demonstrated in Figure 5(a)-(b). In Figure 5(a), the actuation frequency is above the critical value, and black fluid can clearly be seen passing from channel B into channel A. In Figure 5(b), the actuation frequency is below the critical value, and only a small amount of the black dye from channel B is seen to pass from the end of channel B, staying localized at the channel's exit. This small amount of leakage likely occurred due to axial diffusion, which is enhanced in oscillating flows even when no net mean flow is present [35]. ## 3 Discussion We fabricated a series of 11 single-channel and four multichannel microfluidic devices that mimicked key features of active ventilations in the insect respiratory system. Each of the single-channel devices acted as a microfluidic pump, transforming a symmetric actuation pressure signal in the control chamber fluid into a unidirectional flow in the fluidic channel. Importantly, this transduction resulted from asymmetry in the network geometry, either in the flow channel itself, or in the part of the actuation chamber in contact with the flow channel. Furthermore, one of the single-channel devices (device S4) exhibited flow reversal above a critical actuation frequency, demonstrating the discovery of a pneumatically actuated, valveless, reversible, microfluidic pump. We hypothesize that the flow direction is determined by two factors: 1) the relative balance of upstream versus downstream hydraulic resistance, as the momentum is injected via the motion of the channel's ceiling roughly at the mid-channel location; and 2) the effect of nonlinear resonant wave interactions, as are found in impedance-mismatch pumping. The kinematic asymmetry observed in the collapse and re-expansion of the elastomeric membrane indicates that the hydraulic resistance in the channel will also exhibit temporal and spatial asymmetry during the collapse and re-expansion parts of the actuation cycle. This reversible flow direction channel design was incorporated into the multichannel devices, resulting in four microfluidic chips that can be operated by frequency control, with the individual channels responding selectively to the single global actuation frequency. In two of these chips, a single channel can be switched on or off via a critical global actuation frequency. The flow rates produced by our top-performing single channel device (S11) and our top-performing multichannel device (M2) compare favorably to historic and current state-of-the-art pneumatically actuated PDMS micropumps (Table 1). The normalized flow rates that we report here are obtained by dividing the flow rate in \\(\\mu\\)Lmin-1 by the cross-sectional area of the flow channel, the actuation pressure, and the actuation frequency, in order to make a fair comparison across devices and experimental conditions. As evidenced by this comparison, the normalized flow rate in our best-performing device (M2) is higher than all but two of the flow rates reported for the similar PDMS micropumps surveyed here. Our results demonstrate a fundamentally different way to pump fluids at the microscale, using a simplified actuation scheme. Multilayer, pneumatically actuated microscale peristaltic pumps built using soft lithography were pioneered by Unger et al. 
[37]. Although there have been several innovations since then to reduce the number of needed controllers (e.g., [12, 14, 15]) generally these peristaltic pumps require three overlying pneumatic control channels for fine control of flow rate and direction in a single flow channel. Shortly after the first introduction of such pumps, the same group pioneered microfluidic large-scale integration (mLSI) in which multiplexors that work as a binary tree allow control of \\(n\\) fluid channels with only 2 \\(\\log_{2}n\\) control channels, a type of quasi-static control [10]. In [10], control of a channel or chamber indicated that access to the chamber can be switched on or off, accomplished for \\(n\\) chambers using 2 \\(\\log_{2}n\\) control channels. In later studies (e.g., [45]), such control was demonstrated to be possible using just \\(n\\) control lines for \\(n!/(n/2!)^{2}\\) flow chambers. In contrast to these devices, the devices presented here have precisely controlled flow rates and flow directions within multiple individual channels using a single control line, a fundamentally different type of dynamic control. Our results also suggest that, in principle, flow rate and direction of flow in an arbitrarily large number of fluid channels can be controlled with a single control chamber, leading to an \\(n^{0}\\) rule. Peristaltic pumps, such as those by Unger _et al_. and Thorsen _et al_. discussed previously, send a traveling collapse wave along the axis of the flow channel. The flow is always in the same direction as the traveling wave, and the average flow speed is equivalent to the wave speed [46]. Additionally, there is a linear relationship between the frequency of the compression wave and the flow rate it produces [47]. Our micropumps clearly violate these precepts: the wave of ceiling motion producing the flow cannot be described as a travelling wave, because the waveform changes as it propagates along the channel. (See Figure 3(d), which depicts one full cycle of collapse in device S4, clearly demonstrating a heterogeneous waveform.) In addition, the flow can reverse direction and is not always in the same direction as the collapse wave, and the flow rate produced is a nonlinear function of the actuation frequency. Rather than peristaltic pumping, our devices appear to share many features with impedance mismatch pumping, where nonlinear resonant wave interactions drive the flow, and the flow rate produced has a nonlinear relationship to the actuation frequency [47, 48, 49]. In device S4, there are different portions of the flow channel with different material properties and hence impedances, as occurs in impedance mismatch pumping [48]. The waves generated by the actuation of the thin membrane travel along the membrane, and after encountering the stiffer ends (with different impedance), get reflected back. The encounter between the travelling wave and the reflecting wave results in a pressure build up, which drives a flow. It remains to be seen how far these results will scale up toward the full vLSI scale, with a single actuation providing rich, passive control of thousands of flow channels as in insect respiratory systems. Given the many differences between insects' complex three-dimensional respiratory morphology and the planar geometries of current vLSI microfluidic devices, this may require creative modeling efforts. 
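To make the actuation-overhead comparison discussed above concrete, the short Python sketch below tallies the idealised number of off-chip control inputs needed to address \(n\) flow channels under the three schemes mentioned: conventional peristaltic control (three control channels per flow channel), mLSI multiplexing (\(2\log_{2}n\) control lines), and a single global actuation chamber as used here. The counts are idealised and ignore chip-specific constraints; in particular, the single-chamber figure presumes that each channel's geometry can be designed to respond selectively to the global actuation frequency, which has so far been demonstrated only for the small networks reported in this work.

```python
import math

def control_lines(n_channels: int) -> dict:
    """Idealised number of off-chip control inputs for n flow channels under
    the three actuation schemes discussed in the text."""
    return {
        "peristaltic (3 per channel)": 3 * n_channels,
        "mLSI multiplexer (2 log2 n)": 2 * math.ceil(math.log2(n_channels)),
        "single global chamber (this work)": 1,
    }

for n in (8, 1024, 10**6):
    print(n, control_lines(n))
# e.g. for one million channels: 3,000,000 vs. 40 vs. 1 control inputs.
```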
Regardless, insect-inspired control strategies may provide a key to developing microfluidic platforms that carry out heterogeneous fluidic operations in response to a single, global actuation input. For example, such strategies may lead to the development of platforms for carrying out multiple genomic and proteomic analyses in parallel using a single fluid sample, and with a very low actuation cost. These smart, bioinspired control strategies could also lead to the first truly portable, self-contained labs-on-a-chip, providing insect-style control in insect-sized packages. Despite the challenges of transferring complex insect microfluidic control strategies to engineered devices, many more fundamental aspects of insect respiratory systems remain ripe for investigation and application in gaseous microfluidics, including the role of the small (\\(\\sim\\)1 \\(\\mu\\)m diameter) but numerous tracheoles, and uneven wall features (e.g., helical or circumferential windings called taenidia), which may contribute to mixing, heat, and mass transfer. The results presented here suggest that we should continue to look to insect respiratory mechanics for clues about efficient geometries and strategies when scaling microfluidics up to three dimensions, advancing a broad range of critical microfluidics applications, such as integrated circuit cooling.

## 4 Materials and Methods

### Microfluidic devices

Standard photolithography and microfabrication techniques were used to fabricate the PDMS-based microfluidic devices used in the experiments [60]. Negatively patterned master molds for the actuation and insect-network (fluidic) channels were created using photolithography by spinning SU-8 2035 (MicroChem) on silicon wafers to create a pattern approximately 80 \\(\\mu\\)m in depth. Polydimethylsiloxane (PDMS) (Dow Corning, Sylgard 184) was mixed in a 10:1 weight ratio of base to cross-linker. Afterwards, it was cast-molded on the silanized master molds, cured, and slowly peeled off. The inlet and the outlet holes of the microfluidic devices were punched using a 0.75 mm biopsy punch. To create the thin PDMS layer (approximately 14-20 \\(\\mu\\)m thick) between the actuation and fluidic channels (Figure 2(b)), a silanized silicon wafer was spin-coated with PDMS, mixed in a 5:1 weight ratio of base to cross-linker, at 3000 rpm for 60 seconds. The PDMS was then cured on a hot plate at 90\\({}^{\\circ}\\)C for 30 minutes. The actuating channel was bonded to the middle membrane using a plasma cleaner (PDC-001, Harrick Plasma). The actuating channel and membrane assembly was then again plasma-bonded to the fluidic channel after carefully aligning them to their desired positions, after which the entire device was bonded to a glass slide (Figure 1(b)). The microfluidic devices were kept under vacuum prior to experiments.

### Experimental setup

The actuating channels were pressurized using nitrogen gas and depressurized by vacuum through a single port, which served both as an inlet and an outlet. The pressure of the nitrogen was regulated via a precision regulator (McMaster-Carr, 2227T21). The pressure and vacuum range of the pressure gauge/regulator (as mentioned in the manual) is -30 to 30 psi. In order to switch between positive and vacuum gauge pressure, the actuation channel was connected via tubing (Cole Parmer, AWG 30) to a fast-acting solenoid valve (FESTO, MHE2-MSIH-5/2-QS-4-K 525119 D002).
A 24 V power supply was used to power the solenoid valve, which was computer-controlled by a microcontroller using a solid-state relay (Arduino, Board model: UNO R3). ### Experimental Method Before conducting each run, the devices were primed using ethanol to remove bubbles. Food coloring mixed with water was used as the working fluid in the fluidic channels. The inlet and the outlet ports of the fluid channels were connected to short tubes. The flow rate, produced by the actuation of the thin membrane in localized areas on top of the fluid channel, was determined by measuring the displacement of the fluid front in the outlet tube over a fixed amount of time. Displacement was measured by placing the outlet tube parallel to a measuring ruler with graduations. At least three readings were taken for a single data point, and the average of these readings was used to calculate the flow rate. Out of twelve S4 devices that were fabricated and tested, seven devices demonstrated frequency-dependent flow reversal. The five devices that did not demonstrate flow reversal can be attributed to some misalignment in the positioning of the actuation channels on top of the fluidic channels, which had to be done manually within two minutes after taking the components out of the plasma cleaner. The devices were also tested for repeatability. The flow rate data were taken while the frequencies and the pressures were varied from low to high and then again from high to low. The devices showed the same pattern of behavior in both cases. High resolution video of the device performance was captured using an Edgertronic High Speed Video Camera (Sanstreak Corporation) and Nikon lens (AF MICRO NIKKOR, 60 mm, 1:2.8 D). ## Author contributions A.E.S., J.J.S., R.V.D., and K.C. designed the project; K.C. and P.M.G. designed the experiments; K.C., P.M.G, R.V.D., and A.E.S. designed the microfluidic devices; P.M.G. and K.C. fabricated the microfluidic devices; K.C. performed the flow rate experiments. K.C. carried out the flow visualizations with the help of J.J.S.; K.C. and A.E.S. analyzed the resulting data; A.E.S., K. C., and P.M.G. wrote the manuscript; K.C., J.J.S., R.V.D., and P.M.G. revised and edited the manuscript; A.E.S., J.J.S., and R.V.D. oversaw the project; A.E.S. supervised the investigations of fluid mechanics, R.V.D supervised the microfluidic device studies, and J.J.S. supervised the biological content of the study. **Conflict of interest:** The authors declare that they have no conflict of interest. **Data and materials availability:** All data are available in the main text or the supplementary materials. ## Acknowledgements **General:** The authors thank Mohammad Bonakdar and Joel Garrett for useful discussions and research support, and Kate Lusco for designing Fig. 2c. **Funding:** This work was supported by the US National Science Foundation's Chemical, Bioengineering, Environmental and Transport Systems Division (1437387), Emerging Frontiers in Research and Innovation program (0938047), and Integrative Organismal Systems program (1558052). ## References * [1]_International Technology Roadmap for Semiconductors 2.0, Executive Report_, 2015 Editi (2015). * [2]_2. Sarkar D, Xie X, Liu W, Cao W, Kang J, Gong Y, Kraemer S, Ajayan P M, and Banerjee K 2015 Nature **526**, 91. * [3]_3.Whitesides G M 2006 Nature **442**, 368. * [4]_4. DeMello A J, 2006 Nature **442**, 394. * [5]_5. Liu R H, Stremler M A, Sharp K V, Olsen M G, Santiago J G, Adrian R J, Aref H, and Beebe D J 2000 Microelectromechanical Syst. 
**9**, 190. * [6]_6. Bhatia S N and Ingber D E, 2014 Nat. Biotechnol. **32**, 760. * [7]_7. Maoz B M, Herland A, Fitzgerald E A, Grevesse T, Vidoudez C, Pacheco A R, Sheehy S P, Park T E, Dauth S, Mannix R, Budnik N, Shores K, Cho A, Nawroth J C, Segre D, Budnik B, Ingber D E, and Parker, K K. 2018 Nat. Biotechnol. **36**, 865. * [8]_8. Yager P, Edwards T, Fu E, Helton K, Nelson K, Tam M R, and Weigl B H 2006 Nature **442**, 412. * 2017 Trends Anal. Chem. **95**, 62. * [10]_10. Thorsen T, Maerkl S J, and Quake S R 2002 **298**, 580. * [11]_11. Araci I E and Quake S R 2012 Lab Chip **12**, 2803. * [12]_12. White J A and Streets A M 2018 HardwareX **3**, 135. * [13]_13. Pop P, Minhass W H, and Madsen J 2016 Microfluidic Very Large Scale Integration (VLSI)._ * [14]_14. Leslie D C, Easley C J, Seker E, Karlinsey J M, Utz M, Begley M R, and Landers J P 2009 Nat. Phys. **5**, 231. * [15]_15. Jain R and Lutz B 2017 Lab Chip **17**, 1552. * [16]_16. Phillips R H, Jain R, Browning Y, Shah R, Kauffman P, Dinh D, and Lutz B R 2016 Lab Chip **16**, 3260. * [17]_17. Wegener G 1996 Experientia **52**, 404. * [18]_18. Meetings G and F. O. R. 1898 Scientific, **237**, 1. * [19]_19. Kammer A E and Heinrich B 1978 Adv. In Insect Phys._**13**, 133. * [20]_20. Wien N M 2019 864. * [21]_21. Acorn J H and Sperling F A H 2009 Encycll. Insects 988. * [22]_22. Westneat M W, Betz O, Blob R W, Fezzaa K, Cooper W J, and Lee W K 2003 Science (80-. ). **299**, 558. * [23]_23. Socha J J and De Carlo F 2008 Dev. X-Ray Tomogr. VI_**7078**, 70780A. * [24]_24. Socha J J, Lee W K, Harrison J F, Waters J S, Fezzaa K, and Westneat M W, 2008 J. Exp. Biol. **211**. * [25]_25. Westneat M W, Socha J J, and Lee W K 2008 Annu. Rev. Physiol. **70**, 119. * [26]_26. Socha J J, Forster T D, and Greenlee K J 2010 Respir. Physiol. Neurobiol. **173**, S65. * [27]_27. Webster M R, De Vita R, Twigg J N, and Socha J J 2011 Smart Mater. Struct. **20**. * [28]_28. Pendar H, Aviles J, Adjerid K, Schoenewald C, and Socha J J 2019 Sci. Rep. **9**, 1. * Regul. Integr. Comp. Physiol. **304**. * [30]_30. Aboelkassem Y, Staples A E, and Socha J J, 2011 Am. Soc. Mech. Eng. Press. Vessel. Pip. Div. PVP **4**, 471. * [31]_31. Aboelkassem Y and Staples A E 2013 Bioinspiration and Biomimetics **8**. * [32]_32. Aboelkassem Y and Staples A E 2014 Acta Mech. **225**, 493. * [33]_33. Chatterjee K and Staples A 2018 Acta Mech. **229**, 4113. * [34]_34. Liu X, Li S, and Bao G, J. Lab. 2016 Autom. **21**, 412. 35. Purtell L P 1981 Phys. Fluids **24**, 789. * 36. Lee Y S, Bhattacharjee N, and Folch A 2018 Lab Chip **18**, 1207. * 37. Unger M A, Chou H P, Thorsen T, Scherer A, and Quake S R 2000 Science (80-. ). **288**, 113. * 38. O. C. Jeong and S. Konishi, 2008 J. Micromechanics Microengineering **18**. * 39. Chiou C H, Yeh T Y, and Lin J L, 2015 Micromachines **6**, 216. * 40. Wang C H and Bin Lee G, 2006 J. Micromechanics Microengineering **16**, 341. * 41. Lai H and Folch A 2011 Lab Chip **11**, 336. * 42. Bin Huang S, Wu M H, Cui Z, and Bin Lee G 2008 J. Micromechanics Microengineering **18**. * 43. Huang C W, Huang S B, and Lee, G B 2006 J. Micromechanics Microengineering **16**, 2265. * 44. So H, Pisano A P, and Seo Y H 2014 Lab Chip **14**, 2240. * 45. Melin J and Quake S R 2007 Annu. Rev. Biophys. Biomol. Struct. **36**, 213. * 46. Waldrop L D and Miller L A 2015. * 47. Forouhar A S, Liebling M, Hickerson A, Nasiraei-Moghaddam A, Tsai H J, Hove J R, Fraser S E, Dickinson M E, and Gharib M 2006 Science (80-. ). **312**, 751. * 48. 
Hickerson A I, Rinderknecht D, and Gharib M 2005 Exp. Fluids **38**, 534. * 49. Li Z, Seo Y, Aydin O, Elhebary M, Kamm R D, Kong H, and Taher Saif M A 2019 Proc. Natl. Acad. Sci. U. S. A. **116**, 1543. * 50. Lener W 1964 Bull. Entomol. Soc. Am. **10**, 233. * 51. Greenlee K J, Socha J J, Eubanks H B, Pedersen P, Lee, W K and Kirkton S D 2013 J. Exp. Biol. **216**, 2293. * 52. Socha J J, Westenat M W, Harrison J F, Waters J S, and Lee W K, 2007 BMC Biol. **5**, 1. * 53. Hochgraf J S, Waters J S, and Socha J J 2018 Yale J. Biol. Med. **91**, 409. * 54. Pendar H, Kenny M C, and Socha J J 2015 Biol. Lett. **11**. * 55. Heinrich E C, McHenry M J, and Bradley T J 2013 J. Exp. Biol. **216**, 4473. * 56. Farmer C G 2015 Physiology **30**, 260. * 57. Fishler R, Mulligan M K, and Sznitman J 2013 J. Biomech. **46**, 2817. * 58. Fishler R, Hofemeier P, Etzion Y, Dubowski Y, and Sznitman J 2015 Sci. Rep. **5**, 1. * 59. Muller R, Abaid N, Boreyko J B, Fowlkes C, Goel A K, Grimm C, Jung S, Kennedy B, Murphy C, Cushing N D, and Han J P 2018 Bioinspiration and Biomimetics **13**. * 60. Duffy D C, McDonald J C, Schueller O J A, and Whitesides G M 1998 Anal. Chem. **70**, 4974. * 61. Mora, M. F., Greer, F., Stockton, A. M., Bryant, S., & Willis, P. A. (2011). Toward total automation of microfluidics for extraterrestial in situ analysis. _Analytical chemistry_, _83_(22), 8636-8641. * 62. Mora, M. F., Stockton, A. M., & Willis, P. A. (2012). Microchip capillary electrophoresis instrumentation for in situ analysis in the search for extraterrestrial life. _Electrophoresis_, _33_(17), 2624-2638. \\begin{table} \\begin{tabular}{|c|l|l|l|} \\hline **N** & **Source (Author, journal, year)** & **Device description** & **Normalized flow rate** **(\\#L/min)/\\(\\mu\\)m\\({}^{2}\\)/Hz/psi** \\\\ \\hline 1 & Lee et. al, Lab Chip 18, 2018 [36] & 3D printed Quake style valve & 9.3 x 10\\({}^{-8}\\) \\\\ \\hline 2 & Unger et al., Science 288, 2000 [37] & Elastomeric peristaltic micropump & 1.0 x 10\\({}^{-7}\\) \\\\ \\hline 3 & Jeong \\& Konishi, Micromech. Microeng. & Peristaltic micropump, actuation regions & 4.1 x 10\\({}^{-7}\\) \\\\ & 18, 2008 [38] & separated by serpentine channels & \\\\ \\hline 4 & Chiou et al., Micromachines 6, 2015 [39] & Double-side mode PDMS micropump & 5.3 x 10\\({}^{-7}\\) \\\\ \\hline 5 & Wang \\& Lee, Micromech. Microeng. 16, & Pneumatically driven peristaltic micropump & \\\\ & 2006 [40] & with serpentine actuation channels & 1.5 x 10\\({}^{-6}\\) \\\\ \\hline 6 & Lai \\& Folch, Lab Chip 11, 2010 [41] & Single-stroke peristaltic PDMS micropumps & 3.2 x 10\\({}^{-6}\\) \\\\ \\hline 7 & Present work (device S11) & & 3.7 x 10\\({}^{-6}\\) \\\\ \\hline 8 & Huang et al., Micromech. Microeng.18, 2008 [42] & Membrane-based serpentine shaped & 7.1 x 10\\({}^{-6}\\) \\\\ \\hline 9 & Present work (device M2) & & 7.1 x 10\\({}^{-6}\\) \\\\ \\hline 10 & Huang et al., Micromech. Microeng. 16, 2006 [43] & Pneumatic micropump with serially connected actuation chambers & 5.7 x 10\\({}^{-6}\\) \\\\ \\hline 11 & So et al., Lab Chip 14, 2014 [44] & Caterpillar locomotion-inspired valveless & \\\\ & & micropump, teardrop-shaped elastomeric & 2.5 x 10\\({}^{-5}\\) \\\\ & & membrane & \\\\ \\hline \\end{tabular} \\end{table} Table 1: Comparison of maximum normalized flow rate among state-of-the-art pneumatically actuated microfluidic pumps. Entry number (N) corresponds to flow rate ranking, with N = 1 indicating the lowest flow rate and N = 11 the highest. 
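As a complement to Table 1, the following minimal sketch (in Python) illustrates how a raw flow rate is obtained from the displacement of the fluid front in the outlet tube (Materials and Methods) and then normalized by channel cross-sectional area, actuation pressure, and actuation frequency, as described in the Discussion. All numerical values, including the assumed outlet-tube inner diameter, are placeholders for illustration and are not measurements from this work.

```python
import math

def flow_rate_ul_per_min(displacement_mm: float, time_s: float,
                         tube_inner_diameter_mm: float = 0.3) -> float:
    """Flow rate (uL/min) from the displacement of the fluid front in the
    outlet tube over a fixed time; the tube diameter is a placeholder."""
    tube_area_mm2 = math.pi * (tube_inner_diameter_mm / 2.0) ** 2
    volume_ul = tube_area_mm2 * displacement_mm      # 1 mm^3 = 1 uL
    return volume_ul / time_s * 60.0

def normalized_flow_rate(q_ul_per_min: float, channel_area_um2: float,
                         pressure_psi: float, frequency_hz: float) -> float:
    """Normalization used in Table 1: (uL/min) / um^2 / Hz / psi."""
    return q_ul_per_min / (channel_area_um2 * pressure_psi * frequency_hz)

# Example with placeholder numbers: an 80 um x 500 um channel, 10 psi, 4 Hz
q = flow_rate_ul_per_min(displacement_mm=12.0, time_s=60.0)
print(normalized_flow_rate(q, channel_area_um2=80 * 500,
                           pressure_psi=10.0, frequency_hz=4.0))
```

The normalization makes the comparison in Table 1 independent of channel size, driving pressure, and actuation frequency.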
Figure 1: **Tracheal collapse in insects and design of insect-inspired microfluidic devices.** **(a)** Synchrotron x-ray image of the carabid beetle _Platynus decentis_ head and thorax (top view), with largest thoracic respiratory tracts highlighted in red. Modified from [23]. **(b)** Photograph of insect-inspired microfluidic device (design M2). The red color represents the actuation network and green color represents the insect-inspired fluid network (highlighted in Figure 1(a)). **(c)** Time series images (1–4) of directional tracheal compression in the horned passalus beetle, _Odontotaenius disjunctus_. Modified from [29]. Collapse propagates from lower left of image (red point pair) to upper right (yellow point pair). **(d)** Synchrotron x-ray image of the largest thoracic tracheae, fully inflated, in the carabid beetle, _Pterostichus stygicus_, from [24]. **(e)** Synchrotron x-ray image of the thoracic tracheae, now fully compressed, with two discrete collapse locations indicated.

Figure 2: **Single-channel devices.** **(a)** Schematics of the eleven single-channel devices. **(b)** Flow rate versus frequency for all single-channel devices except S4. (Curves color coded to match device schematics in (a); \\(\\Delta p=6.5\\pm 1.5\\) psi for devices S1–3 and S5–11). **(c)** Schematic (side view, not to scale) of three-layer polydimethylsiloxane (PDMS) device. A single pressure source provides periodic pressurization and evacuation of the actuation channels (maroon), deflecting a thin PDMS membrane (dark gray) and generating flow through the insect-inspired network (green). Channel depth is 80 \\(\\mu\\)m for all devices, and width varies from 200–1000 \\(\\mu\\)m.

Figure 3: **Performance of devices S4 and S11.** **(a)** Flow rate versus \\(f\\) for device S4 (\\(\\Delta p=10.0\\ \\pm\\ 1.0\\) psi). The flow in device S4 reverses direction above a critical frequency of approximately 4 Hz. **(b)** Flow rate per cycle versus \\(f\\) for device S4 (\\(\\Delta p=10.0\\ \\pm\\ 1.0\\) psi). **(c)** Flow rate versus \\(\\Delta p\\) for device S11 at _f_ = 4 Hz. Shading ((e) and (f)) represents the error due to the variance of the data. **(d)** Top view of device S4 over a complete collapse cycle at _f_ = 7.81 Hz.

Figure 4: **Multichannel devices can switch flow in a branch on or off.** Schematics and flow rate data for four multichannel microfluidic devices. All devices are left-right symmetric. Positive flow is in the “+” direction. Shading represents the uncertainty in the data due to measurement error, which increases with flow rate. **(a)** Frequency-dependent channel switching in device M1. Inset shows the flow rate per cycle versus \\(f\\). The switching behavior is seen more clearly on these axes. **(b)** Device M2 produces positive flow through all three channels for every \\(f\\) tested. **(c)** Frequency-dependent flow switching in device M3. **(d)** Device M4 produces negative flow through channel A and positive flow through channels B and C at every \\(f\\) tested. All devices were tested at \\(\\Delta p=14.0\\pm 1.0\\) psi.
Inexpensive, portable lab-on-a-chip devices would revolutionize fields like environmental monitoring and global health, but current microfluidic chips are tethered to extensive off-chip hardware. Insects, however, are self-contained and expertly manipulate fluids at the microscale using largely unexplored methods. We fabricated a series of microfluidic devices that mimic key features of insect respiratory kinematics observed by synchrotron-radiation imaging, including the collapse of portions of multiple respiratory tracts in response to a single fluctuating pressure signal. In one single-channel device, the flow rate and direction could be controlled by the actuation frequency alone, without the use of internal valves. Additionally, we fabricated multichannel chips whose individual channels responded selectively (on with a variable, frequency-dependent flow rate, or off) to a single, global actuation frequency. Our results demonstrate that insect-mimetic designs have the potential to drastically reduce the actuation overhead for microfluidic chips, and that insect respiratory systems may share features with impedance-mismatch pumps.

Keywords: Microfluidics, Insect respiration, Biomimetic, Frequency-driven

Krishnashis Chatterjee, Philip M. Graybill, John J. Socha, Rafael V. Davalos
# The 2019/20 Australian wildfires generated a persistent smoke-charged vortex rising up to 35 km altitude

Sergey Khaykin1, Bernard Legras2, Silvia Bucci2, Pasquale Sellitto3, Lars Isaksen4, Florent Tence1, Slimane Bekki1, Adam Bourassa5, Landon Rieger5, Daniel Zawada5, Julien Jumelet1, Sophie Godin-Beekmann1

The impact of wildfire-driven thunderstorms on the global stratosphere had been deemed small until the North American wildfires in August 2017. Pyro-cumulonimbus (pyroCb) clouds from that event caused stratospheric perturbations an order of magnitude larger than the previous benchmarks of extreme pyroCb activity and approached the effect of a moderate volcanic eruption[1, 2]. Volcanic eruptions inject ash and sulphur, which is oxidized and condenses to form submicron-sized aerosol droplets in the stratosphere. With pyroCbs, by contrast, intense fire-driven convection lifts combustion products in gaseous form as well as particulate matter including organic and black carbon, smoke aerosols and condensed water. The solar heating of the highly absorptive black carbon propels the smoke-laden air parcels upward[1], which, combined with horizontal transport[3, 4], leads to a more efficient meridional dispersion of these aerosols and prolongs their stratospheric residence time[5]. The Australian bushfires that raged in December 2019 - January 2020 set a new benchmark for the magnitude of stratospheric perturbations. In this study we use various satellite observations to quantify the magnitude of the hemispheric-scale perturbation of stratospheric gaseous compounds and aerosol loading caused by these wildfires. The radiative forcing of the stratospheric smoke is estimated using a radiative transfer model supplied with satellite observations of aerosol optical properties. Finally, using the operational forecasting system of the European Centre for Medium-Range Weather Forecasts (ECMWF)[6], we show that the solar heating of an intense smoke patch led to the generation of a quasi-ellipsoidal anticyclonic vortex which lofted a confined bubble of carbonaceous aerosols and water vapour up to 35 km altitude in about three months.

## 2 Results

### Large-scale perturbation of the stratosphere

The Australian wildfire season 2019/2020 was marked by an unprecedented burn area of 5.8 million hectares (21% of Australia's temperate forests)[7] and exceptionally strong pyroCb activity in the south-east of the continent[8]. The strongest pyroCb outbreak occurred on New Year's Eve (Fig. 1a), and on the 1st of January the instantaneous horizontal extent of the stratospheric cloud amounted to 2.5 million km\\({}^{2}\\) as inferred from nadir-viewing TROPOMI[9] satellite measurements (Fig. 1b). On that day, an opaque cloud of smoke was detected in the stratosphere by the CALIOP space-based laser radar (lidar)[10] at altitudes reaching 17.6 km (Fig. 2). Another pyroCb outbreak with stratospheric impact, although less vigorous, took place on 4 January 2020, and on 7 January the horizontal extent of the stratospheric smoke cloud peaked at 6.1 million km\\({}^{2}\\) (Fig. 1b), extending over much of the Southern midlatitudes (Fig. 2 and Supplementary Fig. 4c). The high-altitude injections of smoke rapidly tripled the stratospheric aerosol optical depth (SAOD) in the southern extratropics.
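The instantaneous plume extents quoted above (2.5 and 6.1 million km\\({}^{2}\\)) correspond to the total area covered by TROPOMI pixels with an absorbing aerosol index above 3 (see the caption of Fig. 1). A minimal sketch of that area calculation is given below in Python; it assumes the Level-2 aerosol index has already been regridded onto a regular latitude-longitude grid, and the grid resolution, variable names, and the random test field are purely illustrative.

```python
import numpy as np

R_EARTH_KM = 6371.0

def plume_area_km2(aai: np.ndarray, lat_edges_deg: np.ndarray,
                   lon_edges_deg: np.ndarray, threshold: float = 3.0) -> float:
    """Total area (km^2) of grid cells with absorbing aerosol index > threshold.

    `aai` has shape (nlat, nlon); `lat_edges_deg` and `lon_edges_deg` are the
    cell-edge coordinates of a regular lat-lon grid (regridded TROPOMI AAI).
    """
    lat = np.deg2rad(lat_edges_deg)
    dlon = np.deg2rad(np.diff(lon_edges_deg))
    # Spherical area of each lat-lon cell: R^2 * dlon * (sin(lat2) - sin(lat1))
    cell_area = (R_EARTH_KM ** 2) * np.outer(np.diff(np.sin(lat)), dlon)
    return float(np.sum(cell_area[aai > threshold]))

# Toy example on a 0.5 degree grid over the southern extratropics
lat_e = np.arange(-80.0, -19.5, 0.5)
lon_e = np.arange(-180.0, 180.5, 0.5)
aai = np.random.default_rng(0).normal(0.0, 1.5, (lat_e.size - 1, lon_e.size - 1))
print(f"{plume_area_km2(aai, lat_e, lon_e):.3e} km^2")
```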
The SAOD perturbation has by far exceeded the effect on stratospheric aerosol load produced by the North American wildfires in 2017, putting the Australian event on par with the strongest volcanic eruptions in the last 25 years (Fig. 3), i.e. since the leveling off of stratospheric aerosol load after a major eruption of Mount Pinatubo in 1991[12]. Three months after the PyroCb event, the SAOD perturbation has remained at the volcanic levels, gradually decreasing with a rate similar to the decay of stratospheric aerosol produced by moderate volcanic eruptions. Using aerosol extinction profiles retrieved from the limb-viewing NASA OMPS-LP instrument[13] we find the total aerosol particle mass lofted into the so-called stratospheric \"overworld\"[14] (above 380 K isentropic level corresponding to \\(\\sim\\)12-17 km altitude) is 0.4\\(\\pm\\)0.2 Tg (Fig. 4), which is nearly three times larger than the estimates for the previous record-high North American wildfires[2]. The increase in the stratospheric abundance of the gaseous combustion products, derived from the NASA Microwave Limb Sounder (MLS) satellite observations[16], is as remarkable as the aerosol increase. Fig. 4 puts in evidence that the stratospheric masses of carbon monoxide (CO) and acetonitrile (CH\\({}_{3}\\)CN) bounded within the southern extra-tropics increase abruptly by 1.5\\(\\pm\\)0.9 Tg (\\(\\sim\\)20% of the pre-event levels) and 3.7\\(\\pm\\)2.0 Gg (\\(\\sim\\)5%), respectively, during the first week of 2020. The injected mass of water was estimated at 27\\(\\pm\\)10 Tg that is about 3% of the total mass of stratospheric overworld water vapour in the southern extratropics (see Methods). The gases and particles injected by the PyroCbs were advected by the prevailing westerly winds in the lower stratosphere. The patches of smoke dispersed across all of the Southern hemisphere extra-tropics in less than two weeks with the fastest patches returning back over Australia by 13 January 2020, whereas the carbon-rich core remained bounded within midlatitudes as shown in Fig. 2. During the following months, most of the particulate material dwelled in the lower stratosphere, the larger and heavier particles sedimented to lower altitudes while the carbon-rich fraction ascended from 15 to 35 km due to solar heating of black carbon (Supplementary Figure 1). **Radiative forcing** The large amount of aerosols produced a significant radiative forcing (RF), which we quantified using explicit radiative transfer modelling based on the measured aerosol optical properties (Supplementary notes 1). In the latitude band between 25-60\\({}^{\\circ}\\)S, an average cloud-free reference monthly radiative forcing as large as about -1.0 W/m\\({}^{2}\\) at the top of the atmosphere (TOA) and -3.0 W/m\\({}^{2}\\) at the surface is found in February 2020 (Supplementary Figure 2). This can be attributed to perturbation to the stratospheric aerosol layer by the Australian fires plumes. The area-weighted global-equivalent cloud-free RF is estimated (Supplementary table 1) to values as large as -0.31\\(\\pm\\)0.09 W/m\\({}^{2}\\)(TOA) and -0.98\\(\\pm\\)0.17 W/m\\({}^{2}\\)(at the surface). It is important to notice that these estimations don't take the presence of clouds into account and are to be taken as purely reference values. 
For typical average cloud cover in the area affected by the plume[17], the surface all-sky RF can be reduced to \\(\\sim\\)50% and the TOA all-sky RF to \\(\\sim\\)30-50% of the clear-sky RF estimations[18] (see Supplementary notes 1 for details). From the perspective of the stratospheric aerosol layer perturbation, the global TOA RF produced by the Australian fires 2019/2020 is larger than the RF produced by all documented wildfire events and of the same order of magnitude of moderate volcanic eruptions during the last three decades (that have an integrated effect estimated at[19] -0.19\\(\\pm\\)0.09 W/m\\({}^{2}\\), or smaller[20]). In contrast to the non-absorbing volcanic sulphates, the carbonaceous wildfire aerosols absorb the incoming solar radiation, leading to yet more substantial radiative forcing at the surface, due to the additional large amount of energy absorbed in the plume. This can be linked to the ascent of a smoke cloud in the stratosphere. This is discussed in the next section. **Rising bubble of smoke** The primary patch of smoke originating from the New Year's Eve PyroCb event followed an extraordinary dynamical evolution. By the 4 January 2020, en route across the Southern Pacific, the core plume started to encapsulate into a compact bubble-like structure, which was identified using CALIOP observations on 7 January 2020 as an isolated 4-km tall and 1000 km wide structure (Fig. 5a). Over the next 3 months, this smoke bubble crossed the Pacific and hovered above the tip of South America for a week. It then followed a 10-week westbound round-the-world journey that could be tracked until the beginning of April 2020 (Supplementary notes 2, Supplementary Figure 3), travelling over 66,000 km. The large amount of sunlight-absorbing black carbon contained in the smoke cloud provided a localized heating that forced the air mass to rise through the stratosphere. With an initial ascent rate of about 0.45 km/day, the bubble of aerosol continuously ascended during the three months with an average rate of 0.2 km/day. While remaining compact, the bubble was leaking material from its bottom part, leaving an aerosol trail that was progressively dispersed and diluted, filling the whole mid-latitude austral stratosphere up to 30 km (Supplementary Figure 1, see also Fig. 10b). The rise ceased in late March 2020 when the top of the bubble reached 36 km altitude (Fig. 5b, Supplementary Figure 3). This is substantially higher than any coherent volcanic aerosol or smoke plume observed since the major eruption of Pinatubo in 1991. Along with the carbonaceous aerosols, the bubble entrained tropospheric moisture in the form of ice aggregates injected by the overshooting PyroCbs. In the warmer stratosphere, the ice (detected by MLS sensor as high as 22 km, cf. Fig. 5b) eventually evaporated, enriching the air mass with water vapour. This led to extraordinary high water vapour mixing ratios emerging across the stratosphere within the rising smoke bubble (Fig.5b). The decay of CO within the bubble (Fig. 5c) was faster than that of water vapour, reflecting the fact that, unlike water vapour, the carbon monoxide was subject to photochemical oxidation whose efficiency increases sharply with altitude[21]. Temperature profiles from GNSS radio occultation sensors exhibit a clear dipolar anomaly within the bubble with a warm pole at its bottom and a cold pole at its top (Fig. 5d). 
Although counterintuitive from the pure radiative transfer perspective, the observed temperature dipole within the heated cloud represents an expected thermal signature of a synoptic-scale vortex. ### The vortex The compact shape of the smoke bubble could only be maintained through an efficient confinement process. The meteorological analysis of the real-time operational ECMWF integrated forecasting system (IFS)[6] reveals that a localized anticyclonic vortex was associated with the smoke bubble during all its travel, moving and rising with it (Figs. 5, 6a-b & Supplementary Figure 5). With a peak vorticity of 10\\({}^{-4}\\) s-1 (Figs. 6c & 7b) and a maximum anomalous wind speed of 13 m s-1 (Fig. 6d) during most of its lifetime, the vortex had a turnover time of about 36 hours. It has therefore survived about 60 turnover times demonstrating a remarkable stability and resilience against perturbations. The ascent was surprisingly linear in potential temperature at a rate of 5.94\\(\\pm\\)0.07 K day-1 (Fig. 7a). This corresponds to a heating rate \\(dT/dt\\) which varies from about 3 K day -1 at the beginning of January to 1.5 K day -1 at the end of March. The altitude rise is from 16 to 33 km for the vortex centroid and from 17 to 36 km for the top of the bubble according to CALIOP. The upper envelope of the OMPS detection of the bubble is seen as the cyan curve on Fig. 7a. This envelope is always above the top detected from CALIOP. Such a bias is expected as OMPS-LP is a limb instrument that scans a much wider area than the narrow CALIOP track. Both CALIOP and OMPS-LP detect that the top of the bubble rises initially faster than the vortex core, by about 10 K day -1. This period corresponds to the initial travel of the bubble to the tip of South America. In terms of altitude ascent, the rates 10 and 5.94 K day -1 translate approximately as 0.45 and 0.2 km day -1. The confining properties of the vortex are confirmed by the co-located anomalies in tracers and aerosol from the TROPOMI instrument. The satellite observations (Supplementary notes 3.3 & Supplementary Figure 4a,b) reveals an isolated enhancement of the aerosol absorbing index and of the CO columnar content, as well as the presence of a deep mini ozone hole, depleted by up to 100 DU. All three features were captured in the same position by the ECMWF analyses (Fig. 6e & Supplementary Figures 4 and 7). ### Companion vortices It is worth noticing that the vortex was not a single event. It had several companions, albeit of smaller magnitude and duration, also caused by localized smoke clouds. The most noticeable lasted one month and travelled the hemisphere. Another one found a path across Antarctica where it was subject to the strong aerosol heating of permanent daylight and rose up to 27 km. The second vortex is borne from the smoke cloud that found its way to the stratosphere during the PyroCb event of 4-5 January 2020. This cloud initially travelled north east passing north of New Zealand before taking a south easterly direction crossing the path of the cloud emitted on 31 December 2019 and the main vortex. Fig. 8 a-b shows that a vortex-like structure can be spotted as early as 7 January, coinciding with the location of a compact bubble according to CALIOP. Subsequently the bubble crossed the path of the first vortex on 16 January while rising and intensifying (see Fig. 
8 c-d) and travelled straight eastward crossing the Atlantic and the Indian Ocean until it reached the longitude of Australia by the end of January where it disappeared after travelling all the way round the globe. During this travel, the altitude of the vortex centroid rose from 15 to 19 km and the top of the bubble, as seen from CALIOP, reaches up to 20 km. The third bubble has been first detected by CALIOP on 7 January at 69\\({}^{\\circ}\\) S / 160\\({}^{\\circ}\\) W. It then moved over Antarctica (Fig. 9a) until the end of January where it spent a week over the Antarctic Peninsula before moving to the tip of South America, shortly after this region was visited by the main vortex. It eventually moved to the Atlantic where it dissipated by 25 February (Fig. 9b). The bubble was accompanied by a vortex during its whole life cycle as seen in Fig. 9c. Although the magnitude of this vortex was modest compared to the main one and even the second one in terms of maximum vorticity (Fig. 9e), it performed a very significant ascent from 18 to 26 km (Fig. 9d). We attribute this effect to the very effective aerosol heating received during the essentially permanent daylight of the first period over Antarctica. The simultaneous rise of the main vortex and the third vortex is very clear from the OMPS-LP latitude-altitude sections in Fig. 10c-d. The combined trajectories of the main vortex and its two companions are shown in Fig. 10e. Fig. 10a shows how the trajectories of the vortices form the skeleton of the dispersion path of the smoke plume in the stratosphere. Fig. 10b shows that OMPS-LP follows closely the evolution of the top altitude of all the vortices under the shape of well-defined branches in a longitude-time Hovmoller diagram. It is worth noticing that both the main and the third vortex spent some time wandering in the vicinity of the Drake passage at the beginning of February. Such a stagnation situation is prone to sensitivity. The IFS forecasts during the end of January predicted that the main vortex would cross to the Atlantic, while instead it did not and began moving westward over the Pacific as it reached a higher altitude where easterly winds prevail. Severalsecondary branches that seem to separate from the main one followed by the main vortex are also visible in this diagram. A detailed inspection reveals that they are indeed associated with patches left behind by the main bubble as it moved upward. It is apparent from several of the panels of Fig. 7a that the top part of the bubble remained always compact while the bottom part was constantly leaking material. Fig. 10c-d shows latitude altitude cross sections of OMPS aerosols on 16 January and 1 February. The fast rise of the main vortex during that period is visible near 50S while the second vortex is located at lower altitude and the third vortex corresponds to the towering structure by 75-80S. ## Discussion Long-lived anticyclones have also been observed as very rare events in the summer Arctic stratosphere[22, 23, 24], but they are much larger structures of about 2000 km, very near the pole, and do not display any of the specific characters of the self-generated smoke-charged vortex. According to geophysical fluid dynamics theory[25], a local heating in the austral stratosphere is expected to produce positive potential vorticity aloft and destroy it beneath. The positive potential vorticity is partially realized as anticyclonic rotation and partially as temperature stratification. 
The negative potential vorticity is apparently dispersed away with the tail of the bubble. The ECMWF analysed thermal structure (Figs. 6f & Supplementary Figure 7c) shows the same dipole as in the GNSS-RO satellite profiles with the same amplitude. The observed vortex is quite similar to known isolated ellipsoidal solutions of the quasi-geostrophic equations[26] which explain some of the long-lived vortices in the ocean[27], but such a structure is described here for the first time in the atmosphere. The ECMWF analysis uses climatological aerosol fields, so it does not take the aerosol emissions by the Australian wildfires into account, nor the satellite measurements of aerosol extinction. Thus, the replication of the vortex by the analysis was due to assimilation of temperature, wind and ozone measurements from operational satellites. We found that the radio-occultation temperature profiling satellite constellation played the key role in the successful reconstruction of this unusual stratospheric phenomenon by ECMWF analyses (Supplementary notes 4.2). The analyses were also influenced by a few radiosonde profiles and wind profiles in the lower stratosphere by the ESA Aeolus satellite's Doppler wind lidar[28], which provided an observational evidence of the anticyclonic vortex. The ECMWF IFS produce analyses using a 4-dimensional variational data assimilation method[29], combined with a high-resolution time-evolving forecast model that predicts the atmospheric dynamics. The assimilation system updates the state of the atmosphere using satellite data and in situ observations. As aerosols are not assimilated in the IFS, the ECMWF forecast was not expected to maintain the vortex as observed. Indeed, as shown in Fig. 7a,b, the forecast consistently predicts a decay of the vortex amplitude and fails to predict its rise except on 3 March where the vorticity centroid underwent a jump following the stretching and breaking of the vortex under the effect of vertical shear a few days earlier. As the nature of this event was purely dynamical, it was correctly predicted. This trend would lead to a rapid loss of the vortex and is corrected by the assimilation of new observations, providing an additional forcing that ensures maintenance and rise of the structure. The ozone hole is forced by the assimilation of satellite observations of ozone (Supplementary Figure 6b,f). The physical mechanism leading to the ozone hole must be a combination of the uplift of ozone-poor tropospheric air and ozone-depleting chemistry in the smoke cloud. The generation of the smoke-charged vortex is reported in another study[30], of which we were made aware during the review process. They use MLS, OMPS and CALIOP satellite instruments as well as the US Navy Global Environmental Model (NAVGEM) analysis to characterize the chemical composition, thermal structure and the spatiotemporal evolution of the main vortex. They show the evolution of the vortex until 10\\({}^{\\text{th}}\\) March 2020, that is three weeks before it has reached its apogee and collapsed, as follows from our analysis. In contrast to the respective study[30], we provide a more comprehensive analysis of the smoke vortices, quantify the large-scale perturbation of the stratospheric composition and radiative balance, and describe the impact of satellite data assimilation. While the ref. 
[30] reveals a number of remarkable similarities with respect to our approach and analysis, they report the average diabatic ascent rate of 8 K day \\({}^{-1}\\) as opposed to 5.9 K day \\({}^{-1}\\) reported here. An important point regarding the vortex dynamics and maintenance, which is left without mention in ref. [30], is the need of a mechanism to suppress the negative potential vorticity produced by the isolated heating as discussed hereinbefore. ## 2 Conclusions The observed and modeled planetary-scale repercussions of the Australian PyroCb outbreak around the turn of 2020 revolutionize the current understanding and recognition of the climate-altering potential of the wildfires. A single stratospheric overshoot of combustion products produced by the New Year's eve PyroCb event has led to an unprecedented hemispheric-scale perturbation of the stratospheric gaseous and aerosol composition, radiative balance and dynamical circulation with a prolonged effect. Whilst rivaling the volcanic eruptions in terms of stratospheric aerosol load perturbation, this exceptionally strong wildfire event had a substantial impact on a number of other climate-driving stratospheric variables such as water vapour, carbon monoxide and ozone. As the frequency and intensity of the Australian wildfires is expected to increase in the changing climate[31], it is possible that this type of extraordinary event will occur again in the future eventually becoming a significant contributor to the global stratospheric composition. This work reports the self-organization of an absorbing smoke cloud as a persistent coherent bubble coupled with a vortex that produces confinement preserving the compactness of the cloud. This structure is maintained and rises in the calm summer stratosphere due to its internal heating by solar absorption. The intensity, the duration and the extended vertical and horizontal path of this event certainly ranks it as extraordinary. More detailed studies will be necessary to understand fully the accompanying dynamical processes, in particular how a single sign vorticity structure emerges as a response to heating. Whether stratospheric smoke vortices have already occurred during previous large forest fires is to be explored. **Authors contributions:** SK investigated the impact on the stratospheric gaseous composition, aerosol optical parameters and thermodynamical fields using OMPS-LP, MLS, GNSS-RO, SAGE III and Aeolus satellite observations. BL investigated the smoke bubble in the CALIOP data and diagnosed the vortex and its dynamics in the IFS analysis. SBu analysed the TROPOMI data. PS calculated the radiative forcing. LI analysed the forcing of the vortex by the IFS assimilation. FT, SBe and JJ have provided the estimates of injected mass of aerosols and gases using MLS and OMPS-LP data. AB, LR and DZ provided OMPS-LP data and a detailed insight into limb-scatter data quality aspects. SGB contributed to the results on aerosol perturbation in the stratosphere. SK, BL, SBu, PS, LI, SBe wrote the paper. All the authors contributed to the final version. **Methods** * **OMPS-LP**: The Ozone Mapping and Profiler Suite Limb Profiler (OMPS-LP) on the Suomi National Polar-orbiting Partnership (Suomi-NPP) satellite, which has been in operation since April 2012, measures vertical images of limb scattered sunlight[32]. 
Aerosol extinction coefficient and ozone number density profiles are retrieved from the limb radiance using a two-dimensional tomographic inversion[13] and a forward model that accounts for multiple scattering developed at the University of Saskatchewan[33]. This OMPS-LP USask aerosol product is retrieved at 746 nm and has a vertical resolution of 1-2 km throughout the stratosphere. The aerosol extinction profiles are exploited to analyze the spatiotemporal evolution of the smoke plumes and for computing the mass of particulate matter lofted above the 380 K potential temperature level corresponding to \\(\\sim\\)12-17 km altitude in the extratropics, see Fig. 2). The aerosol mass is derived from the aerosol extinction data assuming a particle mass extinction coefficient of 4.5 m\\({}^{2}\\) g-1 (ref. 2). * **SAGEIII:** The Stratospheric Aerosol and Gas Experiment (SAGE) III provides stratospheric aerosol extinction coefficient profiles using solar occultation observations from the International Space Station (ISS)[34]. These measurements, available since February 2017, are provided for nine wavelength bands from 385-1,550 nm and have a vertical resolution of approximately 0.7 km. The SAGE III/ISS instrument and the data products have characteristics nearly identical to those from the SAGE III Meteor mission[35]. Here we use the 754 nm wavelength band for quantifying the error of OMPS-LP aerosol extinction retrieval (using 16-84 percentiles) and the wavelength pair 1019/521 nm for deriving the Angstrom exponent, which is used for the radiative forcing calculations. * **CALIOP**: The Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) is a two-wavelength polarization lidar on board the CALIPSO mission[10] that performs global profiling of aerosols and clouds in the troposphere and lower stratosphere. We use the total attenuated 532 nm backscatter level 1 product V3.40 which is available in near real time with a delay of a few days (doi:10.5067/CALIOP/CALIPSO/CAL_LID_L1-VALSTAGE1-V3-40). The along track horizontal / vertical resolution are respectively 1 km / 60 m between 8.5 and 20.1 km, 1.667 km / 180 m between 20.1 and 30.1 km, and 5 km / 300 m resolution between 30.1 km and 40 km. The L1 product oversamples these layers with an actual uniform horizontal resolution of 333 m. * 82\\({}^{\\circ}\\)N) measurement of vertical profiles of various atmospheric gaseous compounds (including H\\({}_{2}\\)O, CO and CH\\({}_{3}\\)CN) cloud ice, geopotential height, and temperature of the atmosphere. The measurements yield around to 3500 profiles per day for each species with a vertical resolution of \\(\\sim\\)3 - 5 km. For tracking the smoke bubble, we selected profiles bearing CO enhancements in the stratosphere exceeding 400 ppbv and/or H\\({}_{2}\\)O enhancement exceeding 12 ppmv with respect to the pre-event conditions (late December 2020). Stratospheric mass loads of H\\({}_{2}\\)O, CO and CH\\({}_{3}\\)CN are derived from MLS volume mixing ratio measurements of species in log pressure space, molecular mass of the compound and the air number density derived from MLS temperature profile on pressure levels above the 380 K isentropic level and between 20\\({}^{\\circ}\\)S and 82\\({}^{\\circ}\\)S. The error bars on the mass of injection are estimated by combining accuracies on the measurements and the mean standard deviations over 20-day periods before and after the sharp increase. 
The error bar on the CH\\({}_{3}\\)CN mass above 380K is only calculated on the standard deviation because accuracies on CH\\({}_{3}\\)CN measurements are extremely large. The error bar on the aerosol mass takes also into account the uncertainty on the particle mass extinction coefficient (1.5 m\\({}^{2}\\)g\\({}^{-1}\\)). * 500 nm), the near infrared (710 - 770 nm) and the shortwave infrared (2314 - 2382 nm) providing therefore observation of key atmospheric constituents, among which we use O\\({}_{3}\\), CO and aerosol index at high spatial resolution (7\\(\\times\\)3.5 km\\({}^{2}\\) at nadir for the UV, visible and near-infrared bands, 7\\(\\times\\)7 km\\({}^{2}\\) at nadir for shortwave infrared bands) [36]. Here we exploit the Aerosol Index, and the CO and O\\({}_{3}\\) columnar values from the offline Level 2 data. The Aerosol Absorbing Index (AI) is a quantity based on the spectral contrast between a given pair of UV wavelength (in our case the 340-380 nm wavelengths couple). The retrieval is based on the ratio of the measured top of the atmosphere reflectance (for the shortest wavelength) and a pre-calculated theoretical reflectance for a Rayleigh scattering-only atmosphere (assumed equal for both wavelengths). When the residual value between the observed and modelled values is positive, it indicates the presence of UV-absorbing aerosols, like dust and smoke. Negative residual values may indicate the presence of non-absorbing aerosols, while values close to zero are found in the presence of clouds. AI is dependent upon aerosol layer characteristics such as the aerosol optical thickness, the aerosol single scattering albedo, the aerosol layer height and the underlying surface albedo. Providing a daily global coverage and a very high spatial resolution (7\\(\\times\\)3.5 km\\({}^{2}\\) at the nadir), the AI from TROPOMI is ideal to follow the evolution of smoke, dust, volcanic ash, or aerosol plumes. Ozone: Copernicus Sentinel-5P (processed by ESA), 2018, TROPOMI Level 2 Ozone Total Column products. Version 01. European Space Agency. [https://doi.org/10.5270/S5P-fqouvyz](https://doi.org/10.5270/S5P-fqouvyz) Carbon Monoxide: Copernicus Sentinel-5P (processed by ESA), 2018, TROPOMILevel 2 Carbon Monoxide total column products. Version 01. European Space Agency. [https://doi.org/10.5270/S5P-1hkp7rp](https://doi.org/10.5270/S5P-1hkp7rp) Aerosol Index: Copernicus Sentinel-5P (processed by ESA), 2018, TROPOMILevel 2 Ultraviolet Aerosol Index products. Version 01. European Space Agency. [https://doi.org/10.5270/S5P-Owafvaf](https://doi.org/10.5270/S5P-Owafvaf) * **GNSS-RO**: We use Global Navigation Satellite System (GNSS) Radio Occultation (RO) dry temperature profiles acquired onboard Metop A/B/C satellites and processed in near real time mode at EUMETSAT RO Meteorology Satellite Application Facility (ROM SAF)[37]. For computing the composited temperature perturbation within the smoke bubble we use temperature profiles collocated with the vortex centroid as identified using IFS analyses (8 hours, 400 km collocation criteria). The perturbation is computed as departure from a mean temperature profile within the corresponding spatiotemporal bin (3-day, 3\\({}^{\\circ}\\) latitude, 40\\({}^{\\circ}\\) longitude). * **ECMWF IFS** is the operational configuration of the ECMWF global Numerical Weather Prediction system (46R1, [https://www.ecmwf.int/en/publications/ifs-documentation](https://www.ecmwf.int/en/publications/ifs-documentation)). 
It consists of an atmosphere-land-wave-ocean forecast model and an analysis system that provides an accurate estimate of the initial state. The forecast model has a 9 km horizontal resolution grid and 137 vertical levels, with a top around 80 km altitude. The analysis is based on a 4-dimensional variational method, run twice daily using more than 25 million observations per cycle, primarily from satellites. The IFS produces high-resolution operational 10-day forecasts twice daily. * **Radiative transfer calculations:** The equinox-equivalent daily-average shortwave (integrated between 300 and 3000 nm) surface and top of the atmosphere (TOA) direct radiative forcing (RF) are estimated using the UVSPEC (UltraViolet SPECtrum) radiative transfer model in the libRadtran (library for Radiative transfer) implementation[38] and a similar methodology as in ref. 39 and ref. 4. Baseline and fire-perturbed simulations are carried out with different aerosol layers: the average OMPS-LP aerosols extinction coefficient profiles, for January and February 2019 (baseline simulation) and January and February 2020 (fire-perturbed simulation). The spectral variability of the aerosol extinction is modeled using the measured Angstrom exponent from SAGE III for January 2020 (fire-perturbed simulation) and typical background values inferred from SAGE III (baseline simulation). Different hypotheses have been considered for the non-measured optical parameters of fire aerosols: single scattering albedo from 0.85 to 0.95 (typical of wildfire aerosols, see e.g. ref. 40) and a Heyney-Greenstein phase function with an asymmetry parameter of 0.70. More information about the UVSPEC runs can be found in the Supplementary Material. The daily-average shortwave TOA radiative forcing for the fire-perturbed aerosol layer is calculated as the SZA-averaged upward diffuse irradiance for a baseline simulation without the investigated aerosols minus that with aerosols, integrated over the whole shortwave spectral range. The shortwave surface radiative forcing is calculated as the SZA-average downward global (direct plus diffuse) irradiance with aerosols minus the baseline, integrated over the whole spectral range. * **Mass estimation** The stratospheric masses of CO, CH\\({}_{3}\\)CN, and H\\({}_{2}\\)O are derived from the MLS species mixing ratios vertical profiles combined with MLS vertical profiles of pressure and temperature on the standard 37 pressure levels. MLS data are filtered according to the recommendations from the MLS team ([https://mls.jpl.nasa.gov/data/v4-2_data_quality_document.pdf](https://mls.jpl.nasa.gov/data/v4-2_data_quality_document.pdf)). CH\\({}_{3}\\)CN measurements are not recommended below 46 hPa however have already been used successfully in a study on combustion products from Australian bush fires in the stratosphere[41]. The mass calculation is performed as follows. First, we calculate for each profile the partial column of species (CO, CH\\({}_{3}\\)CN, H\\({}_{2}\\)O) and air for each measurement layer (vertical resolution of \\(\\sim\\)3 km); the air partial column is derived from the difference in pressure between the top and bottom of the layer. Then, adding the partial columns in a profile, we derive the total column of species and air above the 380 K potential temperature (\\(\\sim\\) 12.. 17 km altitude), the so-called stratospheric \"overworld\"[14]. The ratio of the species and air total columns gives us the mean volume mixing ratios (VMR) of the species over the stratospheric profile. 
Finally, we calculate the mean VMR of all the profiles between 20 \\({}^{\\circ}\\)S and 82 \\({}^{\\circ}\\)S (which is the southernmost latitude of the MLS sampling)and multiply it by the molecular mass of the species and the total number of air molecules above 380 K and between 20 \\({}^{\\circ}\\)S and 82 \\({}^{\\circ}\\)S following ref. [42] to obtain the total mass burden of the species plotted in Fig. 4. The aerosol mass is derived from the OMPS satellite aerosol extinction data assuming a particle mass extinction coefficient of 4.5 m \\({}^{2}\\)g \\({}^{-1}\\) following ref. [2]. Standard deviations of the injected mass are estimated by combining accuracies on the measurements and the mean standard deviations over 20-day periods before and after the sharp increase. The standard deviation of the CH\\({}_{3}\\)CN mass above 380 K is only calculated from the standard deviation because accuracies on CH\\({}_{3}\\)CN measurements are extremely large. The standard deviation of the aerosol mass takes also into account the uncertainty on the particle mass extinction coefficient (error=1.5 m \\({}^{2}\\)g \\({}^{-1}\\)). ## Data availability MLS data are publicly available at [http://disc.sci.gsfc.nasa.gov/Aura/data-holdings/MLS](http://disc.sci.gsfc.nasa.gov/Aura/data-holdings/MLS); GNSS-RO data at [https://www.romsaf.org/product_archive.php](https://www.romsaf.org/product_archive.php); OMPS-LP data at [ftp://odin-osiris.usask.ca/](ftp://odin-osiris.usask.ca/) with login/password osirislevel2user/hugin ; SAGE III data at doi:10.5067/ISS/SAGEIII/SOLAR_BINARY_L2-V5.1 ; CALIOP data at doi:10.5067/CALIOP/CALIPSO/CAL_LID_L1-VALSTAGE1-V3-40 ; TROPOMI ozone data at [https://doi.org/10.5270/S5P-fqouvyz](https://doi.org/10.5270/S5P-fqouvyz) ; carbon monoxide data at [https://doi.org/10.5270/S5P-1hkp7rp](https://doi.org/10.5270/S5P-1hkp7rp) ; Aerosol Index data at [https://doi.org/10.5270/S5P-0wafvaf](https://doi.org/10.5270/S5P-0wafvaf). The extracted ECMWF data used in this work are available at [https://doi.org/10.5281/zenodo.3958214](https://doi.org/10.5281/zenodo.3958214) ## Code availability LibRadTran code exploited for radiative forcing calculations is available at [http://www.libradtran.org/doku.php?id=download](http://www.libradtran.org/doku.php?id=download) The processing code for CALIOP, ECMWF data is available at [https://github.com/bernard-legras/STC-Australia](https://github.com/bernard-legras/STC-Australia) with dependencies in [https://github.com/bernard-legras/STC/tree/master/pylib](https://github.com/bernard-legras/STC/tree/master/pylib). The processing code for TRIPOMI is available at [https://github.com/silviabucci/TROPOMI-routines](https://github.com/silviabucci/TROPOMI-routines) The processing codes for MLS, OMPS-LP, SAGEIII, GNSS-RO are available at [https://doi.org/10.5281/zenodo.3959259](https://doi.org/10.5281/zenodo.3959259). The processing code for the estimation of injected masses using MLS data is available at [https://doi.org/10.5281/zenodo.3959350](https://doi.org/10.5281/zenodo.3959350) CALIOP data were provided by the ICARE/AERIS data centre. The TROPOMI data were provided by the Copernicus Open Access Web [https://scihub.copernicus.eu/](https://scihub.copernicus.eu/). We thank the EUMETSAT's Radio Occultation Meteorology Satellite Application Facility (ROM SAF) for providing NRT temperature profile data. MLS data are provided by the NASA Goddard Space Flight Center Earth Sciences (GES) Data and Information Services Center (DISC). 
We thank the OMPS-LP team at NASA Goddard for producing and distributing high quality Level 1 radiances, and the SAGE III/ISS team at NASA Langley for data production and advice, in particular Dave Flittner. We acknowledge the support of ANR grant 17-CE01-0015. The providers of the libRadtran suite ([http://www.libradtran.org/](http://www.libradtran.org/)) are gratefully acknowledged. We acknowledge discussions with Guillaume Lapeyre, Riwal Plougonven and Aurelien Podglajen. ## Supplementary Information is available for this paper. The authors declare no competing interests. ## References * [1] Khaykin, S. M. _et al._ Stratospheric Smoke With Unprecedentedly High Backscatter Observed by Lidars Above Southern France. _Geophys. Res. Lett._**45**, 1639-1646, doi: 10.1002/2017GL076763 (2018). * [2] Peterson, D. A. _et al._ Wildfire-driven thunderstorms cause a volcano-like stratospheric injection of smoke. _Npj Clim. Atmospheric Sci._**1**, 30, doi: 10.1038/s41612-018-0039-3 (2018). * [3] Bourassa, A. E. _et al._ Satellite limb observations of unprecedented forest fire aerosol in the stratosphere. _J. of Geophys. Res. Atmospheres_, **124**, 9510-9519, (2019). * [4] Kloss, C. _et al._ Transport of the 2017 Canadian wildfire plume to the tropics via the Asian monsoon circulation. _Atmos. Chem. Phys._**19**, 13547-13567, doi: 10.5194/acp-19-13547-2019 (2019). * [5] Yu, P. _et al._ Black carbon lofts wildfire smoke high into the stratosphere to form a persistent plume. _Science_**365**, 587-590, doi: 10.1126/science.aax1748 (2019). * [6] Bauer, P., Thorpe, A., and Brunet, G. The quiet revolution of numerical weather prediction. _Nature_**525**, 47-55, doi: 10.1038/nature14956 (2015). * [7] Boer, M.M., Resco de Dios, V., and Bradstock, R.A. Unprecedented burn area of Australian mega forest fires. _Nat. Clim. Chang._**10**, 171-172, doi: 10.1038/s41558-020-0716-1 (2020). * [8] Veefkind, J. P. _et al._ TROPOMI on the ESA Sentinel-5 Precursor: A GMES mission for global observations of the atmospheric composition for climate, air quality and ozone layer applications. _Remote Sensing of Environment_**120**, 70-83, doi: 10.1016/j.rse.2011.09.027 (2012). * [9] NASA Earth Observatory Australian smoke plume sets records. [https://earthobservatory.nasa.gov/images/146235/australian-smoke-plume-sets-records](https://earthobservatory.nasa.gov/images/146235/australian-smoke-plume-sets-records) (2020) * [10] Winker, D. M. _et al._ The CALIPSO Mission: A Global 3D View of Aerosols and Clouds. _Bull. Am. Meteorol. Soc._**91**, 1211-1230, doi: 10.1175/2010BAMS3009.1 (2010). * [11] Chouza, F. _et al._ Long-term (1999-2019) variability of stratospheric aerosol over Mauna Loa, Hawaii, as seen by two co-located lidars and satellite measurements, _Atmos. Chem. Phys._, 20, 6821-6839, doi:10.5194/acp-20-6821-2020 (2020). * [12] McCormick, M. P., Thomason, L. W., and Trepte, C. R. Atmospheric effects of the Mt Pinatubo eruption. _Nature_**373**, 399-404, doi: 10.1038/373399a0 (1995). * [13] Zawada, D. J., Rieger, L. A., Bourassa, A. E., and Degenstein, D. A.: Tomographic retrievals of ozone with the OMPS Limb Profiler: algorithm description and preliminary results, _Atmos. Meas. Tech._, **11**, 2375-2393, doi: 10.5194/amt-11-2375-2018 (2018). * [14] Holton, J.R. _et al._ Stratosphere-troposphere exchange _Rev. Geophys._**33**, 403-439, doi: 10.1029/95RG02097 (1995). 
* [15] Guerette, EE-A _et al._ Emissions of trace gases from Australian temperate forest fires: emission factors and dependence on modified combustion efficiency, _Atmos. Chem. Phys._, **18**, 3717-3735, doi: 10.5194/acp-18-3717-2018 (2018) * [16] Waters, J. W., _et al._, The Earth Observing System Microwave Limb Sounder (EOS MLS) on the Aura satellite, _IEEE Trans. Geosci. Remote Sens._, **44**, 1106-1121 (2006). * [17] King, M. D., Platnick, S., Menzel, P., Ackerman, S. A., and Hubanks, P. A. Spatial and temporal distribution of clouds observed by MODIS onboard the Terra and Aqua satellites, _IEEE Transactions on Geoscience and Remote Sensing_, **51**, 3826-3852 (2013) * [18] Haywood, J. M. _et al._, Observations of the eruption of the Sarychev volcano and simulations using the HadGEM2 climate model, _J. Geophys. Res._, **115**, D21212, doi: 10.1029/2011JD017016 (2010)* [19] Ridley, D. A. _et al._ Total volcanic stratospheric aerosol optical depths and implications for global climate change: Uncertainty in volcanic climate forcing. _Geophys. Res. Lett._**41**, 7763-7769, doi: 10.1002/2014GL061541 (2014). * [20] Schmidt, A. _et al._ Volcanic Radiative Forcing From 1979 to 2015. _J. Geophys. Res. Atmospheres_**123**, 12491-12508, doi: 10.1029/2018JD028776 (2018). * [21] Brasseur, G., and S. Solomon, Aeronomy of the Middle Atmosphere: Chemistry and Physics of the Stratosphere and Mesosphere, Third edition, (644 pages), Springer-Verlag (2005). * [22] Manney, G. L. _et al._ EOS Microwave Limb Sounder observations of \"frozen-in\" anticyclonic air in Arctic summer. _Geophys. Res. Lett._**33**, L06810, doi: 10.1029/2005GL025418 (2006). * [23] Allen, D. R. _et al._ Modeling the Frozen-In Anticyclone in the 2005 Arctic Summer Stratosphere. _Atmos. Chem. Phys._**11**, 4557-4576, doi: 10.1029/2005GL025418 (2011). * [24] Thieblemont, R., Orsolini, Y. J., Hauchecorne, A., Drouin, M.-A., and Huret, N. A climatology of frozen-in anticyclones in the spring arctic stratosphere over the period 1960-2011. _J. Geophys. Res. Atmospheres_**118**, 1299-1311, doi: 10.1002/jgrd.50156 (2013). * [25] Hoskins, B. J., McIntyre, M. E., and Robertson, A. W. On the use and significance of isentropic potential vorticity maps. _Q. J. R. Meteorol. Soc._**111**, 877-946, doi: 10.1002/qj.49711147002 (1985). * [26] Dritschel, D. G., Reinaud, J. N., and McKiver, W. J. The quasi-geostrophic ellipsoidal vortex model. _J. Fluid Mech._**505**, 201-223, doi: 10.1017/S0022112004008377 (2004). * [27] Meunier, T. _et al._ Intrathermocline Eddies Embedded Within an Anticyclonic Vortex Ring. _Geophys. Res. Lett._**45**, 7624-7633, doi: 10.1029/2018GL077527 (2018). * [28] Stoffelen, A. _et al._ The Atmospheric Dynamics Mission for Global Wind Field Measurement. _Bull. Am. Meteorol. Soc._**86**, 73-88, doi: 10.1175/BAMS-86-1-73 (2005). * [29] Rabier, F., Jarvinen, H., Klinker, E., Mahfouf, J.-F., and Simmons, A. The ECMWF operational implementation of four-dimensional variational assimilation. I: Experimental results with simplified physics. _Q. J. R. Meteorol. Soc._**126**, 1143-1170, doi: 10.1002/qj.49712656415 (2000). * [30] Kablick, G. P. III, Allen, D. R., Fromm, M. D., & Nedoluha, G. E. Australian pyroCb smoke generates synoptic-scale stratospheric anticyclones. Geophysical Research Letters, 47, e2020GL088101. https://doi. org/10.1029/2020GL088101(2020). * [31] Dowdy, A.J., Ye, H., Pepler, A. et al. Future changes in extreme weather and pyroconvection risk factors for Australian wildfires. _Sci Rep._**9,** 10073 (2019). 
* [32] Flynn, L. E., Seftor, C. J., Larsen, J. C., and Xu, P.: The ozone mapping and profiler suite, in: Earth Science Satellite Remote Sensing, Springer, Berlin, Heidelberg, 279-296 (2006). * [33] Bourassa, A. E., Degenstein, D. A., and Llewellyn, E. J.: SASKTRAN: A spherical geometry radiative transfer code for efficient estimation of limb scattered sunlight. _Journal of Quantitative Spectroscopy and Radiative Transfer_, **109**(1), 52-73 (2008). * [34] Cisewski, M. _et al._ The Stratospheric Aerosol and Gas Experiment (SAGE III) on the International Space Station (ISS) Mission, Proc. SPIE 9241, Sensors, Systems, and Next-Generation Satellites XVIII, 924107, doi: 10.1117/12.2073131 (2014). * [35] Thomason, L. W., Moore, J. R., Pitts, M. C., Zawodny, J. M., and Chiou, E. W.: An evaluation of the SAGE III version 4 aerosol extinction coefficient and water vapor data products. _Atmos. Chem. Phys._, **10**(5), 2159-2173, doi:10.5194/acp-10-2159-2010 (2010). * [36] Stein Zweers, D.C., TROPOMI ATBD of the UV aerosol index. S5P-KNMI-L2-0008-RP, CIC-7430-ATBD_UAVAAI, V1.1, KNMI, Utrecht, The Netherland, URL: [http://www.tropomi.eu/sites/default/files/files/S5P-KNMI-L2-0008-RP-TROPOMI_ATBD_UVAI-1.1.0-20180615_signed.pdf](http://www.tropomi.eu/sites/default/files/files/S5P-KNMI-L2-0008-RP-TROPOMI_ATBD_UVAI-1.1.0-20180615_signed.pdf). * [37] Gleisner H., Lauritsen, K. B., Nielsen, J. K., and Syndegaard, S.: Evaluation of the 15-year ROM SAF monthly mean GPS radio occultation climate data record, _Atmos. Meas. Tech.._, **13**, 3081-3098, doi:10.5194/amt-13-3081-2020 (2020). * [38] Emde, C. _et al._ The libRadtran software package for radiative transfer calculations (version 2.0.1). _Geosci. Model Dev._**9**, 1647-1672 (2016)* [39] Sellitto, P. _et al._ Synergistic use of Lagrangian dispersion and radiative transfer modelling with satellite and surface remote sensing measurements for the investigation of volcanic plumes: the Mount Etna eruption of 25-27 October 2013. _Atmos. Chem. Phys._**16**, 6841-6861, doi: 10.5194/acp-16-6841-2016 (2016). * [40] Ditas, J. _et al._ Strong impact of wildfires on the abundance and aging of black carbon in the lowermost stratosphere. _Proc. Natl. Acad. Sci._**115**, E11595-E11603 (2018). * [41] Pumphrey, H. C., M. L. Santee, N. J. Livesey, M. J. Schwartz, and W. G. Read (2011). Microwave Limb Sounder observations of biomass-burning products from the Australian bush fires of February 2009. _Atmos. Chem. Phys._, **11**(13), 6285-6296, doi:10.5194/acp-11-6285-2011 (2011) * [42] Jacob, D. J. Introduction to Atmospheric Chemistry. Princeton University Press, Princeton, N.J., 266 p. ISBN: 978-0-691-00185-2 (1999)Figure 1: **Time evolution of the smoke clouds as observed by TROPOMI satellite instrument.** a) time evolution of the total surface covered by the aerosol plumes with Absorbing Aerosol Index AAI\\(>\\)3 over the southern hemisphere. This threshold is chosen to follow the evolution of the main plume that was characterized by values of AI up to 10. The plumes show a sharp gradient in AI at the borders, where the AI value rapidly decreases, allowing to clearly define the boundaries of the aerosol cloud. b) 95th percentile of the aerosol close to the Eastern Australian coastal region (150\\({}^{\\circ}\\)-155 E 20\\({}^{\\circ}\\)-40\\({}^{\\circ}\\) S) where the extreme PyroCb activity took place. 
The main aerosol injections occurred between 30 and 31 December 2019, producing a plume that reached a first maximal spatial extension on 2 January 2020, and between 4 and 5 January, when a second event produced an additional aerosol cloud that, combined with the first one, caused a total absorbing aerosol coverage that reached a maximum extent of 6 million km\\({}^{2}\\) on 7 January. The plumes then gradually dissipated and diluted, decreasing in their AI values, until the third week of January, when the AI signal from the aerosol clouds was no longer visible to TROPOMI, with the exception of a few bubbles of confined aerosol (see next sections).

Figure 2: **Latitude-altitude evolution of the smoke plumes in the stratosphere.** The pixels, colour coded by date, indicate doubling of aerosol extinction with respect to December 2019 levels for data where the aerosol to molecular extinction ratio is 1 or higher. The black circles with date-colour filling indicate the locations of high amounts of water vapour and/or carbon monoxide detected by MLS (see Methods). The black contour encircles the locations of aerosol bubble detections by CALIOP lidar (see Fig. 5a and Supplementary Figure 3). The cross marks the latitude-altitude extent of the stratospheric cloud detected by CALIOP on the 1st January (see Fig. 5a). The grey solid and black dashed curves indicate respectively the zonal-mean 380 K isentrope and the lapse rate tropopause for the January-March 2020 period.

Figure 3: **Perturbation of the stratospheric aerosol optical depth (SAOD) due to Australian fires and the strongest events since 1991**. The curves represent the SAOD perturbation at 746 nm following the Australian wildfires, the previous record-breaking Canadian wildfires in 2017 and the strongest volcanic eruptions in the last 29 years (Calbuco, 2015 and Raikoke, 2019 [ref. 11]). The time series are computed from OMPS-LP aerosol extinction profiles as weekly-mean departures of aerosol optical depth above the 380 K isentropic level (see Fig. 2) from the levels on the week preceding the event. The weekly averages are computed over equivalent-area latitude bands (as indicated in the panel) roughly corresponding to the meridional extent of stratospheric aerosol perturbation for each event. The shading indicates a 30% uncertainty in the calculated SAOD, as estimated from SAGE III coincident comparisons (See Methods).

**Figure 4. Time evolution of the daily total mass of CO, CH\\({}_{3}\\)CN, H\\({}_{2}\\)O and aerosols above the 380 K potential temperature, between 20 \\({}^{\\circ}\\)S and 82 \\({}^{\\circ}\\)S. The dotted and solid lines correspond to daily data and 1-week smoothed data respectively. Envelopes represent two standard deviations over the 1-week window (See Methods). As shown in this figure, the levels of CO, CH\\({}_{3}\\)CN, H\\({}_{2}\\)O, and aerosols started to increase simultaneously and kept increasing during \\(\\sim\\)2-3 weeks, a duration probably corresponding to the time taken by products injected in the lowermost stratosphere to ascend above 380 K. The stratospheric masses of carbon monoxide (CO) and acetonitrile (CH\\({}_{3}\\)CN) bounded within the southern extratropics increase abruptly by 1.5\\(\\pm\\)0.9 Tg and 3.7\\(\\pm\\)2.0 Gg respectively during the first week of 2020. This gives a CH\\({}_{3}\\)CN/CO mass ratio of 0.0025, consistent with previous estimates for temperate Australian wildfires [15].
The injected mass of water was estimated at 27\\(\\pm\\)10 Tg that is about 3% of the total mass of stratospheric overworld water vapour in the southern extratropics. The shading shows that the amplitude of fluctuations increases sharply during the sharp rise of species masses, reflecting the fact that sampling of the bubble by MLS is more random than on a more homogeneous field. The lagging increase of the aerosol mass is due to the fact that the OMPS-LP extinction retrieval saturates at extinction values above 0.01 km \\({}^{-1}\\). Profiles are therefore truncated below any altitude exceeding this value, which can lead to an underestimation of the early aerosol plume when it is at its thickest. This artifact, which explains the slower increase of aerosol mass than gases, persists until mid-February when the plume is sufficiently dispersed so that OMPS-LP extinction measurements no longer saturate.** ## 6 Conclusion Figure 5: **Vertical evolution of the smoke bubble, its chemical composition and thermal structure**. a) Selection of CALIOP attenuated scattering ratio profiles for clear intersections of the bubble by the orbit except the first panel of 1st of January that shows the dense and compact plume on its first day in the stratosphere. The attenuated scattering ratio is calculated by dividing the attenuated backscattering coefficient by the calculated molecular backscattering. The data are further filtered horizontally by an 81-pixel moving median filter to remove the noise. The crosses show the projected interpolated location of the vortex vorticity centroid from the ECMWF operational analysis onto the orbit plane at the same time and the white contour shows the projected contour of the half maximum vorticity value in the pane passing by the vortex centroid and parallel to the orbit plane (See Fig. 6). b) Evolution of the water vapour mixing ratio within the rising bubble based on MLS bubble detections (see Methods). The dashed contours show the equivalent mixing ratio of ice water derived from MLS ice water content vertical profiles collocated with the bubble. The thick dashed curve marks the top altitude of the aerosol bubble determined as the level where OMPS-LP extinction triples that of the nearest upper altitude level. c) Evolution of carbon monoxide (MLS) within the bubble. The centroid and the vertical boundaries of the aerosol bubble determined using CALIOP data are overplotted as circles and bars respectively. d) Composited temperature perturbation within the smoke bubble from Metop GNSS radio occultation (RO) temperature profiles collocated with the smoke bubble (see Methods). The black line shows the centroid of vortex detected from ECMWF data (see Supplementary notes 4.1 and Supplementary Figure 7b). Figure 6: Spatiotemporal evolution of the vortex and its thermodynamical properties. a) Composite horizontal sections of the vortex. The background shows the relative vorticity field on 24 Jan 2020 6UTC from the ECMWF operational analysis on the surface 46.5 hPa (21.3 km at the location of the vortex) corresponding to the level of highest vorticity in the vortex. The boxes show the vorticity field at other times as horizontal sections at the level of maximum vorticity centroid projected onto the background field. The yellow curve is the twice-daily sampled trajectory of the vortex centroid. The red dots show the location of the CALIOP bubble centroid for all the cases where it is clearly intersected by the orbit. 
The magenta crosses show the location of the center of the compact aerosol index anomaly as seen from TROPOMI (Supplementary notes 3). * Composite vertical section of the vortex. The background is here the longitude-altitude section of the vortex on 24 Jan 2020 6 UTC at the latitude \\(47^{\\circ}\\)S. The boxes show vertical sections at the same time as panel a) at the latitude of maximum vorticity. The black, red and white dots show, respectively, the CALIOP bubble top, centroid and bottom. * Composite of the vortex vorticity in the longitude altitude plane at the level and at the latitude of the vortex centroid performed during the most active period of the vortex between 14 Jan and 22 February. * Same as c) for the meridional wind deviation with respect to the mean in the displayed box. * Same as c) but for the ozone mixing ratio deviation with respect to the zonal mean. * Same as c) but for the temperature deviation with respect to the zonal mean. Figure 7: **Time evolution of the altitude and vorticity of the main vortex**. The two panels show the potential temperature (a) and vorticity (b) as a function of time. All the quantities are defined at the vortex centroid where the vorticity is maximum. In panel a) the red squares show the position of the aerosol bubble centroid according to CALIOP. The CALIOP centroid is defined by averaging the most extreme top, bottom, south and north edges. The arrows show the extension of the bubble in potential temperature space. The cyan line shows the upper envelope of the bubble as detected by OMPS-LP, the green line is a linear fit to the ascent of the vortex and the gray line shows a 10 K day \\({}^{-1}\\) curve. In panels (a) and (b), the black lines show the ECMWF 10-day forecast evolution, plotted every four days. The forecast evolution is only shown for the period where it maintains the vortex. The slight discrepancy between the analysis and the initial point of the forecast is because the 10-day forecast is produced from a slightly inferior 6-hour analysis, due to real time constraints. Figure 8: **Spatiotemporal evolution of the second vortex**. (a) Time evolution from eight matching sections of CALIOP. (b) Composite of TROPOMI Aerosol Index (AI) at the location of the vortex for six dates that do not necessarily match that of CALIOP. (c) Time evolution of the vortex according the ECMWF analysis from ten vorticity snapshots at the level of maximum vorticity. The background is shown for 24 January. In b) and c) the trajectory of the AI centroid is shown as magenta crosses and the trajectory of the IFS vortex is shown as the black and yellow curves, respectively. (d) Altitude of the vortex core as a function of time. (e) Maximum vorticity at the vortex core as a function of time. Figure 9: **Spatiotemporal evolution of the third vortex** (a) Time evolution of the vortex from 7 matching sections of CALIOP during its first period over Antarctica. We use here daily orbits of CALIOP, hence the high level of noise. The x-axis is mapped over longitude due to the proximity of the pole. (b) Time evolution of the third vortex from 7 matching sections of CALIOP during its second period. (c) Time evolution of the vortex according the ECMWF analysis from eight vorticity snapshots at the level of maximum vorticity. The background is shown for 7 February where the main vortex is also visible. 
The yellow curve shows the trajectory of the vortex in the IFS, the magenta crosses mark the TROPOMI AI centroid (d) Altitude of the vortex core as a function of time. (e) Maximum vorticity of the vortex core as a function of time. Figure 10: **Three-dimensional evolution of the three vortices from OMPS-LP and ECMWF IFS.** (a) Time-latitude section of zonal-mean stratospheric aerosol optical depth (above the tropopause) from OMPS-LP measurements. The markers show locations of smoke-charged vortices identified using ECMWF vorticity fields. (b) Longitude-temporal evolution (Hovmöller diagram) of the maximum altitude of smoke plume inferred from OMPS-LP extinction data within 30 \\({}^{\\circ}\\)S-60 \\({}^{\\circ}\\) above 15 km where aerosol to molecular extinction ratio exceeds 5. The markers indicate the locations of the smoke-charged vortices identified using ECMWF vorticity fields. (c) Latitude-altitude section of zonal-mean aerosol to molecular extinction ratio above the local tropopause from OMPS-LP measurements for 16 January 2020. The thick and the thin curves indicate, respectfully, the zonal-mean lapse-rate tropopause and the 380 K potential temperature level. The markers show the positions of the main vortex and of the two companion vortices. (d) Same as (c) for 1 February 2020. (e) Trajectories of the three vortices with colour-coded date in the longitude-latitude plane. Supplement material to \"The 2019/2020 Australian wildfires generated a persistent smoke-charged vortex rising to 35 km altitude\" Sergey Khaykin, Bernard Legras, Silvia Bucci, Pasquale Sellitto, Lars Isaksen, Florent Tence, Slimane Bekki, Adam Bourassa, Landon Rieger, Daniel Zawada, Julien Jumelet & Sophie Godin-Beekmann 29 July 2020Supplementary Figure 2: **Radiative Forcing** Equinox-equivalent daily-average regional RF at TOA and surface due to Australian fire perturbations to the stratospheric aerosol layer. Data is averaged over the months of January and February. These estimations are provided for the following regions (latitude bands): 15 to 25\\({}^{\\circ}\\)S, 25 to 60\\({}^{\\circ}\\)S and 60 to 80\\({}^{\\circ}\\)S. The error bars show the variability of our estimations when varying our hypotheses on non-measured quantities (single scattering albedo and asymmetry parameter). Figure 5: **Time evolution of the vortex in the IFS** Time evolution of the geographical position of the main vortex as seen from the IFS (blue line) and from CALIOP (red squares). (a) latitude, (b) longitude as a function of time. All the quantities are defined at the vortex centroid where the vorticity is maximum. The red squares in the two panels show the position of the aerosol bubble centroid according to CALIOP. The CALIOP centroid is defined by averaging the most extreme top, bottom, south and north edges. The arrows in the latitude and potential temperature panels show the extension of the bubble in those direction. As the orbit almost follows a meridian, the longitude extension cannot be retrieved from CALIOP. ## 3 Results Figure 6: **Forcing by assimilation of the observations in the IFS** (a) Right panel: mean GPS-RO bending angle departure in the area of the vortex normalised by the observation error as a function of altitude. In black, the departure from the background (prior to the assimilation). In red, the departure of the analysis (posterior to the assimilation). The column on the left indicates the number of observations. Left panel: prior and posterior standard deviation. 
(b) Probability density distribution of the METOP-B/GOME-2 departure in the area of the vortex with respect to the background (left panel) and with respect to the analysis (right panel). (c) Same as (a) but for the radiosonde temperature using pressure as vertical axis. (d) Same as (c) but for the radiosonde meridional wind. (e) Same as (a) but for the METOP-B IASI brightness temperature in the longwave channels. The vertical axis shows the channel number. (f) Same as (e) but for the brightness temperature in the ozone channels. Data averaged over the period 11-18 January for panels (a), (b), (e) and (f). Data averaged over the period 25-31 January for panels (c) and (d). ## 4 Conclusion Figure 7: **Ozone and temperature** Upper panel a): horizontal composite chart of the ozone anomaly. Mid panel b): Vertical longitude-altitude composite section of the ozone anomaly. Bottom panel c): Vertical longitude-altitude composite section of the temperature anomaly. These panels are built exactly like Fig. 6a-b of the main text. The anomalies are defined according to the zonal average at the same time. When averaged in time the composite image of Fig. 6c of the main text is obtained. ## Supplementary Note 1 Radiative forcing The equinox-equivalent daily-average shortwave surface and top of the atmosphere (TOA) direct radiative forcing (RF) are estimated using the UVSPEC (UltraViolet SPECtrum) radiative transfer model in the libRadtran (library for Radiative transfer) implementation [1] and a similar methodology as in [2, 3]. All radiative parameters are computed in the shortwave range 300 to 3000 nm, at 0.1 nm spectral resolution, based on the input solar spectrum of [4]. The background atmospheric state is set using the AFGL (Air Force Geophysics Laboratory) summer mid- or high-latitudes climatological standards [5], depending on the latitude range. As a first-order reference estimate, clear-sky conditions are considered. A shortwave surface albedo of 0.07, typical of sea surface (most of the fire plume disperse over ocean), is used [6]. The RF is very sensitive to the surface albedo and using a fixed albedo value might introduce uncertainties to our estimations. For the radiative forcing calculations, a baseline simulation is first carried out, with the mentioned setup and average aerosols extinction coefficient (750 nm) profiles from OMPS, for January and February 2019. A fire-perturbed run is then performed, using the measured Australian fire aerosols extinction coefficient (750 nm) profiles from OMPS, for January and February 2020. The OMPS USask retrievals are used, for both baseline and fire-perturbed periods. The spectral variability of the aerosol extinction, during fire-perturbed periods, is modelled using the measured Angstrom exponent from SAGE III, for January 2020. Typical non-perturbed values of the Angstrom exponent are used for the baseline period. Only perturbations of the optical properties of stratospheric aerosol are considered in the present study (OMPS profiles are averaged only using altitude levels from tropopause plus 1 km, in order to avoid possible cloud perturbations). Different perturbed runs have been performed using three different values of the single scattering albedo (0.85, 0.90 and 0.95) and a Heyney-Greenstein phase function with an asymmetry parameter of 0.70. These are typical values of biomass burning aerosol optical properties [e.g. 3, 7, and references therein]. 
For both the baseline and fire plumes configurations, we run multiple times the radiative transfer simulations at different solar zenith angles (SZA). The daily-average shortwave TOA clear-sky RF for the fire-perturbed aerosol layer is calculated as the difference in SZA-averaged upward diffuse irradiance between the baseline simulation (i.e. without the considered aerosols) and the simulation with aerosols, integrated over the whole shortwave spectral range. The shortwave surface clear-sky RF is calculated as the difference in the SZA-average downward global (direct plus diffuse) irradiance between the simulation with aerosols and the baseline simulation, integrated over the whole spectral range. The clear-sky RF is estimated for both January and February 2020, by comparing with baseline layers of January and February 2019, respectively. Different latitude bands are considered separately, 15 to 25\\({}^{\\circ}\\)S, 25 to 60\\({}^{\\circ}\\)S and 60 to 80\\({}^{\\circ}\\)S. We exclude the latitude band 80 to 90\\({}^{\\circ}\\)S because OMPS observations are not available in this band. It is supposed that the Australian fires have no impact in the northern hemisphere. The equinox-equivalent daily-average clear-sky RF at TOA and surface due to Australian fire perturbations to the stratospheric aerosol layer, averaged over the months of January and February, is shown in Supplementary Figure 2, for the three latitude bands mentioned above. In the latitude band between 25 and 60\\({}^{\\circ}\\)S, an average monthly radiative forcing (RF) as large as about -1.0 W m\\({}^{-2}\\) at TOA and -3.0 W m\\({}^{-2}\\) at the surface is found in February 2020, that can be attributed to the Australian fires plumes perturbation of the stratospheric aerosol layer. Lower RF values are found at more northern and southern latitude bands, in the Southern Hemisphere. The RF in January are 30 to 50% lower than in February. Based on these clear-sky estimations with detailed radiative transfer calculations, all-sky RF can be derived using the average monthly cloud fraction, in the same latitude bands defined above, and the parameterisation of [8]. Typical cloud fractions in the area affected by the plume are between 40 and 50%, at lower latitudes, up to more than 70%, at the highest latitudes in the Southern Hemisphere [9]. Based on that, an all-sky RF reduced to about 50% of the clear-sky RF, is likely. Using these regional (latitudinal-limited) RF estimations, we calculate the corresponding area-weighted global-equivalent RF, for January and February 2020. Values are given in Supplementary Table 1, for the detailed clear-sky reference runs. The area-weighted global-equivalent clear-sky RF due to the Australian fires plumes peaks in February, with values as large as -0.31\\(\\pm\\)0.09 W m\\({}^{-2}\\), at TOA, and -0.98\\(\\pm\\)0.17 W m\\({}^{-2}\\), at the surface. Values as large as -0.2 W m\\({}^{-2}\\), at TOA, and -0.5 W m\\({}^{-2}\\), at the surface, are expected in all-sky conditions. Due to the mentioned spatial limitation of our analyses (we exclude a possible impact in the northern hemisphere and between 80 and 90\\({}^{\\circ}\\)S), these global-equivalent clear-sky RF might be slightly underestimated. The all-sky RF, being obtained with a simple parameterisation in terms of the integrated cloud fraction, must be taken with caution and are to be considered a first estimation of the overall RF of Australian fires 2019/2020. 
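To make the averaging and area-weighting steps described above concrete, here is a minimal sketch of the bookkeeping, assuming the radiative transfer runs have already produced broadband (300-3000 nm) irradiances for each solar zenith angle. The function names and the numbers in the usage lines are hypothetical and are not taken from the study; the equal-weight mean over SZA runs stands in for the proper equinox-equivalent diurnal averaging.

```python
import numpy as np

def sza_averaged_rf(up_diffuse_pert, up_diffuse_base,
                    down_global_pert, down_global_base):
    """Clear-sky RF from a set of runs at different solar zenith angles (SZA).

    Inputs are 1-D arrays of broadband irradiances (W m-2), one value per SZA run.
    """
    # TOA RF: baseline minus perturbed upward diffuse irradiance (negative = cooling)
    rf_toa = np.mean(up_diffuse_base) - np.mean(up_diffuse_pert)
    # Surface RF: perturbed minus baseline downward global (direct + diffuse) irradiance
    rf_surf = np.mean(down_global_pert) - np.mean(down_global_base)
    return rf_toa, rf_surf

def global_equivalent_rf(band_rf, band_edges_deg):
    """Area-weighted global-equivalent RF from regional (latitude-band) values.

    band_rf: RF per band (W m-2); band_edges_deg: (lat1, lat2) pairs in degrees.
    Latitude bands not listed are assumed unperturbed (RF = 0).
    """
    total = 0.0
    for rf, (lat1, lat2) in zip(band_rf, band_edges_deg):
        # fraction of the globe's surface area contained in the band
        frac = abs(np.sin(np.radians(lat2)) - np.sin(np.radians(lat1))) / 2.0
        total += rf * frac
    return total

# Hypothetical usage with made-up regional values (the bands match the text above):
bands = [(-25.0, -15.0), (-60.0, -25.0), (-80.0, -60.0)]
print(global_equivalent_rf([-0.2, -1.0, -0.4], bands))
```

The sign conventions follow the definitions above, so negative values indicate a cooling effect at the top of the atmosphere or at the surface.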
Work is ongoing to explicitly simulate the radiative impact of clouds, in our simulations, and to incorporate measured clouds properties. ## Supplementary Note 2 Late evolution of the smoke bubble from CALIOP On 7 March, the aerosol bubble was over the Atlantic and entered the region of the South Atlantic Anomaly where CALIOP retrieval is too noisy to allow a detection [10]. It emerged about one week later in the Pacific where it could be detected again from 14 March. In the mean time, it was still followed by OMPS (see Fig. 10c of the main text). Supplementary Figure 3 shows a series of CALIOP sections after this date from 16 March until 4 April 2020. Between 28 March and 1 April the bubble split in two parts due to vertical shear, as it occurred by the end of February. The top part was followed for a few more days until 4 April when it reached 55\\({}^{\\circ}\\)E after which it was lost in CALIOP data. The bottom part travelled westward but slower and dispersed over the Indian Ocean. By 13 April, a number of patches could still be seen between 30 and 32 km and 25\\({}^{\\circ}\\)S-30\\({}^{\\circ}\\)S over the Indian Ocean (not shown). Supplementary Figure 5d shows that the IFS analysed vortex after 16 March is much weaker than during the period displayed in Fig. 5a of the main text. It is also less compact and stands out much less relative to the environment, as shown by the extension of the half maximum vorticity contour in Supplementary Figure 3. By 1 April a collocated vorticity maximum could still be detected in the IFS analysis, but could hardly be considered as a vortex. In other words, even if the presence of a vortical structure is suggested from the persisting confinement, the IFS fails to track it as the temperature disturbance gets too weak to be identified by the observing systems. This is consistent with the loss of the GPS-RO temperature signal seen in Fig. 5d of the main text. ## Supplementary Note 3 Vortices and tracer confinement as seen from TROPOMI On the top of TROPOMI AI data (see Methods) used to follow the aerosol plume, we also look at CO and O\\({}_{3}\\) tracers to investigate the impact of the fires on stratospheric gaseous composition. ### Supplementary Note 3.1 CO columnar content TROPOMI provides total CO vertical columns, exploiting clear-sky and cloudy-sky Earth radiance measurements. The retrieval of the CO content is based on the Shortwave Infrared CO Retrieval (SICOR) algorithm [11, 12, 13]. The algorithm takes in account the sensitivity of the CO measurements to the atmospheric scattering due to cloud presence employing a two-stream radiative transfer solver. The algorithm is an evolution of the CO retrieval algorithm used for the Scanning Imaging Absorption Spectrometer for Atmospheric Chatrography (SCIAMACHY)[14] but with a specific improvement in the CO retrieval for cloudy and aerosol loaded atmospheres. The CO total column densities are retrieved simultaneously with effective cloud parameters (cloud optical thickness and cloud center height) by means of a scattering forward simulation. The inversion exploits the monthly vertical profiles of CO, spatially averaged on a 3x2 degrees grid, from the chemical transport model TMS [15] for a profile scaling approach. The algorithm has been extensively tested against SCIAMACHY, which covers the same spectral region with the same spectral resolution as TROPOMI but with a lower signal-to-noise ratio, lower radiometric accuracy, and lower spatial resolution [16, 17]. 
More details on the CO columnar product can be found in [18]. ### Supplementary Note 3.2 Ozone columnar content The TROPOMI Ozone level 2 offline data are retrieved using the GODFIT (GOME-type Direct FITting) algorithm. This is based on the tuning of the simulated radiances in the Huggins bands (fitting window: 325-335 nm) by varying some of the key atmospheric parameters in the state vector to better fit the observations. These parameters include the total ozone, the effective scene albedo and the effective temperature. This approach gives improved retrievals accuracy with respect to the classical Differential Optical Absorption Spectroscopy approach under extreme geophysical conditions as large ozone optical depths. In the offline version, the data are then filtered and kept only if specific criteria are fullfilled (total column density positive but less than 1008.52 DU, respective ozone effective temperature variable greater than 180 K but less than 260 K, ring scale factor positive but less than 0.15, effective albedo is greater than -0.5 but less than 1.5 [19]. More details on the algorithm and on the quality of the datasets can be found in [20, 21, 22]. ### Supplementary Note 3.3 Aerosol and tracers confinement in the vortex Supplementary Figure 4 shows the anomaly in the atmospheric composition linked to the vortex on two different days after the plume injection (17 January and 3 February in 4a and 4b, respectively). On both days the satellite images indicate an alteration of the atmospheric composition at the same location as the vortex position identified bythe IFS. Supplementary Figure 4a captures the vortex 18 days after the plume injection. Besides the clear aerosol confinement inside the vortex, the compact structure is very visible in the CO total column with values that are enhanced in the vortex with respect to the surrounding background (\\(\\,2\\cdot 10^{22}\\) mol-m\\({}^{-2}\\) with respect to a background lower than \\(1\\cdot 10^{22}\\) mol-m\\({}^{-2}\\)). The IFS vortex centroid is on this day located at 19.8 km and the O\\({}_{3}\\) depletion caused by the atmospheric composition perturbation is also visible on the total column, with a reduction of 60 DU with respect to the surroundings (that reach values of around 330 DU). On the 3 of February (15 days after) the vortex is higher in altitude (centroid from IFS at 24.3 km) and still keeps its confinement power. The vorticity patch still matches the high AI value and CO columnar content enhancement, although with lower gradients with respect to the previous days. This is due to the natural dilution, decaying and leaking of the tracers and the aerosol content. The ozone mini hole, on the other hand, has increasing amplitude with respect to the previous days, with a reduction of 75 DU with respect to the surroundings (around 325 DU). In this day the ozone hole was the most evident, mostly due to the increase on the surrounding O\\({}_{3}\\) values. In the following days the vortex kept gradually losing the aerosol and tracer confinement toward the last days of February when TROPOMI lost the ability to clearly distinguish the vortex in both tracers and aerosol images. This corresponds to a split of the aerosol bubble in two parts as observed by CALIOP (see Fig. 5a of the main text) ### Hemispheric impact of the aerosol plume Supplementary Figure 4c shows the time composite of the daily TROPOMI AI from 6 January, the day following the second big injection of the smoke plumes. 
The chart depicts the time evolution of the aerosol plumes released in the atmosphere and clearly shows that the perturbation affected a large fraction of the southern hemisphere. Most of the plume spread and dispersed over the portion of the Pacific Ocean between Australia and South America. Nevertheless, parts of the plumes detached from the main one, forming various bubbles of aerosol and sometimes generating vortices that travelled all around the hemisphere (see Fig. 10c of the main text). The daily track of the aerosol centroid for the two main bubbles shows indeed how both moved in different directions and at different speeds, transporting part of the injected material through all the longitudes. While the main vortex moved slowly, perturbing mostly the southern Pacific for around one month and a half, the second vortex made a complete tour around the globe in about three weeks. In addition to those main events, other smaller bubbles were identified and are discussed in the main text (Figs. 8 and 9).

## Supplementary Note 4 Vortex in the IFS

### Supplementary Note 4.1 Time evolution

Supplementary Figure 5 shows the temporal evolution of the vortex in the IFS analysis and of the aerosol bubble from CALIOP data. The results illustrate how the bubble trajectory was closely followed with no significant deviation during the whole period. The path from 4 January to 1 April 2020 completed more than one and a half times the Earth's circumference (66,000 km during 88 days). The time evolution of the potential temperature and of the vorticity are displayed in Fig. 7 of the main text.

### Supplementary Note 4.2 Forcing by assimilation of the observations

Numerical weather prediction requires good knowledge of the initial state of the atmosphere, land and ocean to provide the starting point for the forecast model. This is achieved by the data assimilation process that adjusts a short-range forecast (typically 6-12 hours) to be in closer agreement with available observations. ECMWF uses a 4-dimensional variational analysis method [23] that assimilates around 25 million observations every 12 hours to perform this adjustment towards the true state of the atmosphere. Supplementary Figure 6 shows examples of the observation departures in a latitude/longitude box around the vortex. Supplementary Figure 6 panels a and c-f show the vertical distribution of observation-minus-background (o-b, black curves) and observation-minus-analysis (o-a, red curves). The left panels show the random component (standard deviation) and the right panels show the systematic error (bias). The data assimilation scheme extracts useful information from the observations if the random errors are smaller in the analysis (red curves) and/or the bias is reduced (closer to the zero line). It is evident that the analysis of the vortex is improved by using the data from GPS radio-occultations (panel a), radiosonde temperatures and winds from the Falkland Islands (panels c, d) and IASI radiances sensitive to longwave temperatures (panel e) and ozone (panel f). Panel b shows that GOME-2 ozone measurements from METOP also contributed to the improved ozone analysis near the vortex.

## Supplementary Note 5 Ozone and temperature

Supplementary Figure 7a-b shows the evolution of ozone in the ECMWF analysis in the same way as Fig. 3a-b of the main text. Ozone is depleted with respect to the environment by as much as 3.5 mg kg\\({}^{-1}\\) (or 2.1 ppmv, see also Fig. 3c of the main text). We saw above in Supplementary Note 4.2 that this depletion is maintained by the assimilation of satellite observations.
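As a quick consistency check of the quoted conversion (3.5 mg kg\\({}^{-1}\\) corresponding to about 2.1 ppmv), the mass mixing ratio can be converted to a volume mixing ratio using the molar masses of dry air and ozone. The short sketch below is only this arithmetic and is not part of the original analysis.

```python
# Convert an ozone mass mixing ratio (mg per kg of dry air) to ppmv.
M_AIR = 28.97   # g/mol, dry air
M_O3 = 48.00    # g/mol, ozone

mmr_mg_per_kg = 3.5
vmr_ppmv = mmr_mg_per_kg * 1e-6 * (M_AIR / M_O3) * 1e6
print(round(vmr_ppmv, 2))   # ~2.11, consistent with the 2.1 ppmv quoted above
```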
Like vorticity, the ozone distribution sways according to the deformations of the vortex and keeps an ovoid shape in the vertical plane during all the displayed period. Supplementary Figure 7c shows that the temperature distribution maintains instead a dipolar structure, with a cold pole above and a warm pole below, during the entire period of the vortex evolution. It is noticeable that in the displayed sections it is the separation line rather that the axis between the warm and the cold pole that aligns with the vortex vorticity and ozone axis. ## References * [1] Emde, C., Buras-Schnell, R., Kylling, A., Mayer, B., Gasteiger, J., Hamann, U., Kylling, J., Richter, B., Pause, C., Dowling, T. & Bugliaro, L. The libRadtran software package for radiative transfer calculations (version 2.0.1). _Geoscientific Model Development_**9,** 1647-1672 (2016). * [2] Sellitto, P., di Sarra, A., Corradini, S., Boichu, M., Herbin, H., Dubuisson, P., Seze, G., Meloni, D., Monteleone, F., Merucci, L., Rusalem, J., Salerno, G., Briole, P. & Legras, B. Synergistic Use of Lagrangian Dispersion and Radiative Transfer Modelling with Satellite and Surface Remote Sensing Measurements for the Investigation of Volcanic Plumes: The Mount Etna Eruption of 25-27 October 2013. _Atmospheric Chemistry and Physics_**16,** 6841-6861 (2016). * [3] Kloss, C., Berthet, G., Sellitto, P., Ploeger, F., Bucci, S., Khaykin, S., Jegou, F., Taha, G., Thomason, L. W., Barret, B., Le Flochmoen, E., von Hobe, M., Bossolasco, A., Begue, N. & Legras, B. Transport of the 2017 Canadian Wildfire Plume to the Tropics via the Asian Monsoon Circulation. _Atmospheric Chemistry and Physics_**19,** 13547-13567 (2019). * [4] Kurucz, R. L. in _Infrared Solar Physics_ (eds Rabin, D. M., Jefferies, J. T. & Lindsey, C.) _IAUS Book Series_**154**, 523-531 (Springer Netherlands, Dordrecht, 1994). ISBN: 978-0-7923-2523-9. doi:10.1007/978-94-011-1926-9.62. <[http://link.springer.com/10.1007/978-94-011-1926-9.62](http://link.springer.com/10.1007/978-94-011-1926-9.62)>. * [5] Anderson, G. P., Clough, S. A., Kneizys, F. X., Chetwynd, J. H. & Shettle, E. P. _AFGL Atmospheric Constituent Profiles (0-120 Km)_**964** (Air Force Geophysics Laboratory, 1986). <[http://www.dtic.mil/cgi-bin/GetTRDoc?](http://www.dtic.mil/cgi-bin/GetTRDoc?) AD=ADA175173>. * [6] Briegleb, B. & Ramanathan, V. Spectral and Diurnal Variations in Clear Sky Planetary Albedo. _Journal of Applied Meteorology_**21**, 1160-1171 (1982). * [7] Ditas, J., Ma, N., Zhang, Y., Assmann, D., Neumaier, M., Riede, H., Karu, E., Williams, J., Scharffe, D., Wang, Q., Saturno, J., Schwarz, J. P., Katich, J. M., McMeeking, G. R., Zahn, A., Hermann, M., Brenninkmeijer, C. A. M., Andreae, M. O., Poschl, U., Su, H. & Cheng, Y. Strong Impact of Wildfires on the Abundance and Aging of Black Carbon in the Lowermost Stratosphere. _Proceedings of the National Academy of Sciences_**115,** E11595-E11603 (2018). * [8] Andersson, S. M., Martinsson, B. G., Vernier, J.-P., Friberg, J., Brenninkmeijer, C. A. M., Hermann, M., van Velthoven, P. F. J. & Zahn, A. Significant radiative impact of volcanic aerosol in the lowermost stratosphere. _Nature Communications_**6,** 7692 (2015). * [9] King, M. D., Platnick, S., Menzel, W. P., Ackerman, S. A. & Hubanks, P. A. Spatial and Temporal Distribution of Clouds Observed by MODIS Onboard the Terra and Aqua Satellites. _IEEE Transactions on Geoscience and Remote Sensing_**51,** 3826-3852 (2013). * [10] Noel, V., Chepfer, H., Hoareau, C., Reverdy, M. & Cesana, G. 
Effects of Solar Activity on Noise in CALIOP Profiles above the South Atlantic Anomaly. _Atmospheric Measurement Techniques_**7,** 1597-1603 (2014). * [11] Vidot, J., Landgraf, J., Hasekamp, O., Butz, A., Galli, A., Tol, P. & Aben, I. Carbon monoxide from shortwave infrared reflectance measurements: A new retrieval approach for clear sky and partially cloudy atmospheres. _Remote Sensing of Environment_**120,** 255-266 (2012). * [12] Landgraf, J., Aan de Brugh, J., Scheepmaker, R., Borsdorff, T., Hu, H., Houweling, S., Butz, A., Aben, I. & Hasekamp, O. Carbon monoxide total column retrievals from TROPOMI shortwaveinfrared measurements. _Atmospheric Measurement Techniques_**9,** 4955-4975 (2016). * [13] Landgraf, J., aan de Brugh, J., Borsdorff, T., Houweling, S. & Hasekamp, O. _Algorithm theoretical baseline document for Sentinel-5Precursor: Carbon monoxide total column retrieval_ SRON-SSP-LEV2-RP-002, Cl-7430-ATBD, V1.10 (SRON, Utrecht, The Netherlands, 2018). <[http://www.tropomi.eu/sites/default/files/files/SRON_SSP_LEV2_RP_002_issue1.10_CO_signed.pdf](http://www.tropomi.eu/sites/default/files/files/SRON_SSP_LEV2_RP_002_issue1.10_CO_signed.pdf)>. * [14] Gloudemans, A. M. S., de Laat, A. T. J., Schrijver, H., Aben, I., Meirink, J. F. & van der Werf, G. R. SCIAMACHY CO over land and oceans: 2003-2007 interannual variability. _Atmospheric Chemistry and Physics_**9**, 3799-3813 (2009). * [15] Krol, M., Houweling, S., Bregman, B., van den Broek, M., Segers, A., van Velthoven, P., Peters, W., Dentener, F. & Bergamaschi, P. The two-way nested global chemistry-transport zoom model TM5: algorithm and applications. _Atmospheric Chemistry and Physics_**5,** 417-432 (2005). * [16] Borsdorff, T., Tol, P., Williams, J. E., de Laat, J., aan de Brugh, J., Nedelec, P., Aben, I. & Landgraf, J. Carbon monoxide total columns from SCIAMACHY 2.3 \\(\\mu\\)m atmospheric reflectance measurements: towards a full-mission data product (2003-2012). _Atmospheric Measurement Techniques_**9,** 227-248 (2016). * [17] Borsdorff, T., aan de Brugh, J., Hu, H., Nedelec, P., Aben, I. & Landgraf, J. Carbon monoxide column retrieval for clear-sky and cloudy atmospheres: a full-mission data set from SCIAMACHY 2.3 \\(\\mu\\)m reflectance measurements. _Atmospheric Measurement Techniques_**10**, 1769-1782 (2017). * [18] Borsdorff, T., aan de Brugh, J., Hu, H., Aben, I., Hasekamp, O. & Landgraf, J. Measuring Carbon Monoxide With TROPOM: First Results and a Comparison With ECMWF-IFS Analysis Data. _Geophysical Research Letters_**45**, 2826-2832 (2018). * [19] Lerot, C., Heue, K.-P., Verhoelst, T., Lambert, J.-C., Balis, D., Garane, K., Granville, G., Koukouili, M.-E., Loyola, D., Romanh, F., Van Roozendael, M., Xu, J., Zimmer, W., Bazureau, A., Fioletov, V., Goutail, F., Pommereau, J.-P., Zerefos, C., Pazmino, A., McLinden, C., Saavedra de Miguel, L., Dehn, A. & Zehner, C. _SSP Mission performance Centre Product Readme OFFL Total Ozone_ S5P-MPC-BIRA-PRF-O3-OFFL, V01.01.07 (2019). <[http://www.tropomi.eu/documents/prf](http://www.tropomi.eu/documents/prf)>. * [20] Lerot, C., Van Roozendael, M., Spurr, R., Loyola, D., Coldewey-Egbers, M., Kochenova, S., van Gent, J., Koukouili, M., Balis, D., Lambert, J.-C., Granville, J. & Zehner, C. Homogenized total ozone data records from the European sensors GOME/ERS-2, SCIAMACHY/Ernvisat, and GOME-2/MetOp-A: CCI REPROCESSED TOTAL OZONE DATA SETS. _Journal of Geophysical Research: Atmospheres_**119**, 1639-1662 (2014). * Part 1: Ground-based validation of total ozone column data products. 
_Atmospheric Measurement Techniques_**11**, 1385-1402 (2018). * [22] Garane, K., Koukouili, M.-E., Verhoelst, T., Lerot, C., Heue, K.-P., Fioletov, V., Balis, D., Bais, A., Bazureau, A., Dehn, A., Goutail, F., Granville, J., Griffin, D., Hubert, D., Keppens, A., Lambert, J.-C., Loyola, D., McLinden, C., Pazmino, A., Pommereau, J.-P., Redondas, A., Romanh, F., Valkes, P., Van Roozendael, M., Xu, J., Zehner, C., Zerefos, C. & Zimmer, W. TROPOM/SSP total ozone column data: global ground-based validation and consistency with other satellite missions. _Atmospheric Measurement Techniques_**12**, 5263-5287 (2019). * [23] Rabier, F., Jarvinen, H., Klinker, E., Mahfouf, J.-F. & Simmons, A. The ECMWF Operational Implementation of Four-Dimensional Variational Assimilation. I: Experimental Results with Simplified Physics. _Quarterly Journal of the Royal Meteorological Society_**126**, 1143-1170. issn: 00359009, 1477870X (2000).
The Australian bushfires around the turn of the year 2020 generated an unprecedented perturbation of stratospheric composition, dynamical circulation and radiative balance. Here we show from satellite observations that the resulting planetary-scale blocking of solar radiation by the smoke is larger than any previously documented wildfires and of the same order as the radiative forcing produced by moderate volcanic eruptions. A striking effect of the solar heating of an intense smoke patch was the generation of a self-maintained anticyclonic vortex measuring 1000 km in diameter and featuring its own ozone hole. The highly stable vortex persisted in the stratosphere for over 13 weeks, travelled 66,000 km and lifted a confined bubble of smoke and moisture to 35 km altitude. Its evolution was tracked by several satellite-based sensors and was successfully resolved by the European Centre for Medium-Range Weather Forecasts operational system, primarily based on satellite data. Because wildfires are expected to increase in frequency and strength in a changing climate, we suggest that extraordinary events of this type may contribute significantly to the global stratospheric composition in the coming decades.
# Asymptotic analysis of the GI/M/1/n loss system as n increases to infinity

Vyacheslav M. Abramov ([email protected]), 24/6 Balfour st., Petach Tiqva 49350, Israel

## 1 Introduction

Consider the \\(GI/M/1/n\\) queueing system, denoting by \\(A(x)\\) the probability distribution function of the interarrival time and by \\(\\lambda\\) the reciprocal of the expected interarrival time, \\(\\alpha(s)=\\int_{0}^{\\infty}\\mathrm{e}^{-sx}\\mathrm{d}A(x)\\). The parameter of the service time distribution will be denoted by \\(\\mu\\), and the load of the system is \\(\\rho=\\lambda/\\mu\\). The buffer size \\(n\\) includes the position for the customer in service. Denote also \\(\\rho_{m}=\\mu^{m}\\int_{0}^{\\infty}x^{m}\\mathrm{d}A(x)\\), \\(m=1,2,\\ldots\\), \\((\\rho_{1}=\\rho^{-1})\\). The explicit representation for the loss probability in terms of a generating function was obtained by Miyazawa [12]. Namely, he showed that, whatever the value of the load \\(\\rho\\), the loss probability \\(p_{n}\\) always exists and has the representation \\[p_{n}=\\frac{1}{\\sum_{j=0}^{n}\\pi_{j}}, \\tag{1.1}\\] where the generating function \\(\\Pi(z)\\) of \\(\\pi_{j}\\), \\(j=0,1,\\ldots\\), is given by \\[\\Pi(z)=\\sum_{j=0}^{\\infty}\\pi_{j}z^{j}=\\frac{(1-z)\\alpha(\\mu-\\mu z)}{\\alpha(\\mu-\\mu z)-z},\\ \\ |z|<\\sigma, \\tag{1.2}\\] and \\(\\sigma\\) is the minimum nonnegative solution of the functional equation \\(z=\\alpha(\\mu-\\mu z)\\). This solution belongs to the open interval (0,1) if \\(\\lambda<\\mu\\), and it is equal to 1 otherwise. In the recent papers, Choi and Kim [8] and Choi et al. [9] study questions related to the asymptotic behavior of the sequence \\(\\{\\pi_{j}\\}\\) as \\(j\\to\\infty\\). Namely, they study the asymptotic behavior of the loss probability \\(p_{n}\\), \\(n\\to\\infty\\), as well as obtain the convergence rate of the stationary distributions of the \\(GI/M/1/n\\) queueing system to those of the \\(GI/M/1\\) queueing system as \\(n\\to\\infty\\). The analysis of [8] and [9] is based on the theory of analytic functions. The approach of this paper is based on Tauberian theorems with remainder, which permit us to simplify the proofs of the results of Choi et al. [9] as well as to obtain some new results on the asymptotic behavior of the loss probability. For the asymptotic behavior of the loss probability in the \\(M/GI/1/n\\) queue, see Abramov [1], [2], Asmussen [6], Takagi [16], Tomko [17], Willmot [18] etc. For the asymptotic analysis of queueing systems more general than \\(M/GI/1/n\\), see Abramov [3], Baiocchi [7] etc. The study of the loss probability and its asymptotic analysis is motivated by the growing development of communication systems. The results of our study can be applied to problems of flow control, performance evaluation, and redundancy. For applications of the loss probability to such problems, see Ait-Hellal et al. [4], Altman and Jean-Marie [5], Cidon et al. [10], Gurewitz et al. [11].

## 2 Auxiliary results. Tauberian theorems

In this section we present the asymptotic results of Takacs [15, pp. 22-23] (see Lemma 2.1 below) and the Tauberian theorems of Postnikov [13, Section 25] (see Lemmas 2.2 and 2.3 below). Let \\(Q_{j}\\), \\(j=0,1,\\ldots\\), be a sequence of real numbers satisfying the recurrence relation \\[Q_{n}=\\sum_{j=0}^{n}r_{j}Q_{n-j+1}, \\tag{2.1}\\] where \\(r_{j}\\), \\(j=0,1,\\ldots\\), are nonnegative numbers, \\(r_{0}>0\\), \\(r_{0}+r_{1}+\\cdots=1\\), and \\(Q_{0}>0\\) is an arbitrary real number.
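Before stating the lemmas, it may help to see the recursion (2.1) in computational form. The following Python fragment is only an illustrative sketch: the function name and the Poisson-distributed weights \\(r_{j}\\) are hypothetical choices, not taken from the paper, and the recursion is simply solved for its highest-index term.

```python
import math

def generate_Q(r, Q0, N):
    """Generate Q_1..Q_N from the recursion Q_n = sum_{j=0}^n r_j Q_{n-j+1}.

    Solving for the highest-index term gives
        Q_{n+1} = (Q_n - sum_{j=1}^{n} r_j Q_{n-j+1}) / r_0,
    and the n = 0 case gives Q_1 = Q_0 / r_0.
    """
    Q = [Q0]
    for n in range(N):
        tail = sum(r[j] * Q[n - j + 1] for j in range(1, n + 1))
        Q.append((Q[n] - tail) / r[0])
    return Q

# Hypothetical example: Poisson(a) weights r_j = e^{-a} a^j / j!, so gamma_1 = a.
a = 0.5
N = 60
r = [math.exp(-a) * a**j / math.factorial(j) for j in range(N + 2)]
Q = generate_Q(r, Q0=1.0, N=N)
print(Q[-1])   # approaches Q0 / (1 - gamma_1) = 2.0 for a = 0.5
```

With these weights \\(\\gamma_{1}=a<1\\), so the printed value approaches \\(Q_{0}/(1-\\gamma_{1})=2\\), which is exactly the limit given by Lemma 2.1 below.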
Denote \\(r(z)=\\sum_{j=0}^{\\infty}r_{j}z^{j}\\), \\(|z|\\leq 1\\), \\(\\gamma_{m}=r^{(m)}(1-0)=\\lim_{z\\uparrow 1}r^{(m)}(z)\\), where \\(r^{(m)}(z)\\) is the \\(m\\)th derivative of \\(r(z)\\). Then for \\(Q(z)=\\sum_{j=0}^{\\infty}Q_{j}z^{j}\\), the generating function of \\(Q_{j}\\), \\(j=0,1, \\), we have the following representation \\[Q(z)=\\frac{Q_{0}r(z)}{r(z)-z}. \\tag{2.2}\\] The statements below are known theorems on asymptotic behavior of the sequence \\(\\{Q_{j}\\}\\) as \\(j\\to\\infty\\). Lemma 2.1 below joins two results by Takacs [15]: Theorem 5 on p. 22 and relation (35) on p. 23. **Lemma 2.1** (Takacs [15]). _If \\(\\gamma_{1}<1\\) then_ \\[\\lim_{n\\to\\infty}Q_{n}=\\frac{Q_{0}}{1-\\gamma_{1}}.\\] _If \\(\\gamma_{1}=1\\) and \\(\\gamma_{2}<\\infty\\) then_ \\[\\lim_{n\\to\\infty}\\frac{Q_{n}}{n}=\\frac{2Q_{0}}{\\gamma_{2}}.\\] _If \\(\\gamma_{1}>1\\) then_ \\[\\lim_{n\\to\\infty}\\left(Q_{n}-\\frac{Q_{0}}{\\delta^{n}[1-r^{\\prime}(\\delta)]} \\right)=\\frac{Q_{0}}{1-\\gamma_{1}},\\] _where \\(\\delta\\) is the least (absolute) root of equation \\(z=r(z)\\)._ **Lemma 2.2** (Postnikov [13]). _Let \\(\\gamma_{1}=1\\) and \\(\\gamma_{3}<\\infty\\). Then as \\(n\\to\\infty\\)_ \\[Q_{n}=\\frac{2Q_{0}}{\\gamma_{2}}n+O(\\log n).\\] **Lemma 2.3** (Postnikov [13]). _Let \\(\\gamma_{1}=1\\), \\(\\gamma_{2}<\\infty\\) and \\(r_{0}+r_{1}<1\\). Then as \\(n\\to\\infty\\)_ \\[Q_{n+1}-Q_{n}=\\frac{2Q_{0}}{\\gamma_{2}}+o(1).\\] ## 3 The main results on asymptotic behavior of the loss probability Let us study (1.1) and (1.2) more carefully. Represent (1.2) as the difference of two terms \\[\\Pi(z)=\\frac{(1-z)\\alpha(\\mu-\\mu z)}{\\alpha(\\mu-\\mu z)-z}=\\frac{\\alpha(\\mu- \\mu z)}{\\alpha(\\mu-\\mu z)-z}-z\\frac{\\alpha(\\mu-\\mu z)}{\\alpha(\\mu-\\mu z)-z}\\] \\[=\\widetilde{\\Pi}(z)-z\\widetilde{\\Pi}(z), \\tag{3.1}\\] where \\[\\widetilde{\\Pi}(z)=\\sum_{j=0}^{\\infty}\\widetilde{\\pi}_{j}z^{j}=\\frac{\\alpha( \\mu-\\mu z)}{\\alpha(\\mu-\\mu z)-z}. \\tag{3.2}\\] Note also that \\[\\pi_{0}=\\widetilde{\\pi}_{0}=1,\\]\\[\\pi_{j+1}=\\widetilde{\\pi}_{j+1}-\\widetilde{\\pi}_{j},\\quad j\\geq 0. \\tag{3.3}\\] Therefore, \\[\\sum_{j=0}^{n}\\pi_{j}=\\widetilde{\\pi}_{n},\\] and \\[p_{n}=\\frac{1}{\\widetilde{\\pi}_{n}}. \\tag{3.4}\\] Now the application of Lemma 2.1 yields the following **Theorem 3.1.**_In the case where \\(\\rho<1\\) as \\(n\\to\\infty\\) we have_ \\[p_{n}=\\frac{(1-\\rho)[1+\\mu\\alpha^{\\prime}(\\mu-\\mu\\sigma)]\\sigma^{n}}{1-\\rho- \\rho[1+\\mu\\alpha^{\\prime}(\\mu-\\mu\\sigma)]\\sigma^{n}}+o(\\sigma^{2n}). \\tag{3.5}\\] _In the case where \\(\\rho_{2}<\\infty\\) and \\(\\rho=1\\) we have_ \\[\\lim_{n\\to\\infty}np_{n}=\\frac{\\rho_{2}}{2}. \\tag{3.6}\\] _In the case where \\(\\rho>1\\) we have_ \\[\\lim_{n\\to\\infty}p_{n}=\\frac{\\rho-1}{\\rho}. \\tag{3.7}\\] **Proof.** Indeed, it follows from (3.1), (3.2) and (3.3) that \\(\\widetilde{\\pi}_{0}=1\\), and \\[\\widetilde{\\pi}_{k}=\\sum_{i=0}^{k}\\frac{(-\\mu)^{i}}{i!}\\alpha^{(i)}(\\mu) \\widetilde{\\pi}_{k-i+1}, \\tag{3.8}\\] where \\(\\alpha^{(i)}(\\mu)\\) denotes the \\(i\\)th derivative of \\(\\alpha(\\mu)\\). Note also that \\(\\alpha(\\mu)>0\\), the terms \\((-\\mu)^{i}\\alpha^{(i)}(\\mu)/i!\\) are nonnegative for all \\(i\\geq 1\\), and \\[\\sum_{i=0}^{\\infty}\\frac{(-\\mu)^{i}}{i!}\\alpha^{(i)}(\\mu)=\\sum_{i=0}^{\\infty} \\int_{0}^{\\infty}\\mathrm{e}^{-\\mu x}\\frac{(\\mu x)^{i}}{i!}\\mathrm{d}A(x)\\] \\[=\\int_{0}^{\\infty}\\sum_{i=0}^{\\infty}\\mathrm{e}^{-\\mu x}\\frac{(\\mu x)^{i}}{i! 
}\\mathrm{d}A(x)=1. \\tag{3.9}\\] Therefore one can apply Lemma 2.1. Then in the case of \\(\\rho<1\\) one can write \\[\\lim_{n\\to\\infty}\\Big{(}\\widetilde{\\pi}_{n}-\\frac{1}{\\sigma^{n}[1+\\mu\\alpha^{ \\prime}(\\mu-\\mu\\sigma)]}\\Big{)}=\\frac{\\rho}{\\rho-1}, \\tag{3.10}\\] and for large \\(n\\) relation (3.10) can be rewritten in the form of the estimation \\[\\widetilde{\\pi}_{n}=\\Big{[}\\frac{1}{\\sigma^{n}[1+\\mu\\alpha^{\\prime}(\\mu-\\mu \\sigma)]}+\\frac{\\rho}{\\rho-1}\\Big{]}[1+o(\\sigma^{n})]. \\tag{3.11}\\]In turn, from (3.11) for large \\(n\\) we obtain \\[p_{n} = \\frac{1}{\\bar{\\pi}_{n}}=\\frac{(1-\\rho)[1+\\mu\\alpha^{\\prime}(\\mu-\\mu \\sigma)]\\sigma^{n}}{1-\\rho-\\rho[1+\\mu\\alpha^{\\prime}(\\mu-\\mu\\sigma)]\\sigma^{n} }[1+o(\\sigma^{n})]\\] \\[= \\frac{(1-\\rho)[1+\\mu\\alpha^{\\prime}(\\mu-\\mu\\sigma)]\\sigma^{n}}{1- \\rho-\\rho[1+\\mu\\alpha^{\\prime}(\\mu-\\mu\\sigma)]\\sigma^{n}}+o(\\sigma^{2n}).\\] Thus (3.5) is proved. The limiting relations (3.6) and (3.7) follow immediately by application of Lemma 2.1. Theorem 3.1 is proved. The following two theorems improve limiting relation (3.6). From Lemma 2.2 we have the following **Theorem 3.2.**_Assume that \\(\\rho=1\\) and \\(\\rho_{3}<\\infty\\). Then as \\(n\\to\\infty\\)_ \\[p_{n}=\\frac{\\rho_{2}}{2n}+O\\Big{(}\\frac{\\log n}{n^{2}}\\Big{)}. \\tag{3.12}\\] **Proof.** The result follows immediately by application of Lemma 2.2. Subsequently, from Lemma 2.3 we have **Theorem 3.3.**_Assume that \\(\\rho=1\\) and \\(\\rho_{2}<\\infty\\). Then as \\(n\\to\\infty\\)_ \\[\\frac{1}{p_{n+1}}-\\frac{1}{p_{n}}=\\frac{2}{\\rho_{2}}+o(1). \\tag{3.13}\\] **Proof**. The theorem will be proved if we show that for all \\(\\mu>0\\) \\[\\alpha(\\mu)-\\mu\\alpha^{\\prime}(\\mu)<1. \\tag{3.14}\\] Taking into account (3.9) and the fact that \\((-\\mu)^{i}\\alpha^{(i)}(\\mu)/i!\\geq 0\\) for all \\(i\\geq 0\\), one can write \\[\\alpha(\\mu)-\\mu\\alpha^{\\prime}(\\mu)\\leq 1. \\tag{3.15}\\] Thus, we have to show that for some \\(\\mu_{0}>0\\) the equality \\[\\alpha(\\mu_{0})-\\mu_{0}\\alpha^{\\prime}(\\mu_{0})=1 \\tag{3.16}\\] is not a case. Indeed, since \\(\\alpha(\\mu)-\\mu\\alpha^{\\prime}(\\mu)\\) is an analytic function then, according to the theorem on maximum absolute value of analytic function, the equality \\(\\alpha(\\mu)-\\mu\\alpha^{\\prime}(\\mu)=1\\) is valid for all \\(\\mu>0\\). This means that (3.16) is valid if and only if \\(\\alpha^{(i)}(\\mu)=0\\) for all \\(i\\geq 2\\) and for all \\(\\mu>0\\), and therefore \\(\\alpha(\\mu)\\) is a linear function, i.e. \\(\\alpha(\\mu)=c_{0}+c_{1}\\mu\\), where \\(c_{0}\\) and \\(c_{1}\\) are some constants. However, since \\(|\\alpha(\\mu)|\\leq 1\\) we obtain \\(c_{0}=1\\), \\(c_{1}=0\\). This is a trivial case where the probability distribution function \\(A(x)\\) is concentrated in point \\(0\\). Therefore (3.16) is not a case, and hence (3.14) holds. Theorem 3.3 is proved. We have also the following **Theorem 3.4**. _Let \\(\\rho=1-\\epsilon\\), where \\(\\epsilon>0\\), and \\(\\epsilon n\\to C>0\\) as \\(n\\to\\infty\\) and \\(\\epsilon\\to 0\\). Assume that \\(\\rho_{3}=\\rho_{3}(n)\\) is a bounded function and there exists \\(\\widetilde{\\rho}_{2}=\\lim_{n\\to\\infty}\\rho_{2}(n)\\). Then,_ \\[p_{n}=\\frac{\\epsilon e^{-2C/\\widetilde{\\rho}_{2}}}{1-e^{-2C/\\widetilde{\\rho} _{2}}}[1+o(1)]. \\tag{3.17}\\] **Proof.** It was shown in Subhankulov [14, p. 
326] that if \\(\\rho^{-1}=1+\\epsilon\\), \\(\\epsilon>0\\) and \\(\\epsilon\\to 0\\), \\(\\rho_{3}(n)\\) is a bounded function, and there exists \\(\\widetilde{\\rho}_{2}=\\lim_{n\\to\\infty}\\rho_{2}(n)\\), then \\[\\sigma=1-\\frac{2\\epsilon}{\\widetilde{\\rho}_{2}}+O(\\epsilon^{2}), \\tag{3.18}\\] where \\(\\sigma=\\sigma(n)\\) is the minimum root of the functional equation \\(z-\\alpha(\\mu-\\mu z)=0\\), \\(|z|\\leq 1\\), and where the parameter \\(\\mu\\) and the function \\(\\alpha(z)\\), both or one of them, are assumed to depend on \\(n\\). Therefore, (3.18) is also valid under the assumptions of the theorem. Then after some algebra one can obtain \\[[1+\\mu\\alpha^{\\prime}(\\mu-\\mu\\sigma)]\\sigma^{n}=\\epsilon\\mathrm{e}^{-2C/\\widetilde{\\rho}_{2}}[1+o(1)],\\] and the result easily follows from estimate (3.11). **Theorem 3.5**. _Let \\(\\rho=1-\\epsilon\\), where \\(\\epsilon>0\\), and \\(\\epsilon n\\to 0\\) as \\(n\\to\\infty\\) and \\(\\epsilon\\to 0\\). Assume that \\(\\rho_{3}=\\rho_{3}(n)\\) is a bounded function and there exists \\(\\widetilde{\\rho}_{2}=\\lim_{n\\to\\infty}\\rho_{2}(n)\\). Then_ \\[p_{n}=\\frac{\\widetilde{\\rho}_{2}}{2n}+o\\Big{(}\\frac{1}{n}\\Big{)}. \\tag{3.19}\\] **Proof.** The proof follows by expanding the main term of asymptotic relation (3.17) for small \\(C\\). ## 4 Discussion We obtained a number of asymptotic results related to the loss probability for the \\(GI/M/1/n\\) queueing system by using Tauberian theorems with remainder. Asymptotic relations (3.6) and (3.7) of Theorem 3.1 are the same as the corresponding asymptotic relations of Theorem 3 of [9]. Asymptotic relation (3.5) of Theorem 3.1 improves the corresponding asymptotic relation of Theorem 3 of [9]; however, it can be deduced from Theorem 3.1 of [8] and the second equation on p. 1016 of [8]. Under the additional condition \\(\\rho_{3}<\\infty\\), the statement (3.12) of Theorem 3.2 is new. It improves the result of [9] under \\(\\rho=1\\): the remainder obtained in Theorem 3.2 is \\(O(\\log n/n^{2})\\), whereas under the condition \\(\\rho_{2}<\\infty\\) the remainder obtained in Theorem 3 of [9] is \\(o(n^{-1})\\). Asymptotic relation (3.13) of Theorem 3.3 coincides with an intermediate asymptotic relation on p. 441 of [9]. Theorems 3.4 and 3.5 are new. They provide asymptotic results where the load \\(\\rho\\) is close to 1. **Acknowledgement** The author thanks the anonymous referees for a number of valuable comments. **References** [1] V.M.Abramov, _Investigation of a Queueing System with Service Depending on Queue-Length_, (Donish, Dushanbe, 1991) (in Russian). [2] V.M.Abramov, On a property of a refusals stream, J. Appl. Probab. 34 (1997) 800-805. [3] V.M.Abramov, Asymptotic behavior of the number of lost packets, submitted for publication. [4] O.Ait-Hellal, E.Altman, A.Jean-Marie and I.A.Kurkova, On loss probabilities in presence of redundant packets and several traffic sources, Perform. Eval. 36-37 (1999) 485-518. [5] E.Altman and A.Jean-Marie, Loss probabilities for messages with redundant packets feeding a finite buffer, IEEE J. Select. Areas Commun. 16 (1998) 778-787. [6] S.Asmussen, Equilibrium properties of the \\(M/G/1\\) queue, Z. Wahrscheinlichkeitstheorie 58 (1981) 267-281. [7] A.Baiocchi, Analysis of the loss probability of the \\(MAP/G/1/K\\) queue, part I: Asymptotic theory, Stochastic Models 10 (1994) 867-893. [8] B.D.Choi and B.Kim, Sharp results on convergence rates for the distribution of \\(GI/M/1/K\\) queues as \\(K\\) tends to infinity, J. Appl. Probab.
37 (2000) 1010-1019. [9] B.D.Choi, B.Kim and I.-S.Wee, Asymptotic behavior of loss probability in \\(GI/M/1/K\\) queue as \\(K\\) tends to infinity, Queueing Systems 36 (2000) 437-442. [10] I.Cidon, A.Khamisy and M.Sidi, Analysis of packet loss processes in high-speed networks, IEEE Trans. Inform. Theory 39 (1993) 98-108. [11] O.Gurewitz, M.Sidi and I.Cidon, The ballot theorem strikes again: Packet loss process distribution, IEEE Trans. Inform. Theory 46 (2000) 2588-2595. [12] M.Miyazawa, Complementary generating functions for the \\(M^{X}/GI/1/k\\) and \\(GI/M^{Y}/1/k\\) queues and their application to the comparison for loss probabilities, J. Appl. Probab. 27 (1990) 684-692. [13] A.G.Postnikov, Tauberian theory and its application. Proc. Steklov Math. Inst. 144 (1979) 1-148. [14] M.A.Subhankulov, _Tauberian Theorems with Remainder_. (Nauka, Moscow, 1976) (in Russian). [15] L.Takacs, _Combinatorial Methods in the Theory of Stochastic Processes_. (Wiley, New York, 1967). [16] H.Takagi, _Queueing Analysis_, Vol. 2. (Elsevier Science, Amsterdam, 1993). [17] J.Tomko, One limit theorem in queueing problem as input rate increases infinitely. Studia Sci. Math. Hungarica 2 (1967) 447-454 (in Russian). [18] G.E.Willmot, A note on the equilibrium \\(M/G/1\\) queue-length, J. Appl. Probab. 25 (1988) 228-231.
This paper provides the asymptotic analysis of the loss probability in the \\(GI/M/1/n\\) queueing system as \\(n\\) increases to infinity. The approach of this paper is an alternative to that of the recent papers of Choi and Kim [8] and Choi et al. [9], and is based on the application of modern Tauberian theorems with remainder. This enables us to simplify the proofs of the results on the asymptotic behavior of the loss probability in the abovementioned paper of Choi et al. [9], as well as to obtain some new results. Keywords: loss system, \\(GI/M/1/n\\) queue, asymptotic analysis, Tauberian theorems with remainder.
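Relation (3.4) together with recurrence (3.8) gives a direct way to evaluate the loss probability numerically and to check the asymptotic formulas. The sketch below is purely illustrative and not part of the original analysis: it assumes exponentially distributed interarrival times with rate \\(\\lambda\\) (the \\(M/M/1/n\\) case), for which \\(\\alpha(s)=\\lambda/(\\lambda+s)\\), the weights \\((-\\mu)^{i}\\alpha^{(i)}(\\mu)/i!\\) reduce to \\(\\lambda\\mu^{i}/(\\lambda+\\mu)^{i+1}\\), and the main term of (3.5) reduces to the classical \\(M/M/1/n\\) loss probability \\((1-\\rho)\\rho^{n}/(1-\\rho^{n+1})\\), which serves as the check.

```python
import numpy as np

# Illustrative check of relation (3.4) and recurrence (3.8) for exponential
# interarrival times (M/M/1/n); lam = arrival rate, mu = service rate.
lam, mu, n_max = 0.7, 1.0, 30
rho = lam / mu

# r[i] = (-mu)^i alpha^{(i)}(mu) / i! = lam * mu^i / (lam + mu)^(i+1), cf. (3.9)
r = np.array([lam * mu**i / (lam + mu)**(i + 1) for i in range(n_max + 1)])

pi = [1.0]  # pi[k] stands for tilde{pi}_k; tilde{pi}_0 = 1
for k in range(n_max):
    # recurrence (3.8): tilde{pi}_k = sum_{i=0}^{k} r_i * tilde{pi}_{k-i+1}
    tail = sum(r[i] * pi[k - i + 1] for i in range(1, k + 1))
    pi.append((pi[k] - tail) / r[0])

for n in (5, 10, 20, 30):
    p_loss = 1.0 / pi[n]                                # relation (3.4)
    p_ref = (1 - rho) * rho**n / (1 - rho**(n + 1))     # exact M/M/1/n loss probability
    print(n, p_loss, p_ref)                             # the two columns agree up to rounding
```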
# SDF-MAN: Semi-supervised Disparity Fusion with Multi-scale Adversarial Networks Can Pu Runzi Song Radim Tylecek Nanbo Li Robert B. Fisher ## 1 Introduction With recent improvements in depth sensing devices, depth information is now easily accessible (In a stereo camera pair depth and disparity are interchangeable measures: _depth = focal_length \\(\\times\\) baseline / disparity_. When data is from a sensor like time of flight sensor, the depths can be converted into disparities using a constant focal length and baseline). However, each sensor has its own advantages and disadvantages, with the result that no algorithm can perform accurately and robustly in all general scenes. For example, active illumination devices such as ToF (Time of Flight) sensors and structured light cameras [1] estimate the depth information accurately regardless of the scene content but struggle on low reflective surfaces or outdoors. Stereo vision algorithms [2, 3, 4, 5, 6] work better outdoors and perform accurately on high texture areas but behave poorly in repetitive or textureless regions. Monocular vision algorithms [7] work robustly in textureless areas but tend to produce blurry depth edges. Thus, fusing multiple depth maps from different kinds of algorithms or devices and utilizing their complementary strengths to get more accurate depth information is a valuable technique for various applications. The traditional pipeline for the majority of the fusion algorithms [8, 9, 10, 11, 12] is: (1) estimate disparities from the different sensors, (2) estimate associated confidence maps, and (3) apply a specific fusion algorithm based on the confidence maps to get a refined disparity map. This approach has three potential problems. Primarily, estimating the confidence maps for different sensors is a hard task with limited robustness and accuracy. Second, estimating the disparity relationship among pixels in a general scene is hard without prior knowledge. Finally, there is no common methodology for different kinds of depth fusion, such as stereo-stereo fusion, monocular-stereo fusion and stereo-ToF fusion. Thus, researchers have designed different methods for different fusion tasks. The recent fusion method [13] based on end to end deep learning has provided a general solution to different kinds of fusion but has limited accuracyand robustness, in part due to not exploiting other associated information to help the network make judgments. It also did not exploit the disparity relationship among pixels. In this paper, an architecture similar to a Generative Adversarial Network (GAN) [14] (generator is replaced by a refiner network without random noise input) is proposed to solve the three problems listed above, by designing an efficient network structure and a robust object function. In addition to the raw disparity maps the network input also includes other image information, i.e., the original intensity and gradient images (see Figure 1), in order to facilitate the selection of a more accurate disparity value from the input disparity images. This avoids having to design a manual confidence measure for different sensors and allows a common methodology for different kinds of sensor. To preserve and exploit the local information better, some successful ideas about local structure from Unet [15] and Densenet [16] have been used. To help the network refine the disparity maps accurately and robustly a novel objective function was designed. 
Gradient information is incorporated as a weight into the \\(L_{1}\\) distance to force the disparity values at the edges to get closer to the ground truth. A smoothness term helps the network propagate the accurate disparity values at edges to adjacent areas, which inpaints regions with invalid disparity values. The Wasserstein distance [17, 18] replaced the Jensen-Shannon divergence [14] for GAN loss to reduce training difficulties and avoid mode collapse. With the discriminator network classifying input samples in different receptive fields and scales, the disparity Markov Random Field in the refined disparity map gives a better estimate of the real distribution. Our semi-supervised approach trains the discriminator network to produce the refined disparity map using not only the labeled data but also the unlabeled data along with the ground truth of the labeled data. It requires less labeled training data but still achieves accuracy similar to the proposed fully-supervised method or better performance when using the same amount of labeled data with additional unlabeled data compared with the supervised method, as shown in the experimental results. Section 2 reviews key previous disparity fusion algorithms and also recent advances in GAN networks. Section 3 presents the new fusion model including the objective function and network structure. Section 4 presents the results of experiments conducted with synthetic and real data (Table 1) for stereo-monocular fusion, stereo-ToF fusion and stereo-stereo fusion. Contributions: We have: 1. Improved fusion accuracy by using a network that learns the disparity relationships among pixels without any prior knowledge. 2. Reduced the labeled data requirement drastically by using the proposed semi-supervised strategy. 3. Increased robustness by fusing intensity and gradient information as well as depth data. 4. Proposed a common network methodology allowing different kinds of sensor fusion without requiring detailed knowledge of the performance of each sensor. ## 2 Related Work The approach of fusing depth maps from different sensors (e.g., stereo-ToF depth fusion) has become popular. The majority of the fusion work [9, 10, 11, 12] shares the same pipeline architecture, which estimates the uncertainty of each pixel first and then refines the depth map based on those confidence maps. A recent survey is in [8]. More recently, Dal Mutto et al. [9] used the IR frequency, etc., of a ToF sensor to estimate the depth map uncertainty and used the similarity of image patches in the stereo images to estimate the confidence of pixels in the stereo depth map. Then a MAP-MRF framework refined the depth map. Later, Marin et al. [10] also utilized sensor physical properties to estimate the confidence for the ToF depth map and used an empirical model based on the global and local cost of stereo matching to calculate the confidence map for the stereo vision sensor. The extended LC (Locally Consistent) technique was used to fuse the depth maps based on each confidence map. To get a more accurate confidence map for fusion, Agresti et al. [11] used a simple convolution neural network for uncertainty estimation and then used the LC technique from [10] for the fusion. In addition to the work in stereo-ToF fusion above, Facil et al. [12] used a weighted interpolation of depths from a monocular vision sensor and a multi-view vision sensor based on the likelihood of each pixel's contribution to the depth value. 
The above-mentioned approaches have two issues limiting the accuracy of the refined disparity map: (1) Estimating the confidence map for each type of sensor accurately is hard and makes the system unstable. (2) Accurately modeling the complex disparity relationship among neighboring pixels in random scenes is challenging. The other class of depth fusion methods is based on deep learning. The method proposed here belongs to this class and we believe that it is the first to solve the two critical problems above simultaneously. Some researchers [11, 19] have estimated the confidence maps for different sensors with deep learning methods and then incorporated the confidence as weights into the classical pipeline to refine the disparity map. However, these methods treat the confidence maps as an intermediate result and no one has trained the neural network to do the fusion from end to end directly and taken both the depth and confidence information into account simultaneously. For example, Poggi and Mattoccia [13] selected the best disparity value for each pixel from the several algorithms by formulating depth fusion as a multi-labeling deep network classification problem. However, the method only used the disparity maps from the sensors and neglected other associated image information (e.g., intensities, gradients). The approach did not exploit the real disparity relationship among neighbouring pixels. The recent development of the GAN methodology led to the foundation of the approach proposed here. The GAN was first proposed by Goodfellow et al. [14], who trained two neural networks (generator and discriminator) simultaneously to make the distribution of the output from the generator approximate the real data distribution by a minimax two-player strategy. To control the data generation process, Mirza and Osindero [20] conditioned the model on additional information. There are many variants based on the initial GAN model as seen in the survey [21]. Some researchers [17, 18] used the Wasserstein distance to measure the distance between the model distribution and the real distribution, which reduced the difficulty of training the GAN drastically. It also reduced mode collapse to some extent. GANs have been applied to problems other than disparity fusion. For example, Isola et al. [22] trained a GAN to translate between image domains which can be also used to transfer the initial disparity maps from several sensors into a refined disparity map. However, the design proposed in [22] neglects information useful for disparity fusion, which limits the accuracy of the refined disparity map. In summary, there are previously developed methods for depth fusion based on both the algorithmic pipeline and emerging deep network techniques. In this paper, we combine image evidence as well as raw depth to give a more robust objective function. This is implemented in an end-to-end architecture similar to a GAN. We are the first to our knowledge to use such structure to learn the complex disparity relationship among pixels to improve depth fusion accuracy. ## 3 Methodology First we introduce the proposed general framework (Figure 1) for disparity fusion and then the new loss functions in the supervised and semi-supervised methods. These functions will make adversarial training simple and the refined disparity more accurate and robust. Finally, the end-to-end refiner (Figure 2) and discriminator (Figure 3) network structure are presented. 
### Framework We develop a method that uses an adversarial network, which is similar to a GAN [14] but with raw disparity maps, gradient and intensity information as inputs instead of random noise. The refiner network \\(R\\) (similar to the generator \\(G\\) in [14]) is trained to produce a refined disparity map which cannot be classified as \"fake\" by the discriminator \\(D\\). Simultaneously, the discriminator \\(D\\) is trained to become better at recognizing the input from the refiner \\(R\\) as fake and the input from the ground truth as real. By adopting a minimax two-player game strategy, the two neural networks \\(\\{R,D\\}\\) make the output distribution from the refiner network approximate the real data distribution. The full system diagram is shown in Figure 1. ### Objective Function To let the refiner produce a more accurate refined disparity map, the objective function is designed as follows: (1) To encourage the disparity value of each pixel to approximate the ground truth and to avoid blur at scene edges (such as occurs with the Monodepth method [7]), a gradient-based \\(L_{1}\\) distance training loss is used, which applies a larger weight to the disparity values at the scene edges: \\[\\mathcal{L}_{L_{1}}(R)=\\underset{x\\sim P_{real},\\tilde{x}\\sim P_{refiner}}{\\mathbb{E}}\\left[\\exp(\\alpha|\\nabla(I_{l})|)\\ ||x-\\tilde{x}||_{1}\\right] \\tag{1}\\] where \\(R\\) represents the refiner network, \\(x\\) is the ground truth and \\(\\tilde{x}\\) is the refined disparity map from the refiner. \\(P_{real}\\) and \\(P_{refiner}\\) represent the real disparity distribution from the ground truth and the fake disparity distribution produced by the refiner. \\(\\nabla(I_{l})\\) is the gradient of the left intensity image in the scene because all the inputs and the refined disparity map are from the left view. \\(\\alpha\\geq 0\\) weights the gradient. \\(||\\bullet||_{1}\\) is the \\(L_{1}\\) distance. The goal is to encourage disparity estimates near image edges (larger gradients) to get closer to the ground truth. (2) A gradient-based smoothness term is added to propagate more reliable disparity values from image edges to the other areas in the image, under the assumption that the disparity of neighboring pixels should be similar if their intensities are similar: \\[\\mathcal{L}_{sm}(R)=\\underset{u\\in\\tilde{x},v\\in N(u),\\tilde{x}\\sim P_{refiner}}{\\mathbb{E}}\\left[\\exp(1-\\beta|\\nabla(I_{l})_{uv}|)\\ ||\\tilde{x}_{u}-\\tilde{x}_{v}||_{1}\\right] \\tag{2}\\] where \\(\\tilde{x}_{u}\\) is the disparity value of a pixel \\(u\\) in the refined disparity map \\(\\tilde{x}\\) from the refiner, \\(\\tilde{x}_{v}\\) is the disparity value of a pixel \\(v\\) in the neighborhood \\(N(u)\\) of pixel \\(u\\), and \\(\\nabla(I_{l})_{uv}\\) is the gradient from pixel \\(u\\) to \\(v\\) in the left intensity image (the refined disparity map is produced on the left view). \\(\\beta\\geq 0\\) controls how close the disparities must be when the intensities in the neighborhood are similar. (3) The underlying assumption in \\(\\mathcal{L}_{L_{1}}(R)\\) is that the disparity values of different pixels are independent, and the neighborhood relationship in \\(\\mathcal{L}_{sm}(R)\\) is too simple to describe the real disparity relationship among neighbouring pixels.
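For concreteness, the two pixel-wise terms (1) and (2) can be sketched in NumPy as follows. This is a minimal, illustrative sketch only: it assumes single-channel arrays for the disparity maps and the left intensity image, uses only the right and lower neighbours in the smoothness term, and the default values of \\(\\alpha\\) and \\(\\beta\\) are placeholders rather than the settings used in the experiments.

```python
import numpy as np

def grad_weighted_l1(pred, gt, left_img, alpha=1.0):
    """Loss (1): L1 distance weighted by exp(alpha * |gradient of the left image|)."""
    gy, gx = np.gradient(left_img)
    weight = np.exp(alpha * np.sqrt(gx**2 + gy**2))
    return np.mean(weight * np.abs(pred - gt))

def smoothness(pred, left_img, beta=1.0):
    """Loss (2): penalize disparity differences between neighbours whose
    intensity gradient is small (right and lower neighbours only, for brevity)."""
    dx_d = np.abs(pred[:, 1:] - pred[:, :-1])           # horizontal disparity differences
    dy_d = np.abs(pred[1:, :] - pred[:-1, :])           # vertical disparity differences
    dx_i = np.abs(left_img[:, 1:] - left_img[:, :-1])   # horizontal intensity gradients
    dy_i = np.abs(left_img[1:, :] - left_img[:-1, :])   # vertical intensity gradients
    return (np.mean(np.exp(1.0 - beta * dx_i) * dx_d) +
            np.mean(np.exp(1.0 - beta * dy_i) * dy_d))
```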
To help the refiner produce a disparity map whose disparity Markov Random Field is closer to the real distribution, the proposed method inputs disparity maps from the refiner and the ground truth into the discriminator, which outputs the probability of the input samples being from the same distribution as the ground truth. This probability is then used to update the refiner through its loss function. Instead of defining a global discriminator to classify the whole disparity map, we define it to classify all local disparity patches separately because any local disparity patch sampled from the refined disparity map should have similar statistics to the real disparity patch. Thus, by making the discriminator output the probabilities in different receptive fields or scales (\\(D1\\), \\(D2\\), \\(\\ldots\\), \\(D5\\) in Figure 3), the refiner will be forced to make the disparity distribution in the refined disparity map close to the real one.

Figure 1: Overview of Sdf-MAN. We train a refiner network \\(R\\) to map raw disparity maps (_disp1, disp2_) from two input algorithms to the ground truth based on associated image information (gradient, intensity). The refiner \\(R\\) tries to predict a refined disparity map close to the ground truth. The discriminator \\(D\\) tries to discriminate whether its input is fake (refined disparity from \\(R\\)) or real (real disparity from the ground truth). The refiner and discriminator can see both the supplementary information and initial disparity inputs simultaneously. We can fuse any number of disparity inputs or different information cues by concatenating them together directly as inputs. The two networks are updated alternately. (**a**) Refiner: a network to refine initial disparity maps; (**b**) Negative examples: a discriminator network with refined disparity inputs; (**c**) Positive examples: a discriminator network with real disparity inputs.

In Equations (3) and (4) below, \\(D_{i}\\) is the probability at the \\(i\\)th scale that the input patch to the discriminator is from the real distribution: \\[\\mathcal{L}_{JS-GAN}(R,D_{i})=\\underset{x\\sim P_{real}}{\\mathbb{E}}\\left[\\log(D_{i}(x))\\right]+\\underset{\\tilde{x}\\sim P_{refiner}}{\\mathbb{E}}\\left[\\,\\log(1-D_{i}(\\tilde{x}))\\right] \\tag{3}\\] To avoid \\(JS-GAN\\) mode collapse during training and alleviate other training difficulties, we have also investigated replacing \\(\\mathcal{L}_{JS-GAN}(R,D_{i})\\) with the improved WGAN loss function [18]. \\(\\lambda\\) is the penalty coefficient (we set it to 0.0001 for all the experiments in this paper) and \\(\\hat{x}\\) denotes the random samples used for the gradient penalty (for more details, see [18]): \\[\\mathcal{L}_{WGAN}(R,D_{i})=\\underset{\\tilde{x}\\sim P_{refiner}}{\\mathbb{E}}\\left[D_{i}(\\tilde{x})\\right]-\\underset{x\\sim P_{real}}{\\mathbb{E}}\\left[D_{i}(x)\\right]+\\lambda\\underset{\\hat{x}\\sim P_{\\hat{x}}}{\\mathbb{E}}\\left[(||\\nabla_{\\hat{x}}D_{i}(\\hat{x})||_{2}-1)^{2}\\right] \\tag{4}\\] The experiments explored the difference in performance of these two GAN loss functions. We let \\(\\mathcal{L}_{GAN}(R,D_{i})\\) be either \\(\\mathcal{L}_{JS-GAN}(R,D_{i})\\) or \\(\\mathcal{L}_{WGAN}(R,D_{i})\\) in the following context. The difference in performance between the single-scale and multi-scale settings will also be explored.
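A minimal sketch of the WGAN variant (4) at a single scale is given below, assuming the TensorFlow 1.x graph-mode API. Here `discriminator` is assumed to be a callable that reuses its variables across calls; in the full model the critic also receives the raw disparities and the intensity/gradient images concatenated with the sample, and the loss is summed over the five scales \\(D1,\\ldots,D5\\), both of which are omitted for brevity.

```python
import tensorflow as tf  # TensorFlow 1.x graph-mode API assumed

def wgan_gp_losses(discriminator, real, fake, lam=1e-4):
    """Critic and refiner losses of Eq. (4) at one scale.
    real, fake: (N, H, W, 1) ground-truth and refined disparity tensors."""
    d_real = discriminator(real)
    d_fake = discriminator(fake)
    # random samples between real and refined disparities for the gradient penalty
    eps = tf.random_uniform(tf.stack([tf.shape(real)[0], 1, 1, 1]), 0.0, 1.0)
    x_hat = eps * real + (1.0 - eps) * fake
    grad = tf.gradients(discriminator(x_hat), [x_hat])[0]
    grad_norm = tf.sqrt(tf.reduce_sum(tf.square(grad), axis=[1, 2, 3]) + 1e-12)
    gp = tf.reduce_mean((grad_norm - 1.0) ** 2)
    d_loss = tf.reduce_mean(d_fake) - tf.reduce_mean(d_real) + lam * gp  # minimized by D
    r_loss = -tf.reduce_mean(d_fake)                                     # minimized by R
    return d_loss, r_loss
```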
(4) By inputting only the refined disparity map and its corresponding ground truth into the discriminator simultaneously in each step during training, the discriminator is trained in a fully supervised manner considering whether the input disparity maps are the same. In semi-supervised mode, we still feed the refined disparity map and its corresponding ground truth into the discriminator for the labeled data. But for the unlabeled data, we feed the refined disparity map of the unlabeled data and random samples from a small ground truth dataset simultaneously. By doing this, the discriminator will be taught to classify the input samples based on the disparity Markov Random Field. Then, in turn, the refiner will be trained to produce a disparity Markov Random Field in the refined disparity map that is closer to the real case. (5) The combined loss function in the fully supervised learning approach is: \\[\\mathcal{L}(R,D)=\\theta_{1}\\mathcal{L}_{L_{1}}^{Ld}(R)+\\theta_{2}\\mathcal{L}_ {sm}^{Ld}(R)+\\theta_{3}\\sum_{i=1}^{M}\\mathcal{L}_{GAN}^{Ld}(R,D_{i}) \\tag{5}\\] where \\(M\\) is the number of the scales. \\(\\theta_{1}\\), \\(\\theta_{2}\\), \\(\\theta_{3}\\) are the weights for the different loss terms. In the fully supervised learning approach (See Equation (5)), we only feed the labeled data (denoted by \\(Ld\\)). In the semi-supervised learning (See Equation (6)), in each iteration, we feed one batch of labeled data (denoted by \\(Ld\\)) and one batch of unlabeled data (denoted by \\(Ud\\)) simultaneously. As for the labeled data \\(Ld\\), we calculate its L1 loss (denoted by \\(\\mathcal{L}_{L_{1}}^{Ld}\\)), smoothness loss (denoted by \\(\\mathcal{L}_{sm}^{Ld}\\)), and GAN loss (denoted by \\(\\mathcal{L}_{GAN}^{Ld}\\)). The input to the discriminator is the refined disparity map (denoted by \\(Fake_{1}\\)) and corresponding ground truth (denoted by \\(Real_{1}\\)). Thus, the GAN loss for the labeled data \\(Ld\\) is calculated using \\(Fake_{1}\\) and \\(Real_{1}\\). As for the unlabeled data \\(Ud\\), we only calculate its GAN loss (\\(\\mathcal{L}_{GAN}^{Ud}\\)) and neglect the other loss terms. The unlabeled data gets its refined disparity map (denoted by \\(Fake_{2}\\)) from the refiner. Then feed \\(Real_{1}\\) and \\(Fake_{2}\\) into the discriminator to get the GAN loss for the unlabeled data. As our experiment results show, this approach allows the use of much less labeled data (expensive) in a semi-supervised method (Equation (6)) to achieve similar performance to the fully supervised method (Equation (5)) or better performance when using the same amount of labeled data with additional unlabeled data compared with the supervised method. The combined loss function in the semi-supervised method is: \\[\\mathcal{L}(R,D)=\\theta_{1}\\mathcal{L}_{L_{1}}^{Ld}(R)+\\theta_{2}\\mathcal{L}_ {sm}^{Ld}(R)+\\frac{\\theta_{3}}{2}\\Big{(}\\sum_{i=1}^{M}\\mathcal{L}_{GAN}^{Ld}( R,D_{i})+\\sum_{i=1}^{M}\\mathcal{L}_{GAN}^{Ud}(R,D_{i})\\Big{)} \\tag{6}\\] ### Network Architectures We adopt a fully convolutional neural network [23] and also the partial architectures from [22, 24, 16] are adapted here for the refiner and discriminator. The refiner and discriminator use dense blocks to increase local non-linearity. Transition layers change the size of the feature maps to reduce the time and space complexity [16]. In each dense block and transition layer, modules of the form ReLu-BatchNorm-convolution are used. 
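A minimal sketch of such a module, of a dense block, and of a transition layer is given below, assuming the TensorFlow 1.x layers API; the per-network module counts, the dynamic growth rate and the skip connections are specified in the text that follows and in Figures 2 and 3.

```python
import tensorflow as tf  # TensorFlow 1.x layers API assumed

def module(x, filters, kernel=3, stride=1, training=True):
    """One ReLU-BatchNorm-convolution module, in the order stated above."""
    x = tf.nn.relu(x)
    x = tf.layers.batch_normalization(x, training=training)
    return tf.layers.conv2d(x, filters, kernel, strides=stride, padding='same')

def dense_block(x, growth_rate, n_modules=2, training=True):
    """Dense block: every module sees the concatenation of all previous feature maps."""
    for _ in range(n_modules):
        y = module(x, growth_rate, training=training)
        x = tf.concat([x, y], axis=3)  # channel-wise concatenation (NHWC)
    return x

def transition(x, filters, stride=2, training=True):
    """Transition layer: a single 4x4 module that changes the feature-map size
    (stride 1 is used instead of 2 in Tran.3 of the discriminator)."""
    return module(x, filters, kernel=4, stride=stride, training=training)
```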
We use two modules in the refiner and four modules in the discriminator in each dense block, where the filter size is 3 \\(\\times\\) 3 and stride is 1. The growth rate \\(k\\) for each dense block is dynamic (unlike [16]). In each transition layer, we only use one module, where the filter size is 4 \\(\\times\\) 4 and the stride is 2 (except that in Tran.3 of the discriminator the stride is 1). Figure 2 shows the main architecture of the refiner, where \\(c1\\) initial disparity inputs (the experiments below use \\(c1=2\\) for 2 disparity maps) and \\(c2\\) pieces of information (the experiments below use \\(c2=2\\) for the left intensity image and a gradient of intensity image) are concatenated as input into the generator. The batch size is \\(b\\) and input image resolution is \\(32m\\times 32n\\) (\\(m\\), \\(n\\) are integers). \\(lg\\) is the number of the feature map channels after the first convolution. To reduce the computational complexity and increase the extraction ability of local details, each dense block contains only 2 internal layers (or modules above). Additionally, the skip connections [15] from the previous layers to the latter layers preserve the local details in order not to lose information after the network bottleneck. During training, a dropout strategy has been added into the layers in the refiner after the bottleneck to avoid overfitting and we cancel the dropout part during test to produce a determined result. Figure 2: This figure shows some important hyperparameters and the refiner architecture configuration. Please refer to Table 2 for the specific values in each experiment. Tip: Readers can deepen the refiner by symmetrically adding more dense blocks and deconvolution layer by themselves according to their own needs. Figure 3 is for the discriminator. The discriminator will only be used during training and abandoned during testing. Thus, the architecture of the discriminator will only influence the computational costs during training. The initial raw disparity maps, information and real or refined disparity maps are concatenated and fed into the discriminator. Each dense block contains 4 internal layers (or modules above). The sigmoid function outputs the probability map (\\(Di,i=1,2, ,5\\)) that the local disparity patch is real or fake at different scales to force the Markov Random Field of the refined disparity map to get closer to the real distribution at different receptive field sizes. ## 4 Experimental Evaluation The network is implemented using TensorFlow [25] and trained & tested using an Intel Core i7-7820HK processor (quad-core, 8 MB cache, up to 4.4 GHz) and Nvidia Geforce GTX 1080Ti. First, an ablation study with initial raw disparity inputs ([4, 3]) is conducted using a synthetic garden dataset to analyze the influence of each factor in the energy function and the objective function. Secondly, three groups of experiments for three fusion tasks (monocular-stereo, stereo-ToF, stereo-stereo) show the robustness, accuracy and generality of the proposed algorithm using synthetic datasets (SYNTH3 [11], Scene Flow [4], our synthetic garden dataset (They are not available to the public currently)) and real datasets (Kitti2015 [26] dataset, Trimbot2020 Garden datasets (For more description, see Appendix A). A brief description of datasets (In the semi-supervised method, as for each labelled training sample, we use it with its ground truth in the supervised part. 
We also use it without its ground truth in the unsupervised part) in the paper is shown in Table 1. All the results show the proposed algorithm's superiority compared with the state-of-art or classical depth acquisition algorithms ([2, 7, 3, 4, 5, 6]), the state-of-art stereo-stereo fusion algorithms ([13]), the state-of-art stereo-ToF fusion algorithm [10, 11], and the state-of-art image style transfer algorithm [22]. In the following experiments, the inputs to the neural network were first scaled to \\(32m\\times 32n\\) and normalized to [\\(-\\)1, 1]. After that, the input was flipped vertically with a 50% chance to double the number of training samples. Weights of all the neurons were initialized from a Gaussian distribution (standard deviation 0.02, mean 0). We trained all the models in all the experiments with a batch size of 4 in the supervised and semi-supervised method, using Adam [27] with a momentum of 0.5. The learning rate is changed from 0.005 to 0.0001 gradually. The method in [14] is used to optimize the refiner network and discriminator network by alternating between one step on the discriminator and then one step on the refiner. We set the parameters \\(\\theta_{1}\\), \\(\\theta_{2}\\), \\(\\theta_{3}\\) in Equation (5) or Equation (6) to make those terms contribute differently to the energy function in the training process. We used the \\(L_{1}\\) distance between the estimated image and ground truth as the error. The unit is pixel. For more details about the network settings and computational complexity, please see Table 2. To highlight the real test, the network is so fast that it can run the disparity fusion (e.g., up to 384 \\(\\times\\) 1248 pixels on Kitti2015 datasets) directly at 90 fps without any cropping (e.g., DSF [13] used samples with 9 \\(\\times\\) 9 pixels) or down-sampling. ### Ablation Study This subsection shows the effectiveness of the loss function design in Section 4.1.1 and the influence of each factor in the final loss function in Section 4.1.2. All the experiments in this subsection are conducted on our synthetic garden dataset (The performance demo on the synthetic garden dataset: [https://youtu.be/QtTj6hOQwUw](https://youtu.be/QtTj6hOQwUw)). The synthetic garden dataset contains 4600 training samples and 421 test samples under outdoor environments. Each sample has one pair of rectified stereo images and dense ground truth with resolution 480 \\(\\times\\) 640 (height \\(\\times\\) width) pixels. The reason why we use a synthetic dataset is that the real dataset (e.g., Kitti2015) does not have dense ground truth, which will influence the evaluation of the network. We used Dispnet [4] and FPGA-stereo [3] to generate the two input disparity images. The authors of [4, 3] helped us get the best performance on the dataset as the input to the network. As for each model, we trained it for 100 epochs and it takes 20 h or so. The inference is fast (about 142 frames per second ) for the 480 \\(\\times\\) 640 (Height \\(\\times\\) Width) resolution input. One qualitative example is shown in Figure 4 from Section 4.1.1. 
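The input normalization and flipping described above can be sketched as follows. This is an illustrative NumPy sketch only: the text does not specify the scaling constants, so per-image min-max scaling is used here purely as a placeholder, and the resizing to \\(32m\\times 32n\\) is omitted.

```python
import numpy as np

def preprocess(images, rng=np.random):
    """Scale each input to [-1, 1] and flip all of them vertically with 50% probability.
    `images` is a dict of H x W arrays (raw disparities, intensity, gradient, ground truth)."""
    out = {}
    for name, img in images.items():
        lo, hi = float(img.min()), float(img.max())
        out[name] = 2.0 * (img - lo) / max(hi - lo, 1e-6) - 1.0  # placeholder scaling
    if rng.rand() < 0.5:                                         # vertical flip
        out = {name: img[::-1, :].copy() for name, img in out.items()}
    return out
```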
\\begin{table} \\begin{tabular}{c c c c c} \\hline \\hline & \\multicolumn{2}{c}{**Supervised**} & \\multicolumn{2}{c}{**Semi-Supervised**} \\\\ \\hline **Dataset** & **Labeled Training Samples** & **Test Samples** & **Training Samples** & **Test Samples** \\\\ \\hline Synthetic Garden & 4600 & 421 & 4600 (labeled) & 421 \\\\ Scene Flow & 6000 & 1460 & 600 (labeled) + 5400 (unlabeled) & 1460 \\\\ SYNTH3 & 40 & 15 & 40 (labeled) & 15 \\\\ Kitti2015 & 150 & 50 & None & None \\\\ Trimbot2020 Garden & 1000 & 250 & 1000 (labeled) & 250 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 1: A Brief Description of Datasets in This Paper. Figure 3: This figure shows some important hyperparameters and the discriminator architecture configuration. Please refer to Table 2 for the specific values in each experiment. #### 4.1.1 Loss Function Design We aimed at testing the effectiveness of the objective function design from Section 3.2. Table 3 defines different combinations of the strategies that were evaluated, based on the objective functions defined in Section 3.2. The default network settings and some important parameters in this group of experiments, please see \"Ablation Study\" in Table 2. #### 4.1.2 Influence of Each Term in Loss Function In this part, we will change one of the following factors (\\(\\theta_{1}\\), \\(\\theta_{2}\\), \\(\\theta_{3}\\), \\(\\alpha\\), \\(\\beta\\)) in our energy function to see the influence of each cue in Equation (5). The Baseline method in this part is also the Supervised model from Section 4.1.1. The performance results are listed in Table 5. We can see \\(\\mathcal{L}_{L_{1}}(R)\\) in Equation (1) has the largest influence (corresponding to \\(\\theta_{1}\\)) and then the gradient information in Equation (1) (corresponding to \\(\\alpha=0\\)). After that, the smoothing term \\begin{table} \\begin{tabular}{c c c} \\hline \\hline **Model Name** & **Combination** \\\\ \\hline Supervised & WGAN (4) + multiscale (M = 5) + supervised (5) \\\\ Semi & WGAN (4) + multiscale (M = 5) + semi-supervised (6) \\\\ Monoscale & WGAN (4) + monoscale (M = 1) + supervised (5) \\\\ JS-GAN & JS-GAN (3) + multiscale (M = 5) + supervised (5) \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 4: Mean absolute disparity error of each model on Synthetic Garden dataset (421 test samples). 
\\begin{table} \\begin{tabular}{c c c c c c c c c c c c c c} \\hline \\hline & \\multicolumn{6}{c}{Ablation Study with Synthetic Garden Dataset} \\\\ \\hline \\(Para.\\) & Test time & \\(b\\) & \\(32m\\) & \\(32n\\) & \\(c_{1}\\) & \\(c_{2}\\) & \\(lg\\) & \\(ld\\) & \\(\\theta_{1}\\) & \\(\\theta_{2}\\) & \\(\\theta_{3}\\) & \\(\\alpha\\) & \\(\\beta\\) \\\\ \\(Value\\) & 0.007 (s/frame) & 4 & 480 & 640 & 2 & 2 & 12 & 12 & 395 & 5 & 1 & 1 & 650 \\\\ \\hline \\multicolumn{11}{c}{Stereo-Monocular Fusion with Synthetic Scene Flow Dataset [11]} \\\\ \\hline \\(Para.\\) & Test time & \\(b\\) & \\(32m\\) & \\(32n\\) & \\(c_{1}\\) & \\(c_{2}\\) & \\(lg\\) & \\(ld\\) & \\(\\theta_{1}\\) & \\(\\theta_{2}\\) & \\(\\theta_{3}\\) & \\(\\alpha\\) & \\(\\beta\\) \\\\ \\(Value\\) & 0.042 (s/frame) & 4 & 256 & 256 & 2 & 2 & 64 & 64 & 199 & 1 & 1 & 0.5 & 100 \\\\ \\hline \\multicolumn{11}{c}{Stereo-ToF Fusion with Synthetic SYNTH3 Dataset [11]} \\\\ \\hline \\(Para.\\) & Test time & \\(b\\) & \\(32m\\) & \\(32n\\) & \\(c_{1}\\) & \\(c_{2}\\) & \\(lg\\) & \\(ld\\) & \\(\\theta_{1}\\) & \\(\\theta_{2}\\) & \\(\\theta_{3}\\) & \\(\\alpha\\) & \\(\\beta\\) \\\\ \\(Value\\) & 0.012 (s/frame) & 4 & 544 & 960 & 2 & 2 & 16 & 16 & 395 & 5 & 1 & 1 & 1–1.3K \\\\ \\hline \\multicolumn{11}{c}{Stereo-stereo Fusion with Real Kitti2015 Dataset [26]} \\\\ \\hline \\(Para.\\) & Test time & \\(b\\) & \\(32m\\) & \\(32n\\) & \\(c_{1}\\) & \\(c_{2}\\) & \\(lg\\) & \\(ld\\) & \\(\\theta_{1}\\) & \\(\\theta_{2}\\) & \\(\\theta_{3}\\) & \\(\\alpha\\) & \\(\\beta\\) \\\\ \\(Value\\) & 0.011 (s/frame) & 4 & 384 & 1280 & 2 & 2 & 16 & 16 & 1 & 1 & 1 & 1 & 1 –2K \\\\ \\hline \\multicolumn{11}{c}{Stereo-stereo Fusion with Real Trimbot2020 Garden Dataset} \\\\ \\hline \\(Para.\\) & Test time & \\(b\\) & \\(32m\\) & \\(32n\\) & \\(c_{1}\\) & \\(c_{2}\\) & \\(lg\\) & \\(ld\\) & \\(\\theta_{1}\\) & \\(\\theta_{2}\\) & \\(\\theta_{3}\\) & \\(\\alpha\\) & \\(\\beta\\) \\\\ \\(Value\\) & 0.008 (s/frame) & 4 & 480 & 768 & 2 & 2 & 12 & 12 & 395 & 5 & 1 & 1 & 1–1.3K \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 2: Computation Time and Parameter Settings. \\begin{table} \\begin{tabular}{c c} \\hline \\hline **Model Name** & **Combination** \\\\ \\hline Supervised & WGAN (4) + multiscale (M = 5) + supervised (5) \\\\ Semi & WGAN (4) + multiscale (M = 5) + semi-supervised (6) \\\\ Monoscale & WGAN (4) + monoscale (M = 1) + supervised (5) \\\\ JS-GAN & JS-GAN (3) + multiscale (M = 5) + supervised (5) \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 3: Model definition. in Equation (2) (corresponding to \\(\\theta_{2}\\)) and \\(\\beta\\) have less influence compared with the former factors. The Loss term in \\(\\mathcal{L}_{GAN}\\) (corresponding to \\(\\theta_{3}\\)) has the least influence. Figure 4: We fuse two initial raw disparity inputs (**c**,**d**) to get a refined disparity map (**e**,**f**) using our Supervised and Semi method on the synthetic garden dataset. (**a**) is the ground truth and (**b**) is the corresponding scene. Many, but not all, pixels from the fused result are closer to ground truth than the original inputs. (**a**) Ground Truth; (**b**) Scene; (**c**) FPGA Stereo [3]; (**d**) Dispnet [4]; (**e**) Our Supervised; (**f**) Our Semi. ### Robustness and Accuracy Test Given that the proposed network does not need confidence values from the specific sensors, the network architecture can be generalized to fusion tasks using different data sources. 
Thus, the following experiments will input different quality disparity maps from different sources to test the robustness and accuracy of the proposed algorithm. #### 4.2.1 Stereo-Monocular Fusion Monocular depth estimation algorithms are usually less accurate than stereo vision algorithms. Stereo vision algorithm PLSM [5] and monocular vision algorithm Monodepth [7] were used to input the relevant initial disparity maps. Monodepth was retrained on the Scene Flow dataset (Flying A) with 50 epochs to get its left disparity maps. PLSM with semi-global matching computed the left disparity map without refinement. The default network settings and some important parameters of the networks in this part can be seen in \"Stereo-Monocular Fusion\" in Table 2. 6000 labeled samples (80%) in Scene Flow (Flying A) were used for the supervised training and 600 labeled samples (8%) + 5400 unlabeled samples for the semi-supervised training. Another 1460 samples (20%) were used for testing. DSF [13] is a recent high performance fusion algorithm that we compare with. Pix2pix [22] was set up to use PLSM + Monodepth as inputs and the fused disparity map as output. The reason to choose Pix2pix as a comparison algorithm is that disparity fusion can be seen as equivalent to an image style transfer and Pix2pix is a famous image style transfer algorithm. DSF was retrained for 10 epochs (about 5 h per epoch) and Pix2pix [22] was retrained for 100 epochs (0.15 h per epoch). The relevant error of each algorithm is shown in Table 6. The supervised method (Num = 6000) and the semi-supervised method (Num = 600) achieve similar top performances while the semi-supervised method uses much less labeled training data (9 times less than the supervised method). Pix2pix behaves badly and we neglect it in the following experiments. A qualitative result comparison can be seen in Figure 5. #### 4.2.2 Stereo-ToF Fusion The default network settings and some important parameters of the networks in this part, can be seen in \"Stereo-ToF Fusion\" in Table 2. The network was trained on the SYNTH3 dataset (40 training and 15 test samples with resolution 540 \\(\\times\\) 960 pixels). Semi-global matching from OpenCV was used to get the stereo disparity map, with the point-wise Birchfield-Tomasi metric, 7 \\(\\times\\) 7-pixel window size and 8-path optimization. The initial ToF depth map was projected onto the right stereo camera image plane and up-sampled and converted to the disparity map. Limited by the very small number of training samples, the proposed networks do not reach their best performance. But, compared with the input disparity maps, the proposed methods perform slightly better (See Table 7). The experiment results for SGM stereo, ToF, LC [10] and DLF [11] are from the paper [11] because we used the same dataset as [11] from their website ([http://lttm.dei.unipd.it/paper_data/deepfusion/](http://lttm.dei.unipd.it/paper_data/deepfusion/)). The proposed Supervised method performs less well because of the insufficient number of training samples. However, the proposed Semi method ranks first among all of the stereo-ToF fusion algorithms. One qualitative result is shown in Figure 6. 
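The fusion comparisons in Tables 6-9 report the mean absolute disparity error in pixels. A minimal sketch of this measure, restricted to pixels that carry a valid ground-truth disparity and assuming that missing ground-truth values are stored as zero (relevant for the sparse Kitti2015 ground truth used next), is:

```python
import numpy as np

def mean_abs_disparity_error(pred, gt, invalid=0.0):
    """Mean absolute disparity error in pixels over valid ground-truth pixels.
    Pixels whose ground truth equals `invalid` (e.g., missing Lidar returns) are skipped."""
    mask = gt != invalid
    return float(np.mean(np.abs(pred[mask] - gt[mask])))
```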
#### 4.2.3 Stereo-Stereo Fusion Performance on Kitti2015 DatasetWe tested the proposed network on the real Kitti2015 dataset, which used a Velodyne HDL-64E Lidar scanner to get the sparse ground truth and a 1242 \\(\\times\\) 375 resolution stereo camera \\begin{table} \\begin{tabular}{c c c c c c c} \\hline \\hline & \\multicolumn{2}{c}{**Inputs**} & \\multicolumn{2}{c}{**Comparison**} & \\multicolumn{2}{c}{**Our Fused**} \\\\ \\hline **Training Data** & **PLSM**[5] & **Monodepth**[7] & **DSF**[13] & **Pix2pix**[22] & **Supervised** & **Semi** \\\\ \\hline Num=600 & 2.41 px & 3.30 px & 2.00 px & 2.91 px & 1.95 px & 1.60 px \\\\ Num=6000 & 2.41 px & 3.30 px & 1.87 px & 2.65 px & 1.55 px & NA \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 6: Mean absolute disparity error of stereo-monocular fusion on Scene Flow (1460 test samples). \\begin{table} \\begin{tabular}{c c c c c c c c} \\hline \\hline & \\multicolumn{2}{c}{**Inputs**} & \\multicolumn{4}{c}{**Experimental Outputs**} \\\\ \\hline Experiment & FPGA Stereo [3] & DispNet [4] & \\(\\theta_{1}=0\\) & \\(\\theta_{2}=0\\) & \\(\\theta_{3}=0\\) & \\(\\alpha=0\\) & \\(\\beta=1\\) & Baseline \\\\ \\hline Error [px] & 11.41 & 6.28 & 298.2 & 3.46 & 3.25 & 3.48 & 3.37 & 3.10 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 5: Ablation Study on Each Cue Using the Supervised Model. to get stereo image pairs. The initial training dataset contains 200 labeled samples. We used 50 samples from '000000_10.png' to '000049_10.png' in the Kitti2015 training dataset as our test dataset. We used the other 150 samples as our training set for fine-tuning. By flipping the training samples vertically, we doubled the number of training samples. We used the state-of-art stereo vision algorithm PSMNet [2] as one of our inputs. We used their released pre-trained model (PSMNet [2]: [https://github.com/JiaRenChang/PSMNet](https://github.com/JiaRenChang/PSMNet)) on the Kitti2015 dataset to get the disparity maps. A traditional stereo vision algorithm SGM [6] is used as the second input to the network. We set their parameters to produce more reliable but sparse disparity maps. More specifically, we used the implementation ('disparity' function) from Matlab2016b. The relevant parameters are: 'DisparityRange' [0, 160], 'BlockSize' 5, 'ContrastThreshold' 0.99, 'UniquenessThreshold' 70, 'DistanceThreshold' 2. The settings of the neural network are shown in \"Stereo-stereo Fusion with Real Kitti2015 Dataset\" in Table 2. We compared the algorithm with the state-of-art technique [13] in stereo-stereo fusion and also stereo vision inputs [2, 6]. As the ground truth of Kitti2015 is sparse, we do not compare the semi-supervised method (which requires learning the disparity Markov Random Field). We trained our supervised method on the synthetic garden dataset first and then fine-tuned the pre-trained model on the Kitti2015 dataset. We used 150 labeled samples from '00050_10.png' to '000199_10.png' in the initial training dataset for the supervised method's fine-tuning. The relevant results are shown in Table 8. The same conclusion can be made as with the stereo-monocular and stereo-ToF fusion: the proposed method is accurate and robust. An example result of stereo-stereo fusion is shown in Figure 7. We can see that the proposed method compensates for the weaknesses of the inputs and refines the initial disparity maps effectively. Compared with SGM [6] (0.78 pixels) (This is a more accurate disparity but is calculated only using more reliable pixels. 
On average only 40% of the ground truth pixels are used. If we use all the valid ground truth to calculate its error, it is 22.13 pixels), the fused results are much more dense and accurate. Compared with PSMNet, the proposed method preserves the details better (e.g., tree, sky), which are missing in the ground truth though. Our network can deal with the input (resolution: 384 \\(\\times\\) 1280) at 0.011 s/frame, which is real-time and very fast. Performance on Trimbot2020 Garden DatasetWe tested the proposed network on the real Trimbot2020 Garden dataset, which used a Leica ScanStation P15 to capture a dense 3D Lidar point cloud of the whole real garden and then project it to each camera view to get the dense ground truth disparity maps. A 480 \\(\\times\\) 752 resolution stereo camera was used to get stereo image pairs. The Trimbot2020 Garden dataset contains 1000 labeled samples for training and 250 labeled samples for testing. We trained the network on the synthetic garden dataset first and fine-tuned the network on the real garden dataset. We used Dispnet [4] and FPGA-stereo [3] as inputs. The authors of [4, 3] helped us get the best performance on the real Trimbot2020 Garden dataset as the input to the network. The settings of the network are shown in \"Stereo-stereo Fusion with Real Trimbot Garden Dataset\" in Table 2. The demo in the real outdoors garden is available from [https://youtu.be/2yyoXSwCSeM](https://youtu.be/2yyoXSwCSeM). The relevant error of each algorithm on valid pixels is shown in Table 9. The supervised method and the semi-supervised method have achieved similar top performances compared with the rest. A qualitative result comparison can be seen in Figure 8. The proposed network can deal with the input (resolution: 480 \\(\\times\\) 768) at 125 fps, which is faster than real-time. \\begin{table} \\begin{tabular}{c c c c c c c} \\hline \\hline & \\multicolumn{2}{c}{**Inputs**} & \\multicolumn{2}{c}{**Comparison**} & \\multicolumn{2}{c}{**Our Fused**} \\\\ \\hline **Training Data** & **SGMStereo** & **ToF** & **LC**[10] & **DLF**[11] & **Supervised** & **Semi** \\\\ \\hline Num=40 & 3.73 px & 2.19 px & 2.07 px & 2.06 px & 2.18 px & 2.02 px \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 7: Mean absolute disparity error of ToF-stereo fusion on SYNTH3 (15 test samples). \\begin{table} \\begin{tabular}{c c c c c} \\hline \\hline & \\multicolumn{2}{c}{**Inputs**} & \\multicolumn{2}{c}{**Comparison**} & \\multicolumn{2}{c}{**Our Fused**} \\\\ \\hline **Training Data** & **SGM**[6] & **PSMNet**[2] & **DSF**[13] & **Supervised** \\\\ \\hline Num=150 & 0.78 px & 1.22 px & 1.20 px & 1.17 px \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 8: Mean absolute disparity error of stereo-stereo fusion on Kitti2015 (50 test samples). \\begin{table} \\begin{tabular}{c c c c c} \\hline \\hline & \\multicolumn{2}{c}{**Inputs**} & \\multicolumn{2}{c}{**Comparison**} & \\multicolumn{2}{c}{**Our Fused**} \\\\ \\hline **Training Data** & **FPGA Stereo**[3] & **Dispnet**[4] & **DSF**[13] & **Supervised** & **Semi** \\\\ \\hline Num=1000 & 2.94 px & 1.35 px & 0.83 px & 0.67 px & 0.66 px \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 9: Mean absolute disparity error of stereo-stereo fusion on Trimbot Garden Dataset (270 test samples). ### Sensitivity Analysis All the following experiments are conducted on the Trimbot2020 Garden dataset using the same settings with Performance on Trimbot2020 Garden Dataset in Section 4.2.3 except the control variables. 
The sensitivity analysis is done for the parameter alpha in Equation (1), the number of scales M in Equation (5) and Equation (6), the number of feature maps for the refiner network and discriminator network architectures \\(lg=ld=L\\), and also the parameter momentum in the optimization algorithm Adam. #### 4.3.1 Alpha Table 10 (corresponding to Figure 9) shows the performance change when alpha varies from 0.5 to 1.5 with an interval 0.25. Figure 9 shows the robustness of the proposed algorithm. When alpha = 1, it achieves its best performance. #### 4.3.2 The Number of Scales Table 11 (corresponding to Figure 10) shows the performance change when the number of scales M varies from 1 to 5 with an interval 1. Figure 10 shows that with the increment of the number of scales, the error decrease gradually. Therefore we chose M = 5. #### 4.3.3 The Number of Feature Maps Table 12 (corresponding to Figure 11) shows the performance change when \\(L\\) varies from 6 to the proposed algorithms are significantly better than the DSF algorithm, however it is less clear if there is a significant difference between the supervised and semi-supervised performances. ## 5 Conclusions and Discussion The paper has presented a method to refine disparity maps based on fusing the results from multiple disparity calculation algorithms and other supplementary image information (e.g., intensity, gradient). The proposed method can generalize to perform different fusion tasks and achieves better accuracy compared with several recent fusion algorithms. It could potentially fuse multiple algorithms (not only 2 algorithms as shown in this paper) by concatenating more initial disparity maps in the network's input but this has not been explored. The objective function and network architecture are novel and effective. In addition, the proposed semi-supervised method greatly reduces the amount of ground truth training data needed, while achieving comparable performance with the proposed supervised method. The proposed semi-supervised method can achieve better performance when using the same amount of labeled data as the supervised method plus the additional unlabeled data. In the future, we plan to explore unsupervised disparity fusion with adversarial neural networks using left-right intensity consistency between the two stereo vision cameras. Meanwhile, future exploration on disparity fusion in object space (e.g., [28]) is considered. It will be interesting to compare the disparity fusion in image space versus object space. Additionally, more datasets will be used to explore the generalization of the proposed method, such as using remote sensing datasets that are acquired by Satellite or UAV sensors. ## 6 Acknowledgement The research is fully funded by the TrimBot2020 project [Grant Agreement No. 688007, URL: [http://trimbot2020.webbosting.rug.nl/](http://trimbot2020.webbosting.rug.nl/)] from the European Union Horizon 2020 programme. We thank Chengyang Zhao, Marija Jegorova and Timothy Hospedales for giving us good advice on this paper. We thank our partners (the authors of [4, 3]) for helping us get their best performance as the input to the network. ## Appendix A Description of Trimbot2020 Garden Dataset We make use of the Trimbot Garden 2017 dataset used for the semantic reconstruction challenge of the ICCV 2017 workshop '3D Reconstruction meets Semantics' [29]. The dataset consists of a 3D laser scan of the garden as well as multiple traversals of the robot through the garden (see Figure 13). 
In addition to the challenge dataset (2 camera pairs), we included all 5 camera pairs (Figure 14), obtaining total 1250 sample pairs. Robot poses for the traversals were recorded in the coordinate system of the laser scanner using a Topcon laser tracker. The results were subsequently refined using Structure-from-Motion [30]. The quantitative evaluation is performed only on a subset of pixels which correspond to static non-ground areas (the grass on the ground surface yields noisy GT measurements as well as other moving parts like tree branches). The accuracy of stereo depth map estimates depends on the distance of the cameras to the scene, with the uncertainty growing quadratically with the distance. In contrast, the uncertainty grows only linearly in the disparity space (measured in pixels). As is common [31, 32], we thus measured the accuracy of the stereo algorithms by comparing their estimated disparity values with the ground truth disparity values provided by the laser scanner. \\begin{table} \\begin{tabular}{c c c c c c} \\hline \\hline **Momentum** & **0.1** & **0.3** & **0.5** & **0.7** & **0.9** \\\\ \\hline Supervised & 0.81 & 0.83 & 0.67 & 0.76 & 0.67 \\\\ Semi & 0.87 & 0.90 & 0.66 & 0.66 & 0.70 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 13: Sensitivity Analysis (Momentum). \\begin{table} \\begin{tabular}{c c c c c c c c} \\hline \\hline & \\multicolumn{4}{c}{**Repeated Experiment**} & \\multicolumn{4}{c}{**Statical Result**} \\\\ \\hline **Experiment** & **1** & **2** & **3** & **4** & **5** & **Mean** & **Std.** \\\\ \\hline DSF & 0.83 & 0.89 & 0.86 & 0.87 & 0.85 & 0.86 & 0.02 \\\\ Supervised & 0.73 & 0.77 & 0.67 & 0.70 & 0.72 & 0.72 & 0.04 \\\\ Semi & 0.67 & 0.71 & 0.66 & 0.76 & 0.71 & 0.70 & 0.04 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 14: Statistical Analysis. ## References * [1] Zanuttigh, P.; Marin, G.; Dal Mutto, C.; Dominio, F.; Minto, L.; Cortelazzo, G.M. _Time-Of-Flight and Structured Light Depth Cameras_; Springer: Berlin/Heidelberg, Germany, 2016. * [2] Chang, J.R.; Chen, Y.S. Pyramid Stereo Matching Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18-22 June 2018; pp. 5410-5418. * [3] Honegger, D.; Sattler, T.; Pollefeys, M. Embedded real-time multi-baseline stereo. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May-3 June 2017; pp. 5245-5250. * [4] Mayer, N.; Ilg, E.; Hausser, P.; Fischer, P.; Cremers, D.; Dosovitskiy, A.; Brox, T. A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, Las Vegas, NV, USA, 26 June-1 July 2016; pp. 4040-4048. * [5] Horna, L.; Fisher, R.B. 3D Plane Labeling Stereo Matching with Content Aware Adaptive Windows. In Proceedings of the VISIGRAPP (6: VISAPP), Porto, Portugal, 27 February 2017; pp. 162-171. * [6] Hirschmuller, H. Accurate and efficient stereo processing by semi-global matching and mutual information. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), San Diego, CA, USA, 20-26 June 2005; Volume 2, pp. 807-814. * [7] Godard, C.; Mac Aodha, O.; Brostow, G.J. Unsupervised monocular depth estimation with left-right consistency. In Proceedings of the CVPR, Honolulu, HI, USA, 21-26 July 2017; Volume 2, p. 7. 
* [8] Nair, R.; Ruhl, K.; Lenzen, F.; Meister, S.; Schafer, H.; Garbe, C.S.; Eisemann, M.; Magnor, M.; Kondermann, D. A survey on time-of-flight stereo fusion. In _Time-of-Flight and Depth Imaging. Sensors, Algorithms, and Applications_; Springer: Berlin/Heidelberg, Germany, 2013; pp. 105-127. * [9] Dal Mutto, C.; Zanuttigh, P.; Cortelazzo, G.M. Probabilistic tof and stereo data fusion based on mixed pixels measurement models. _IEEE Trans. Pattern Anal. Mach. Intell._**2015**, _37_, 2260-2272. * [10] Marin, G.; Zanuttigh, P.; Mattoccia, S. Reliable fusion of tof and stereo depth driven by confidence measures. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 8-16 October 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 386-401. * [11] Agresti, G.; Minto, L.; Marin, G.; Zanuttigh, P. Deep Learning for Confidence Information in Stereo and ToF Data Fusion. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Venice, Italy, 22-29 October 2017; pp. 697-705. * [12] Facil, J.M.; Concha, A.; Montesano, L.; Civera, J. Single-View and Multi-View Depth Fusion. _IEEE Robot. Autom. Lett._**2017**, \\(2\\), 1994-2001. * [13] Poggi, M.; Mattoccia, S. Deep stereo fusion: combining multiple disparity hypotheses with deep-learning. In Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25-28 October 2016; pp. 138-147. * [14] Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. In _Advances in Neural Information Processing Systems_; MIT Press: Cambridge, MA, USA, 2014; pp. 2672-2680. * [15] Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5-9 October 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 234-241. * [16] Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Venice, Italy, 22-29 October 2017. * [17] Arjovsky, M.; Chintala, S.; Bottou, L. Wasserstein generative adversarial networks. In Proceedings of the International Conference on Machine Learning, Sydney, Australia, 6-11 August 2017; pp. 214-223. * [18] Gulrajani, I.; Ahmed, F.; Arjovsky, M.; Dumoulin, V.; Courville, A.C. Improved training of wasserstein gans. In _Advances in Neural Information Processing Systems_; MIT Press: Cambridge, MA, USA, 2017, pp. 5769-5779. * [19] Seki, A.; Pollefeys, M. Patch Based Confidence Prediction for Dense Disparity Map. In Proceedings of the BMVC, York, UK, 19-22 September 2016. * [20] Mirza, M.; Osindero, S. Conditional generative adversarial nets. _arXiv_**2014**, arXiv:1411.1784. * [21] Creswell, A.; White, T.; Dumoulin, V.; Arulkumaran, K.; Sengupta, B.; Bharath, A.A. Generative Adversarial Networks: An Overview. _IEEE Signal Process. Mag._**2018**, _35_, 53-65. * [22] Isola, P.; Zhu, J.Y.; Zhou, T.; Efros, A.A. Image-to-Image Translation with Conditional Adversarial Networks. In Proceedings of the CVPR, Honolulu, HI, USA, 21-26 July 2017. * [23] Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 2-12 June 2015; pp. 3431-3440. 
* [24] Radford, A.; Metz, L.; Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. _arXiv_**2015**, arXiv:1511.06434. * [25] Abadi, M.; Barham, P.; Chen, J.; Chen, Z.; Davis, A.; Dean, J.; Devin, M.; Ghemawat, S.; Irving, G.; Isard, M.; et al. TensorFlow: A System for Large-Scale Machine Learning. In Proceedings of the OSDI, Savannah, GA, USA, 2-4 November 2016; Volume 16, pp. 265-283. * [26] Menze, M.; Heipke, C.; Geiger, A. Joint 3D Estimation of Vehicles and Scene Flow. In Proceedings of the ISPRS Workshop on Image Sequence Analysis (ISA), Berlin, Germany, 1-2 December 2015. * [27] Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. _arXiv_**2014**, arXiv:1412.6980. * [28] Fritsch, D.; Klein, M. 3D preservation of buildings-reconstructing the past. _Multimed. Tools Appl._**2018**, _77_, 9153-9170. * [29] Sattler, T.; Tylecek, R.; Brox, T.; Pollefeys, M.; Fisher, R.B. _3D Reconstruction meets Semantics--Reconstruction Challenge_; Technical Report; ICCV Workshop, Venice, Italy: 2017. * [30] Schonberger, J.L.; Frahm, J.M. Structure-From-Motion Revisited. In Proceedings of the CVPR, Las Vegas, NV, USA, 26 June-1 July 2016. * [31] Geiger, A.; Lenz, P.; Urtasun, R. Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, 16-21 June 2012. * [32] Schops, T.; Schonberger, J.L.; Galliani, S.; Sattler, T.; Schindler, K.; Pollefeys, M.; Geiger, A. A Multi-View Stereo Benchmark with High-Resolution Images and Multi-Camera Videos. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21-26 July 2017. Figure 5: A qualitative result with inputs from PLSM [5] and Monodepth [7] in stereo-monocular fusion. The lighter pixels represent bigger disparity errors in figure (**d**,**f**,**h**,**j**). (**a**) Ground Truth; (**b**) Color image; (**c**) PLSM [5]; (**d**) PLSM error; (**e**) Monodepth [7]; (**f**) Monodepth error; (**g**) Supervised 1; (**h**) Supervised 1 error; (**i**) semi 2; (**j**) Semi 2 error. Figure 6: One qualitative result for ToF-stereo fusion with many invalid pixels input. The inputs are from ToF and disparity calculation algorithm using SGM in OpenCV. The lighter pixels in (**d**,**f**,**h**,**j**) represent larger disparity error. (**a**) Ground Truth; (**b**) Color image; (**c**) ToF; (**d**) ToF error; (**e**) SGM OpenCV; (**f**) SGM error; (**g**) Supervised 1; (**h**) Supervised error; (**i**) Semi 2; (**j**) Semi 2 error. Figure 7: We trained our network to fuse the initial disparity maps (**c,e**) into a refined disparity map (**g**) for the same scene (**b**) from the Kitti2015 dataset [26] using our supervised method. (**a**) is the corresponding ground truth. (**d,f,h**) are the errors of (**c,e**,g**). The lighter pixels have bigger disparity error in (**d,f,h**). (**a**) Ground Truth; (**b**) Scene; (**c**) Input Disparity 1: SGM [6]; (**d**) Input Disparity 1 Error: SGM [6]; (**e**) Input Disparity 2: PSMNet [2]; (**f**) Input Disparity 2 Error: PSMNet [2]; (**g**) Refined Disparity; (**h**) Refined Disparity Error. Figure 8: _Cont._ Figure 8: One qualitative result for stereo-stereo fusion in real Trimbot2020 Garden Dataset. The lighter pixels in (**d**,**f**,**h**,**j**) represent larger disparity error. 
(**a**) ground truth; (**b**) intensity image; (**c**) FPGA SGM; (**d**) FPGA SGM error; (**e**) DispNet; (**f**) DispNet error; (**g**) Supervised 1; (**h**) error 1; (**i**) Semi 2; (**j**) error 2. Figure 9: Sensitivity Analysis (Alpha). Figure 10: Sensitivity Analysis (M). Figure 11: Sensitivity Analysis (L). Figure 12: Sensitivity Analysis (Momentum). Figure 13: Trimbot Garden 2017 GT dataset [29]. **Above**: point cloud with color-encoded height. **Below**: semantic point cloud with trajectories (magenta line) and camera centers (yellow). Figure 14: Trimbot Garden 2017 GT dataset [29]. **Left**: Pentagonal camera rig mounted on the robot with five stereo pairs. **Right**: Top view of camera rig with test set pairs (green field of view) and training set pairs (yellow field of view).
Refining raw disparity maps from different algorithms to exploit their complementary advantages is still challenging. Uncertainty estimation and complex disparity relationships among pixels limit the accuracy and robustness of existing methods, and there is no standard method for fusing different kinds of depth data. In this paper, we introduce a new method to fuse disparity maps from different sources, while incorporating supplementary information (intensity, gradient, etc.) into a refiner network to better refine raw disparity inputs. A discriminator network classifies disparities at different receptive fields and scales. Assuming a Markov Random Field for the refined disparity map produces better estimates of the true disparity distribution. Both fully supervised and semi-supervised versions of the algorithm are proposed. The approach includes a more robust loss function to inpaint invalid disparity values and requires much less labeled data to train in the semi-supervised learning mode. The algorithm can be generalized to fuse depths from different kinds of depth sources. Experiments explored different fusion opportunities: stereo-monocular fusion, stereo-ToF fusion and stereo-stereo fusion. The experiments show the superiority of the proposed algorithm compared with the most recent algorithms on public synthetic datasets (Scene Flow, SYNTH3, our synthetic garden dataset) and real datasets (Kitti2015 dataset and Trimbot2020 Garden dataset). Depth fusion, Disparity fusion, Stereo Vision, Monocular Vision, Time of Flight
# On Enhancing Ground Surface Detection from Sparse Lidar Point Cloud

Bo Li\\({}^{1}\\)

*This work was done at TrunkTech. Contact: [email protected]

## I Introduction

Recent developments in lidar products have brought significant progress to autonomous driving. Velodyne has released the dense-beam VLS-128, and the price of the VLP-16 has been largely reduced. New manufacturers like Hesai and RoboSense have kept bringing products with 16, 32 or 40 beams to the market over the past two years. Based on the application scenario, autonomous driving systems can accordingly select suitable lidars for the perception module. For example, the HDL-64 is replaced by 32- or 40-beam lidars for cost reduction in many recent passenger vehicle systems. In low-speed applications like small delivery vehicles, the VLP-16 can be used for further cost compression. Along with the development of lidar hardware, research on environment perception from lidar point cloud data draws increasing attention. Besides the traditional geometry based object detection approaches, deep learning based object detection techniques are also transplanted to point cloud data and have achieved promising performance [11, 19]. In addition to object detection, ground surface detection from point clouds is also an important topic in autonomous driving. The value of ground surface detection comes from two aspects: 1) ground detection can be interpreted as the dual problem of generic obstacle detection; 2) the segmented ground surface can be directly used as the drivable area for vehicle motion planning. A variety of previous works address ground surface detection, but most are designed to work within regions with a high density of points (usually within \\(30\\)m from a HDL-64). Such approaches are insufficient for many autonomous driving applications. Passenger vehicles running at \\(70\\)km/h should detect obstacles at least \\(40\\sim 50\\)m away, and low-speed delivery vehicles up to \\(30\\)m. Note that the point density of a VLP-16 at \\(30\\)m is similar to that at \\(40\\sim 50\\)m for a HDL-64. Many previous approaches do not apply to such density. To solve this problem, we propose an approach to detect the ground surface in regions with sparse points.

## II Related Works

We briefly revisit existing approaches to ground surface detection from point clouds by roughly categorizing them as model-fitting and labeling methods.

_Model-fitting:_ A typical series of previous approaches fit the ground surface with planes. Vanilla RANSAC based plane fitting can be enhanced by taking point-wise normals into account for hypothesis verification [17]. Chen et al. [4] convert plane fitting to line fitting in a dual space for better performance. By assuming known lidar height and longitudinal road slope, the ground plane representation can be reduced to one parameter [7]. However, in realistic applications, the ground surface is usually not perfectly planar, in which case a single model cannot fit well. Zermas et al. [20] uniformly partition the ground surface into three parts and fit planes to each part. Himmelsbach [10] divides the ground surface into an angular grid and fits lines instead of planes for each angular direction. These improvements increase the fitting flexibility, but some partition segments might contain too few points to fit when the point cloud is sparse.

_Labeling:_ The labeling methods exploit local features and do not rely on a global geometry model. Petrovskaya and Thrun [15], Moosmann et al.
[13], Bogoslavskyi and Stachniss [2] search for disconnections between points and label ground points by region growing. Nitsch et al. [14] use point-wise normals and normal quality as classification features. If the lidar is assumed to be mounted perfectly vertically, the adjacent ring distance can also be used as a classification feature [6]. These features can be extracted from dense point cloud regions, for example HDL-64 within \\(30\\)m or VLP-16 within \\(10\\)m. However, it is difficult to extract such features from the sparse point cloud regions studied in this paper. Inspired by the recent development of deep learning on point cloud data, Convolutional Neural Networks (CNNs) have also been used for feature extraction and ground segmentation [18, 12]. Besides feature design, higher order inference approaches are also studied to guarantee the smoothness of the ground surface. Representative techniques include Markov Random Field (MRF) [21, 9, 3], Conditional Random Field (CRF) [16] and Gaussian Process (GP) [5, 8]. Rummelhard et al. [16], Douillard et al. [8] use height as the classification feature in the unary term. Zhang et al. [21] rely on significant local height changes to define the unary cost for obstacles. Byun et al. [3], Guo et al. [9], Chen et al. [5] build features based on local surface gradients. Douillard et al. [8] combine plane fitting and GP to overcome ground point noise. Besides the difficulty of normal/gradient estimation in sparse point clouds, one common problem for higher order inference approaches is the ambiguity between sloped ground and far obstacles in very sparse point clouds.

Fig. 1: Toy examples of some plane fitting approaches illustrated in a 2D side view. Black lines denote planes. Grey data points are colored green if taken as inliers. (a) Raw data. (b) Plane fitted by vanilla RANSAC. (c) Same as (b) but with a larger inlier threshold. (d) Two planes greedily fitted by vanilla RANSAC. (e) Plane fitted by RANSAC with tangent based inlier verification. (f) Two disjoint planes fitted by RANSAC with tangent based inlier verification and optimal partition search.

## III Approaches

### _Tangent Based Inlier Verification_

In a vanilla RANSAC based plane detection approach, inliers are verified according to the point-plane distance by \\[|\\mathbf{n}_{\\text{pl}}^{\\top}\\mathbf{x}_{i}+d|<\\epsilon \\tag{1}\\] where \\((\\mathbf{n}_{\\text{pl}}^{\\top},d)\\) are the plane coefficients, \\(\\mathbf{x}_{i}\\) the point coordinates, and \\(\\epsilon\\) the inlier threshold. Such a criterion is not sufficient to reject false positives in many cases. Figure 1b shows an example where the fitted plane is skewed by false-positive inliers. The false-positive inliers belong to the right vertical wall plane but are close to the fitted ground plane. To avoid such false-positive inliers, verification can be enhanced if point-wise normals are available [17]. This criterion can be denoted as: \\[\\begin{split}|\\mathbf{n}_{\\text{pl}}^{\\top}\\mathbf{x}_{i}+d|&<\\epsilon\\\\ \\arccos(\\mathbf{n}_{\\text{pl}}^{\\top}\\mathbf{n}_{i})&<\\delta\\end{split} \\tag{2}\\] where \\(\\mathbf{n}_{i}\\) denotes the normal estimated at point \\(\\mathbf{x}_{i}\\) and \\(\\delta\\) the angle threshold for the normal difference. A variety of previous approaches have been proposed to estimate point normals in point clouds, e.g. [1]. As mentioned in Section II, point-wise normal estimation is not always reliable for sparse point clouds captured by lidar in autonomous driving.
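To make the two verification criteria above concrete, the following minimal sketch (NumPy assumed; the function, array shapes, and thresholds are illustrative and not taken from the implementation described later) applies the distance test of (1) and, when per-point normals are available, the additional angle test of (2):

```python
import numpy as np

def plane_inliers(points, plane, eps=0.2, normals=None, delta=np.deg2rad(10.0)):
    """Eq. (1): |n_pl . x_i + d| < eps; optionally Eq. (2): angle between the
    point normal and the plane normal below delta. points, normals: (N, 3)."""
    n_pl, d = plane                                   # unit plane normal and offset
    dist_ok = np.abs(points @ n_pl + d) < eps         # Eq. (1)
    if normals is None:
        return dist_ok
    # abs() tolerates normals whose orientation is flipped by 180 degrees
    cos_ang = np.clip(np.abs(normals @ n_pl), 0.0, 1.0)
    return dist_ok & (np.arccos(cos_ang) < delta)     # Eq. (2)
```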
In this paper we take a point cloud captured by a Velodyne VLP-16 as a representative example, as shown in Figure 2. Points in the green plane patch belong to the ground plane, but it is difficult to estimate their normals. The neighboring points in the green patch are approximately distributed on a straight line. Thus it is ill-posed to estimate a plane and its normal locally from these points. In this paper we enhance the inlier verification in sparse point clouds without estimating point-wise normals. Consider the well-known 2D range scan organization of lidar point clouds used in much previous research [2, 1]. We observe that regardless of the sparsity in the vertical beam direction, the point density along the horizontal scan direction is always high. For a Velodyne HDL-64 or VLP-16 spinning at 10Hz, the azimuth angle difference between adjacent points from the same beam is less than \\(0.2^{\\circ}\\). If we interpret the points produced by the same beam as a curve, such density is sufficient to estimate the tangent of the curve with good precision. Furthermore, since this curve is locally embedded on an object (ground) surface, the estimated tangent vector is also one of the tangents of the surface plane. Note that for a lidar with high beam resolution it is also possible to estimate the tangent along the vertical beam direction, and thus the point normal is determined. Previous works like [1, 9] make use of this property to accelerate normal/gradient estimation in range scans. With only one valid tangent on a plane in our case, we can relax the RANSAC criterion (2) based on the perpendicularity between the tangent and the plane normal to estimate: \\[\\begin{split}|\\mathbf{n}_{\\text{pl}}^{\\top}\\mathbf{x}_{i}+d|&<\\epsilon\\\\ |\\pi/2-\\arccos(\\mathbf{n}_{\\text{pl}}^{\\top}\\mathbf{t}_{i})|&<\\delta\\end{split} \\tag{3}\\] where \\(\\mathbf{t}_{i}\\) is the estimated curve tangent at point \\(\\mathbf{x}_{i}\\). \\(\\mathbf{t}_{i}\\) can be estimated by numerical differentiation of points from the same beam. Figure 1e provides a toy example of RANSAC with tangent based outlier rejection in a 2D view. Points on the left ground have a different tangent direction from those on the right ground or the vertical wall, which prevents the fitted plane from compromising over different surfaces.

Fig. 2: A sample point cloud captured by a Velodyne VLP-16 lidar. Yellow axes denote the scan direction corresponding to the column direction in the range image and the beam direction corresponding to the row direction.

### _Disjoint Multiple Plane Fitting_

Leveraging local point-wise tangents for inlier verification enhances the robustness of RANSAC plane fitting. However, as mentioned in Section II, fitting one plane is still not sufficient to precisely model the ground surface of a real road, which is not perfectly planar. Take Figure 1 as an example again. Fitting a dominant ground plane with a small inlier threshold leaves the remaining ground points as false negatives, which is the case shown in Figures 1b and 1e. On the other hand, increasing the inlier threshold admits outliers as false positives, which is the case shown in Figure 1c. For each laser emission of a VLP-16, only \\(6\\sim 7\\) out of \\(16\\) points will hit the ground. Thus partitioning the point cloud into fan-like thin segments as proposed in [10, 5] results in too few points for further modeling. In addition, it is difficult for MRF or GP to distinguish whether the elevation difference between far neighboring points is due to an obstacle or a gradual increase of the ground elevation.
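A corresponding sketch of the relaxed, tangent-based test of (3) is given below (again a NumPy illustration under the assumption that the point cloud is already organized per beam ring; names and thresholds are hypothetical):

```python
import numpy as np

def ring_tangents(ring_points):
    """Per-point tangents along one beam (ring): central differences of
    neighboring points in scan order. np.roll wraps around, which matches
    the closed 360-degree sweep of a spinning lidar. ring_points: (N, 3)."""
    diff = np.roll(ring_points, -1, axis=0) - np.roll(ring_points, 1, axis=0)
    return diff / (np.linalg.norm(diff, axis=1, keepdims=True) + 1e-9)

def tangent_inliers(points, tangents, plane, eps=0.2, delta=np.deg2rad(10.0)):
    """Relaxed test of Eq. (3): point-plane distance plus near-perpendicularity
    of the plane normal and the local curve tangent."""
    n_pl, d = plane
    dist_ok = np.abs(points @ n_pl + d) < eps
    cos_ang = np.clip(tangents @ n_pl, -1.0, 1.0)
    perp_ok = np.abs(np.pi / 2 - np.arccos(cos_ang)) < delta
    return dist_ok & perp_ok
```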
In order to model a non-planar road surface without introducing too much flexibility or too many hyper-parameters, we propose to fit the ground surface points with multiple horizontally disjoint plane segments. The property of disjointness means that the ground surface at every horizontal position has only one layer. Figure 1d provides a simple example of naively fitting multiple planes in a greedy strategy. Points of two different planes at the right of the scene are taken as inliers respectively, which causes the ground surface to have two ambiguous layers. Zermas et al. [20] partition the point cloud at fixed distances along the longitudinal direction and fit planes independently for disjointness. However, it is impossible to find a fixed partition which is suitable for different surface deformations. Instead of fixing the partition, we propose to search for the best partition to fit multiple disjoint planes for each point cloud respectively. We first illustrate a general formulation of this problem and its computational complexity. Suppose we have \\(P\\) partition cases over the horizontal plane and the point cloud contains \\(N\\) points. The \\(p\\)-th partition divides the horizontal plane into \\(S_{p}\\) segments. Based on the RANSAC plane fitting strategy, we first generate \\(M\\) plane hypotheses. The general formulation of our problem searches for the best partition and the best plane for each segment which maximize the total inlier number: \\[\\begin{split}&\\max_{0\\leq p<P}\\max_{m_{0},\\cdots,m_{S_{p}-1}} \\sum_{s=0}^{S_{p}-1}\\sum_{i=0}^{N-1}f(i;m_{s})\\wedge g_{p}(i;s)\\\\ &\\qquad=\\max_{0\\leq p<P}\\sum_{s=0}^{S_{p}-1}\\max_{m_{s}}\\sum_{i=0}^{N-1}f(i;m_{s})\\wedge g_{p}(i;s)\\end{split} \\tag{4}\\] In the above formulation, \\(s\\) is used to index a segment region on the horizontal plane given a partition indexed by \\(p\\), and \\(m_{s}\\in\\{0,\\cdots,M-1\\}\\) is used to index the plane hypothesis assigned to segment \\(s\\). \\(f(i;m_{s})\\) denotes whether the \\(i\\)-th point is an inlier of plane \\(m_{s}\\), and \\(g_{p}(i;s)\\) denotes whether this point is within segment region \\(s\\) of partition \\(p\\): \\[f(i;m)=\\begin{cases}1&\\text{point $i$ is an inlier for plane $m$}\\\\ 0&\\text{otherwise}\\end{cases} \\tag{5}\\] (5) can be computed in constant time. Denote \\(S\\) as an upper bound of \\(S_{p}\\). The overall computational complexity of an exhaustive search over objective (4) is \\(O(PSMN)\\). Such complexity is not tractable for real-world applications. Specifically, even if we limit the segment number and discretize partition cases, the number of general partition cases on a given plane region is still not tractable. We reduce the complexity by first assuming that partitions over the horizontal plane are always aligned with the \\(xy\\) coordinate axes, s.t. each partition segment is a rectangular area aligned with the \\(xy\\) axes. This assumption helps simplify the inner part of (4), denoted as \\(h_{p}(m,s)=\\sum_{i=0}^{N-1}f(i;m_{s})\\wedge g_{p}(i;s)\\) for short. Bound the whole point cloud within a horizontal square, which is uniformly divided into \\(B\\times B\\) bins. Given a plane \\(m\\), denote \\(b_{m}(c,r)\\) with \\(0\\leq c,r<B\\) as the number of inliers of plane \\(m\\) which fall in the bin \\((c,r)\\). Denote \\(I_{m}(c,r)\\) as the integral image of \\(b_{m}(c,r)\\). If we round a segment \\(s\\) to its closest rectangle aligned with the bin grid, \\(h_{p}(m,s)\\) can be easily approximated from \\(I_{m}\\).
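The bin-count and integral-image precomputation described here can be sketched as follows (a NumPy illustration of our reading of this step, not the authors' C++ implementation; grid bounds, bin count, and names are placeholders); the rectangle sums needed for \\(h_{p}(m,s)\\) are then read off as formalized in Equation (6) below.

```python
import numpy as np

def inlier_integral_image(xy, inlier_mask, bounds, B=80):
    """b_m: per-bin inlier counts of one plane hypothesis over a B x B grid
    bounding the scene in `bounds` = (xmin, ymin, xmax, ymax);
    returns its 2D integral image I_m."""
    xmin, ymin, xmax, ymax = bounds
    c = np.clip(((xy[:, 0] - xmin) / (xmax - xmin) * B).astype(int), 0, B - 1)
    r = np.clip(((xy[:, 1] - ymin) / (ymax - ymin) * B).astype(int), 0, B - 1)
    b = np.zeros((B, B), dtype=np.int64)
    np.add.at(b, (c[inlier_mask], r[inlier_mask]), 1)  # scatter-add inlier counts
    return b.cumsum(axis=0).cumsum(axis=1)             # integral image I_m
```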
Denote the rectangle segment \\(s\\) by its top left and bottom right corners \\((c_{0},r_{0})\\), \\((c_{1},r_{1})\\). \\(h_{p}(m,s)\\) is obtained by: \\[h_{p}(m,s)\\approx I_{m}(c_{1},r_{1})+I_{m}(c_{0},r_{0})-I_{m}(c_{0},r_{1})-I_{m}(c_{1},r_{0}) \\tag{6}\\] With this approximation, we reduce the complexity to \\(O(PSM)\\) given precomputed \\(I_{m}\\). The complexity of its precomputation is \\(O((N+B^{2})M)\\) (see the first for loop in Algorithm 1). Next we consider the partition formulation. Figure 3 shows some of the simplest partition series aligned with the \\(xy\\) axes. Note that [20] can be interpreted as using the second series of partitions (three parallel rectangles) in Figure 3 and fixing the partition position, s.t. \\(P=1\\) and \\(S=3\\), for complexity reduction. In the proposed approach, we use the fourth, cross-shape-partition series shown in Figure 3. This partition series compromises well over different surface deformations without introducing too much flexibility. Thus \\(S\\) is fixed as \\(4\\). Since we approximate partition segments to align with the \\(B\\times B\\) bins, the cross-shape-partition series contains \\(B^{2}\\) partition cases, i.e. \\(P=B^{2}\\). These \\(B^{2}\\) cases are equivalent to enumerating the cutting cross over the \\(B^{2}\\) bin centers. Therefore, the complexity \\(O(PSM)\\) is further reduced to \\(O(B^{2}M)\\), omitting the constant \\(S\\) (see the second and third for loop in Algorithm 1). Combining with the precomputation procedure, the overall complexity is bounded by \\(O((N+B^{2})M)\\). A Velodyne VLP-16 spinning at 10Hz generally produces approximately \\(30\\)k points per frame. For a \\(80\\)m \\(\\times\\)\\(80\\)m point cloud scene, a bin size of \\(1\\)m results in \\(B^{2}=6.4\\)k bins and provides good partition precision. Algorithm 1 illustrates the pseudo-code for the overall procedure. Note that to avoid overfitting on small outlier planes, we restrict each segment to have at least \\(T\\) inliers.

Fig. 3: Samples of partition series. In this paper we restrict that partitions are always made up of rectangles. We name the 4-th partition series the cross-shape-partition. See Section III-B.

```
Sample \\(M\\) RANSAC plane hypotheses
\\(b[:,:]\\gets 0\\)
for \\(m\\gets 0\\) to \\(M-1\\) do
  for \\(i\\gets 0\\) to \\(N-1\\) do
    \\((c,r)\\leftarrow\\) belonging bin index for \\(\\mathbf{x}_{i}\\)
    \\(b[c,r]\\gets b[c,r]+f(i;m)\\)
  end
  \\(I_{m}\\leftarrow\\) integral image of \\(b_{m}\\)
end
\\(S\\gets 4\\)
\\(A[:,:,:,:]\\gets 0\\)
for \\(0\\leq c<B,0\\leq r<B\\) do
  for \\(m\\gets 0\\) to \\(M-1\\) do
    for \\(s\\gets 0\\) to \\(S-1\\) do
      \\(p\\gets r*B+c\\)
      \\(A[c,r,s,m]\\gets h_{p}(m,s)\\)
    end for
  end for
end for
for \\(0\\leq c<B,0\\leq r<B\\) do
  \\(m[:]\\gets 0\\)
  for \\(s\\gets 0\\) to \\(S-1\\) do
    \\(m[s]\\leftarrow\\operatorname*{argmax}_{m}A[c,r,s,m]\\)
  end for
  curr_sum \\(\\leftarrow\\sum_{s}A(c,r,s,m[s])\\)
  if curr_sum \\(>\\) best_sum \\(\\land\\min m[s]>T\\) then
    best_sum \\(\\leftarrow\\) curr_sum
    update best \\(c,r,m[:]\\)
  end if
end for
```
**Algorithm 1** RANSAC based ground surface fitting

## IV Experiments

In our experiments we compare the proposed approach with three representative previous works.

* Vanilla RANSAC fits a single plane over the whole point cloud.
* MRF implements the Markov Random Field (MRF) approach proposed in [21], where an MRF is constructed over a cylindrical grid map to estimate the ground height.
* LPR implements the ground surface fitting module proposed in [20], where the Lowest Point Representatives (LPR) are used to initialize iterative ground plane fitting for a fixed partition.

### _Implementation Details_

In the experiments, the above four approaches are implemented in C++ without parallel optimization.

* A point cloud captured by a Velodyne VLP-16 within a radius of \\(40\\)m is used as input. The extrinsics between the lidar and the vehicle frame are known and used to counteract the lidar mounting error.
* The point cloud is sampled over a horizontal grid with resolution \\(0.1\\)m, producing approximately \\(8\\)k points.
* Vanilla RANSAC and our proposed method both generate \\(M=200\\) plane hypotheses.
* The inlier threshold of the point-plane distance for Vanilla RANSAC, LPR and our proposed method is \\(0.2\\)m. For MRF, points which are \\(0.2\\)m higher than the estimated ground height are classified as obstacles.
* The proposed method uses a grid with size \\(B=80\\), i.e. a bin size of \\(1\\)m.
* MRF uses the same cylindrical grid map resolution, cost function parameters and number of iterations as [21]. The unrevealed binary cost weight and binary cost upper bound are selected empirically.
* LPR uses the same algorithm parameters, including the number of plane fitting iterations and the candidate number of LPR, as [20]. The scene is uniformly divided into three parts along the longitudinal direction.

### _Results Analysis_

Figure 4 shows some representative results of the compared approaches. The proposed approach shows a promising performance enhancement on sparse point clouds. We discuss the following cases marked by yellow boxes which are considered difficult for previous approaches.

_Far obstacles:_ Box 1 and Box 2 mark typical examples of far obstacles approximately \\(35\\)m away from the VLP-16. In such cases, obstacles (e.g. a short scan segment on a vehicle) are higher than the ground plane but have no nearby neighboring ground points. Thus MRF cannot distinguish whether far points are from obstacles or a part of sloped ground. Vanilla RANSAC also mistakes far obstacles for ground points in Box 1 and Box 2. This failure is similar to the case of Figure 1c.

_Sloped ground region:_ Box 3 shows an example of a sloped lane (with a gate). Vanilla RANSAC with a single plane fails to fit the sloped region, as was expected. The fixed partition of LPR fails to handle such cases either.

_Crowded obstacles:_ Row 4 shows an example of cluttered obstacles in traffic jams. Only a small portion of the ground surface is visible to the lidar. MRF detects the ground surface correctly but generates false positives on vehicles (Box 4.1). Vanilla RANSAC misses one ground region (Box 4.2) as the fitted plane is polluted by false-positive ground points.

_Low fences:_ Box 5 shows another case where MRF fails. Similar to far obstacles, when there are no ground points visible in the neighborhood, MRF is confused whether the points are from low obstacles or sloped ground. Vanilla RANSAC also produces false positives in Box 5 due to polluted inliers.

_Non-horizontal ground:_ We note that LPR fails to detect some obvious ground (Rows 1, 2, 3 and 5). This is because the road is not perfectly horizontal. The Lowest Point Representatives are all distributed in a small region at one side. Thus the plane overfitted from this small region may not fit the rest of the ground.

### _Runtime Comparison_

Table I compares the average runtime of the four approaches in our experiments. Vanilla RANSAC and LPR similarly have the best time efficiency.
Our proposed approach introduces acceptable computational burden caused by tangent computation, integral image computation and partition search. MRF is the slowest with similar runtime as reported in [21]. Note that all approaches can be further accelerated via SIMD or GPU. ## V Conclusions Our approach proposes two techniques to enhance the ground surface detection in sparse point cloud regions. Tangent based inlier verification naturally applies to RANSAC schemes for point cloud captured by lidars with different beam resolution. Disjoint multiple plane fitting adaptively partitions ground surface s.t. segments can be optimally fitted by disjoint planes. We show that the proposed approach effectively improves ground surface detection for sparse point cloud. ## References * [1] H. Badino, D. Huber, Y. Park, and T. Kanade, \"Fast and accurate computation of surface normals from range images,\" in _Robotics and Automation (ICRA), 2011 IEEE International Conference on_. IEEE, 2011. * [2] I. Bogoslavskyi and C. Stachniss, \"Fast range image-based segmentation of sparse 3D laser scans for online operation,\" in _IEEE International Conference on Intelligent Robots and Systems_, 2016. * [3] J. Byun, K. in Na, B. su Seo, and M. Roh, \"Drivable road detection with 3D point clouds based on the MRF for intelligent vehicle,\" _Springer Tracts in Advanced Robotics_, 2015. * IEEE International Conference on Robotics and Automation_, 2017. * [5] T. Chen, B. Dai, R. Wang, and D. Liu, \"Gaussian-Process-Based Real-Time Ground Segmentation for Autonomous Land Vehicles,\" _Journal of Intelligent and Robotic Systems: Theory and Applications_, 2014. * [6] J. Choi, S. Ulbrich, B. Lichte, and M. Maurer, \"Multi-Target Tracking using a 3D-Lidar sensor for autonomous vehicles,\" _IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC_, 2013. * [7] S. Choi, J. Park, J. Byun, and W. Yu, \"Robust ground plane detection from 3D point clouds,\" _International Conference on Control, Automation and Systems_, 2014. * IEEE International Conference on Robotics and Automation_, 2011. * [9] C. Guo, W. Sato, L. Han, S. Mita, and D. McAllester, \"Graph-based 2D road representation of 3D point clouds for intelligent vehicles,\" _IEEE Intelligent Vehicles Symposium, Proceedings_, 2011. * [10] M. Himmelbach, \"Fast Segmentation of 3D Point Clouds for Ground Vehicles,\" _2018 IEEE Intelligent Vehicles Symposium (IV)_, 2010. * [11] B. Li, T. Zhang, and T. Xia, \"Vehicle Detection from 3D Lidar Using Fully Convolutional Network,\" _Robotics: Science and Systems_, 2016. * [12] Y. Lyu, L. Bai, and X. Huang, \"Real-time road segmentation using lidar data processing on an fpga,\" in _Circuits and Systems (ISCAS), 2018 IEEE International Symposium on_. IEEE, 2018. * [13] F. Moosmann, O. Pink, and C. Stiller, \"Segmentation of 3D lidar data in non-flat urban environments using a local convexity criterion,\" _IEEE Intelligent Vehicles Symposium, Proceedings_, 2009. * [14] J. Nitsch, J. Aguilar, J. Nieto, R. Siegwart, M. Schmidt, and C. Cadena, \"3D Ground Point Classification for Automotive Scenarios,\" _2018 21st International Conference on Intelligent Transportation Systems (ITSC)_, 2018. * [15] A. Petrovskaya and S. Thrun, \"Model based vehicle tracking for autonomous driving in urban environments,\" _Proc. Robotics: Science and Systems_, 2008. * [16] L. Rummelhard, A. Paigwar, A. Negre, and C. 
Laugier, "Ground estimation and point cloud segmentation using spatiotemporal conditional random field," in _Intelligent Vehicles Symposium (IV), 2017 IEEE_. IEEE, 2017. * [17] R. Schnabel, R. Wahl, and R. Klein, "Efficient RANSAC for Point-Cloud Shape Detection," _Computer Graphics Forum_, June 2007. * [18] M. Velas, M. Spanel, M. Hradis, and A. Herout, "Cnn for very fast ground segmentation in velodyne lidar data," in _Autonomous Robot Systems and Competitions (ICARSC), 2018 IEEE International Conference on_. IEEE, 2018. * [19] Y. Yan, Y. Mao, and B. Li, "Second: Sparsely embedded convolutional detection," _Sensors_, 2018. * [20] Zermas et al., in _IEEE International Conference on Robotics and Automation_, 2017. * [21] Zhang et al., in _2015 International Conference on 3D Vision, 3DV 2015_, 2015. Fig. 4: Sample results produced by the approaches in our experiments. Detected ground points are labeled green. See Section IV-B for analysis.
Ground surface detection in point clouds is widely used as a key module in autonomous driving systems. Different from previous approaches which are mostly developed for lidars with high beam resolution, e.g. Velodyne HDL-64, this paper proposes ground detection techniques applicable to much sparser point clouds captured by lidars with low beam resolution, e.g. Velodyne VLP-16. The approach is based on the RANSAC scheme of plane fitting. Inlier verification for plane hypotheses is enhanced by exploiting the point-wise tangent, which is a local feature that can be computed regardless of the density of lidar beams. A ground surface which is not perfectly planar is fitted by multiple (specifically 4 in our implementation) disjoint plane regions. By assuming these plane regions to be rectangular and exploiting the integral image technique, our approach approximately finds the optimal region partition and plane hypotheses under the RANSAC scheme with real-time computational complexity.
# A Generalized Multi-Task Learning Approach to Stereo DSM Filtering in Urban Areas Lukas Liebel\\({}^{1}\\) Corresponding Author: [email protected] Ksenia Bittner\\({}^{2}\\) Marco Korner\\({}^{1}\\) \\({}^{1}\\) Computer Vision Research Group, Chair of Remote Sensing Technology Technical University of Munich (TUM), Germany \\({}^{2}\\) Photogrammetry and Image Analysis, Remote Sensing Technology Institute German Aerospace Center (DLR), Germany Corresponding Author: [email protected] ## 1 Introduction Digital surface models (DSMs) provide insight into urban structures and are, thus, a valuable source of information for authorities, industries, and non-profit organizations alike. Numerous applications, such as urban planning, disaster response, or environmental simulation, depend on such 3D elevation models at global scale. However, availability is heavily restricted due to high expenses for acquiring suitable source data through light detection and ranging (LiDAR) measurements or even in-situ surveying, especially in less developed regions. Photogrammetric stereo reconstruction from unmanned aerial vehicles (UAVs), aerial, or even high-resolution satellite imagery provides similar results, while merely relying on comparably inexpensive data and automatic processing. The resulting height maps are, however, affected by noise and inaccuracies due to matching errors, temporal changes, or interpolation. Most relevant for many applications are precise building shapes which are, unfortunately, often obscured beyond recognition. Furthermore, vegetation, that is of lesser interest and even obstructive for most applications, heavily influences height maps and often covers or distorts the appearance of adjacent buildings. Removing noise and vegetation from automatically produced stereo DSMs while simultaneously refining building shapes yields elevation maps that can serve as a simple geometric proxy for city models in level of detail 2 (LOD2)[39]. Methods from computer vision first allowed for successfully tackling this task by making use of state-of-the-art machine learning techniques such as convolutional neural networks (CNNs) and conditional generative adversarial networks (cGANs), as well as training data generated from digital elevation models (DEMs) and CityGML models [39], available to the public through open data initiatives [8]. Harnessing the abundance of data provided by an ever-growing number of high-resolution spaceborne imaging sensors and open data sources allows employing high-capacity models that extract usable information from vast datasets. At the same time, they are able to deal with large intra-dataset variance, in particular, imperfect labels stemming from temporal and spatial offsets between data from different sources. Finally, models that have been trained on such datasets and have undergone rigorous and genuine evaluation procedures on independent validation and test data are expected to generalize well to large and various areas. The results of first approaches in this field were promising but still suffered from improperly removed vegetation and distorted or missing buildings. These problems can effectivelybe tackled by multi-task learning approaches that use additional supervision for identifying built-up areas through the secondary task of building footprint segmentation and roof type classification. This auxiliary task mainly serves as a regularization measure that supports optimization of the main task by restricting the space of possible solutions [16]. 
Since the tasks are inherently related, intuitively, training a joint model for multiple tasks can increase the performance of all tasks. An approach making use of this technique for DSM filtering has recently been proposed [3]. Experimental results proved the effectiveness of the method, yet there remain some notable issues in the prediction results. As they still hint at insufficient utilization of the available supervision as a regularization measure, we revise this approach with a more comprehensive abstracted or generalized multi-task concept, illustrated in Figure 1. To this end, we designed a modular encoder-decoder model with a single common encoder and multiple task-specific decoders. By sharing a major portion of the network parameters, we allow for the optimization of a common representation that is valuable for all tasks. Multiple objectives per task enable the simultaneous evaluation of different aspects, such as geometrical properties, semantics, or conditional adversarial terms. Secondary tasks, such as roof type segmentation, mainly serve as a regularization measure to the highly non-convex and hard optimization problem of the main regression task. All parts of this generic model can be deactivated on demand and the architecture of encoder-decoder blocks can be selected and adjusted individually. Finally, automatic balancing of the single-objective and single-task terms in the multi-objective multi-task loss function is achieved by optimizing weighting terms along with the network parameters based on an uncertainty measure. This allows for the utilization of arbitrarily many terms in the loss function without requiring manual balancing of their individual contribution to the final objective, subject to optimization. As a consequence, our approach scales well to more tasks and objectives easily. We expect the proposed system to exploit and leverage provided supervision efficiently and effectively. Prior methods [8; 7; 3] can be interpreted as specific instances of our abstract general framework. In extensive experiments, we evaluated the suitability of selected instances, which utilize numerous combinations of objectives and decoder architectures, towards improving the predictive performance on unseen data with a focus on mending the remaining systematic problems of previous approaches. Our main contributions are three-fold. 1. We propose a modular multi-task framework consisting of an encoder-decoder model with a shared encoder and task-specific decoders that can optionally be extended by various objectives and tasks. Multi-task approaches proposed in prior work can be modeled as instances of this generalized framework. 2. We introduce the automatic and dynamic balancing of objective terms in the multi-task loss functions by learned weights to the application of DSM filtering. 3. We present novel instances of our framework that are able to undercut the remaining regression error in the state of the art for DSM filtering by as much as 34 %. The remainder of this paper is structured as follows. We review the state of the art in both, DSM filtering and multi-task learning in the next section. Our approach to tackling the remaining problems through a modular encoder-decoder system with multiple objectives and learned balancing terms is presented detail in Section 3. 
The properties of the datasets used in our experiments are specified in Section 4 along with the covered study areas and the procedure for deriving pairs of input and ground-truth samples from remote sensing imagery and virtual 3D city models. In Section 5, we describe our implementation of the proposed approach with architectural details and derive our final multi-task multi-objective loss function from individual terms and an automated weighting scheme. We define an experimental setup for training and validating these models and show training results and ablation studies that allow for selection of the most promising models. These were deployed for the final testing stage, which is described in Section 6. Test results for both study areas are shown, discussed, and compared to the state of the art. Finally, in Section 8, we conclude with a summary of the most important findings and review the suitability of the proposed method for the target application.

Figure 1: Schematic overview of our generalized approach to multi-task multi-objective DSM filtering and roof type classification. The modular encoder-decoder networks consist of a common encoder and task-specific decoder modules. Various objectives, evaluating geometrical and semantic properties as well as conditional adversarial feedback, can be utilized and are combined to a loss function with balancing terms that can optionally be learned based on a task uncertainty measure.

## 2 Related Work

Multi-task learning [47] is a machine learning concept that comprises techniques for exploiting mutual information from different supervisory signals, most commonly through neural networks with shared parameters [23]. Various machine learning applications, from natural language processing [41] to computer vision, have profited from multi-task approaches. In the field of computer vision, numerous works explored specific use-cases and combinations of tasks. A popular example is Fast R-CNN [29], which combines bounding box regression and image classification into a holistic object detection approach. Eigen et al. [28] utilize joint learning of single-image depth estimation with surface normal estimation and semantic segmentation. Similarly, Kendall et al. [15] improved depth estimation results for road scenes by evaluating semantic and instance labels at the same time. A different approach to single-image depth estimation using multi-task learning has been pursued by Liebel et al. [6], who posed this natural regression task as the classification of discrete depth ranges as an additional auxiliary task and jointly solved for both targets. Other problems that have recently been tackled by multi-task learning include facial landmark detection [34] and person attribute classification [22]. More generally, Zamir et al. [18] explored the relationship between visual tasks in general. Moreover, studies have even been conducted towards the benefit of simultaneously solving seemingly unrelated tasks [16; 4]. While such application-based studies prove the suitability of the multi-task concept for diverse real-world use-cases, recent methodological contributions further advanced the underlying techniques. Especially active is a branch of research devoted to designing loss functions for multi-task systems, either by developing automated task weighting strategies [15; 14] or by directly training with multiple loss functions through multi-objective optimization [17]. Zhao et al.
[19] addressed measures for avoiding an unwanted side-effect, often referred to as destructive inference, which occurs when different tasks compete with each other by yielding opposed gradients and thus undermine learning of the whole model. Urban height estimation from satellite images is a hot research topic in the field of remote sensing, as its availability can positively influence the understanding of urban environments. Buildings are one of the prominent classes of terrestrial objects within cities. The appearance or destruction of such is regarded with great interest. However, estimating the height and exact shape of individual buildings in a city is a time-consuming process that needs a great deal of effort when done manually. To increase the extraction efficiency of building representations in urban areas, automatic remote sensing methods have been developed. Yet, estimation from photogrammetric DSMs, which are often used for building height and silhouette assessment, also presents some challenges. Most prominently, noise and mismatches, due to stereo-matching failures in areas with occlusions, low-texture or perspective differences in a stereo image pair, can lead to significant distortions. Earlier approaches investigated the applicability of filtering techniques [46; 43; 40] to detect and remove outliers from photogrammetric DSMs. Although they were able to reduce the number of spikes and blunders in the resulting DSM, a side-effect was observed in the form of steepness smoothing of building walls. Krauss et al. [36] approached the enhancement of DSMs by using segmentation results extracted from stereo images. Mainly, statistical and spectral information was utilized to approximately detect above-ground objects for their further classification and filtering. Several methodologies [38; 37] are based on fitting shape models, by either fitting a single rectangular box to a coarse building segment or a chain of such to model more complicated building forms. Nevertheless, even though the outlines of the extracted buildings became more realistic, the true roof forms were not modeled, as only a single height value was assigned per bounding box, representing a building. Recent deep learning-based methodology achieves state-of-the-art results in remote sensing image processing, especially by analyzing data with spectral information. However, the processing or generation of images with continuous values, such as height information in DSMs, remains an open problem. Solutions to this challenge were proposed recently, _i.a._, Ghamisi et al. [13] made an attempt to simulate elevation models from single optical images using CNN architectures based on cGANs. Although their research is close to our area of interest, the main difference is the employed type of source data. High-resolution aerial images have been used to estimate height information in their approach. Since single spectral images do not contain explicit height information, only relative elevation can be estimated based on empirical knowledge. Using a DSM as input, as in our work, only requires learning a residual to yield absolute height values. In our first work introducing a concept for DSM estimation with improved building shapes [7] we proposed a network based on the cGAN model by Isola et al. [21]. A low-quality photogrammetric DSM from multi-view stereo satellite imagery serves as input to this network. 
Our method is able to reconstruct elevation models with improved building roof forms, but artifacts negatively influence the quality of the obtained results. In a follow-up work [8], we improved our method by introducing least-square residuals in the objective function instead of the negative log-likelihood. This allows reconstructing improved building structures with fewer artifacts and less noise. To further enhance the assessment of building silhouettes in height images, we recently proposed to add additional supervision through a multi-task learning approach [3]. Mainly, the simultaneous learning of DSM filtering and prediction of roof type masks via an end-to-end learning framework makes it possible to obtain extra information from both tasks and to reconstruct building shapes with a more complete structure. However, the proposed cGAN-based network architecture was fixed to a specific configuration with the shared part only containing very few layers. Thus, in fact, the single tasks cannot make use of mutual information extracted from the input data, practically leading to two separate networks, independently optimized in parallel. Besides, the weighting parameters for tuning the contribution of each task in the loss function were experimentally selected and fixed during training. To compensate for the aforementioned drawbacks, in this work, we propose a more general approach which comprises the properties of prior work while introducing a holistic modular framework. In our abstracted model, arbitrary combinations of objective functions and tasks can be employed, tasks can be added on demand, a clear separation into a common part and task-specific parts of the network is defined, various architectures can be used for the respective encoder and decoder blocks of the network, and each contributing objective can optionally be automatically and dynamically weighted by uncertainty-based balancing terms. Prior methods can be modeled as instances of our generalized approach by employing the respective architectural blocks as encoder and decoder modules, the utilized objectives, and fixed task weights. Since our framework is much more flexible, it allows for the optimization of instances towards the demands of DSM filtering and simultaneous roof type classification without being restricted to this set of tasks or this specific application.

## 3 A Modular Multi-Task Framework for DSM Filtering

Refining a DSM requires the accurate detection of buildings and the ground plane in order to improve valuable features and remove unwanted phenomena, such as noise and vegetation. Training a deep CNN for this task using a simple reconstruction loss is feasible, yet hardly captures all of the desired properties. Prior work shows that, by adding additional objectives, _e.g._, the direction of surface normals, the quality of predictions can be enhanced [3]. Adding an auxiliary task to the network complements the multiple objectives for the main task. The roof type segmentation task, as utilized in prior work, uses ground-truth data derived from the same source as the DSM filtering task, thus adding little overhead to the data creation process. While the different objectives for the main task evaluate the same network output and therefore consequently optimize the same set of network parameters, the auxiliary task can only share a part of the network weights, as an independent output has to be produced.
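As a minimal illustration of this hard parameter sharing (a PyTorch sketch with placeholder layer sizes, not the actual encoder and decoder modules described in Section 5), a single shared encoder can feed two task-specific heads:

```python
import torch
import torch.nn as nn

class SharedEncoderMultiTask(nn.Module):
    """Toy shared encoder with one decoder head per task (hard parameter sharing)."""
    def __init__(self, feat=64):
        super().__init__()
        # Shared encoder: features used by both tasks.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Task-specific heads: DSM regression (1 channel) and
        # roof type segmentation (3 classes: non-built-up, flat, sloped).
        self.dsm_head = nn.Conv2d(feat, 1, 1)
        self.roof_head = nn.Conv2d(feat, 3, 1)

    def forward(self, dsm_in):
        z = self.encoder(dsm_in)
        return self.dsm_head(z), self.roof_head(z)
```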
We approach the design of a suitable network architecture by creating a family of networks through a modular encoder-decoder structure, in which different architectural blocks can be used for each part. In these encoder-decoder networks, sharing a common encoder enables learning a rich representation while taking into account the supervision of both tasks. Multiple task-specific decoders allow for appropriate decoding of the shared features for each task. For our multi-objective multi-task loss function, we balance the contribution of each objective from both tasks by learned weighting factors. Figure 1 illustrates an overview of this abstracted multi-task concept.

### Multi-Task Concept

Image-to-image problems, such as DSM filtering, require producing an equally-shaped raster output from an input image. For such tasks, auto-encoders can be employed that map the input to a latent space in a first encoding stage and decode this representation to the final output subsequently. Since the defined tasks are related by their connection in the physical world, we expect them to rely on similar features in the input image space. Hence, a major portion of the network parameters can be shared. By hard-sharing a subset of parameters, each task contributes towards learning an optimal set of generic parameters. Intuitively, sharing a common encoder enables the extraction of generic features, while individual decoders allow for a task-specific interpretation of the latent representation and translation to the respective outputs. This auto-encoder-based system with shared and individual parts can be seen in Figure 1. In our implementation, we designed a modular system which is able to make use of various architectures in the encoder and decoder parts of the network. Hence, we can adapt the network to the different requirements of the main regression and the auxiliary segmentation tasks. Using encoders of different capacity greatly adds to the variability of our system through changes in the proportion of shared _vs._ task-specific parameters. For our experiments, we selected multiple state-of-the-art semantic segmentation architectures as a basis for the decoders. These highly specialized architectures excel in transforming input images to the desired output map of pixel-wise classification results in close-to-original resolution. Similar to implementations of numerous semantic segmentation approaches, we use state-of-the-art classification architectures as a backbone for feature extraction in our encoder. In order to retain the spatial structure of the input, we create a resolution-preserving encoder by modifying the backbone network architecture slightly. Combinations of different decoder modules yield a vast number of instances of our abstract model.

### Learning Multiple Objectives

DSM filtering can be tackled as a single-task regression task, _e.g._, by using an auto-encoder with a reconstruction objective, such as an \\(\\ell_{1}\\) loss \\(\\mathcal{L}_{\\ell_{1}}\\). In our generalized framework, _cf._ Figure 1, this could be modeled as an encoder followed by a single decoder module and directly optimized using the reconstruction loss. In order to enhance the quality of the predicted DSMs, additional objectives, such as the evaluation of surface normals \\(\\mathcal{L}_{\\text{n}}\\) or adversarial terms \\(\\mathcal{L}_{\\text{GAN}}\\), can be employed [3].
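The individual terms can be prototyped as in the following sketch (PyTorch assumed; the gradient-based stand-in for the surface-normal term and all names are illustrative, and the uncertainty-based combination anticipates the weighting scheme formalized in the next subsection):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def normal_loss(pred, target):
    """Surface-normal style objective: compare height gradients of the
    predicted and target DSMs (a simple stand-in for the normal term)."""
    def grads(x):
        return x[..., :, 1:] - x[..., :, :-1], x[..., 1:, :] - x[..., :-1, :]
    gx_p, gy_p = grads(pred)
    gx_t, gy_t = grads(target)
    return F.l1_loss(gx_p, gx_t) + F.l1_loss(gy_p, gy_t)

class UncertaintyWeightedLoss(nn.Module):
    """Combine task losses with learned log-variances (cf. Kendall et al. [15])."""
    def __init__(self):
        super().__init__()
        self.log_var_dsm = nn.Parameter(torch.zeros(()))   # regression task
        self.log_var_roof = nn.Parameter(torch.zeros(()))  # classification task

    def forward(self, dsm_pred, dsm_gt, roof_logits, roof_gt):
        l_dsm = F.l1_loss(dsm_pred, dsm_gt) + normal_loss(dsm_pred, dsm_gt)
        l_roof = F.cross_entropy(roof_logits, roof_gt)
        # Weighted sum with regularizers, following the form of Eq. (1).
        loss = 0.5 * torch.exp(-self.log_var_dsm) * l_dsm + 0.5 * self.log_var_dsm
        loss += torch.exp(-self.log_var_roof) * l_roof + 0.5 * self.log_var_roof
        return loss
```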
Such loss functions \\(\\mathcal{L}\\) operate independently on the predicted DSM and are later combined to a multi-objective loss function. We implement all three mentioned objectives for the main DSM filtering task and integrate them into a single multi-objective multi-task loss \\(\\mathcal{L}_{\\text{mt}}\\) for optimization together with the segmentation objective \\(\\mathcal{L}_{\\text{seg}}\\) of the auxiliary roof type classification task. ### Learning Task Weights Since our approach makes use of multiple objectives for both, different objectives and tasks, a strategy for combining them to a loss function for optimization is required. Balancing the contributing objectives is a vital part of multi-task learning. Most commonly, this is realized by simply calculating a weighted sum of the single-task losses with fixed balancing factors. These terms may even be equal in the most simple case. Since the single objective functions are inherently different in terms of their output range and sensitivity to changes, weighting each of them by an individually adapted balancing factor is an appropriate measure. While such balancing factors can be tuned along with other hyperparameters to achieve better results, finding a suitable set of weights is a tedious and challenging problem. This procedure can be replaced by an automated and potentially dynamic weighting scheme. Multiple solutions have been proposed in the literature, such as weighting based on task difficulty [14], or task uncertainty [15]. We follow the latter that allows for learning task weights \\(w\\) based on an uncertainty measure \\(\\sigma\\) along with the network parameters, as indicated in Figure 1. Minimizing a sum of weighted losses \\(\\mathcal{L}_{\\text{mt}}=\\sum_{\\tau}w_{\\tau}\\cdot\\mathcal{L}_{\\tau}\\) with learnable parameters \\(w_{\\tau}\\) for each task \\(\\tau\\) with \\(\\mathcal{T}=\\{\\tau_{i}\\}_{i}\\) favors trivial solutions for \\(w_{\\tau}\\rightarrow-\\infty\\). Hence, regularization terms \\(r\\) have to be introduced. As proposed by Kendall et al. [15], we formulate a multi-task loss \\[\\mathcal{L}_{\\text{mt}}=\\sum_{\\tau}\\mathcal{L}_{\\tau}\\cdot w_{\\tau}+r_{\\tau} \\tag{1}\\] with \\(w_{\\tau}=0.5\\cdot\\exp(-\\log(\\sigma_{\\tau}^{2}))\\) for regression tasks, \\(w_{\\tau}=\\exp(-\\log(\\sigma_{\\tau}^{2}))\\) for classification tasks and \\(r_{\\tau}=0.5\\cdot\\log(\\sigma_{\\tau}^{2})\\). ## 4 Data Sources and Produced Datasets Our experiments and evaluation were performed on two datasets acquired over study areas in the cities of Berlin and Munich, Germany. The extent of both study areas is depicted in Figure 2. ### Data Processing The Berlin dataset covers a total area of 420 km\\({}^{2}\\). A photogrammetric DSM with a rasterized ground sampling distance (GSD) of 0.5 m, depicted in Figure 3, was generated using semi-global matching (SGM) Hirschmuller [42] on six pan-chromatic WorldView-1 images acquired on two different days, following the workflow of d'Angelo et al. [35]. The ground-truth data for the DSM filtering task, _i.e._, building geometries of a virtual city model in LOD2 filled with a DEM, was obtained from a CityGML model freely available through the Berlin Open Data portal1. In order to convert the data to a suitable form, mainly the strategy introduced by Bittner et al. [8] was applied. The roof polygons from the CityGML data model were first triangulated using the algorithm proposed by Shewchuk [49] which is, in turn, based on Delaunay triangulation [50]. 
For the creation of a raster height map, a unique elevation value for each pixel inside the resulting triangles was calculated using barycentric interpolation. Pixels that do not belong to a building were filled with DEM information. The ground-truth data for the roof type segmentation task was computed from the previously obtained city model as well. The slope for each pixel within the ground-truth height image was calculated as the maximum rate of elevation change between that pixel and its neighborhood. The orientation of the computed slope was estimated clockwise from 0\\({}^{\\circ}\\) to 360\\({}^{\\circ}\\), where north is represented by 0\\({}^{\\circ}\\), east by 90\\({}^{\\circ}\\), south by 180\\({}^{\\circ}\\), and west by 270\\({}^{\\circ}\\). Finally, a ground-truth roof map with multiple classes was defined as follows: Class 0 corresponds to non-built-up areas, class 1 to flat roofs, and class 2 to sloped roofs.

Footnote 1: http://www.businesslocationcenter.de/downloadportal

Figure 2: Location of the considered study areas \\(\\blacksquare\\) within the densely built-up \\(\\blacksquare\\) cities of Berlin (a) and Munich (b) \\(\\blacksquare\\) that are located in the northern (Berlin) and southern (Munich) part of Germany (c).

Figure 3: Overview and details of the raster elevation maps of Berlin consisting of a photogrammetric DSM (a) and target DSM derived from a DEM and a semantic 3D city model in LOD2 (b).

The Munich study area covers a total area of 3.8 km\\({}^{2}\\). A different space-borne sensor acquired the images used for generating the photogrammetric DSM than in the Berlin study area.
## 5 Implementation and Experiments

In order to demonstrate the feasibility and effectiveness of our proposed approach, we conducted numerous experiments and ablation studies to investigate the effect of each part in our system on the final result. From the family of network architectures that can be generated using our generalized and modular multi-task concept, we selected and configured numerous instances that utilize different combinations of decoders and objective functions to gain insight into their suitability, both individually and in conjunction with each other. We utilized renowned network architectures as building blocks within our modular framework. Architectural considerations and respective evaluations of these networks and their specific features have been reported by the original authors and have already been re-validated by independent overview papers. Therefore, we refer the reader to the given references, to be found in the respective subsections, for in-depth analyses of the utilized network architectures.

### The Encoder Module

CNN architectures, developed for classification tasks, are widely studied and often employed as backbone networks in encoder-decoder systems for semantic segmentation. Well-known examples of such architectures are the top-performing CNNs from the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) [32], _i.a._, Inception [33] and its successors, and various ResNet [25] variants that build upon the ideas of residual networks. We mainly focus on ResNets, as they are versatile, can be implemented in various depths, and provide skip features at different levels. In order to retain the resolution of the input image, which is important for segmentation tasks, we modified the original architecture with atrous convolutions [27; 26]. By employing a deep ResNet variant, namely an ImageNet pre-trained ResNet-101 with more than \\(42\\cdot 10^{6}\\) parameters, as a backbone network, we ensure a sufficient capacity of the network.

### Decoder Modules

Since both of our decoders are expected to produce a full-sized output map, the employed decoder architectures for the main and auxiliary tasks are interchangeable and identical in structure, except for the very last layer, which produces either one or three channels for the DSM or roof type output, respectively. All of the decoder modules we used in our experiments were derived from well-studied semantic segmentation networks and have successfully been re-utilized for regression tasks before.

Figure 4: Overview of the raster elevation maps of Munich consisting of a photogrammetric DSM (a) and target DSM derived from a DEM and a semantic 3D city model in LOD2 (b). This complementary test dataset is completely independent of the Berlin dataset, yet features similar properties. The temporal difference between the acquisition of source data for both products is clearly noticeable by newly built and demolished buildings in the northwestern and eastern parts of the study area.

Figure 5: Sample from our training dataset, with a patch cropped from the photogrammetric DSM of Berlin (a) serving as input to the network, as well as ground-truth for the main task of DSM filtering (b) and roof type classification (c).

Figure 8: Architecture for a decoder module based on the PSPNet. The most prominent part of this module is its spatial pyramid pooling module that applies pooling with various kernel sizes in parallel. With \\(23\\cdot 10^{6}\\) learnable parameters, the PSPNet decoder is similar in capacity to the DeepLabv3+ module and features the lowest computation time out of all considered decoders.
Figure 6: Architecture of a decoder module based on the decoder part of UNet. We replaced up-sampling blocks by simple skip blocks to account for the resolution-preserving encoder employed in our experiments. With \\(137\\cdot 10^{6}\\) learnable parameters, this module outweighs the other considered decoders by far, resulting in the highest memory demand and inference time.

Figure 7: Architecture of a DeepLabv3+-based decoder as used in our experiments. This lightweight module with only \\(17\\cdot 10^{6}\\) learnable parameters utilizes atrous spatial pyramid pooling. An output stride of eight is realized by the selected dilation rates in the atrous convolution layers.

#### 5.2.1 UNet

A simple, yet powerful encoder-decoder based architecture called UNet was presented by Ronneberger et al. [31] for medical image analysis. The network consists of symmetrical encoder and decoder sub-networks that are connected via skip connections. Since the proposal of the original UNet, several variations have been proposed and applied in various works. The original architecture consists of four down-sampling and respective up-sampling blocks and a bottleneck block in between. For our decoder-only version though, we merely use the bottleneck and up-sampling parts of the UNet. Since resolution-preserving encoders, such as the one employed in our experiments, retain a constant resolution of the feature maps up to a certain factor, we add two resolution-preserving skip blocks after the bottleneck block instead of up-sampling blocks. They are, in fact, alike except for the very first layer, in which a standard convolution is applied instead of a deconvolution, as can be seen in Figure 6. The skip blocks are followed by three up-sampling blocks. In total, the decoder takes advantage of five skip connections from low-level encoder features. While this decoder uses rather simple building blocks, it comprises approximately \\(137\\cdot 10^{6}\\) learnable parameters, which is by far the highest number of all decoders we implemented.

#### 5.2.2 DeepLabv3+

The DeepLabv3+ [11] network achieved state-of-the-art results in multiple semantic segmentation benchmarks and challenges. The architecture itself is a direct successor to DeepLabv3 [20], which, in turn, descends from DeepLabv2 [10] and the original DeepLab [27]. Core concepts implemented in this family of CNN architectures are atrous convolutions [27] and atrous spatial pyramid pooling [10]. Our DeepLabv3+ decoder, illustrated in Figure 7, makes use of a single skip connection, similar to the original implementation. Out of the three decoders employed in our experiments, the DeepLabv3+-based decoder with approximately \\(17\\cdot 10^{6}\\) learnable parameters is the most lightweight.

#### 5.2.3 PSPNet

Similar to DeepLabv3+, the PSPNet [24] won several challenges for semantic segmentation by introducing effective techniques for pyramid scene parsing. The signature component of this architecture is its pyramid pooling module that uses parallel pooling layers with different kernel sizes and subsequent concatenation of the resulting feature maps to capture scene context. With approximately \\(23\\cdot 10^{6}\\) learnable parameters, the capacity of our PSPNet-based decoder, illustrated in Figure 8, is similar to that of the DeepLabv3+ variant.
As opposed to the other presented decoders, this one does not make use of any skip connections from low-level features of the encoder.

### Objectives

In order to train our model for the desired main goal of DSM filtering, we defined multiple objectives. In addition to an \\(\\ell_{1}\\) reconstruction loss, we evaluated surface normals to encourage the prediction of smooth surfaces. Since reconstruction losses favor slightly blurry predictions, we added a conditional adversarial loss to force the network to predict crisp boundaries. All of the defined objective functions have to be combined into a single loss function to facilitate optimization. By balancing their contribution with corresponding weighting terms, we account for the different properties of each of them. Implementation details are described in the following.

#### 5.3.1 Reconstruction Loss

A basic reconstruction loss was applied to evaluate the quality of the estimated DSM. We implemented an \\(\\ell_{1}\\) loss \\(\\mathcal{L}_{\\ell_{1}}\\) that favors clear and detailed predictions rather than over-smoothed results, as often observed when applying \\(\\ell_{2}\\) losses [9]. While this objective is rather basic, it is expected to contribute to the final result the most due to its unique property of assessing absolute height errors. Therefore, we treated this objective as the core part of our multi-objective multi-task loss function and utilized it in all of our experiments. To further boost the quality of predictions, we added more specialized objectives to account for specific shortcomings of this reconstruction loss.

#### 5.3.2 Surface Normal Loss

Since smoothness and crisp boundaries are not explicitly enforced by the basic reconstruction loss, we add the evaluation of surface normals to our set of objectives. We derive normal directions from gradients in the image representation of the height maps [5]. Hence, this objective makes use of the same predictions as the reconstruction loss and, thus, does not require an individual decoder. The employed surface normal loss \\[\\mathcal{L}_{\\text{n}}=1-\\left(\\frac{1}{N}\\sum_{i=1}^{N}\\left(\\frac{\\mathbf{n}_{i}^{\\top}\\mathbf{n}_{i}^{*}}{\\|\\mathbf{n}_{i}\\|\\cdot\\|\\mathbf{n}_{i}^{*}\\|}\\right)\\right) \\tag{2}\\] evaluates the remaining angular difference between the normal directions in prediction \\(\\mathbf{n}_{i}\\) and ground-truth \\(\\mathbf{n}_{i}^{*}\\), normalized to unit length.

#### 5.3.3 Conditional Adversarial Loss

While typical reconstruction losses favor slightly blurry predictions to avoid larger errors, approaches making use of adversarial losses have been shown to produce sharper edges in recent work [9, 3], since such systematic errors are easy for a discriminator to detect. We implemented a cGAN-based objective through a conditional discriminator network \\(\\mathfrak{D}\\). Since deep high-capacity discriminator network architectures add numerous parameters to the optimization problem but improved the quality of results only negligibly in preliminary experiments, we employed a simple PatchGAN discriminator architecture [21] that classifies 70 \\(\\times\\) 70 px patches instead of full images. It is, therefore, able to classify arbitrarily sized images while only using approximately \\(2.8\\cdot 10^{6}\\) parameters. We modified the original architecture slightly by feeding in concatenated predictions and input images, as shown in Figure 9, thus making it a conditional discriminator.
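A compact sketch of such a conditional PatchGAN discriminator is given below. It follows the common 70 \\(\\times\\) 70 px PatchGAN layout [21]; the channel widths, normalization layers, and class name are assumptions for illustration and need not match the exact configuration used in our experiments.

```python
import torch
import torch.nn as nn

class ConditionalPatchDiscriminator(nn.Module):
    """PatchGAN-style discriminator conditioned on the input stereo DSM.

    Input: channel-wise concatenation of the stereo DSM and a (predicted or
    ground-truth) filtered DSM, i.e. two channels. Output: a map of patch-wise
    real/fake scores for arbitrarily sized inputs.
    """

    def __init__(self, in_channels=2, base=64):
        super().__init__()

        def block(c_in, c_out, norm=True):
            layers = [nn.Conv2d(c_in, c_out, 4, stride=2, padding=1)]
            if norm:
                layers.append(nn.InstanceNorm2d(c_out))  # normalization choice is an assumption
            layers.append(nn.LeakyReLU(0.2, inplace=True))
            return layers

        self.net = nn.Sequential(
            *block(in_channels, base, norm=False),
            *block(base, base * 2),
            *block(base * 2, base * 4),
            nn.Conv2d(base * 4, base * 8, 4, stride=1, padding=1),
            nn.InstanceNorm2d(base * 8),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 8, 1, 4, stride=1, padding=1),  # patch-wise scores
        )

    def forward(self, stereo_dsm, filtered_dsm):
        return self.net(torch.cat([stereo_dsm, filtered_dsm], dim=1))
```

Concatenating the stereo DSM with the filtered DSM along the channel dimension is what makes the discriminator conditional; the patch-wise score map it returns is compared against real/fake targets with an MSE loss, as described next.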
\\(\\mathfrak{D}\\) operates on the input stereo DSM \\(\\mathbf{x}\\) and a filtered DSM \\(\\mathbf{y}_{\\text{DSM}}\\) or \\(\\mathbf{y}_{\\text{DSM}}^{*}\\) and assigns patch-wise labels \\(\\{0:\\text{fake},1:\\text{real}\\}\\). During training, we alternately optimized the main encoder-decoder network and the discriminator network using a simple mean squared error (MSE) loss. We derive our objective function \\(\\mathcal{L}_{\\text{GAN}}\\) from the discriminator decision as \\[\\mathcal{L}_{\\text{GAN}}=\\frac{1}{N}\\sum_{i=1}^{N}\\left(\\mathfrak{D}(\\mathbf{x}_{i},\\mathbf{y}_{\\text{DSM},i})-1\\right)^{2}\\quad. \\tag{3}\\] Thus, \\(\\mathcal{L}_{\\text{GAN}}\\) yields low values when the discriminator network mistakes predictions for ground-truth samples.

#### 5.3.4 Segmentation Loss

The main and only objective we employed for roof type segmentation was a standard SoftMax cross-entropy loss \\[\\mathcal{L}_{\\text{seg}}=\\frac{1}{N}\\sum_{i=1}^{N}\\left(-y_{\\text{seg},i,y_{\\text{seg},i}^{*}}+\\log\\left(\\sum_{c\\in\\mathcal{C}}\\exp\\left(y_{\\text{seg},i,c}\\right)\\right)\\right) \\tag{4}\\] that evaluates per-pixel predictions \\(\\mathbf{Y}_{\\text{seg}}=\\left(y_{\\text{seg},i,c}\\right)_{i,c}\\) with target labels \\(\\mathbf{y}_{\\text{seg}}^{*}=\\left(y_{\\text{seg},i}^{*}\\right)_{i}\\) for the three-class segmentation task. This objective was expected to mainly provide supervision for building footprint recognition in order to preserve and refine the shape of buildings in the predicted DSM. As a secondary source of information, buildings are classified based on their roof geometries with classes for flat and non-flat roofs. In the closely related field of depth estimation, it can be observed that the optimization of regression tasks is generally harder than the optimization of corresponding classification tasks [12, 6]. By restricting the highly non-convex optimization space through this constraint, this auxiliary objective doubles as a regularization measure [16].

### Multi-Objective and Multi-Task Loss Function

The set of objective functions to be utilized during the training of a specific model has to be incorporated into a single multi-objective and multi-task loss function, subject to optimization. We applied weighting based on homoscedastic task uncertainty \\(\\sigma_{\\tau}\\) [15], treating each objective as a "task" \\(\\tau\\).
As per Equation (1), we designed our multi-task loss for the regression objectives \\(\\mathcal{L}_{\\ell_{1}}\\) and \\(\\mathcal{L}_{\\text{n}}\\), the classification objective \\(\\mathcal{L}_{\\text{seg}}\\), and the conditional adversarial objective \\(\\mathcal{L}_{\\text{GAN}}\\), for which we fix the weight, as \\[\\mathcal{L}_{\\text{mt}} =\\sum_{\\tau}\\mathcal{L}_{\\tau}\\cdot w_{\\tau}+r_{\\tau} \\tag{5}\\] \\[=\\left(\\mathcal{L}_{\\ell_{1}}\\cdot w_{\\ell_{1}}+r_{\\ell_{1}}\\right)+\\left(\\mathcal{L}_{\\text{n}}\\cdot w_{\\text{n}}+r_{\\text{n}}\\right)\\] \\[\\quad+\\left(\\mathcal{L}_{\\text{seg}}\\cdot w_{\\text{seg}}+r_{\\text{seg}}\\right)+\\mathcal{L}_{\\text{GAN}}\\cdot w_{\\text{GAN}}\\] (6) \\[=\\left(\\mathcal{L}_{\\ell_{1}}\\cdot\\frac{\\exp(-s_{\\ell_{1}})}{2}+\\frac{s_{\\ell_{1}}}{2}\\right)\\] \\[\\quad+\\left(\\mathcal{L}_{\\text{n}}\\cdot\\frac{\\exp(-s_{\\text{n}})}{2}+\\frac{s_{\\text{n}}}{2}\\right)\\] \\[\\quad+\\left(\\mathcal{L}_{\\text{seg}}\\cdot\\exp(-s_{\\text{seg}})+\\frac{s_{\\text{seg}}}{2}\\right)\\] \\[\\quad+\\mathcal{L}_{\\text{GAN}}\\cdot s_{\\text{GAN}}\\] (7) \\[=\\left(\\mathcal{L}_{\\ell_{1}}\\cdot\\frac{\\exp(-\\log(\\sigma_{\\ell_{1}}^{2}))}{2}+\\frac{\\log(\\sigma_{\\ell_{1}}^{2})}{2}\\right)\\] \\[\\quad+\\left(\\mathcal{L}_{\\text{n}}\\cdot\\frac{\\exp(-\\log(\\sigma_{\\text{n}}^{2}))}{2}+\\frac{\\log(\\sigma_{\\text{n}}^{2})}{2}\\right)\\] \\[\\quad+\\left(\\mathcal{L}_{\\text{seg}}\\cdot\\exp(-\\log(\\sigma_{\\text{seg}}^{2}))+\\frac{\\log(\\sigma_{\\text{seg}}^{2})}{2}\\right)\\] \\[\\quad+\\mathcal{L}_{\\text{GAN}}\\cdot\\log(\\sigma_{\\text{GAN}}^{2})\\] (8) \\[=\\frac{1}{2}\\left(\\frac{\\mathcal{L}_{\\ell_{1}}}{\\sigma_{\\ell_{1}}^{2}}+\\log(\\sigma_{\\ell_{1}}^{2})\\right)+\\frac{1}{2}\\left(\\frac{\\mathcal{L}_{\\text{n}}}{\\sigma_{\\text{n}}^{2}}+\\log(\\sigma_{\\text{n}}^{2})\\right)\\] \\[\\quad+\\left(\\frac{\\mathcal{L}_{\\text{seg}}}{\\sigma_{\\text{seg}}^{2}}+\\frac{\\log(\\sigma_{\\text{seg}}^{2})}{2}\\right)\\] \\[\\quad+\\mathcal{L}_{\\text{GAN}}\\cdot\\log(\\sigma_{\\text{GAN}}^{2})\\;. \\tag{9}\\]

The learned weighting terms represent a relative balancing of the tasks, thus one of them can safely be fixed. Fixing the weight for the \\(\\mathcal{L}_{\\text{GAN}}\\) loss led to more stable convergence in preliminary experiments, since it is a combination of multiple losses itself and thus satisfies the underlying assumptions of neither classification nor regression losses. As a consequence, no regularization value \\(r_{\\text{GAN}}\\) is required. Note that we optimized for \\(s_{\\tau}:=\\log\\sigma_{\\tau}^{2}\\), as given in Equation (7), instead of \\(\\sigma_{\\tau}\\) due to higher numerical stability.

Figure 9: Architecture of the conditional discriminator network used in our experiments. This fully-convolutional PatchGAN-based discriminator is conditioned on the initial stereo DSM and assigns binary labels to fixed-sized image patches for arbitrarily large inputs. A simple MSE loss serves as an objective function for training.

### Experimental Design

Based on the various architectural modules and objective functions presented in Sections 5.1 to 5.3, we conducted experiments using different instances of our generalized framework with various combinations of network architectures and objectives. In order to restrict the study to a feasible number of experiments, we alternately fixed parts of the system while validating a broad range of configurations for the other parts.
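Before turning to the individual experiments, the following minimal sketch illustrates how the weighting scheme of Equations (7)-(9) can be realized in PyTorch, with one learnable \\(s_{\\tau}=\\log\\sigma_{\\tau}^{2}\\) per weighted objective and a fixed adversarial weight. The class and argument names are our own, and the initial values are assumptions rather than the settings used in our experiments.

```python
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    """Combine single-objective losses as in Eq. (7).

    Regression terms:     0.5 * exp(-s) * L + 0.5 * s
    Classification terms:       exp(-s) * L + 0.5 * s
    Adversarial term:     fixed weight s_gan, no regularizer,
    with s = log(sigma^2) learned jointly with the network parameters.
    """

    def __init__(self, s_gan_init=1.0):
        super().__init__()
        # One learnable log-variance per uncertainty-weighted objective.
        self.s_l1 = nn.Parameter(torch.zeros(()))
        self.s_n = nn.Parameter(torch.zeros(()))
        self.s_seg = nn.Parameter(torch.zeros(()))
        # The adversarial weight stays at its initial value (not optimized).
        self.register_buffer("s_gan", torch.tensor(float(s_gan_init)))

    def forward(self, l_l1, l_n, l_seg, l_gan):
        loss = 0.5 * (torch.exp(-self.s_l1) * l_l1 + self.s_l1)
        loss = loss + 0.5 * (torch.exp(-self.s_n) * l_n + self.s_n)
        loss = loss + torch.exp(-self.s_seg) * l_seg + 0.5 * self.s_seg
        loss = loss + self.s_gan * l_gan
        return loss
```

The regularization terms \\(0.5\\cdot s_{\\tau}\\) keep the learnable weights from collapsing to trivial solutions, as discussed above.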
We started by evaluating the impact of each objective on the overall performance. For the best-performing candidate, we ran further experiments with a variety of decoder architectures. By training and validating each network on the respective subset of the Berlin study area, we make the results comparable while leaving out the Berlin and Munich test areas to avoid overfitting. Only the best-performing models from the validation stage were subsequently evaluated on the test sets. This independent testing stage, thus, gives an impression of the generalization performance, _i.e._, performance on unseen data as in a real-world use-case. A more detailed description of the testing procedure along with the respective results is given in Section 6.

### Training Settings

For running the experiments, our proposed approach was implemented in PyTorch. Keeping the hyperparameter settings consistent for all experiments allowed for comparing the results directly. We used batches of size \\(N_{\\mathrm{batch}}=5\\), a fixed learning rate \\(\\alpha=0.0005\\) for the Adam optimizer [30] with \\(\\beta_{1}=0.9\\), \\(\\beta_{2}=0.999\\), and ran training for a fixed number of \\(N_{\\mathrm{epoch}}=100\\) epochs on the Berlin training subset, which roughly corresponds to \\(N_{\\mathrm{iter}}=38\\cdot 10^{4}\\) optimization steps. Training was conducted on a single NVIDIA 1080Ti GPU. We selected the final set of parameters based on validation metrics calculated during training from the validation subset. Weighting terms for the contributing objectives were dynamically adjusted along with the network parameters. Figure 10 shows the development of these parameters over the course of a full training run. While we fixed \\(s_{\\mathrm{GAN}}\\) to its initial value, the remaining weights were quickly adjusted to the right scale automatically. The task weights slowly decayed from this temporary high, mirroring the overall increase in performance of the network.

### Validation of Training Results

The modular approach of our implementation, with interchangeable encoder and decoder architectures and multiple choices for objective functions, allows for numerous unique configurations. Out of this pool of options, we picked various relevant configurations and trained them according to the aforementioned procedure and datasets. Results were evaluated using standard metrics for both tasks. We mainly measured the quality of the filtered DSM using the root mean squared error (RMSE) \\[\\mathrm{RMSE}(\\mathbf{y}_{\\mathrm{DSM}},\\mathbf{y}_{\\mathrm{DSM}}^{*})=\\sqrt{\\frac{1}{N}\\,(\\mathbf{y}_{\\mathrm{DSM}}-\\mathbf{y}_{\\mathrm{DSM}}^{*})^{\\top}\\cdot(\\mathbf{y}_{\\mathrm{DSM}}-\\mathbf{y}_{\\mathrm{DSM}}^{*})} \\tag{10}\\] with predictions \\(\\mathbf{y}_{\\mathrm{DSM}}\\) and ground-truth labels \\(\\mathbf{y}_{\\mathrm{DSM}}^{*}\\). In addition, we evaluated the mean absolute error (MAE) \\[\\mathrm{MAE}(\\mathbf{y}_{\\mathrm{DSM}},\\mathbf{y}_{\\mathrm{DSM}}^{*})=\\frac{1}{N}\\sum_{i=1}^{N}|y_{\\mathrm{DSM},i}-y_{\\mathrm{DSM},i}^{*}|\\quad. \\tag{11}\\] Both of these metrics are given in meters, where lower values correspond to lower errors and, thus, better performance.
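Both height-error metrics can be computed directly from the flattened prediction and ground-truth rasters; a small sketch, with names of our choosing, is given below.

```python
import numpy as np

def rmse(pred, gt):
    """Root mean squared height error in metres, following Eq. (10)."""
    diff = np.asarray(pred, dtype=float).ravel() - np.asarray(gt, dtype=float).ravel()
    return float(np.sqrt(np.mean(diff ** 2)))

def mae(pred, gt):
    """Mean absolute height error in metres, following Eq. (11)."""
    diff = np.asarray(pred, dtype=float).ravel() - np.asarray(gt, dtype=float).ravel()
    return float(np.mean(np.abs(diff)))
```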
Similar to these standard metrics for regression results, we use the common mean intersection over union (mIOU) metric \\[\\mathrm{mIOU}(\\mathbf{y}_{\\mathrm{seg}},\\mathbf{y}_{\\mathrm{seg}}^{*},\\mathcal{C})=\\frac{1}{|\\mathcal{C}|}\\sum_{c\\in\\mathcal{C}}\\frac{|\\mathcal{R}_{c}(\\mathbf{y}_{\\mathrm{seg}})\\cap\\mathcal{R}_{c}(\\mathbf{y}_{\\mathrm{seg}}^{*})|}{|\\mathcal{R}_{c}(\\mathbf{y}_{\\mathrm{seg}})\\cup\\mathcal{R}_{c}(\\mathbf{y}_{\\mathrm{seg}}^{*})|} \\tag{12}\\] for validating the segmentation performance of the auxiliary task with three classes \\(\\mathcal{C}\\) that represent {no building, flat roof, sloped roof}, with higher values representing better performance.

#### 5.7.1 Multi-Objective Ablation Study

One of the most important aspects of this evaluation is to re-validate the multi-task multi-objective approach in an ablation study. Hence, we trained a family of networks that utilize a gradually increasing number and combination of objectives and tasks. We considered configurations from the most basic single-task and single-objective \\(\\mathcal{L}_{\\ell_{1}}\\) DSM filtering network up to a full-fledged version using the complete set of implemented objectives and tasks. We used a UNet decoder for the main task and a DeepLabv3+ decoder for the segmentation task in these experiments.

Figure 10: Evolution of task balancing weights during the training phase of one of our models with a multi-task DeepLabv3+/PSPNet configuration. The applied weighting scheme is based on an uncertainty measure, represented by \\(\\sigma_{\\tau}\\), which was optimized along with the network parameters. Since this measure does not directly apply to adversarial terms, we fixed the corresponding weight to its initial value.

The validation results, given in Table 1, show that each objective that was added to the
Figure 11 illustrates the results of the validation stage for models utilizing various combinations of decoder architectures, with \\(\\mathcal{L}_{\\ell_{1}}\\) optimization serving as a single-task single-objective baseline. It became apparent that there is a clear order in the suitability of decoder architectures for DSM filtering according to the baseline results, with the DeepLabv3+ decoder scoring the lowest and the PSPNet decoder the highest RMSE. Similarly, DeepLabv3+ and PSPNet decoders consistently achieve better performance in roof type classification than the UNet decoder. The difference in RMSE between the configurations using DeepLabv3+ and UNet decoders for the main task combined with all three decoders for the auxiliary task is negligible. Hence, all five of these configurations were advanced to the testing stage. Note that due to technical limitations, experiments for the UNet/UNet configuration could not be conducted using a similar setup as the other ones due to extensive memory demand. We intentionally refrained from making individual adjustments in the training settings for this specific configuration in order to retain a maximum of comparability. ## 6 Evaluation For the final testing stage, we prepared a test set from each of our two study areas. While the Berlin area was split into training, validation, and test as described in Section 4, the Munich dataset was exclusively used for testing. Only the best-performing models on the validation set were considered in the test phase in order to avoid overfitting on the test split. In order to be able to set the results into perspective, we implemented a baseline that utilizes hand-crafted geometric features, described in Section 6.1. Results for the Berlin study area are presented in Section 6.2. The completely independent Munich test set gives further insight into the generalization performance of the proposed method. Test results for this additional evaluation stage are given in Section 6.3. The test areas were split into overlapping patches of \\(256\\times 256\\,\\mathrm{px}\\). A stride of 64 px was implemented in order to reduce the influence of border effects by averaging over multiple predictions. While this measure increases the total number of inference steps, it helps to generate smooth predictions over the whole test region. Note that the proposed network architectures are fully convolutional and--given sufficient memory--therefore, able to generate predictions of arbitrary sizes in a single inference step in theory. A quantitative evaluation of the results was conducted using the RMSE as our main metric, similar to the validation stage. We also evaluate the mIOU of the predicted roof type maps, even though they merely serve as a regularization measure through additional supervision and are not a primary product of our system. 
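A sketch of this overlapping-patch inference with averaging over multiple predictions is given below; the window handling is simplified (border strips that do not fit the stride grid are omitted) and the model call is a placeholder for the trained network.

```python
import numpy as np

def predict_tiled(model_fn, dsm, patch=256, stride=64):
    """Predict a full test scene from overlapping patches and average overlaps.

    model_fn: callable taking a (patch, patch) array and returning a
    (patch, patch) prediction (e.g. a wrapper around the trained network).
    """
    h, w = dsm.shape
    pred_sum = np.zeros((h, w), dtype=np.float64)
    count = np.zeros((h, w), dtype=np.float64)
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            tile = dsm[y:y + patch, x:x + patch]
            pred_sum[y:y + patch, x:x + patch] += model_fn(tile)
            count[y:y + patch, x:x + patch] += 1.0
    # Average all overlapping predictions; guard against uncovered border pixels.
    return pred_sum / np.maximum(count, 1.0)
```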
\\begin{table} \\begin{tabular}{c c c c c c c} \\hline \\hline \\multicolumn{4}{c}{Objectives} & Weighting & \\multicolumn{2}{c}{Performance} \\\\ \\(\\mathcal{L}_{\\ell_{1}}\\) & \\(\\mathcal{L}_{\\text{n}}\\) & \\(\\mathcal{L}_{\\text{GAN}}\\) & \\(\\mathcal{L}_{\\text{seg}}\\) & & Best Epoch \\(\\downarrow\\) & RMSE (in m) \\(\\downarrow\\) \\\\ \\hline \\(\\checkmark\\) & & & & — & 96 & 1.81 \\\\ \\(\\checkmark\\) & \\(\\checkmark\\) & & & learned & 72 & 1.69 \\\\ \\(\\checkmark\\) & \\(\\checkmark\\) & \\(\\checkmark\\) & & learned & 78 & 1.68 \\\\ \\(\\checkmark\\) & \\(\\checkmark\\) & \\(\\checkmark\\) & \\(\\checkmark\\) & fixed & 83 & 1.67 \\\\ \\(\\checkmark\\) & \\(\\checkmark\\) & \\(\\checkmark\\) & \\(\\checkmark\\) & learned & 59 & 1.67 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 1: Results of an ablation study on the Berlin validation set showing the training time and performance of instances of our model trained with an increasing number of objectives. In the case of multi-objective training, the utilized weighting scheme is given. Best results are highlighted in blue color. Multi-task training with learned weighting terms yields the best results, both in terms of RMSE and required training time.

Figure 11: Performance of selected instances of our model utilizing various decoders for the main DSM filtering task and the secondary roof type classification task. Single-task and single-objective training with \\(\\mathcal{L}_{\\ell_{1}}\\) exclusively is given as a baseline reference for each DSM filtering decoder architecture.

### Baseline Method

Multiple solutions have been proposed for distinguishing object points from ground points and subsequently removing them for the purpose of deriving digital terrain models from DSMs [44, 45, 1]. While these methods are closely related to the desired removal of vegetation, they also target buildings and other structures. In contrast to focusing on removing a certain class of objects, they often detect points to keep and interpolate between them. Anders et al. [2] conducted a comprehensive comparison of various methods for removing vegetation from UAV-based photogrammetric point clouds. The presented methods are applicable to our task in general; however, they operate on point clouds instead of a raster DSM. Hence, utilizing such approaches would require a transformation from height maps to point clouds. While this is feasible, the characteristics of the resulting product would differ from photogrammetric point clouds. Most notably, a transformed point cloud will exhibit low point density due to the prior lossy rasterization process. Considering these shortcomings, we implemented a more classical method based on manually designed geometric features as a baseline. Utilizing a straightforward and hand-crafted approach as a baseline comes with the major benefit of being easily interpretable. Following Weidner [48], we expect vegetation to be detectable in the DSM from a high variance in the direction of surface normals. Hence, we extracted surface normals from the height map and calculated their variance in a defined neighborhood around each pixel. A segmentation map was derived from this by applying a threshold.
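The full baseline pipeline, including the morphological cleanup and interpolation steps described in the following paragraph, might look roughly as follows. The threshold, window sizes, and the use of SciPy filters are illustrative assumptions, since the actual parameters were tuned manually on training samples.

```python
import numpy as np
from scipy import ndimage

def filter_dsm_baseline(dsm, var_thresh=0.2, win=5, fill_win=15):
    """Hand-crafted baseline: mask pixels with high surface-normal variance,
    clean the mask morphologically, and fill masked heights from a local minimum."""
    gy, gx = np.gradient(np.asarray(dsm, dtype=float))
    # Unit surface normals derived from the height-map gradients.
    norm = np.sqrt(gx ** 2 + gy ** 2 + 1.0)
    nx, ny, nz = gx / norm, gy / norm, 1.0 / norm
    # Local variance of the normal components as a roughness measure.
    var = sum(ndimage.uniform_filter(c ** 2, win) -
              ndimage.uniform_filter(c, win) ** 2 for c in (nx, ny, nz))
    mask = var > var_thresh                      # candidate vegetation pixels
    mask = ndimage.binary_opening(mask)          # remove isolated detections
    mask = ndimage.binary_closing(mask)          # close small holes
    # Replace masked heights by the minimum in a window around each pixel;
    # fully masked windows would need a larger window or a DEM fallback.
    filled = ndimage.minimum_filter(np.where(mask, np.inf, dsm), fill_win)
    return np.where(mask, filled, dsm)
```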
To reduce noise in the resulting binary mask, we employed morphological opening and closing. The masked pixels were removed from the DSM. Missing values were subsequently interpolated. We considered various interpolation methods, of which filling with the minimal value in a specified window around the masked pixel yielded the best results. The parameters and interpolation method for this algorithm were manually tuned on samples from the training set and applied to the test areas without further changes.

### Test Results

Test results for the Berlin study area, given in Table 2, show that the proposed method is able to filter stereo DSMs. The residual between the input DSM and the ground-truth with an RMSE of 7.14 m can be decreased to 5.99 m by the baseline method. The current state of the art, which achieves an RMSE of 2.95 m, further improves on this. However, all of the tested architectural variants of the proposed method were able to outperform the state of the art. With an RMSE of 2.01 m, one instance of the proposed method, implemented using decoders based on DeepLabv3+ and PSPNet for the main DSM filtering and roof type classification task, respectively, scored the lowest error. Since all five of the tested instances of our approach achieved similar results, with a difference in RMSE of only 0.16 m, and particular strengths and weaknesses in different scenarios, we merged the results using a simple ensemble approach to further boost the prediction quality. We applied equally weighted averaging over all models to produce an ensemble prediction that scores an RMSE as low as 1.94 m, showing that fusing the prediction results yields a product that outperforms each of the contributing models by a sizeable margin of 0.07 m to 0.23 m. Roof type masks were fused by per-pixel majority voting, resulting in a final segmentation result that yields an mIOU of 65.85 %, which is an improvement over the state of the art with an mIOU of 64.02 %. A qualitative example of the roof type masks is shown in Figure 12, illustrating that the achieved segmentation result improves on the state of the art. As the ensemble further advances the state of the art in the relevant metrics (RMSE and MAE), we only considered the fused prediction results for further analyses. Comparing the spatial distribution of remaining errors in state-of-the-art predictions to our results, shown in Figure 13, reveals that a fair share of the total error stems from distinct areas in the study area. We investigated particularly notable regions in the southern part of the study area. Qualitative evaluation exposed problems in densely vegetated areas, as shown in a detailed view in Figures 13e to g. Incompletely removed vegetation apparently accounts for sizeable portions of the remaining errors. The baseline method with parameters tuned to capture trees in between blocks of buildings, \\begin{table} \\begin{tabular}{l c c c} \\hline \\hline Method & RMSE \\(\\downarrow\\) & MAE \\(\\downarrow\\) & mIOU \\(\\uparrow\\) \\\\ & (in m) & (in m) & \\\\ \\hline Stereo DSM & 7.14 & 1.99 & — \\\\ Baseline & 5.99 & 1.69 & — \\\\ Bittner et al.
[8] & 2.95 & 1.20 & 64.02 \\% \\\\ Ours (UNet/PSPNet) & 2.17 & 1.07 & 64.50 \\% \\\\ Ours (DeepLabv3+/DeepLabv3+) & 2.13 & 1.11 & 64.73 \\% \\\\ Ours (UNet/DeepLabv3+) & 2.04 & 1.06 & 64.14 \\% \\\\ Ours (DeepLabv3+/UNet) & 2.04 & 1.10 & 64.48 \\% \\\\ Ours (DeepLabv3+/PSPNet) & 2.01 & 1.09 & 64.90 \\% \\\\ Ours (Ensemble) & 1.94 & 1.06 & 65.85 \\% \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 2: Evaluation results for multiple instances of our multi-task framework on the Berlin test set. As expected, all methods easily outperformed the very basic stereo DSM baseline (first line), which serves as an input to the networks, as well as the stronger baseline using hand-crafted geometric features (second line). All variants of our model (lines 4-8) also considerably outperformed the state of the art (third line) in all metrics. Best performance (highlighted in blue) was achieved by fusing the prediction results of the five evaluated instances of our model using a simple ensemble approach. Figure 12: Roof type segmentation mask (b) for a scene in the Berlin test area (a) with prediction results using our approach (d) in comparison to the state of the art (c). Both methods succeed in segmenting building shapes. Classification of roof types is slightly better in our results, which is confirmed by an increase of roughly 2 % in terms of mIOU to a total of 65.85 %. as present in most of the training area, fails to remove larger patches of trees almost completely, as clearly visible from Figure 13a already. While this influence is also notable in both, state-of-the-art predictions and our results, our method is able to produce much more accurate height estimates, even in unfavorable scenarios. Since the test area is under influence of a temporal distance in between the acquisition of the stereo DSM that serves as input and the city model that provides the respective ground-truth (_cf._ Section 4) a portion of the remaining errors must be accredited to this factor. While this is acceptable for a relative comparison in large-scale areas, qualitative comparison of more detailed scenes requires addressing this effect. We selected a small scene from the test set that contains vegetation, high rise buildings, smaller buildings, as well as undeveloped flat land and densely vegetated areas that remained unchanged in between the acquisition dates. An analysis of such scene allows to draw qualitative conclusions and a comparison to the state of the art. Figure 14 shows a building complex in Berlin-Malchow that satisfies the desired properties, color-coded and projected into 3D space to allow for a more intuitive inspection. From this test scene, it can be observed that both methods manage to extract and refine the relevant information from the stereo DSM that contains a large offset in height to the ground-truth with an RMSE of 9.12 m, due to noise and vegetation. The state-of-the-art improved this baseline result by an unsurprisingly large margin to a remaining RMSE of 2.74 m. Outliers in the produced height map remain, in particular, in the densely vegetated area left of the high rise buildings, below the top-most horizontal buildings, and right of the arc of smaller buildings. Furthermore, some of the smaller buildings, especially in the outermost line of buildings in the arc are heavily distorted and almost missing. 
Our method improves on this result further by successfully suppressing the remaining outliers introduced by incomplete removal of vegetation while preserving even small buildings that are hardly visible in the stereo DSM with the naked eye. As a result, our predictions match the ground-truth closely and consequently achieve an even lower RMSE of only 2.14 m. ### Generalization Performance Evaluation of our models on the Berlin dataset allowed for a direct comparison to the state of the art. While the test area was unseen during training and validation, it was still derived from the same stereo DSM and is, hence, not completely independent from the training set. We, therefore, newly introduce a second dataset, completely unseen during training, processed from different data sources, and spatially disconnected from the other study area. The test results obtained from the Munich dataset, thus, allow for evaluating how well the developed method generalizes to unseen areas--similar to a real use-case. First prediction results of the full study area yielded a surprisingly high RMSE of almost 5 m, which hints at a systematic error. Indeed, the remaining errors from both prediction and input DSM, reveal large errors matching the shape of buildings, as to be seen in Figure 15. As already observed and mentioned in Section 4, a major part of the buildings in the study area was constructed, changed, or demolished in between the acquisition of input and ground-truth data. In order to still allow for a meaningful quantitative analysis, we extracted the central block of buildings from the study area, that remained unchanged. From this scene in Munich-Milbertshofen, shown in Figure 16, we derived similar performance metrics as before. With an RMSE of 3.33 m, the overall performance is slightly worse than the results on the Berlin dataset. Yet, it still easily outperforms the very simple stereo DSM baseline by almost 2 m. Interestingly, the baseline method which is based on hand-crafted features and thresholding parameters tuned on the training set of the Berlin study area, fails to correctly distinguish vegetation from buildings here and therefore scores an error of 7.16 m. Thus, it even degrades the quality of the original DSM. A qualitative evaluation shows that our method successfully extracts and refines buildings and removes vegetation. Even in this completely independent study area, located in an absolute height of 519 m above sea level, which is roughly 500 m higher than Berlin, with a DSM processed from images acquired by a different sensor, our method still performs well. With an RMSE that is still on par with state-of-the-art results on the much less independent Berlin test area, the proposed method generalizes well to unseen data, Figure 13: Remaining error in the filtered height estimates for the full Berlin test area from a baseline method (a) state-of-the-art results (b, e) and ours (c, f). Clearly visible are large spots with blatant errors, especially noticeable in (a) and (b). Our method produces remarkably better results in the affected areas, even though in some of the larger areas errors of much smaller magnitude are still evident. Flat and densely vegetated areas are most badly affected, as clearly visible from a detailed view (e-g). as opposed to the more traditional baseline method. While this independent testing protocol allows drawing first conclusions about the generalization performance, further analysis is desirable. 
The considered cities of Berlin and Munich share certain properties with regard to building types and appearance. CityGML models, utilized as ground-truth in our experiments, are not available globally rendering evaluation on a larger scale infeasible. However, our evaluation on independent test data approves that there are no signs of overfitting to the training set. In future work, LiDAR-derived data could potentially serve as ground-truth for predicted height maps. Building footprints could be evaluated using reference data from topographic maps, such as the OpenStreetMap project. Adopting such data sources for the evaluation process, however, requires further considerations regarding data pre-processing and the validation routine, and are therefore considered out of scope here. ## 7 Discussion The experimental evaluation of the proposed approach, presented in Section 6, showed the overall convincing performance of instances of our method. The general suitability of multi-task multi-objective training, as already known from prior work, was confirmed for our generalized modular encoder-decoder approach with learned weighting terms in an ablation study, presented in Section 5. Here, adding supervision through additional objectives and tasks consistently improved validation performance. This is especially noteworthy, as the additional ground-truth data was automatically derived from the same source. Hence, it is not adding to the effort of acquiring a suitable dataset consisting of various compatible labels for a single source image, which is a major obstacle to multi-task learning in general. In a study with different instances of our model, DeepLabv3+ decoders for the main task achieved the best performance over Figure 16: Predictions for an unseen area in Munich-Milbertshofen with no changes in between the acquisition of the stereo DSM and city model. The proposed method estimates height maps with comparable accuracy as the state of the art on the Berlin dataset, which is much closer to the training data. This experiment shows that networks following our concepts generalize well to unseen areas. The parameters of the baseline method based on hand-crafted geometric features, however, completely fail to distinguish buildings from vegetation when transferred to the unseen area. Figure 14: 3D view of a scene depicting a building complex in Berlin-Malchow that, in addition to distinct high-rise and low-rise buildings, contains flat undeveloped land and dense vegetation. Dense vegetation, as clearly visible in the lower part of the scene in the stereo DSM (a), was not completely removed by a state-of-the-art method (c). Furthermore, some of the small buildings, see left part of the scene in the ground-truth data (b), are heavily distorted or missing. Our method (d) successfully removes vegetation and preserves small buildings. Figure 15: Remaining height error in the Munich test set for stereo DSM (b) and prediction results (c). Comparing the error maps to a building mask 1 and stereo DSM (a) reveals the distinct influence of newly built and demolished buildings, especially in the northwestern and eastern part of the study area. all, despite featuring the lowest capacity out of all considered decoder architectures. While UNet decoders can achieve competitive results, they are more demanding in terms of memory as they contain a higher number of parameters. The PSPNet decoder, in comparison, was not able to produce satisfactory results for DSM filtering. 
It does, however, yield best results for the roof type segmentation task. Even though both of the other decoders are on par with this performance, the PSPNet decoder has the advantage of using few parameters and no skip connections from the encoder. Considering this, the DeepLabv3+/PSPNet model that performed best on the test set is most versatile and could easily be deployed in combination with other encoders that may not support multiple skip connections. Counterintuitively, the performance of the tested models was not directly related to their capacity. The learned weighting terms successfully balanced the different objectives, taking away the need for tedious manual tuning. The results of our studies suggest adding even more objectives and tasks for future approaches. Manually tuning the weighting terms to find an optimal set would surely render this strategy infeasible quickly, whereas an automatic and dynamic weighting procedure scales to this challenge easily. ## 8 Conclusions We presented a generalized framework for filtering stereo DSMs of urban areas, obtained from satellite imagery, via modular multi-task encoder-decoder CNNs. Our models utilize multiple objectives evaluating geometrical properties, an adversarial term, and segmentation results of a secondary roof type segmentation task. Balancing of these objectives in the multi-task multi-objective function, subject to optimization, was done by learned task uncertainty-based weights. Prior approaches on this application using multi-task learning [3] can be modeled as instances of our framework. In extensive experiments, we studied the performance of a variety of specific instances from the class of models our approach defines. Especially, different architectures for the decoder modules and the influence of the employed objectives were subject to studies. Our models were trained and tested on a common dataset from Berlin to allow for a fair comparison to the state of the art and were further evaluated on completely unseen and independent test data from a different city to give insight to their generalization performance as in a real use-case. The proposed method consistently outperforms the simple stereo DSM baseline, the stronger baseline based on handcrafted geometric features, and the current state of the art by a sizeable margin of more than 1 m RMSE, which corresponds to a decrease of around 34 %. In ablation studies, we were able to show the influence of each objective and the weighting scheme on the final result, and the suitability of different decoder architectures. Overall, the best performing instance of our model utilizes the full set of investigated objective functions for the main DSM filtering task, namely an \\(\\ell_{1}\\) loss, a surface normal loss, a conditional adversarial loss term, and a segmentation objective for the secondary roof type classification task. The single objectives were combined into multi-task multi-objective loss function using learned balancing terms based on homoscedastic uncertainty. Decoder modules based on DeepLabv3+ and PSPNet were employed for the main and auxiliary tasks. In comparison to a basic setup that was trained using a simple \\(\\ell_{1}\\) regression loss only, the proposed model not only yields a much lower error but also requires roughly 40 % less training, despite containing a higher number of parameters due to the addition of a secondary decoder. Source code and configuration files for the experiments reported in this paper will be publicly online soon2. 
Footnote 2: [https://github.com/lukasliebel/multitask_dsm_filtering](https://github.com/lukasliebel/multitask_dsm_filtering) ## Acknowledgements The work of Lukas Liebel was funded by the German Federal Ministry of Transport and Digital Infrastructure (BMVI) under reference 16AVF2019A. ## References * [1] M. Salah. \"Filtering of remote sensing point clouds using fuzzy C-means clustering\". In: _Applied Geomatics_ (2020). * [2] N. Anders, J. Valente, R. Masselink, and S. Keesstra. \"Comparing Filtering Techniques for Removing Vegetation from UAV-Based Photogrammetric Point Clouds\". In: _Drones_ 3.3 (2019), pp. 1-14. * [3] K. Bittner, M. Korner, F. Fraundorfer, and P. Reinartz. \"Multi-Task cGAN for Simultaneous Spaceborne DSM Refinement and Roof-Type Classification\". In: _Remote Sensing_ 11.11 (2019), p. 1262. * [4] S. Chennupati, G. Sistu, S. Yogamani, and S. Rawashdeh. \"AuxNet: Auxiliary tasks enhanced Semantic Segmentation for Automated Driving\". In: _International Conference on Computer Vision Theory and Applications_. 2019, pp. 1-8. * [5] J. Hu, M. Ozay, Y. Zhang, and T. Okatani. \"Revisiting Single Image Depth Estimation: Toward Higher Resolution Maps With Accurate Object Boundaries\". In: _Winter Conference on Applications of Computer Vision_. 2019, pp. 1043-1051. * [6] L. Liebel and M. Korner. \"MultiDepth: Single-Image Depth Estimation via Multi-Task Regression and Classification\". In: _International Transportation Systems Conference_. 2019, pp. 1440-1447. * [7] K. Bittner, P. d'Angelo, M. Korner, and P. Reinartz. \"Automatic Large-Scale 3D Building Shape Refinement Using Conditional Generative Adversarial Networks\". In: _International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_. Vol. 422. 2018, pp. 103-108. * [8] K. Bittner, P. d'Angelo, M. Korner, and P. Reinartz. \"DSM-to-LoD2: Spaceborne Stereo Digital Surface Model Refinement\". In: _Remote Sensing_ 10.12 (2018), p. 1926. * [9] M. Carvalho, B. L. Saux, P. Trouve-Peloux, A. Almansa, and F. Champagnat. \"On Regression Losses for Deep Depth Estimation\". In: _International Conference on Image Processing_. 2018, pp. 2915-2919. * [10] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. \"DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs\". In: _Transactions on Pattern Analysis and Machine Intelligence_ 40.4 (2018), pp. 834-848. * [11] L.-C. Chen, Y. Zhu, G. Papandreou, F. Schroff, and H. Adam. \"Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation\". In: _European Conference on Computer Vision_. 2018, pp. 1-18. * [12] H. Fu, M. Gong, C. Wang, K. Batmanghelich, and D. Tao. \"Deep Ordinal Regression Network for Monocular Depth Estimation\". In: _Conference on Computer Vision and Pattern Recognition_. 2018, pp. 2002-2011. * [13] P. Ghamisi and N. Yokoya. \"IMG2DSM: Height simulation from single imagery using conditional generative adversarial nets\". In: _Geoscience and Remote Sensing Letters_ 15.5 (2018), pp. 794-798. * [14] M. Guo, A. Haque, D.-A. Huang, S. Yeung, and L. Fei-Fei. \"Dynamic Task Prioritization for Multitask Learning\". In: _European Conference on Computer Vision_. 2018, pp. 282-299. * [15] A. Kendall, Y. Gal, and R. Cipolla. \"Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics\". In: _Conference on Computer Vision and Pattern Recognition_. 2018, pp. 7482-7491. * [16] L. Liebel and M. Korner. 
\"Auxiliary Tasks in Multi-task Learning\". In: _arXiv:1805.06334v2_ (2018), pp. 1-8. * [17] O. Sener and V. Koltun. \"Multi-Task Learning as Multi-Objective Optimization\". In: _Conference on Neural Information Processing Systems_. 2018, pp. 525-536. * [18] A. R. Zamir, A. Sax, W. B. Shen, L. J. Guibas, J. Malik, and S. Savarese. \"Taskonomy: Disentangling Task Transfer Learning\". In: _Conference on Computer Vision and Pattern Recognition_. 2018, pp. 3712-3722. * [19] X. Zhao, H. Li, X. Shen, X. Liang, and Y. Wu. \"A Modulation Module for Multi-task Learning with Applications in Image Retrieval\". In: _European Conference on Computer Vision_. 2018, pp. 1-16. * [20] L.-C. Chen, G. Papandreou, F. Schroff, and H. Adam. \"Rethinking Atrous Convolution for Semantic Image Segmentation\". In: _arXiv:1706.05587v3_ (2017), pp. 1-14. * [21] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. \"Image-To-Image Translation With Conditional Adversarial Networks\". In: _Conference on Computer Vision and Pattern Recognition_. 2017, pp. 5967-5976. * [22] Y. Lu, A. Kumar, S. Zhai, Y. Cheng, T. Javidi, and R. Feris. \"Fully-Adaptive Feature Sharing in Multi-Task Networks With Applications in Person Attribute Classification\". In: _Conference on Computer Vision and Pattern Recognition_. 2017, pp. 5334-5343. * [23] S. Ruder. \"An Overview of Multi-Task Learning in Deep Neural Networks\". In: _arXiv:1706.05098v1_ (2017), pp. 1-14. * [24] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia. \"Pyramid Scene Parsing Network\". In: _Conference on Computer Vision and Pattern Recognition_. 2017, pp. 6230-6239. * [25] K. He, X. Zhang, S. Ren, and J. Sun. \"Deep Residual Learning for Image Recognition\". In: _Conference on Computer Vision and Pattern Recognition_. 2016. * [26] F. Yu and V. Koltun. \"Multi-Scale Context Aggregation by Dilated Convolutions\". In: _International Conference on Learning Representations_. 2016. * [27] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. \"Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs\". In: _International Conference on Learning Representations_. 2015, pp. 1-14. * [28] D. Eigen and R. Fergus. \"Predicting Depth, Surface Normals and Semantic Labels With a Common Multi-Scale Convolutional Architecture\". In: _International Conference on Computer Vision_. 2015, pp. 2650-2658. * [29] R. Girshick. \"Fast R-CNN\". In: _International Conference on Computer Vision_. 2015, pp. 1440-1448. * [30] D. P. Kingma and J. Ba. \"Adam: A Method for Stochastic Optimization\". In: _International Conference on Learning Representations_. 2015, pp. 1-15. * [31] O. Ronneberger, P. Fischer, and T. Brox. \"U-Net: Convolutional Networks for Biomedical Image Segmentation\". In: _Medical Image Computing and Computer-Assisted Intervention_. 2015, pp. 234-241. * [32] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. \"ImageNet Large Scale Visual Recognition Challenge\". In: _International Journal of Computer Vision_ 115.3 (2015), pp. 211-252. * [33] C. Szegedy, Wei Liu, Yangqing Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. \"Going deeper with convolutions\". In: _Conference on Computer Vision and Pattern Recognition_. 2015, pp. 1-9. * [34] Z. Zhang, P. Luo, C. C. Loy, and X. Tang. \"Facial Landmark Detection by Deep Multi-task Learning\". In: _European Conference on Computer Vision_. 2014, pp. 94-108. * [35] P. d'Angelo and P. Reinartz. 
\"Semiglobal matching results on the ISPRS stereo matching benchmark\". In: _ISPRS Hannover Workshop_ 38.4/W19 (2011), pp. 79-84. * [36] T. Krauss and P. Reinartz. \"Enhancement of dense urban digital surface models from VHR optical satellite stereo data by pre-segmentation and object detection\". In: _International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_. Vol. 38. 1. 2010, pp. 1-6. * [37] B. Sirmacek, P. d'Angelo, and P. Reinartz. \"Detecting complex building shapes in panchromatic satellite images for digital elevation model enhancement\". In: _Workshop on Modeling of Optical Airborne and Space Bome Sensors_. 2010, pp. 1-6. * [38] B. Sirmacek, P. d'Angelo, T. Krauss, and P. Reinartz. \"Enhancing urban digital elevation models using automated computer vision techniques\". In: _ISPRS Commission VII Symposium_. 2010, pp. 541-546. * [39] T. H. Kolbe. \"Representing and Exchanging 3D City Models with CityGML\". In: _3D Geo-Information Sciences_. Ed. by J. Lee and S. Zlatanova. 2009. Chap. 2, pp. 15-31. * [40] K. Arrell, S. Wise, J. Wood, and D. Donoghue. \"Spectral filtering as a method of visualising and removing striped artefacts in digital elevation data\". In: _Earth Surface Processes and Landforms: The Journal of the British Geomorphological Research Group_ 33.6 (2008), pp. 943-961. * [41] R. Collobert and J. Weston. \"A unified architecture for natural language processing: Deep neural networks with multitask learning\". In: _International Conference on Machine Learning_. 2008, pp. 160-167. * [42] H. Hirschmuller. \"Stereo Processing by Semiglobal Matching and Mutual Information\". In: _Transactions on Pattern Analysis and Machine Intelligence_ 30.2 (2008), pp. 328-341. * [43] J. P. Walker and G. R. Willgoose. \"A comparative study of Australian cartonetric and photogrammetric digital elevation model accuracy\". In: _Photogrammetric Engineering & Remote Sensing_ 72.7 (2006), pp. 771-779. * [44] D. Tovari and N. Pfeifer. \"Segmentation Based Robust Interpolation -- A New Approach To Laser Data Filtering\". In: _International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences_. 2005, pp. 79-84. * [45] K. Jacobsen and P. Lohmann. \"Segmented Filtering of Laser Scanner DSMs\". In: _International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_. 2003, pp. 1-6. * Between Visions and Applications_. 1998, pp. 649-656. * [47] R. Caruana. \"Multitask Learning\". In: _Machine Learning_ 28.1 (1997), pp. 41-75. * [48] U. Weidner. \"Digital Surface Models for Building Extraction\". In: _Automatic Extraction of Man-Made Objects from Aerial and Space Images_. 1997, pp. 193-202. * [49] J. R. Shewchuk. \"Triangle: Engineering a 2D quality mesh generator and Delaunay triangulator\". In: _Workshop on Applied Computational Geometry_. 1996, pp. 203-222. * [50] B. Delaunay. \"Sur la sphere vide\". In: _Bulletin de l'Academie des Sciences de l'URSS_ 6 (1934), pp. 793-800.
City models and height maps of urban areas serve as a valuable data source for numerous applications, such as disaster management or city planning. While this information is not globally available, it can be substituted by digital surface models (DSMs), automatically produced from inexpensive satellite imagery. However, stereo DSMs often suffer from noise and blur. Furthermore, they are heavily distorted by vegetation, which is of lesser relevance for most applications. Such basic models can be filtered by convolutional neural networks (CNNs), trained on labels derived from digital elevation models (DEMs) and 3D city models, in order to obtain a refined DSM. We propose a modular multi-task learning concept that consolidates existing approaches into a generalized framework. Our encoder-decoder models with shared encoders and multiple task-specific decoders leverage roof type classification as a secondary task and multiple objectives including a conditional adversarial term. The contributing single-objective losses are automatically weighted in the final multi-task loss function based on learned uncertainty estimates. We evaluated the performance of specific instances of this family of network architectures. Our method consistently outperforms the state of the art on common data, both quantitatively and qualitatively, and generalizes well to a new dataset of an independent study area.
Effects of Environment in the Microstructure and Properties of Sustainable Mortars with Fly Ash and Slag after a 5-Year Exposure Period Jose Marcos Ortega 1Departamento de Ingenieria Civil, Universidad de Alicante, Ap. Correos 99, 03080 Alicante, Spain; [email protected] Rosa Maria Tremino 1Departamento de Ingenieria Civil, Universidad de Alicante, Ap. Correos 99, 03080 Alicante, Spain; [email protected] Isidro Sanchez 1Departamento de Ingenieria Civil, Universidad de Alicante, Ap. Correos 99, 03080 Alicante, Spain; [email protected] Miguel Angel Climent 1Correspondence: [email protected] Received: 30 January 2018; Accepted: 27 February 2018; Published: 1 March 2018 ## 1 Introduction Nowadays, getting a more environmentally sustainable cement production is one of the main goals of the cement industry. Among the different ways to reach this aim, the use of additions as clinker replacement has lastly become increasingly common [1; 2; 3; 4; 5; 6], providing benefits such as the lessening of CO\\({}_{2}\\) emissions and the reduction of energy consumption throughout the cement manufacture, as well as the reuse of wastes coming from other industrial sectors, which would also help to solve their storage problem. The most popular active additions are ground granulated blast-furnace slag and fly ash and consequently their effects on the pore structure and cementitious materials properties have been well-studied, mainly when their hardening was produced in optimum laboratory conditions [7; 8; 9]. Under those conditions, they showed a better behaviour compared to cement-based materials prepared using ordinary Portland cement (OPC) without any addition [7]. On one hand, for slag this result is related to the hydration reactions of this addition, which form new CSH phases, entailing a more refined pore network [7; 8; 10]. On the other hand, the fly ash has pozzolanic activity, which means that it reacts with portlandite produced along the clinker hydration, giving rise to the formation of additional hydrated products, thus increasing the refinement of the pore network of mortars and concretes [9; 10; 11]. The abovementioned pore refinement produced by both additions improves the performance of cementitious materials, for example their chloride ingress resistance [12; 13; 14; 15; 16; 17] and their permeability [18]. However, real structures are hardened in different environmental temperature and relative humidity conditions than optimum laboratory one and this could affect the development of the pore network and service properties of cementitious materials, especially when active additions are used. In this regard, there are not too many experimental works where slag and fly ash cement-based materials had been hardened under real in-situ environments [12; 13; 19; 20; 21; 22; 23; 24; 25; 26] and considering their results, those materials showed different behaviour as a function of the climate characteristics of the place in which the specimens were located, although the majority of them coincide in the fact that the use of both additions is particularly adequate for marine structures [13; 23; 27; 28]. 
For studying the effects of the different environmental parameters which are involved in the development of microstructure and properties of cement-based materials, the main problem of exposing the materials to a real in-situ hardening conditions is the higher variability of the environmental temperature and relative humidity (among other parameters) during the exposure period, which make difficult to determine the effects of each one of them, especially when active additions are used. Then, it could also be appropriate to analyse the influence of exposure of those materials to non-optimum laboratory conditions, with a combination of constant temperature and relative humidity, as a complement and a good approach to real in-situ environment studies. Regarding those non-optimum laboratory hardening conditions, there are several researches where slag and fly ash cementitious materials were kept under different constant temperature and relative humidity [29; 30; 31; 32; 33; 34; 35; 36; 37] and most of them concluded that overall the performance of those materials was adequate [30; 31; 33; 34; 35; 38], mainly when the condition presented relatively high values of those environmental parameters [30; 31]. Nevertheless, the above-mentioned research has only studied the influence of those parameters over a relatively short time (in general less than 1 exposure year) and their effects would probably be more noticeable after very long exposure times. Furthermore, the analysis of the performance in the very long-term of cementitious materials with fly ash and slag kept under non-optimum laboratory hardening conditions could be also interesting in relation to the fact that the real structures should be designed for long service life periods, generally up to 50 years or even longer [39]. Then, the main purpose of this work is to observe the very long-term effects (until 5 years approximately) produced by the exposure to different non-optimum laboratory conditions, in which different constant relative humidity and temperature values were combined, in the microstructure, mechanical and durability behaviour of mortars prepared with fly ash and ground granulated blast-furnace slag commercial cements. Their performance has been compared to that noted for ordinary Portland cement (OPC) mortars. With respect to the experimental methodology of this research, the microstructure has been characterised through mercury intrusion porosimetry. Moreover, the studied parameters related to durability were the non-steady state chloride migration coefficient, the capillary suction coefficient and the effective porosity. Lastly, the mechanical behaviour of the mortars was checked determining their compressive and flexural strength. ## 2 Materials and Methods ### Sample Preparation In this work, mortar samples were tested, which were made using four commercial cements. The first one was an ordinary Portland cement, CEM I 42.5 R (CEM I hereafter) according to the Spanish and European standard UNE-EN 197-1 [40]. The others were sustainable cements. On one hand, a slag cement type III/B 42.5 L/SR [40] (CEM III from now on), with a content of ground granulated blast-furnace slag between 66% and 80% of total binder, was studied. 
On the other hand, two commercial fly ash cements were also analysed, a Portland cement with fly ash, CEMII/B-V 42.5 R [40] (CEM II hereafter), with fly ash content from 21% to 35% and a pozzolanic cement, CEM IV/B(V) 32.5 N [40] (CEM IV from now on), whose percentage of fly ash was between 36% and 55% of total binder. The different components of each one of the commercial cements and their percentage of the total binder are detailed in Table 1. It is important to emphasize here that the four studied cements were commercial ones, for reproducing more accurately the conditions of in-situ construction, when it commonly is difficult to mix OPC and additions in the field. The mortars were made using water to cement (w:c) ratios 0.4 and 0.5. Fine aggregate was used according to the standard UNE-EN 196-1 [41] and the aggregate-to-cement ratio was 3:1 for all the mortars. Cylindrical specimens, with dimensions 10 cm diameter and 15 cm height, have been prepared, as well as 4 cm \\(\\times\\) 4 cm \\(\\times\\) 16 cm prismatic specimens [41]. After setting, all specimens were kept in a chamber with 20 \\({}^{\\circ}\\)C and 95% RH along 24 h. Previously to the exposure to the different environmental conditions, the cylindrical specimens were cut in order to obtain cylinders of 1 cm and 5 cm thickness. Finally, the tests were performed at 28, 365 and 1900 days (5 years approximately) of age. ### Environmental Conditions As it has been explained in the introduction section, the main objective of this work is to analyse the very long-term effects (up to about 5 years) in different types of sustainable cement mortars, produced by the exposure to different non-optimum laboratory conditions, in which temperature and relative humidity values were combined. In order to reach this aim, four different laboratory environments were analysed (see Table 2). The first one (environment A) consisted of an optimum laboratory condition with 20 \\({}^{\\circ}\\)C and 100% relative humidity (RH), which was considered as a reference for comparing the effects of the rest of non-optimum environments in the pore structure and properties of the mortars. The environments B (15 \\({}^{\\circ}\\)C and 85% RH) and C (20 \\({}^{\\circ}\\)C and 65% RH) represented the Atlantic and Mediterranean climates. Those climates are present in different areas of Iberian Peninsula (Spain and Portugal) and their RH and temperature values were selected according to the annual average values of both parameters for each climate. Lastly, an extreme environmental condition was analysed, named as environment D, with 30 \\({}^{\\circ}\\)C and 40% RH. The climatic conditions corresponding to environments A to D were achieved putting the mortar specimens into hermetically sealed recipients containing distilled water or glycerol aqueous solutions of the appropriate concentration, following the standard DIN 50,008 part 1 [42], to get the desired RH value. The contact between the mortar samples and the liquid was avoided. Then, the containers were stored in chambers of controlled temperature. 
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline
\multirow{2}{*}{**Component**} & \multicolumn{2}{c}{**CEM I**} & \multicolumn{2}{c}{**CEM II**} & \multicolumn{2}{c}{**CEM III**} & \multicolumn{2}{c}{**CEM IV**} \\ \cline{2-9}
 & **UNE-EN 197-1 [40]** & **Manufacturer Data 1** & **UNE-EN 197-1 [40]** & **Manufacturer Data 1** & **UNE-EN 197-1 [40]** & **Manufacturer Data 1** & **UNE-EN 197-1 [40]** & **Manufacturer Data 1** \\ \hline
Portland cement clinker & 95-100\% & 95\% & 65-79\% & 75\% & 20-34\% & 31\% & 45-64\% & 50\% \\
Limestone & - & 5\% & - & - & - & - & - & - \\
Blast-furnace slag & - & - & - & - & 66-80\% & 69\% & - & - \\
Fly ash & - & - & 21-35\% & 25\% & - & - & 36-55\% & 50\% \\ \hline \hline
\end{tabular}
* 1 Specific percentage of each component usually used according to the manufacturer.
\end{table}
Table 1: Components of the commercial cements used.

\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Environment** & **Temperature** & **Relative Humidity** & **Represented Climate** \\ \hline
Environment A & 20 \({}^{\circ}\)C & 100\% & Optimum condition \\
Environment B & 15 \({}^{\circ}\)C & 85\% & Atlantic climate \\
Environment C & 20 \({}^{\circ}\)C & 65\% & Mediterranean climate \\
Environment D & 30 \({}^{\circ}\)C & 40\% & Extreme condition \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Overview of studied environments.

### Mercury Intrusion Porosimetry
The pore network of the mortars was analysed using the mercury intrusion porosimetry technique. The equipment used was a porosimeter model Autopore IV 9500 from Micromeritics, which allows a maximum pressure of 225 MPa. Prior to the experiment, samples were dried in an oven for 24 h at 105 \({}^{\circ}\)C. Two measurements were performed at all testing ages. The tested samples were taken from the cylinders of 1 cm thickness. The pore size distribution, the total porosity and the percentage of Hg retained at the end of the experiment were studied.

### Capillary Absorption Test
This test was performed in accordance with the standard UNE 83982 [43] and is based on the Fagerlund method for determining the capillarity of concrete. Before the test, the specimens were subjected to a pre-conditioning procedure, which first consisted of complete drying in an oven at 105 \({}^{\circ}\)C for 12 h. After that, and until the start of the test (a minimum of 12 h), the specimens were kept in a hermetic desiccator containing silica gel [14; 44; 45]. This pre-conditioning procedure was used, instead of other recommended protocols [46], in order to avoid excessively long pre-conditioning periods and any contact of the mortar samples with liquid water. The results of this test are the effective porosity and the capillary suction coefficient. For each environment and cement type, three cylindrical specimens of 10 cm diameter and 5 cm thickness were tested at each age.

### Forced Migration Test
Water-saturated mortar samples were tested according to the NT Build 492 standard [47]. The result obtained from this test is the non-steady-state chloride migration coefficient D\({}_{\rm NTB}\). For each condition and cement type, three cylindrical specimens of 10 cm diameter and 5 cm height were tested at each studied age.
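For reference, the coefficient D\({}_{\rm NTB}\) (often written D\({}_{\rm nssm}\)) is usually obtained from the simplified expression given in NT Build 492; the version below is only a reminder of its form, and the symbols and units should be checked against the standard [47] before use:

\[ D_{\rm NTB} = \frac{0.0239\,(273+T)\,L}{(U-2)\,t}\left(x_{d}-0.0238\sqrt{\frac{(273+T)\,L\,x_{d}}{U-2}}\right) \]

where D\({}_{\rm NTB}\) is expressed in \(\times 10^{-12}\) m\({}^{2}\)/s, U is the applied voltage (V), T the average temperature in the anolyte (\({}^{\circ}\)C), L the specimen thickness (mm), x\({}_{d}\) the average chloride penetration depth (mm) and t the test duration (h).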
### Mechanical Strength Test
Both compressive and flexural strengths were determined in accordance with the standard UNE-EN 196-1 [41]. Three 4 cm \(\times\) 4 cm \(\times\) 16 cm prismatic specimens were tested for each cement type and environment at the studied hardening ages.

## 3 Results and Discussion

### Mercury Intrusion Porosimetry
The total porosity results obtained for the analysed mortars are shown in Figure 1. For environment A, this parameter generally decreased with time for all of them. This result could be related to the high relative humidity (100%) combined with a high enough temperature (20 \({}^{\circ}\)C) in this optimum environment, which allows an adequate development of clinker and slag hydration [30; 44] and fly ash pozzolanic reactions [44; 48], whose products would entail a rise in the solid fraction of the samples and consequently a porosity reduction. The lowest total porosities for environment A corresponded to CEM III mortars, while this parameter was very similar for CEM I and II mortars and a little higher for CEM IV ones, which would be in keeping with other authors [7]. In relation to environment B, after the approximately 5-year period, the values of total porosity were slightly higher than those observed for environment A for each cement type, although in general terms there was not much difference in total porosity between the two conditions. However, the rate of reduction of this parameter with time was lower for mortars exposed to environment B. The slightly higher total porosities and their slower decrease observed for environment B could be due to its lower temperature (15 \({}^{\circ}\)C), which would slow down the development of clinker and slag hydration [29; 30], as well as the fly ash pozzolanic reactions [36; 49], producing a slower formation of new solid phases. Furthermore, the lower relative humidity (85%) of environment B in comparison with environment A would mean that less water would be available in the long-term to sustain the development of the abovementioned reactions, hindering them [44], as suggested by the fact that the total porosity remained practically constant at very late ages, especially for CEM I and III mortars. Despite that, the slag and fly ash cement mortars again showed lower or similar total porosities compared to CEM I ones after more than 5 years of exposure to condition B. The total porosity values for mortars kept in environment C hardly changed during the studied period, except for CEM II and III specimens prepared with w:c ratio 0.5, for which this parameter notably decreased with time. The fact that this parameter showed only slight changes could be related to the low environmental relative humidity (65%), even though it is combined with an optimum temperature (20 \({}^{\circ}\)C). This optimum temperature would permit a suitable development of hydration and pozzolanic reactions [30; 36], but the relatively low environmental relative humidity would make it difficult [44; 48; 50], resulting in less formation of new solid phases. Once again, the mortars prepared using sustainable cements with slag and fly ash hardened in environment C also showed lower or similar total porosities in the very long-term compared to CEM I mortars. Regarding environment D, the total porosity increased with hardening age for many of the studied samples and changed only slightly for the rest. This behaviour could be explained as a consequence of the extreme conditions of this environment, with a high temperature (30 \({}^{\circ}\)C) and very low relative humidity (40%).
On one hand, the high temperature would accelerate the development of hydration and pozzolanic reactions [35; 36; 49], entailing a fast formation of solids in the short-term, as would corroborate the relatively low total porosities noted at 28 days. Nevertheless, the very low relative humidity would not permit the further progression of this process [30; 31; 33]. Moreover, this dryness of the environment could produce the appearance of shrinkage microcracking in the samples [51], which could justify the rise of the total porosity in the long-term, which has been noted in many studied mortars. In spite of that, after about 5-year exposure period, the CEM II and III mortars exposed to environment D showed lower or similar total porosities compared to those prepared using CEM I. On the other hand, the greatest values of this parameter in the long-term were observed for CEM IV mortars, which could mean that the porosity of this cement type would be more affected by the extreme conditions of environment D than the others analysed, probably by the more severe effects of the previously explained shrinkage microcracking phenomenon. The pore size distributions obtained for specimens kept in environment A can be observed in Figure 2. Overall, it has been noted a progressive pore refinement with age for all the studied mortars. This result agrees with the total porosity results already discussed. The high water availability in this environment, combined with a high enough temperature, would facilitate the development of clinker and slag hydration, as well as the fly ash pozzolanic reactions, producing a fast new solid formation, which would reduce the size of the pores of the sample [30; 44; 48]. In addition to this, it is important to indicate that mortars with fly ash and slag showed a more refined microstructure than CEM I ones. This could be due to the formation of additional CSH phases as products of slag hydration and fly ash pozzolanic reactions [7; 8; 9]. The changes with time of pore size distribution for mortars kept under environment B are depicted in Figure 3. As happened in environment A, a continuous pore network refinement with hardening age has also been observed for environment B. Nevertheless, this microstructure refinement was produced in a slower way in this environment and in general terms, for all studied cement types the pore network was less refined at 1900 days compared to environment A. This result would be in keeping with those noted for total porosity and it would confirm the slowing down of hydration and pozzolanic reactions [29; 30; 36; 49] in an environment with a temperature lower than the optimum one. In spite of that, the relatively high relative humidity would allow that these reactions continued developing [33; 44], as would suggest the ongoing increase of volume of smaller pores, although without reaching the refinement degree observed for environment A. Figure 1: Results of total porosity for the studied mortars. The development of pore size distributions for samples hardened in environment C can be observed in Figure 4. Generally, the microstructure of the samples kept under this environment was less refined than that obtained for environment A. Furthermore, at 365 days the pore size distributions of the majority of the mortars noted for environment C, were similar to those observed for environment B. 
However, between 365 and 1900 days, most of the samples hardened under environment C did not experience a rise in the volume of fine pores (sizes less than 100 nm), and many of them even showed a loss of microstructure refinement. This would reveal the fact that an optimum temperature favours the development of hydration and pozzolanic reactions [30; 35; 49], but a relatively low relative humidity in the environment would prevent them in the long-term, reducing the formation of new solids [31; 44] and consequently preventing the pore refinement process. The pore size distributions for mortars hardened in environment D are depicted in Figure 5. At 28 days, they did not differ too much from those observed for the rest of the non-optimum conditions. In spite of that, from then until 1900 days, the majority of the mortars showed a progressive reduction of microstructure refinement, as suggested by the increase in the volume of pores with sizes greater than 100 nm, with the exception of CEM III mortars, which were hardly affected by this extreme condition. The microstructure evolution of samples exposed to environment D generally coincided with the total porosity results. The pore size distributions noted in the very short-term in environment D could be a result of its high temperature, which would accelerate the formation of solid fraction by the hydration and pozzolanic reactions [30; 33; 35; 36], increasing the volume of finer pores. Nevertheless, in the middle- and long-term the lack of enough water in the environment would prevent this process from continuing [31; 44]. In addition to this, as has already been explained, the very low environmental relative humidity could produce the appearance of shrinkage cracking [31; 51], which would open the pore network, reducing its refinement, as has been registered for several of the studied samples.

Figure 2: Pore size distributions obtained for environment A.

Figure 3: Pore size distributions obtained for environment B.

The results of the percentages of Hg retained in the samples at the end of the experiment are shown in Figure 6. This parameter gives information about the pore structure tortuosity [52]. Overall, the highest Hg retained values were observed for mortars hardened in environment A, which would mean that the microstructure of these samples had a higher tortuosity. This agrees with the pore size distribution results obtained for this environment, which revealed a greater pore network refinement, favoured by its optimum temperature and relative humidity. In relation to environment B, the Hg retained values were similar to those noted for environment A, or even slightly lower in the very long-term, which would also be in keeping with the other mercury intrusion porosimetry results previously explained. For environment C, the Hg retained also showed similar or higher values compared to those observed for environment B in the middle-term, which would indicate that the samples' pore network had a similar tortuosity. However, at 1900 days this parameter was lower for mortars hardened in environment C, especially for those prepared with w:c ratio 0.4. These results would coincide with the loss of pore refinement in the very long-term observed for these samples, already discussed in relation to the relatively low humidity present in this environment. Finally, in spite of its relatively high values at 28 days, the Hg retained decreased or remained constant with time for the studied mortars kept in environment D.
This could mean that no increase, or even a fall, of pore network tortuosity was produced, which would again agree with the loss of microstructure refinement observed for most of the mortars kept under this environment. This result could be related to the formation of shrinkage microcracks due to its very low humidity [51], which would open the microstructure and reduce its tortuosity. For all the environments, the highest Hg retained values corresponded to CEM III mortars, which would agree with their more refined pore network and would reveal the beneficial effects of slag hydration on the pore network development of cementitious materials [7; 8; 12], even in the very long-term.

Figure 4: Pore size distributions obtained for environment C.

Figure 5: Pore size distributions obtained for environment D.

Figure 6: Results of mercury retained at the end of MIP tests for the studied mortars.

### Capillary Absorption Results
The results of the capillary suction coefficient K can be observed in Figure 7. Overall, there were no large differences in the coefficient K between the studied environments. For environments A and B, its values were higher for CEM II and IV, and it showed a global decreasing tendency with age, mainly in the middle-term, for all the studied mortars. This result could be related to the high humidity present in both environments, which would favour the development of the pozzolanic and hydration reactions [30; 44], as has been explained for the microstructure results. The coefficient K increased beyond 365 days for most of the studied mortars hardened in environment C, while for the majority of those exposed to environment D the rise of this parameter started earlier (at 28 days). This also agrees with the mercury intrusion porosimetry results, which would indicate the deleterious effect in the very long-term of the low availability of water in the environment, hindering the hydration and pozzolanic reactions [31; 44] and causing the possible formation of shrinkage microcracks [51]. Lastly, it is important to indicate that for environments C and D, the mortars prepared using sustainable cements with slag and fly ash generally showed similar or lower values of coefficient K in the very long-term compared to CEM I ones. The effective porosity results are depicted in Figure 8. They showed similarities with those observed for the coefficient K. The lowest effective porosities were noted for condition A, especially for CEM I and III mortars, which would coincide with the majority of the results of this work, showing the influence of a condition with an optimum relative humidity [30; 44; 48]. The effective porosity results noted for environment B did not differ too much from those for environment A, probably due to its relatively high humidity [44], as has already been explained. For both environments A and B, the effective porosity showed greater values for CEM II and IV mortars than for CEM I and III ones. In relation to environment C, the very long-term effective porosities were similar to those obtained for environment B, especially for the fly ash mortars, while they were slightly higher for CEM I and III ones. The greatest values of effective porosity at longer ages corresponded to environment D, and they increased with time. This is also in keeping with the mercury intrusion porosimetry results, and it would indicate the negative effect in the long-term caused by an environment with very low relative humidity [31; 33; 51], previously discussed.
Finally, for environments C and D, the effective porosity at greater ages for mortars with active additions was not too much different than that noted for CEM I ones. ### Forced Chloride Migration Results The non-steady-state chloride migration coefficient D\\({}_{\\text{NTB}}\\) results are represented in Figure 9. Generally, the migration coefficients showed by mortars with additions were lower for all studied conditions in comparison with those noted for CEM I mortars. This would agree with several studies [12; 15; 23; 24], which have pointed out that using slag and fly ash cements brings a noticeable improvement in chloride ingress resistance. Firstly, this could be explained in relation to the higher refinement of pore network provided by fly ash and slag in comparison with pure clinker [7; 8; 9; 14; 15]. On the other hand, this good resistance of fly ash and slag mortars to chloride ingress since very early ages and its maintenance when the hardening was produced under non-optimum environments, which produced a lower microstructure refinement in the long-term compared to an optimum condition, could also be justified as a consequence of the higher chloride binding capacity of slag and fly ash cements, compared to OPC. The high content of calcium aluminates provided by fly ash and slag explains this improved binding capacity [10]. Regarding the influence of the environmental conditions, it is important to emphasize the increase with time of the migration coefficient for CEM I mortars exposed to environments C and D. This rise was also observed for CEM II specimens but only hardened in environment D. Both environments had a relatively low relative humidity, so it seems that this circumstance would have deleterious effects in the resistance against chloride ingress of both mortar types, especially for CEM I ones,possibly related to the abovementioned shrinkage microcracks formation combined with a less refined microstructure [50; 51]. Figure 7: Results of capillary suction coefficient (K) for the studied mortars. Figure 8: Effective porosity results for the studied mortars. Figure 9: Non-steady-state chloride migration coefficient evolution for the studied mortars. ### Mechanical Strength Results The results of compressive and flexural strengths obtained for the studied mortars are depicted in Figures 10 and 11, respectively. First of all, it is important to emphasize that after more than 5-years exposure period, the mortars made with fly ash and slag cements generally showed similar or higher compressive and flexural strengths compared to CEM I ones. This would agree with several studies [9; 12; 53], which have pointed out that slag and fly ash cements provide an important increase of mechanical strength in the long-term. Figure 10: Results of compressive strength for the studied mortars. Figure 11: Results of flexural strength for the studied mortars. With respect to the influence of hardening conditions, the compressive and flexural strengths results at greater ages did not differ too much between the analysed environments. Only, a slightly higher strengths were observed for environment A and scarcely lower for environment D, which would suggest the influence of the water availability in the environment [51], as has been previously explained. Despite that, it seems that the environmental temperature and relative humidity would not have a high influence in the mechanical performance of the studied mortars. 
Finally, the very low compressive and flexural strengths obtained for CEM I mortars prepared using wc ratio 0.4 may be influenced by the compaction problems noted during the setting of prismatic samples of those mortars, which showed very low fluidity. In spite of that, no plasticiser was included in the mix, for keeping the same mixing conditions for all the analysed mortars. This problem was not observed for fly ash and slag cement mortars made with the same wc ratio. The present results give further evidence that it is possible to use cementitious materials for concrete elements and structures with a high level of replacement of Portland cement linker by active additions, such are fly ash and ground granulated blast furnace slag [54]. In this study commercial cements containing the active additions have been used, since the structural concrete codes of certain countries restrict the possibility to incorporate some active additions when mixing the concrete and/or limit the replacement level of clinker by these materials [55]. These more sustainable binders can be used without reductions of the mechanical strength, at least in the long term and sometimes with improvements of the durability properties of the hardened cementitious materials, in agreement with previous results [54]. In this work, the focus has been put on studying the influence of the climatic conditions (mild Mediterranean and Atlantic climates) on the development of mechanical strength and on properties related with the durability of reinforced concrete elements exposed to a marine environment, where the corrosion of steel reinforcement is triggered by chloride ions which mainly penetrate the concrete cover on steel through capillary absorption or diffusion mechanisms. Nevertheless, when going to engineering design practice caution must be taken to fully consider all the degradation mechanisms and all the performance requirements of the construction under study [56]. For instance, it has been demonstrated that concrete with high levels of supplementary cementitious materials, like fly ash and slag, may not be suitable for exposure to a marine environment with multiple cycles of freezing and thawing, since these materials may suffer increased surface material loss in the long term [56]. ## Conclusions The main conclusions that can be drawn from the results previously discussed can be summarized as follows: 1. An environment with high relative humidity would produce a more refined microstructure and would improve the mechanical performance and the durability-related properties of fly ash, slag and OPC mortars in the very long-term (5-years period approximately). 2. The environmental temperature has an influence in the development of pore structure and service properties of fly ash, slag and OPC mortars. A high temperature would accelerate the pore network refinement and the improvement of properties in the short-term, while a lower environmental temperature would slow down them. This is due to the effects of the temperature in the progress of clinker and slag hydration and fly ash pozzolanic reactions. 3. A low environmental relative humidity would entail a reduction of pore network refinement in the very long-term for all the studied mortars. 4. Generally, the pore structure of fly ash and slag cement mortars was more refined for all the analysed environments than that observed for CEM I ones. This may be due to the additional solid phases formed as products of fly ash pozzolanic reactions, as well as slag hydration. 5. 
The effective porosity and the capillary suction coefficient for all the studied mortars in the very long-term were hardly influenced by the environmental conditions. The mortars prepared using sustainable cements with slag and fly ash generally showed in the very long-term similar or lower values of both effective porosity and capillary suction coefficient, as compared to OPC mortars, when they are exposed to low relative humidity environments. 7. The lowest non-steady state chloride migration coefficients were observed for the mortars made with fly ash and slag cements, regardless of the environmental condition. This may be related to the pore structure refinement provided by fly ash and slag compared to clinker and by the greater binding capacity of cements with those additions. 8. The CEM I mortars exposed to the environments with 40% and 65% relative humidity showed an important increase of the non-steady state chloride migration coefficient in the long-term. This was also noted for CEM II mortars but only for 40% relative humidity condition. Then, it seems that an environment with low relative humidity would have deleterious effects in the long-term resistance against chloride ingress of mortars in which the abovementioned cement types were used. 9. After a 5-years exposure period, the mortars prepared using sustainable slag and fly ash cements overall showed similar or higher compressive and flexural strengths compared to CEM I ones. 10. The mechanical strengths of all the studied mortars in the very long-term were slightly affected by the environmental conditions. 11. In view of the results obtained, mortars made with sustainable commercial cements with ground granulated blast-furnace slag and fly ash, exposed to non-optimum environments, show a good performance in the very long-term (after a 5-years hardening period) with respect to their microstructure, mechanical and durability-related properties, being similar or even better compared to OPC mortars. This research has been financially supported by the \"Ministerio de Economia y Competitividad\" (formerly \"Ministerio de Ciencia e Innovacion\") and AEI of Spain and FEDER (EU) with projects BIA2006-05961, BIA2010-20548, BIA2011-25721 and BIA2016-80982-R. The authors wish to thank Cementos Portland Valderrivas S.A. for providing the cements used in this study. The results included in this paper related to the short- and middle-term effects of the environment in the studied mortars were obtained in the Ph.D. thesis carried out by Jose Marcos Ortega at University of Alicante (Spain), under the supervision of Isidro Sanchez and Miguel Angel Climent. The results included in this paper regarding the long-term effects of the environment in the studied mortars were obtained in the master's final project carried out by Rosa Maria Tremino, under the supervision of Jose Marcos Ortega and Isidro Sanchez, to obtain the Civil Engineering Master's degree at University of Alicante (Spain). Jose Marcos Ortega wrote the paper with contributions from the other co-authors. Jose Marcos Ortega, Rosa Maria Tremino and Isidro Sanchez performed the experiments. Isidro Sanchez and Miguel Angel Climent supervised the research work and revised the paper. All the authors contributed to conceive and design the experiments and to analyse and discuss the results. The authors declare no conflict of interest. ## References * Ponnikiewski and Golaszewski (2014) Ponnikiewski, T.; Golaszewski, J. 
The effect of high-calcium fly ash on selected properties of self-compacting concrete. _Arch. Civ. Mech. Eng._**2014**, _14_, 455-465. [CrossRef] * Glinicki et al. (2016) Glinicki, M.; Jozwiak-Niedzwiedzka, D.; Gibas, K.; Dabrowski, M. Influence of Blended Cements with Calcareous Fly Ash on Chloride Ion Migration and Carbonation Resistance of Concrete for Durable Structures. _Materials_**2016**, \\(9\\), 18. [CrossRef] [PubMed] * Ortega et al. (2016) Ortega, J.M.; Esteban, M.D.; Rodriguez, R.R.; Pastor, J.L.; Sanchez, I. Microstructural Effects of Sulphate Attack in Sustainable Grouts for Micropiles. _Materials_**2016**, \\(9\\), 905. [CrossRef] [PubMed] * Ortega et al. (2017) Ortega, J.M.; Esteban, M.D.; Rodriguez, R.R.; Pastor, J.L.; Blanco, F.J.; Sanchez, I.; Climent, M.A. Long-Term Behaviour of Fly Ash and slag Cement Grouts for Micropiles Exposed to a Sulphate Aggressive Medium. _Materials_**2017**, _10_, 598. [CrossRef] [PubMed]Williams, M.; Ortega, J.M.; Sanchez, I.; Cabeza, M.; Climent, M.A. Non-Destructive Study of the Microstructural Effects of Sodium and Magnesium Sulphate Attack on Mortars Containing Silica Fume Using Impedance Spectroscopy. _Appl. Sci._**2017**, \\(7\\), 648. [CrossRef] * Ortega et al. (2017) Ortega, J.M.; Esteban, M.D.; Rodriguez, R.R.; Pastor, J.L.; Blanco, F.J.; Sanchez, I.; Climent, M.A. Influence of Silica Fume Addition in the Long-Term Performance of Sustainable Cement Grouts for Micropiles Exposed to a Sulphate Aggressive Medium. _Materials_**2017**, _10_, 890. [CrossRef] [PubMed] * Bijen (1996) Bijen, J. Benefits of slag and fly ash. _Constr. Build. Mater._**1996**, _10_, 309-314. [CrossRef] * Wedding et al. (1981) Wedding, P.; Manmohan, D.; Mehta, P. Influence of Pozzolanic, slag and Chemical Admixtures on Pore Size Distribution and Permeability of Hardened Cement Pastes. _Cem. Conc. Aggregg._**1981**, \\(3\\), 63-67. [CrossRef] * Papadakis (1999) Papadakis, V.G. Effect of fly ash on Portland cement systems. _Cem. Conc. Res._**1999**, _29_, 1727-1736. [CrossRef] * Leng et al. (2000) Leng, F.; Feng, N.; Lu, X. An experimental study on the properties of resistance to diffusion of chloride ions of fly ash and blast furnace slag concrete. _Cem. Concr. Res._**2000**, _30_, 989-992. [CrossRef] * Nochaiya et al. (2010) Nochaiya, T.; Wongkeo, W.; Chaignnich, A. Utilization of fly ash with silica fume and properties of Portland cement-fly ash-silica fume concrete. _Fuel_**2010**, _89_, 768-774. [CrossRef] * Geiseler et al. (1995) Geiseler, J.; Kollo, H.; Lang, E. Influence of blast furnace cements on durability of concrete structures. _ACI Mater. J._**1995**, _92_, 252-257. * Thomas et al. (2008) Thomas, M.D.A.; Scott, A.; Bremner, T.; Bilodeau, A.; Day, D. Performance of slag concrete in marine environment. _ACI Mater. J._**2008**, _105_, 628-634. * Ortega et al. (2013) Ortega, J.M.; Albaladejo, A.; Pastor, J.L.; Sanchez, I.; Climent, M.A. Influence of using slag cement on the microstructure and durability related properties of cement grouts for micropiles. _Constr. Build. Mater._**2013**, _38_, 84-93. [CrossRef] * Pastor et al. (2016) Pastor, J.L.; Ortega, J.M.; Flor, M.; Lopez, M.P.; Sanchez, I.; Climent, M.A. Microstructure and durability of fly ash cement grouts for micropiles. _Constr. Build. Mater._**2016**, _117_, 47-57. [CrossRef] * Jain and Neithalath (2010) Jain, J.A.; Neithalath, N. Chloride transport in fly ash and glass powder modified concretes--Influence of test methods on microstructure. _Cem. Concr. Compos._**2010**, _32_, 148-156. 
[CrossRef] * Kamali and Ghahremainezhad (2016) Kamali, M.; Ghahremainezhad, A. An investigation into the hydration and microstructure of cement pastes modified with glass powders. _Constr. Build. Mater._**2016**, _112_, 915-924. [CrossRef] * Ortega et al. (2014) Ortega, J.M.; Pastor, J.L.; Albaladejo, A.; Sanchez, I.; Climent, M.A. Durability and compressive strength of blast furnace slag-based cement grout for special geotechnical applications. _Mater. Constr._**2014**, _64_. [CrossRef] * Scott et al. (2009) Scott, A.N.; Thomas, M.D.A.; Bremner, T.W. Marine performance of concrete containing fly ash and slag. In Proceedings of the Annual Conference--Canadian Society for Civil Engineering, St. Johns, NL, Canada, 30 May 2009; Volume 3, pp. 1559-1568. * Shattaf et al. (2001) Shattaf, N.R.; Alshamsi, A.M.; Swamy, R.N. Curing/environment effect on pore structure of blended cement concrete. _J. Mater. Civ. Eng._**2001**, _13_, 380-388. [CrossRef] * Pasupathy et al. (2016) Pasupathy, K.; Berndt, M.; Castel, A.; Sanjayan, J.; Pathmanathan, R. Carbonation of a blended slag-fly ash geopolymer concrete in field conditions after 8 years. _Constr. Build. Mater._**2016**, _125_, 661-669. [CrossRef] * Polder and De Rooij (2005) Polder, R.B.; De Rooij, M.R. Durability of marine concrete structures--Field investigations and modelling. _Heron_**2005**, _50_, 133-154. * Thomas and Matthews (2004) Thomas, M.D.A.; Matthews, J. Performance of pfa concrete in a marine environment--10-year results. _Cem. Concr. Compos._**2004**, _26_, 5-20. [CrossRef] * Chalee et al. (2009) Chalee, W.; Jaturapitakkul, C.; Chindaprasir, P. Predicting the chloride penetration of fly ash concrete in seawater. _Mar. Struct._**2009**, _22_, 341-353. [CrossRef] * Ortega et al. (2017) Ortega, J.M.; Sanchez, I.; Cabeza, M.; Climent, M.A. Short-Term Behavior of Slag Concretes Exposed to a Real In Situ Mediterranean Climate Environment. _Materials_**2017**, _10_, 915. [CrossRef] [PubMed] * Ortega et al. (2017) Ortega, J.M.; Esteban, M.D.; Sanchez, I.; Climent, M.A. Performance of sustainable fly ash and slag cement mortars exposed to simulated and real in situ Mediterranean conditions along 90 warm season days. _Materials_**2017**, _10_, 1254. [CrossRef] [PubMed] * Chalee et al. (2010) Chalee, W.; Ausapanti, P.; Jaturapitakkul, C. Utilization of fly ash concrete in marine environment for long term design life analysis. _Mater. Des._**2010**, _31_, 1242-1249. [CrossRef] * Ganjian and Pouya (2009) Ganjian, E.; Pouya, H.S. The effect of Persian Gulf tidal zone exposure on durability of mixes containing silica fume and blast furnace slag. _Constr. Build. Mater._**2009**, _23_, 644-652. [CrossRef]* _Detwiler et al. (1991)_ Detwiler, R.J.; Kjellsen, K.O.; Gjorv, O.E. Resistance to chloride intrusion of concrete cured at different temperatures. _ACI Mater. J._**1991**, _88_, 19-24. * _Cakir and Akoz (2008)_ Cakir, O.; Akoz, F. Effect of curing conditions on the mortars with and without GGBFS. _Constr. Build. Mater._**2008**, _22_, 308-314. [CrossRef] * _Ramezanianpour and Malhotra (1995)_ Ramezanianpour, A.A.; Malhotra, V.M. Effect of curing on the compressive strength, resistance to chloride-ion penetration and porosity of concretes incorporating slag, fly ash or silica fume. _Cem. Concr. Compos._**1995**, _17_, 125-133. [CrossRef] * Ortega et al. (2012) Ortega, J.M.; Sanchez, I.; Climent, M.A. Durability related transport properties of OPC and slag cement mortars hardened under different environmental conditions. _Constr. Build. 
Mater._**2012**, _27_, 176-183. [CrossRef] * Ortega et al. (2015) Ortega, J.M.; Sanchez, I.; Climent, M.A. Impedance spectroscopy study of the effect of environmental conditions in the microstructure development of OPC and slag cement mortars. _Arch. Civ. Mech. Eng._**2015**, _15_, 569-583. [CrossRef] * Ortega et al. (2012) Ortega, J.M.; Sanchez, I.; Anton, C.; De Vera, G.; Climent, M.A. Influence of environment on durability of fly ash cement mortars. _ACI Mater. J._**2012**, _109_, 647-656. * Maltais and Marchand (1997) Maltais, Y.; Marchand, J. Influence of curing temperature on cement hydration and mechanical strength development of fly ash mortars. _Cem. Concr. Res._**1997**, _27_, 1009-1020. [CrossRef] * _Hanehara et al. (2001)_ Hanehara, S.; Tomosawa, F.; Kobayashi, M.; Hwang, K. Effects of water/powder ratio, mixing ratio of fly ash and curing temperature on pozzolanic reaction of fly ash in cement paste. _Cem. Concr. Res._**2001**, _31_, 31-39. [CrossRef] * _Climent et al. (2012)_ Climent, M.A.; Ortega, J.M.; Sanchez, I. Cement mortars with fly ash and slag--Study of their microstructure and resistance to salt ingress in different environmental conditions. In _Concrete Repair, Rehabilitation and Retrofitting III, Proceedings of the 3rd International Conference on Concrete Repair, Rehabilitation and Retrofitting (ICCRRR 2012), Cape Town, South Africa, 3-5 September 2012;_ Taylor & Francis Group: London, UK, 2012; pp. 345-350. * Ortega et al. (2017) Ortega, J.M.; Sanchez, I.; Climent, M.A. Impedance spectroscopy study of the effect of environmental conditions on the microstructure development of sustainable fly ash cement mortars. _Materials_**2017**, _10_, 1130. [CrossRef] [PubMed] * _European Committee for Standardization. EN 1992-1-1 Eurocode 2: Design of Concrete Structures--Part 1-1: General Rules and Rules for Buildings; Committee European Normalization (CEN): Brussels, Belgium, 2004. * _Asociacion Espanola de Normalizacion y Certificacion (AENOR). UNE-EN 197-1:2011. Composicion, _Especificaciones y Criterios de Conformidad de Los Cementos Comunes;_ AENOR: Madrid, Spain, 2011; p. 30. (In Spanish) * _Asociacion Espanola de Normalizacion y Certificacion (AENOR). UNE-EN 196-1:2005. Metodos de Ensayo de Cementos. Parte 1: Determinacion de Resistencias Macanicas;_ AENOR: Madrid, Spain, 2005; p. 36. (In Spanish) * _Deutsches Institut fur Normung, e.V. Deutsche Norm DIN 50008 Part 1;_ DIN: Berlin, Germany, 1981. (In German) * _Asociacion Espanola de Normalizacion y Certificacion (AENOR). UINE 83982:2008. Durabilidad del Hormigon. Metodos de Ensayo. Determinacion de La Absoricon de Agua Por Capilaridad del Hormigon Endurecido. Metodo Fagerlund;_ AENOR: Madrid, Spain, 2008; p. 8. (In Spanish) * _Ortega et al. (2013)_ Ortega, J.M.; Sanchez, I.; Climent, M.A. Influence of different curing conditions on the pore structure and the early age properties of mortars with fly ash and blast-furnace slag. _Mater. Constr._**2013**, _63_. [CrossRef] * _Ortega et al. (2010)_ Ortega, J.M.; Sanchez, I.; Climent, M.A. Influence of environmental conditions on durability of slag cement mortars. In Proceedings of the 2nd International Conference on Sustainable Construction Materials and Technologies, Ancona, Italy, 28-30 June 2010; pp. 277-287. * RILEM TC 116-PCD. A. Preconditioning of concrete test specimens for the measurement of gas permeability and capillary absorption of water. _Mater. Struct._**1999**, _32_, 174-179. 
Available online: [https://link.springer.com/article/10.1007%2FBF02481510](https://link.springer.com/article/10.1007%2FBF02481510) (accessed on 28 February 2018). * _Nordtest (1999)_ Nordtest. _NT Build 492. Concrete, Mortar and Cement-Based Repair Materials: Chloride Migration Coefficient from Non-Steady-State Migration Experiments;_ Nordtest Espoo: Greater Helsinki, Finland, 1999; p. 8. * Ortega et al. (2009) Ortega, J.M.; Ferrandiz, V.; Anton, C.; Climent, M.A.; Sanchez, I. Influence of curing conditions on the mechanical properties and durability of cement mortars. In _Materials Characterisation IV: Computational Methods and Experiments_; Mammoli, A.A., Brebbia, C.A., Eds.; WIT Press: Southampton, UK, 2009; pp. 381-392. [CrossRef] * Escalante-Garcia and Sharp (1998) Escalante-Garcia, J.I.; Sharp, J.H. Effect of temperature on the hydration of the main clinker phases in Portland cements: Part II, blended cements. _Cem. Concr. Res._**1998**, _28_, 1259-1274. [CrossRef] * Sanchez et al. (2010) Sanchez, I.; Albertos, T.S.; Ortega, J.M.; Climent, M.A. Influence of environmental conditions on durability properties of fly ash cement mortars. In Proceedings of the 2nd International Conference on Sustainable Construction Materials and Technologies, Ancona, Italy, 28-30 June 2010; pp. 655-666. * Kanna et al. (1998) Kanna, V.; Olson, R.A.; Jennings, H.M. Effect of shrinkage and moisture content on the physical characteristics of blended cement mortars. _Cem. Concr. Res._**1998**, _28_, 1467-1477. [CrossRef] * Cabeza et al. (2002) Cabeza, M.; Merino, P.; Miranda, A.; Novoa, X.R.; Sanchez, I. Impedance spectroscopy study of hardened Portland cement paste. _Cem. Concr. Res._**2002**, _32_, 881-891. [CrossRef] * Demirboga (2007) Demirboga, R. Thermal conductivity and compressive strength of concrete incorporation with mineral admixtures. _Build. Environ._**2007**, _42_, 2467-2471. [CrossRef] * Malhotra and Mehta (2002) Malhotra, V.M.; Mehta, P.K. _High-Performance, High-Volume Fly ash Concrete: Materials, Mixture Proportioning, Properties, construction Practice and Case Histories_; Supplementary Cementing Materials for Sustainable Development, Inc.: Ottawa, ON, Canada, 2002. * Comision Permanente del Hormigon (2008) Comision Permanente del Hormigon. _Instruccion De Hormigon Extructural EHE-08_; Ministerio de Fomento: Madrid, Spain, 2008. (In Spanish) * Thomas et al. (2011) Thomas, M.D.A.; Bremner, T.; Scott, A.C.N. Actual and modeled performance in a tidal zone. Concrete mixtures with supplementary cementitious materials evaluated after 25-year exposure at Treat Island. _Concr. Int._**2011**, _33_, 23-28.
Nowadays, making cement production more environmentally sustainable is one of the main goals of the cement industry. In this regard, the use of active additions, like fly ash and ground granulated blast-furnace slag, has become very popular. The short-term behaviour of cement-based materials with those additions is well known when their hardening takes place under optimum conditions. However, real structures are exposed to different environments during long periods, which could affect the development of the microstructure and the service properties of cementitious materials. The objective of this work is to analyse the long-term effects (up to approximately 5 years) produced by exposure to different non-optimum laboratory conditions on the microstructure, mechanical and durability properties of mortars made with slag and fly ash commercial cements. Their performance was compared to that observed for ordinary Portland cement (OPC) mortars. The microstructure has been analysed using mercury intrusion porosimetry. The effective porosity, the capillary suction coefficient, the chloride migration coefficient and the mechanical strengths were analysed too. According to the results, mortars prepared using slag and fly ash sustainable commercial cements, exposed to non-optimum conditions, show a good performance after a 5-year hardening period, similar or even better than OPC mortars.

Keywords: ground granulated blast-furnace slag; fly ash; long-term exposure; environment; temperature; relative humidity
Individual Tree Detection and Crown Delineation with 3D Information from Multi-view Satellite Images

Changlin Xiao, Rongjun Qin, Xiao Xie, Xu Huang

## Introduction

Forests are one of the most important land covers and play an important role in the global ecosystem. Timely and accurate measurements of forest parameters at the individual tree level, such as tree count, tree height, and crown size, are essential for quantitative analysis of forest structure, ecological modeling, biomass estimation, and evaluation of deforestation [13, 20]. Over the past several decades, remote sensing techniques have greatly improved the capability of extracting forest metrics from high spatial resolution imagery [1, 21]. However, the majority of these methods use spectral or texture information and are limited by the radiometric quality, which makes them vulnerable to erroneous detections, either over- or under-segmenting tree crowns at the individual plot level. The addition of 3D information to these forest metric estimations can greatly enhance the measurement accuracy. Recently, much attention has been given to lidar data, which provide an accurate 3D representation of surface objects, and a number of algorithms have been proposed to analyze the forest structure at the individual tree level with these data [1, 19, 18]. Many of the methods that are based on either lidar or photogrammetric 3D points use the normalized digital surface model (nDSM) or the canopy height model (CHM, for forest applications), which can naturally highlight the treetops and directly offer the tree heights [17, 16]. The CHM can be generated by subtracting the digital terrain model (DTM) from the digital surface model (DSM). Similar to the 2D image-based methods, the CHM-based methods use procedures such as image smoothing, local maximum localization, and template matching to detect the individual trees and their boundaries [1, 21]. In [19], graph theory was used to model the forest topological structure and correct potentially over-identified treetops. Also, multi-scale segmentations have been proposed to dynamically select the best set of apices and generate the final segmentation [20]. For tree crown delineation, image segmentation methods such as valley following, region growing, and watershed segmentation can be directly applied to the CHM [19, 18]. Among these methods, watershed segmentation is the most popular, as it can naturally and efficiently model the treetops and crowns; for example, [17] proposed the Fishing Net Dragging (FND) method, which uses watershed segmentation with Gaussian filtering to find the boundaries of trees (a minimal sketch of this classical CHM-plus-watershed baseline is given below). Even though these methods have demonstrated great success in their applications, they are limited by the cost of lidar surveying, which makes them impractical for repeated acquisitions at large scales [16]. Also, lidar systems with accompanying spectrometers are not widely available to provide additional spectral data for more advanced analyses, such as LAI (leaf area index) and NDVI (normalized difference vegetation index) for vegetation classification. A demand for such data normally requires an additional flight for multi-/hyper-spectral data acquisition, which subsequently brings in registration issues.
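As a reference point for the CHM-based methods reviewed above, the following is a minimal, illustrative sketch (not the authors' implementation) of that classical baseline: compute the CHM as DSM minus DTM, smooth it, take local maxima as treetop markers, and delineate crowns with marker-controlled watershed segmentation. It assumes co-registered NumPy arrays, and all parameter values are placeholders.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gaussian
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def chm_watershed_baseline(dsm, dtm, sigma=1.0, min_height=2.0, min_distance=5):
    """Classical CHM + marker-controlled watershed ITDD baseline (illustrative only)."""
    chm = np.clip(dsm - dtm, 0, None)        # canopy height model; clamp below-ground values
    chm_smooth = gaussian(chm, sigma=sigma)  # smoothing suppresses spurious local maxima

    canopy = chm_smooth > min_height         # ignore ground and low shrubs (height in metres)
    regions, _ = ndi.label(canopy)
    peaks = peak_local_max(chm_smooth, min_distance=min_distance, labels=regions)

    markers = np.zeros(chm.shape, dtype=int) # one marker per detected treetop
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

    # Watershed on the inverted CHM: each treetop grows into one crown segment.
    crowns = watershed(-chm_smooth, markers, mask=canopy)
    return peaks, crowns                     # treetop pixel coordinates, labelled crown map
```

The approach proposed in this paper keeps the same overall logic but replaces the lidar CHM with a photogrammetric DSM and, as discussed below, avoids the need for an explicit DTM.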
Considering the merits of both 2D spectral and 3D structural data, for the first time, we propose to use multi-view high-resolution satellite imagery to perform forest parameter retrieval at the individual tree level. With the growing number of high-resolution satellite sensors, the chances of a spot being viewed multiple times from multiple angles have greatly increased. These multi-view images can facilitate many remote sensing tasks; for example, Liu and Abd-Elrahman (2018) used multi-view images in a deep convolutional neural network for wetland classification. Also, the development of advanced image matching algorithms makes it possible to produce 3D measurements comparably dense to those of lidar, at a much lower cost and with higher flexibility for acquiring information over a large geographical region. With the highly accurate 3D digital surface models (DSM) and true orthophotos generated from these multi-view satellite images, we expect the performance of individual tree detection and crown delineation (ITDD) to be significantly enhanced. Terrain data are critical for many 3D point-based ITDD methods. However, for a DSM generated from satellite images, terrain data under the forest canopy might not be captured. Moreover, considering that the accuracy of an image-based DSM is normally lower than that of lidar, it can be particularly challenging to directly apply point-cloud-based methods to this DSM. Hence, to fully exploit the multi-view satellite imagery, we propose a novel algorithm that utilizes both 2D spectral and 3D structural information from the orthophoto and DSM. Based on the assumption that tree crowns are normally well rounded in shape with a single maximum as the treetop, we propose to use morphological top-hat by reconstruction (THR) to detect treetops. Compared to other local maximum detectors, for example, window-based local maximum filters (Pouliot and King, 2005; Wulder _et al._, 2000), the THR detector is less sensitive to the window or filter size. For the crown delineation, we adopt a modified superpixel segmentation framework to generate compact segments that adhere to the boundaries of crowns based on the DSM and multispectral information, and are thus able to account for crown delineation in both sparsely and densely forested areas. Compared to previous methods, for instance, valley following, region growing, and watershed segmentation (Gougeon and Leckie, 2006; Ke and Quackenbush, 2011), the modified superpixel segmentation is similar to region growing, but with an extra spatial constraint which ensures more compact shapes for trees. Tree crowns are complex in their 3D structure, including overlapping canopies, adjacent crowns reflecting similar spectra but differing in height, and smooth crowns. Hence, the combination of both 2D spectral and 3D structural information greatly helps to identify the individual trees. To the authors' best knowledge, this work contributes to the community as the first to demonstrate and offer the use of multi-view high-resolution satellite imagery as an alternative data source for forest parameter retrieval at the individual tree level.
In addition, the contributions include the development of a novel top-hat and superpixel based detection framework that is able to (1) accommodate multi-modal data for segmenting objects under complex scenarios; (2) utilize biometric characteristics of trees through the allometric equation to constrain the size and shape of tree segments; and (3) achieve high accuracy in areas with different canopy densities. In particular, the proposed method is able to account for densely forested regions, even without a high-precision DTM.

## Study Area and the Data Processing

The study area is located in Don Torcuato, a small city on the west side of Buenos Aires, Argentina. In this area, we chose three experimental sites with tree plots at different levels of density, as illustrated in Figure 1 and Figure 2. Site A, covering an area of 0.30 x 0.30 km2, is treated as the sparsely forested area and mainly consists of sparsely distributed trees, wild shrubs, and grasslands. In the densely forested Site B, covering 0.25 x 0.34 km2, different types of trees at different heights intersect with each other and only a small part of the site is ground surface. Site C is part of a small town, covering an area of 0.30 x 0.30 km2, and its surface objects are complicated: trees in courtyards or around buildings with different crown sizes and heights, and shrubs on the street sides mixed with trees at different heights.

Figure 1: The study area near Buenos Aires, Argentina. The two large solid rectangles mark the areas where we have generated the DSM and orthophoto, and A, B, C mark the experimental sites.

Figure 2: The three experimental sites in the study area. From left to right are the orthophoto (upper) and the DSM (lower) of Sites A, B, and C with different surfaces.

The satellite images in this work are from the multi-view benchmark dataset provided by the Johns Hopkins University Applied Physics Laboratory (JHU APL) (Bosch et al., 2016; Bosch et al., 2017). The data contains 8-band WorldView-2/3 images with a ground resolution of around 0.3 meters. To derive an accurate DSM, we selected five pairs of on-track stereo images captured in December 2015, with maximal off-nadir angles between 7-19 degrees and average intersection angles between 15-21 degrees. We applied a fully automated pipeline (Qin, 2017) that consists of (1) pansharpening, (2) automatic feature matching, (3) pair-wise bundle adjustment, (4) dense matching, and (5) a bilateral-filter based depth fusion, to generate a high-quality DSM and subsequently a true orthophoto. Compared to the ground truth lidar data, the root-mean-square errors (RMSE) of the DSM vary between 2.5-4 meters; this is the absolute accuracy at check points and does not represent the relative accuracy of the object reconstruction. More details about the method and the accuracy evaluation can be found in Qin (2017). Figure 2 shows the cropped orthophoto and DSM of the three experimental sites.

## Methodology

The proposed method includes several steps summarized in Figure 3: after the generation of the orthophoto and DSM, the vegetated area and terrain area are extracted to facilitate the treetop detection, which is based on the local maxima of the DSM extracted through the top-hat by reconstruction (THR) operation (Qin and Fang, 2014). To further improve the detection quality, we use an above-ground height check and non-maximum suppression with the allometric equation to eliminate short and redundant detections.
From the treetops, a modified superpixel segmentation that combines the 2D spectral and the 3D structural information is proposed to effectively delineate the tree crowns. Finally, a postprocessing step for crown refinement is used to further improve the detection accuracy.

### Vegetation and Terrain Detection

To identify the vegetated areas, we use the Normalized Difference Vegetation Index (NDVI) and take the areas where NDVI > u to be the vegetation area, where the threshold u is empirically set as 0.3 based on our experiments. The digital terrain model (DTM) is a useful source for individual tree detection, and several methods have been proposed to extract the DTM from 3D points (Gevaert et al., 2018; Hu et al., 2014). For instance, Hu et al. (2014) proposed an adaptive surface filter (ASF) whose threshold can vary according to the terrain smoothness to efficiently classify airborne laser scanning data. The DTM is used to offer the height information as a normalized DSM (nDSM) or a CHM (for forest applications). However, in some densely forested areas, it may not be feasible to extract the DTM from a DSM produced from images. Fortunately, the proposed method does not depend heavily on the tree height information: the treetop detection and crown delineation are mainly decided by the relative height, and the absolute tree height is an extra cue to refine detections, which will be discussed in more detail in the experiments. Our method does not explicitly generate the nDSM or CHM. Instead, we focus on the height gradients and estimate the above-ground tree height with an effective terrain detection method. By converting the pixels of the DSM into a grid point cloud, we apply the cloth simulation filter (CSF) (Zhang et al., 2016), which is based on the height and surrounding information, to classify the points into two categories: terrain and off-terrain points. The above-ground heights of trees, which are subsequently used for crown size estimation, can then be estimated by subtracting the terrain height around each tree; more details can be found in the next section. In the experiments, we used the open source software CloudCompare® as the segmentation tool, and an example can be found in Figure 4.

Figure 3: The flowchart of the proposed method. The main processing steps are in bold and the details are explained in the next sub-sections.

Figure 4: The vegetation and terrain detection of site C. From left to right are the 3D visualization of the textured DSM, the vegetation area, and the detected terrain area.
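To make the vegetation masking step concrete, the following is a minimal sketch (not the authors' implementation); it assumes the red and near-infrared bands of the orthophoto are available as NumPy arrays, uses the threshold value 0.3 quoted above, and treats the terrain/off-terrain classification from the cloth simulation filter as an external input.

```python
import numpy as np

def vegetation_mask(red, nir, u=0.3):
    """Boolean mask of vegetated pixels from an NDVI threshold (NDVI > u)."""
    denom = np.maximum(nir + red, 1e-6)   # avoid division by zero on dark pixels
    ndvi = (nir - red) / denom
    return ndvi > u
```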
### Treetop Detection

#### Detection of Local Maximum Points

Similar to lidar-based treetop detection, we naturally assume that a local maximum in the DSM is a treetop. However, since filter-based methods require a careful tuning of the window size, we adopt the grey-level morphological top-hat by reconstruction (THR) operator to find the local maxima, as it is an effective method for detecting blob-like shapes and is less sensitive to the window size [30]. In the detection, a disk-shaped structuring element (SE) is used to perform grey-level morphological erosion on the DSM to generate a marker image \(\varepsilon(DSM,e)\), where the erosion operation only keeps the minimum value of all the pixels within the structuring element. The morphological reconstruction mask \(B_{\varepsilon(DSM,e)}\) is then generated through an iterative procedure in which the dilation operation, which keeps the maximum value, is applied to the marker image. Finally, by subtracting \(B_{\varepsilon(DSM,e)}\) from the DSM, the peaks of the DSM can be extracted as blob-shaped peak regions around the local maxima. To locate the local maxima at the pixel level, we first use the morphological opening operation to remove weakly connected parts and thereby separate a big region into several small ones. Then, for each region, we keep the highest point as the final treetop candidate. Figure 5 shows an example of the local maximum detection with two SEs (4 and 8 pixels). The detected local maximum regions with the different SE sizes are shown as white and red dots (with a shift) in the rightmost image of Figure 5. As we can observe, they are almost identical, indicating that the proposed THR operation is not very sensitive to the size of the SE.

#### Refinement with Above Ground Height Check and Non-Maximum Suppression

To eliminate low local maxima, we calculate the per-point above-ground height by subtracting the nearby terrain height from the DSM height:

\[H(s)=DSM(s)-\sum_{p_{i}\in A_{s}\cap T}DSM(p_{i})/N, \tag{1}\]

where \(H(s)\) is the above-ground height of point \(s\); \(DSM(s)\) and \(DSM(p_{i})\) are the height values at points \(s\) and \(p_{i}\); \(N\) is the total number of terrain points in the predefined search window \(A_{s}\) centered at \(s\); and \(T\) is the extracted terrain area described previously. To remove redundant maximum points within one tree, a common practice is to use a non-maximum filter to locate the true maximum within a window. The filter is expected to achieve the best performance when the window size is close to the crown size. However, the crown size of different types of trees may vary significantly. Given that allometric equations describe the biological relationship between tree height and crown size [11], such a cue is of great value for determining an optimal window size that accounts for crown variations. Since, in general, a larger window is able to encapsulate small crowns, we consider a tree type with a large crown/height ratio to obtain a good estimate for removing non-maxima. Hence, we adopt the allometric equation for deciduous trees [10] to estimate the crown size (i.e., the filter window size) \(\chi\) for each treetop as:

\[\chi(s_{i})=3.09632+0.00895\,h_{t}(s_{i})^{2}, \tag{2}\]

where \(h_{t}(s_{i})\) represents the above-ground height of the treetop \(s_{i}\). Figure 6 gives an example of the refinement from the initial local maxima (red dots), many of which (marked by yellow circles) are not treetops, to the final refined treetops (stars with a blue dot in the center). Also, in the same figure, the rectangles mark the non-maximum suppression window for each potential treetop, and we can observe that each treetop is associated with a sizeable window that is able to account for crown size variation.

Figure 5: Illustration of the local maximum detection with different SE sizes. Images (a) to (d) are the orthophoto, the vegetated area, the DSM, and the local maxima (with an 8-pixel SE). Image (e) shows the non-vegetation-removed results of the two SEs (4 and 8 pixels) at the rectangle area in image (d).

Figure 6: Treetop refinement. The initial detections are filtered by the above-ground height check and non-maximum suppression with adaptive windows.
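A minimal sketch of the treetop extraction and refinement described above is given below, assuming scikit-image is available. The structuring-element radius and the peak-height cutoff are illustrative values rather than parameters reported in the paper; the above-ground height check of Equation 1 is assumed to be applied to the returned candidates separately.

```python
import numpy as np
from skimage.morphology import erosion, binary_opening, reconstruction, disk
from skimage.measure import label, regionprops

def thr_treetops(dsm, se_radius=8, peak_min=0.5):
    """Top-hat by reconstruction (THR) candidate treetops on a DSM."""
    se = disk(se_radius)
    marker = erosion(dsm, se)                               # grey-level erosion -> marker image
    mask = reconstruction(marker, dsm, method='dilation')   # morphological reconstruction mask
    peaks = dsm - mask                                      # THR: blob-shaped peak regions
    regions = binary_opening(peaks > peak_min, disk(1))     # remove weakly connected parts
    tops = []
    for region in regionprops(label(regions)):
        rr, cc = region.coords[:, 0], region.coords[:, 1]
        k = int(np.argmax(dsm[rr, cc]))                     # keep the highest point per region
        tops.append((rr[k], cc[k]))
    return tops

def crown_window(h_t):
    """Equation 2: crown size (metres) of a deciduous tree from its above-ground height h_t."""
    return 3.09632 + 0.00895 * h_t ** 2
```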
### Crown Delineation

The crown boundaries are usually indistinct in dense canopy areas, and the texture information alone is normally not sufficient for crown delineation. To utilize the 3D structural information of the DSM, we propose a modified superpixel segmentation utilizing multi-modal data to detect crown regions. The superpixel algorithm provides a compact and homogeneous representation of all segments, which naturally matches the shape of trees. In contrast to the original superpixel [10], the modified version considers both 2D and 3D information in its kernel distance function:

\[D(s_{i},t_{j})=W_{h}D_{h}(s_{i},t_{j})+W_{v}D_{v}(s_{i},t_{j})+W_{c}D_{c}(s_{i},t_{j}), \tag{3}\]

where \(D(s_{i},t_{j})\) measures the spatial and spectral difference between treetop \(s_{i}\) and the test pixel \(t_{j}\); \(D_{h}\), \(D_{v}\), and \(D_{c}\) are Euclidean distances measuring the horizontal distance, the vertical distance, and the spectral difference (8 bands), respectively, normalized based on the statistical maximum and minimum ranges of the tree crown size (estimated by the allometric equation), the tree height, and the vegetation spectrum. \(W_{h}\), \(W_{v}\), \(W_{c}\) are the associated weights for the different components, which we empirically set as 0.8, 1, and 0.5, emphasizing the spatial distances, especially the vertical one. A lower \(D(s_{i},t_{j})\) means higher similarity between the treetop and the test pixel, but if the smallest difference between a pixel and all the treetops is still larger than a certain threshold value \(\theta\), the pixel is discarded as non-tree area. The threshold \(\theta\) gives an extra constraint on the assignment of pixels that may not belong to trees. A smaller \(\theta\) regulates the delineated crown to be more compact and closer to the treetop, but may lose part of the true crown. On the contrary, a larger \(\theta\) relaxes the restriction and lets the delineation cover a larger area, which may contain non-tree parts. Hence, in the experiment, \(\theta\) is carefully set based on the maximum difference value of Equation 3, for example, the difference between the treetop and a leaf at the farthest edge of the crown.

The idea of this modified segmentation is similar to the supervoxels (Papon et al., 2013) used for full 3D data such as those from terrestrial lidar. However, unlike full 3D points, the DSM only represents 2.5D information in which only the surface points have height information. If supervoxels were used, the facades (which would be assumed to be cut-off planes) would be considered in the segmentation, which is unwanted. Besides, we have fixed the segmentation seeds at the treetops, and the goal is to create a boundary delineation in the planar map. Hence, it is more appropriate to use this modified superpixel than supervoxels.
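The assignment rule implied by Equation 3 can be sketched as follows. This is a simplified illustration, not the paper's implementation: the weights and the rejection threshold follow the values quoted above, while the normalisation ranges and the array layout are assumptions.

```python
import numpy as np

def assign_crowns(xy, height, spectra, seeds, w=(0.8, 1.0, 0.5), theta=1.0,
                  range_h=20.0, range_v=15.0, range_c=1.0):
    """Assign each candidate pixel to the most similar treetop seed (Eq. 3).

    xy      : (P, 2) pixel coordinates in metres.
    height  : (P,) above-ground heights.
    spectra : (P, B) spectral vectors (e.g., 8 bands).
    seeds   : indices into the P pixels marking the treetops.
    w       : weights (W_h, W_v, W_c); theta is the non-tree rejection threshold.
    range_* : assumed normalisation ranges for the three distance terms.
    """
    labels = np.full(len(xy), -1)            # -1 marks non-tree pixels
    best = np.full(len(xy), np.inf)
    for k, s in enumerate(seeds):
        d_h = np.linalg.norm(xy - xy[s], axis=1) / range_h
        d_v = np.abs(height - height[s]) / range_v
        d_c = np.linalg.norm(spectra - spectra[s], axis=1) / range_c
        d = w[0] * d_h + w[1] * d_v + w[2] * d_c
        upd = d < best
        best[upd] = d[upd]
        labels[upd] = k
    labels[best > theta] = -1                # reject pixels far from every treetop
    return labels
```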
### Postprocessing Using the Crown Shape and Size Constraints

In this study, we set two criteria to further verify the correctness of the crown's shape and size. The first criterion is that the treetop should be near the center of the crown. In the experiment, one-third of the largest diameter of the segment is set as the maximum tolerance range. The falsely detected treetops that are far from the segment center are often local maxima at the edge of the crowns, usually caused by the limited precision of the DSM. The other criterion is the coherence of the crown size. Normally, a tree and its neighbors belong to the same species and should share common biological features (e.g., the height-crown ratio). Therefore, we use the average crown size of the neighborhood as the reference crown size to remove abnormal crowns with the three-sigma rule.

## Experiments and Discussion

### Accuracy Assessment Measures

To quantitatively validate the individual tree detection and crown delineation accuracy, we use true positives (TP), false positives (FP), and false negatives (FN) to compute the detection accuracy (DA) and recall (r), the commission error (\(e_{com}\)), and the omission error (\(e_{om}\)):

\[r=DA=\frac{n_{TP}}{N},\ \ e_{com}=\frac{n_{FP}}{n_{TP}+n_{FP}},\ \ e_{om}=\frac{n_{FN}}{n_{TP}+n_{FN}}, \tag{4}\]

where \(n_{TP}\), \(n_{FN}\), and \(n_{FP}\) are the numbers of trees in the TP, FN, and FP categories, and \(N\) is the total number of reference trees. Also, the precision (P) and F-score (F) are derived as:

\[p=\frac{n_{TP}}{n_{TP}+n_{FP}},\ \ F=\frac{2rp}{r+p}. \tag{5}\]

These measures are normally effective for pixel-wise comparison or for detecting trees in sparsely vegetated areas. However, they can be problematic when validating the algorithm in densely vegetated areas: one predicted tree may have several reference trees nearby, which makes it hard to pair predictions and references. To enforce a one-to-one correspondence, we only match the pair that has the largest overlapping area with each other. Therefore, following the measurement employed by the Pascal Visual Object Classes (VOC) challenge (Everingham et al., 2010), we calculate the overlap ratio (OR) between all reference and predicted tree crowns to estimate how well they match:

\[OR=\frac{2A_{o}}{A_{r}+A_{p}}, \tag{6}\]

where \(A_{o}\), \(A_{r}\), and \(A_{p}\) are the sizes of the overlapped, reference, and predicted crowns, respectively. If the corresponding trees have a smaller overlap ratio than \(\gamma\), we discard the pair; we adopt \(\gamma=0.3\) as the threshold, as used in Yin and Wang (2016). For the cases where (1) a predicted tree has no corresponding reference, it is counted as a false positive; and (2) a reference tree does not correspond to any predicted tree, it is counted as a false negative. Finally, for the crown delineation accuracy, we estimate the average overlap ratio of the matched pairs:

\[CA=\frac{\sum_{i}OR(TP_{i})}{n_{TP}}, \tag{7}\]

where \(CA\) is the crown accuracy and \(OR(TP_{i})\) is the overlap ratio of a correctly matched reference-prediction crown pair.
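For reference, a minimal sketch of the evaluation measures in Equations 4 through 7 is given below. It assumes each crown is available as a boolean raster mask and uses a simple greedy matching by largest overlap, which is an assumption about the pairing procedure rather than a detail stated in the paper.

```python
import numpy as np

def overlap_ratio(a, b):
    """Equation 6: OR = 2*A_o / (A_r + A_p) for two boolean crown masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def evaluate(pred_crowns, ref_crowns, gamma=0.3):
    """Greedily match crowns by overlap and report DA, e_com, e_om, P, F, CA."""
    pairs = sorted(((overlap_ratio(p, r), i, j)
                    for i, p in enumerate(pred_crowns)
                    for j, r in enumerate(ref_crowns)), reverse=True)
    used_p, used_r, matched = set(), set(), []
    for ratio, i, j in pairs:
        if ratio >= gamma and i not in used_p and j not in used_r:
            used_p.add(i); used_r.add(j); matched.append(ratio)
    n_tp, n_fp, n_fn = len(matched), len(pred_crowns) - len(matched), len(ref_crowns) - len(matched)
    da = n_tp / len(ref_crowns)                 # Eq. 4 (detection accuracy = recall)
    e_com = n_fp / (n_tp + n_fp)
    e_om = n_fn / (n_tp + n_fn)
    p = n_tp / (n_tp + n_fp)                    # Eq. 5
    f = 2 * da * p / (da + p)
    ca = sum(matched) / n_tp if n_tp else 0.0   # Eq. 7
    return dict(DA=da, e_com=e_com, e_om=e_om, P=p, F=f, CA=ca)
```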
### Results

In this work, we were not able to collect field samples, so we generated the reference data by labeling the individual trees and their crowns through visual inspection, as some previous studies did (Zhen et al., 2014). The three experimental sites include a densely forested area, a sparsely forested area, and an urban area, respectively. For each site, we calculated the detection accuracy (DA), commission error (\(e_{com}\)), omission error (\(e_{om}\)), and crown accuracy (CA), as well as the precision (P) and F-score (F). To demonstrate the advantage of the top-hat local maximum detector, which is less sensitive to the filter size, we implemented several local maximum filter based treetop detectors described in Wulder et al. (2000). These include fixed-window filters with sizes of 3, 7, 11, 15, and 19 pixels, corresponding to 1-6 meters, and a filter with a variable size calculated from the slope breaks (SB) (Wulder et al., 2000). In the experiment, we tested these filters on the DSM in our comparative study while keeping all the other steps the same, and the final results can be found in Table 1 to Table 3.

As shown in Table 1 to Table 3, the proposed top-hat detector has the majority of the highest scores across the three test sites. We believe this is due to the top-hat detector's robustness and insensitivity to the filter size. Among the fixed-window filters, the 7-pixel window performs better in the sparsely vegetated area, while the 11-pixel window performs better in the other two test sites. This shows that the performance of a fixed window depends on the compatibility of the window size with the scenario. However, a suitable filter size cannot be predicted, and it is not possible to find a single filter suitable for all scenarios. In general, the larger windows miss more small trees, resulting in higher omission errors. Compared to the fixed-window detectors, the proposed top-hat detector produces reliable treetops in all scenarios, largely independent of the SE size. On the other hand, the variable-window filter based on the slope breaks obtains the worst results. This could be because the slope break distance is sensitive to the nearby environment and cannot accurately reflect the crown size.

To understand the improvement brought by the 3D geometric information, we also performed the treetop detection and superpixel segmentation without the DSM. Following the idea in Wulder et al. (2000), we used the red channel, which gives the best performance, with the same detection processes but excluding the 3D information. In Equation 3, the vertical distance term is removed and the other parameters are tuned to obtain the optimal results, which can be found in Table 4. Since the spectral-only method only considers the variations of the light reflected off the trees, it is difficult to distinguish trees from other objects even with the help of NDVI, which lowers the precision (P) and F-scores (F).

As shown in Table 1 and Figure 7, in the sparsely forested area, the proposed algorithm predicted 355 trees compared to 307 reference trees. The detection accuracy (DA) reaches as high as 0.89. For the one-to-one correctly matched trees, the average overlap is 0.64, which is better than the perfect match defined as 0.5 in Yin and Wang (2016). Figure 8 shows examples of errors such as false negatives (diamonds) and false positives (rectangles), as well as mis-segmentations (circles), which are mainly caused by incorrectly detected treetops. The false negatives mainly occur for short trees, which are filtered out by the above-ground height check and non-maximum suppression described previously. Errors resulting in over-detection (circle in Figure 8) are mainly caused by grass near a short tree being confused as part of the crown. In the densely forested area, the trees highly overlap with each other, making it extremely difficult to distinguish individual crowns even by visual identification. As observed in Figure 9, the algorithm extracted the trees with a detection accuracy of 0.89 (Table 2).
\begin{table}
\begin{tabular}{c c c c c c c c}
\hline
Detector & Np & DA & e\({}_{com}\) & e\({}_{om}\) & CA & P & F \\
\hline
TH & 1442 & **0.89** & 0.41 & **0.11** & **0.59** & 0.58 & 0.70 \\
F\_3 & 349 & 0.24 & 0.34 & 0.76 & 0.58 & 0.66 & 0.36 \\
F\_7 & 1193 & 0.82 & 0.35 & 0.18 & 0.58 & 0.65 & **0.73** \\
F\_11 & 1330 & 0.83 & 0.41 & 0.17 & 0.58 & 0.59 & 0.69 \\
F\_15 & 1009 & 0.74 & 0.30 & 0.1 & 0.58 & 0.70 & -- \\
\hline
\end{tabular}
\end{table}

As compared to the sparsely forested area, the commission error (\(e_{com}\)) is relatively large, indicating relatively more false detections. This may be because the dense canopies allow incorrectly detected treetops to pass the postprocessing and be identified as real trees. Furthermore, the low crown precision may be caused by partial detections and errors in the DSM. A partial detection refers to a detection in which some branches of the tree are not included in the crown. For example, several treetops may be incorrectly detected within a large tree, causing some branches to be misassigned to them; the postprocessing may then remove these incorrect detections and leave the true treetop assigned only part of the crown. On the other hand, in the densely forested area it is easy to mismatch the feature points that are used to generate the DSM from the multi-view images. Hence, in this area, the 3D structural information for treetop detection may be incorrect, which further affects the superpixel segmentation. A further consideration is that, in this area, the visible terrain area is very limited and errors occur during the dense matching process, such as in specular reflection and textureless regions. Hence, we believe a better DSM can further improve the performance of the proposed method.

## Conclusions

In this paper, we developed a novel and automated method to fully utilize multi-view high-resolution satellite images for ITDD. As compared to previous image-based methods, we adopt the DSM (digital surface model) derived from the multi-view satellite images and combine it with the multi-spectral information to identify treetops and their crowns in areas with varying canopy densities. A quantitative evaluation on three different sites shows that the proposed method is able to detect individual trees in different regions with various surface covers. The algorithm had its highest performance in the sparsely forested area, with 89% detection accuracy, 0.23 commission error, and 0.12 omission error. Even for the densely forested area, traditionally deemed particularly challenging, the algorithm still achieved 89% detection accuracy with slightly larger commission and omission errors. Despite the superior results achieved by our method, we are aware that significant vulnerabilities still exist, mainly due to the complicated surfaces of overly dense forests as well as propagated DSM errors. Detection in highly heterogeneous forests with multiple layers is challenging even for manual identification. The variations of trees and man-made objects in close proximity to vegetation in urban areas create a very complicated scenario for individual tree detection. In the future, we plan to include dynamically adjusted tree templates of various scales in both 2D and 3D to increase the robustness to DSM errors and reduce over-detection in dense forest regions.
## Acknowledgments The authors would like to thank John Hopkins University Applied Physics Lab for providing the Multi-view 3D Benchmark dataset used in this study. We would also thank Xing Pei and Rupong Wang from Lanzhou Jiaotong University for providing the tree labels. Finally, we thank jiaqiang Li from Future Cities Laboratory in Singapore-eth Centre for the using of the CloudCompare\\({}^{\\text{\\text{\\textregistered}}}\\) software for the terrain detection. ## References * [1] * [2] Bosch, M., Kurtz, Z., Hagstrom, S., Brown, M., 2016. A multiple view stereo benchmark for satellite imagery, _Applied Imagery Pattern Recognition Workshop (AIPRI), 2016 IEEE_. IEEE, pp. 1-9. * [3] Bosch, M., Leichman, A., Chilcott, D., Goldberg, H., Brown, M., 2017. Metric evaluation pipeline for 3d modeling of urban scenes, _The International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences_, 42, 239. * [4] Desktop, E.A., 2011. Release 10. _Redlands, CA: Environmental Systems Research Institute_, pp. 437-438. * [5] Everingham, M., Van Gool, L., Williams, C.K., Winn, J., Zisserman, A., 2010. The pascal visual object classes (voc) challenge, _International Journal of Computer Vision_ 88:303-338. * [6] Felzenszwalb, P.F., Huttenlocher, D.P., 2004. Efficient graph-based image segmentation. _International Journal of Computer Vision_, 59:167-181. * [7] Ferraz, A., Saatchi, S., Mallet, C., Meyer, V., 2016. Lidar detection of individual tree size in tropical forests, _Remote Sensing of Environment_ 183:318-333. * [8] Garrity, S.R., Meyer, K., Maurer, K.D., Hardiman, B., Bohrer, G., 2012. Estimating plot-level tree structure in a deciduous forest by combining allometric equations, spatial wavelet analysis and airborne lidar, _Remote Sensing Letters_, 3:443-451. * [9] Gevaert, C., Persello, C., Nex, F., Vosselman, G., 2018. A deep learning approach to dtm extraction from imagery using rule-based training labels, _ISPRS Journal of Photogrammetry and Remote Sensing_ 142:106-123. * [10] Goncalves, A.C., Sousa, A.M., Mesquita, P.G., 2017. Estimation and dynamics of above ground biomass with very high resolution satellite images in pinus pinaster stands, _Biomass and Bioenergy_ 106:146-154. * [11] Gougeon, F.A., Leckie, D.G., 2006. The individual tree crown approach applied to konos images of a coniferous plantation area, _Photogrammetric Engineering & Remote Sensing_, 72:1287-1297. * [12] Hu, H., Ding, Y., Zhu, Q., Wu, B., Lin, H., Du, Z., Zhang, Y., Zhang, Y., 2014. An adaptive surface filter for airborne laser scanning point clouds by means of regularization and bending energy. _ISPRS Journal of Photogrammetry and Remote Sensing_ 92:98-111. * [13] Kathuria, A., Turner, R., Stone, C., Duque-Lazo, J., West, R., 2016. Development of an automated individual tree detection model using point cloud lidar data for accurate tree counts in a pinus radiata plantation, _Australian Forestry_, 79:126-136. * [14] Ke, Y., Quackenbush, L.J., 2011. A review of methods for automatic individual tree-crown detection and delineation from passive remote sensing, _International Journal of Remote Sensing_ 32:4725-4747. Figure 11: The mis-detection of thin trees. The left image is the 3D visualization of the test site and the right image is the detection results marked with reference trees. The ellipse marks the missing trees that only show as small bumps at the dsm. Figure 10: The elaboration of the results in the urban area. 
From the left to right are the terrain mask, the treetops detection and the crowns delineated by the algorithm. * Koch et al. (2006) Koch, B., Heyder, U., Weinacker, H., 2006. Detection of individual tree crowns in airborne lidar data, _Photogrammetric Engineering & Remote Sensing_, 72:357-363. * Liu and Abd-Elrahman (2018) Liu, T., Abd-Elrahman, A., 2018. Deep convolutional neural network training enrichment using multi-view object-based analysis of unmanned aerial systems imagery for wetlands classification, _ISPRS Journal of Photogrammetry and Remote Sensing_, 139:154-170. * Liu et al. (2015) Liu, T., Im, J., Quackenbush, L.J., 2015. A novel transferable individual tree crown delineation model based on fishing net dragging and boundary classification, _ISPRS Journal of Photogrammetry and Remote Sensing_ 110, 34-47. * Lu et al. (2014) Lu, X., Guo, Q., Li, W., Flanagan, J., 2014. A bottom-up approach to segment individual deciduous trees using leaf-off lidar point cloud data, _ISPRS Journal of Photogrammetry and Remote Sensing_, 94:1-12. * Mohan et al. (2017) Mohan, M., Silva, C.A., Klauberg, C., Jat, P., Cattes, G., Cardil, A., Hudak, A.T., Dia, M., 2017. Individual tree detection from unmanned aerial vehicle (lav) derived canopy height model in an open canopy mixed convier forest, _Fowsts_, 8:340. * Papon et al. (2013) Papon, J., Abramov, A., Schooler, M., Worgotter, F., 2013. Voxel cloud connectivity segmentation-supervoxels for point clouds, _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, pp. 2027-2034. * Popescu et al. (2002) Popescu, S.C., Wynne, R.H., Nelson, R.F., 2002. Estimating plot-level tree heights with lidar: Local filtering with a canopy-height based variable window size, _Computers and Electronics in Agriculture_. 37:71-95. * Pouliot and King (2005) Pouliot, D., King, D., 2005. Approaches for optimal automated individual tree crown detection in regenerating coniferous forests. _Canadian Journal of Remote Sensing_, 31:255-267. * Qin (2017) Qin, R., 2017. Automated 3d recovery from very high resolution multi-view satellite images, _Proceedings of the ASPRS (IGTF) Annual Conference_, Baltimore, Maryland, p. 10. * Qin and Fang (2014) Qin, R., Fang, W., 2014. A hierarchical building detection method for very high resolution remotely sensed images combined with dsm using graph cut optimization, _Photogrammetric Engineering & Remote Sensing_80, 873-883. * Sousa et al. (2015) Sousa, A.M., Goncalves, A.C., Mesquita, P., da Silva, J.R.M., 2015. Biomass estimation with high resolution satellite images: A case study of quercus fundifolia, _ISPRS Journal of Photogrammetry and Remote Sensing_, 90:169-79. * Strimbu and Strimbu (2015) Strimbu, V.F., Strimbu, B.M., 2015. A graph-based segmentation algorithm for tree crown extraction using airborne lidar data, _ISPRS Journal of Photogrammetry and Remote Sensing_ 104:30-43. * Vega et al. (2014) Vega, C., Hamrouni, A., El Mokhtari, S., Morel, J., Bock, J., Renaud, J.-P., Bouvier, M., Durrieu, S., 2014. Ptrees: A point-based approach to forest tree extraction from lidar data, _International Journal of Applied Earth Observation and Geomimation_, 33:98-108. * Weng et al. (2015) Weng, E., Malyshev, S., Lichstein, J., Farrior, C., Dybzinski, R., Zhang, T., Shevilakova, E., Pacala, S., 2015. Scaling from individual trees to forests in an earth system modeling framework using a mathematically tractable model of height-structured competition, _Biogeosciences_, 12:2655-2694. * Wulder et al. 
(2000) Wulder, M., Niemann, K.O., Goodenough, D.G., 2000. Local maximum filtering for the extraction of tree locations and basal area from high spatial resolution imagery, _Remote Sensing of Environment_, 73:103-114. * Wulder et al. (2013) Wulder, M.A., Coops, N.C., Hudak, A.T., Morsdorf, F., Nelson, R., Newnham, G., Vastarana, M., 2013. Status and prospects for lidar remote sensing of forested ecosystems: _Canadian Journal of Remote Sensing_ 39:S1-S5. * Yin and Wang (2016) Yin, D., Wang, L., 2016. How to assess the accuracy of the individual tree-based forest inventory derived from remotely sensed data: A review, _International Journal of Remote Sensing_ 37:4521-4553. * Zhang et al. (2016) Zhang, W., Qi, J., Wan, P., Wang, H., Xie, D., Wang, X., Yan, G., 2016. An easy-to-use airborne lidar data filtering method based on cloth simulation, _Remote Sensing_, 8:501. * Zhen et al. (2014) Zhen, Z., Quackenbush, L.J., Zhang, L., 2014. Impact of tree-oriented growth order in marker-controlled region growing for individual tree crown delineation using airborne laser scanner (als) data, _Remote Sensing_, 6:555-579.
Individual tree detection and crown delineation (ITDD) are critical in forest inventory management, and remote sensing based forest surveys are largely carried out through satellite images. However, most of these surveys only use 2D spectral information, which normally does not provide enough cues for ITDD. To fully exploit the satellite images, we propose an ITDD method using the orthophoto and digital surface model (DSM) derived from multi-view satellite data. Our algorithm utilizes the top-hat morphological operation to efficiently extract the local maxima from the DSM as treetops, and then feeds them to a modified superpixel segmentation that combines both 2D and 3D information for tree crown delineation. In subsequent steps, our method incorporates the biological characteristics of the crowns through a plant allometric equation to reject potential outliers. Experiments against manually marked tree plots on three representative regions have demonstrated promising results - the best overall detection accuracy reaches 89%.
# ResAttUNet: Detecting Marine Debris using an Attention activated Residual UNet Azhan Mohammed GalaxEye Space ## 1 Introduction Marine debris is a rising concern for numerous reasons that envelope environmental, economic, and human health aspects. Plastic waste in the ocean can float for a very long time, and has been found in many areas throughout the world [1][2]. Numerous methods have been introduced to detect marine debris using the available open-source and commercial data sources[3], [4], [5], like the use of satellite imagery[4], and autonomous underwater vehicles[5]. Earth observation data from public and commercial satellite programs are very useful in detection of debris sites[6], [7], [8], [9], [10]. To better understand the spectral behaviour of marine debris, indices like Floating Debris Index (FDI)[11] and Plastic Index (PI)[10] have been used. Differentiating floating debris from other bright features like wave, sunlight, clouds, ships, foam, comes with challenges[12],[13] that arise due to the fact that plastics have complex properties and vary in color, chemical composition, size and submergence level in water body [7],[6]. Most of the earlier datasets focused on detecting objects like vessels[14], clouds over sea area[15] and macro algae[16]. To overcome the existing challenges in the field MARIDA- MARIne Debris Archive[17] introduced an open-access benchmark dataset, that contains images extracted using Sentinel-2 multispectral satellite data. Along with the dataset, MARIDA introduced baseline results for semantic segmentation task using weakly supervised labels using Machine Learning algorithms and deep neural network architectures. The contribution of this paper is a Marine Debris Detector using Sentinel S2 Satellite Imagery and a Attention Based Residual UNet, a novel convolution neural network specifically designed to segment sparse debris. The detection technique provides state-of-the-art results on the MARIDA open-source dataset that contains real world images that are temporally and geographically well-distributed. ## 2 Methods ### MARIDA Dataset MARIDA was constructed as a step by step process that included: * Collection of ground-truth data and relevant literature related to floating debris in coastal areas. The ground-truth data was collected for a period of 7 years, from 2015 to 2021, across coastal areas and river mouths in several countries. * Acquiring satellite data, processing images, calculating spectral indices,annotating the collected images to formulate ground truth maps and performing statistical analysis on the generated ground truth database. * Generating MARIDA Dataset and formulating ML and UNet benchmarks for weakly supervised image segmentation task. MARIDA contains 1381 patches saved as geo-tif files along with their respective pixel-wise segmentation masks and confidence scores stored in a JSON file, collected from 63 scenes using the Sentinel S2 Satellite mission. Additionally, MARIDA also contains the shapefile data of these geo-tif files in WGS'84/UTM projection. The selected study sites are distributed over 11 countries, Honduras, Guatema, Haiti, Santo Domingo, Vietnam, South Africa, Scotland, Indonesia, Philippines, South Korea and China. 
\\begin{table} \\begin{tabular}{|c|c|c|c|} \\hline Class Name & Acronym & Total Pixels & Pixel Percentage \\\\ \\hline Marine Debris & MD & 3399 & 0.4059103606 \\\\ \\hline Dense Sargassum & DenS & 2797 & 0.3340192052 \\\\ \\hline Sparse Sargassum & Sps & 2357 & 0.2814741747 \\\\ \\hline Natural Organic Material & NatM & 864 & 0.1031793326 \\\\ \\hline Ship & Ship & 5803 & 0.6929972999 \\\\ \\hline Clouds & Cloud & 117400 & 14.0199695 \\\\ \\hline Marine Water & MWater & 129159 & 15.42423544 \\\\ \\hline Sediment-Laden Water & SLWater & 372937 & 44.5363319 \\\\ \\hline Foam & Foam & 1225 & 0.1462901417 \\\\ \\hline Turbid Water & TWater & 157612 & 18.82210761 \\\\ \\hline Shallow Water & SWater & 17369 & 2.074215079 \\\\ \\hline Waves & Waves & 5827 & 0.6958633925 \\\\ \\hline Cloud Shadows & CloudS & 11728 & 1.400563904 \\\\ \\hline Wakes & Wakes & 8490 & 1.013880247 \\\\ \\hline Mixed Water & MixWater & 410 & 0.04896241478 \\\\ \\hline \\end{tabular} \\end{table} Table 1: MARIDA class distribution The dataset has 15 different classes, that are tabulated in Table 1, the table also discusses the pixel level distribution of classes and indicates the under representation of various classes, a problem that is tackled using Weighted Cross Entropy loss, explained in detail in upcoming section. A simpler approach to detect marine debris can be to transform the data into a binary classification problem, having just plastic and non-plastic classes, this makes the target problem easier and achieves better scores when compared to the baseline results introduced in MARIDA[18]. ### UNet The UNet[19] network architecture has been used in various image segmentation tasks in various domains, including satellite imagery[20],[21],[22],[23]. The network consists of a contracting path and an expanding path. The contracting path is similar to an ordinary Convolution Neural Network consisting of Convolution layers, followed by a LeakyReLU activation and Max Pooling layers. After each downsample block, the image height and width are reduced by half while the feature channels are doubled. The expansive block consists of an upsampling layer that doubles the height and width of image using a bilinear upsampling, followed by convolution blocks to reduce the number of features. Before expanding each feature map, it is concatenated with the corresponding feature map from the contracting path. The concatenation of features from contracting path ensures transfer of spatial information from the contracting path to the expanding path. The final layer is a convolution layer with 1x1 kernel, it takes in 32 channels and outputs channels corresponding to number of classes. ### Residual Blocks Training of deep learning networks faced many issues, including training instability. Residual Blocks[24] were introduced to help in stable training of very deep neural networks with the help of skip connections. It was noticed that when deep networks start converging, a degradation problem is exposed, where the accuracy first converges, then remains stables and then starts degrading. This degradation problem shows that deep models are not easy to optimize. To help with the issue, Residual blocks were introduced, where instead of hoping a few stacked layers to directly fit a desired output mapping \\(H(x)\\), the layers were made to fit simpler mapping \\(F(x)=H(x)-x\\), and with the help of skip connections, the mapping is then transformed into \\(F(x)+x\\) that was the original mapping. 
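As an illustration of the residual mapping described above, the following is a minimal PyTorch sketch. The paper does not specify its framework or exact layer configuration, so the channel width, normalisation, and activation choices here are assumptions; the block simply learns F(x) with two stacked convolutions and restores F(x) + x through the skip connection.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions learn F(x); the skip connection outputs F(x) + x."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.LeakyReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.LeakyReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + x)   # F(x) + x
```

In the architecture described next, blocks of this kind are placed after the downsampling stages and are further combined with the attention module.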
It has been noticed that using residual blocks in neural networks has a significant impact on the model's performance[25],[26],[27]. The introduced residual blocks in the model are employed after downsampling layers, and help in extracting deep features from the feature space obtained. ### Convolutional Block Attention Module Convolutional Block Attention Module[28], hereby referred to as CBAM is a means to infer attention map in two separate dimensions. The module has two sequential sub-modules, channel attention module and spatial attention module. For any given input feature map \\(F\\in R^{C\\times H\\times W}\\) the CBAM module's Channel Attention Module first generates a 1D attention map \\(M_{c}\\in R^{C\\times 1\\times 1}\\), followed by a 2D Attention Map \\(M_{c}\\in R^{1\\times H\\times W}\\) generated by Spatial Attention Module. The overall attention calculation can be written as: \\[F^{\\prime}=M_{c}(F)\\otimes F\\] \\[F^{\\prime\\prime}=M_{s}(F^{\\prime})\\otimes F^{\\prime}\\] Where \\(\\otimes\\) refers to an element-wise multiplication. Figure 1 shows the overall structure of CBAM. #### 2.4.1 Channel Attention Module The channel attention map is generated by leveraging the inter-channel relationship of features. Given an input feature map, the module first calculates the maximum and average value for each channel. The extracted features are then passed through a multilayer perceptron network (MLP Network), followed by element wise sum. The resultant is then passed through a sigmoid layer to generate the resultant channel attention map. Figure 2 depicts the architecture of Channel Attention Module. #### 2.4.2 Spatial Attention Module Similar to Channel Attention Module, the Spatial Attention Module uses inter-spatial relationship of features to generate spatial attention map. For a channel refined input feature, the Spatial Attention Module first calculates maximum and average value across the channels, followed by a convolution layer and sigmoid layer to generate the spatial attention map. Figure 2 depicts the architecture of Spatial Attention Module. ### ResAttUNet The introduced novel architecture uses CBAM at every downsampling and upsampling stage of the UNet model. Figure 3 shows a CBAM activated downsampling layer. Similar to the downsampling block, the upsampling block uses CBAM as well, the architecture is depicted in Figure 3. The final downsampling layer is followed by 3 residual blocks for deep feature extraction. A sample residual block is shown in Figure 3. The complete network architecture is shown in Figure 4. ## 3 Methodology and Results Various experiments were conducted to finalize the final model architecture. A plain UNet model was first trained to achieve baseline results. The model Figure 1: **Overview of CBAM.** The attention module has two sub-modules, Channel Attention Module and Spatial Attention Module downsample and upsample blocks were then incorporated with attention module. Followed by this, the residual layers were added, and later the residual layers were also incorporated with attention module to achieve the final model. As discussed earlier, the MARIDA dataset has a class imbalance, to deal with imbalanced classes, the model was trained using weighted cross entropy loss. The weights are derived using log of class frequency in the dataset. 
The weighted cross-entropy loss is given by: \\[xentloss_{weighted}=-\\frac{1}{N}\\sum_{n=1}^{N}\\omega r_{n}log(p_{n})+(1-r_{n}) log(1-p_{n})\\] Where, \\[\\omega=\\frac{N-\\Sigma_{n}p_{n}}{\\Sigma_{n}p_{n}}\\] Experiments were conducted using a sum of Weighted Cross Entropy Loss and Dice Loss to help with class imbalance, but dice loss only focuses on imbalance problem between the foreground and background, and overlooks the imbalance between hard samples and easy samples[29], and the same result was observed in our experiment. The trained model failed to perform at par with the baseline model. Figure 2: Channel Attention Module and Spatial Attention Module Experiments were also conducted using Focal loss[30] to deal with the class imbalance in the dataset. Focal loss is a modified version of existing cross entropy loss function, that attaches a modulating term that focuses on learning the hard misclassified samples. The focal loss function is given as: \\[focalloss=-(1-p_{t})^{\\gamma}log(p_{t})\\] Where \\(p_{t}\\) is the predicted probability, and \\(\\gamma\\) is the tuning control parameter. For our experiments, value of \\(\\gamma\\) was set as 2. As mentioned earlier, focal loss is a modified cross entropy loss function, so we did not use weighted cross entropy loss with focal loss to train the model. The validation loss for the model trained using focal loss converges pretty well as seen in Figure 7, but Figure 3: **ResAttUNet building blocks** the evaluation metrics reveal that the model was overfitting when tested on the test data. Hence focal loss was also discarded while training the final model. The finalized network architecture was trained using Weighted Cross Entropy loss, for 200 epochs with an initial learning rate of \\(1e-3\\) and the learning rate was gradually reduced by a factor of 0.5 at interval of 40 epochs. The final values of Precision, Recall, F1 score and IoU for each model is mentioned in Table 2. As seen in validation loss trend, shown in Figure 7, the Residual Attention UNet model outperforms the other models by a fair margin. The evaluation metrics shown in Figure 5 also shows the superiority of ResAttUNet model over other architectures and training procedures. This shows that using residual blocks with attention has great potential in detection of sparse debris Figure 4: **ResAttUNet Model Architecture.** The downsample block with CBAM is shown in Figure 3, upsample block with CBAM is shown in Figure 3 and the residual block with CBAM is shown in Figure 3 Figure 5: Evaluation metrics for UNet-MARIDA, UNet, Attention UNet, and Residual Attention UNet plotted as a graph Figure 6: **Training loss vs Epoch** for UNet, Attention UNet and Residual Attention UNet Figure 7: **Validation loss vs Epoch** for UNet, Attention UNet and Residual Attention UNet scattered throughout the sea, and can be used for various applications. ## 4 Conclusion Debris detection using Satellite imagery can be very helpful in tackling the problems caused by presence of debris in the sea and oceans. In this study, we propose a neural network capable of segmenting not only accumulated debris and sediment, but also segment out sparse debris particles from satellite imagery with increased accuracy and lesser rate of false positives. The overall F1 score achieved is 0.95, with an IoU score of 0.65. Proposed methodology is lightweight can be easily incorporated to perform debris segmentation using multispectral satellite imagery. 
In the upcoming future work, Synthetic aperture radar (SAR) imagery can be used along with Multi-spectral images to increase the performance of models and derive more useful information from the images. \\begin{table} \\begin{tabular}{|l|c|c|c|c|c|} \\hline Model & UNet-MARIDA & \\multicolumn{2}{c|}{UNet} & Attention UNet & Residual Attention UNet \\\\ \\hline Loss Function & \\multicolumn{1}{c|}{Xent} & \\multicolumn{1}{c|}{Xent} & \\multicolumn{1}{c|}{Focal} & \\multicolumn{1}{c|}{Dice} & \\multicolumn{1}{c|}{Xent} & \\multicolumn{1}{c|}{Xent} \\\\ \\hline Metric & \\multicolumn{1}{c|}{0.74} & 0.81 & 0.57 & 0.69 & 0.78 & **0.81** \\\\ \\hline Micro Precision & 0.89 & 0.88 & 0.90 & 0.90 & 0.92 & **0.95** \\\\ \\hline Weight Precision & 0.91 & 0.92 & 0.91 & 0.92 & 0.93 & **0.96** \\\\ \\hline Macro Recall & 0.69 & 0.71 & 0.45 & 0.61 & 0.74 & **0.77** \\\\ \\hline Micro Recall & 0.89 & 0.88 & 0.90 & 0.90 & 0.92 & **0.95** \\\\ \\hline Weight Recall & 0.89 & 0.88 & 0.90 & 0.90 & 0.92 & **0.95** \\\\ \\hline Macro F1 & 0.69 & 0.72 & 0.45 & 0.63 & 0.74 & **0.77** \\\\ \\hline Micro F1 & 0.89 & 0.88 & 0.90 & 0.90 & 0.92 & **0.95** \\\\ \\hline Weight F1 & 0.89 & 0.87 & 0.90 & 0.90 & 0.92 & **0.95** \\\\ \\hline Subset Accuracy & 0.89 & 0.88 & 0.90 & 0.90 & 0.92 & **0.95** \\\\ \\hline IoU & 0.57 & 0.61 & 0.38 & 0.54 & 0.62 & **0.67** \\\\ \\hline \\end{tabular} \\end{table} Table 2: Evaluation scores obtained by: UNet-MARIDA, UNet, Attention UNet, and Residual Attention UNet. The loss function used to train the model is mentioned in the sub-column under the model. The best score for each metric is highlighted in bold text. A graphical representation for the metrics can be found in Figure 5 ## References * (1) C. S. Punla, [https://orcid.org/](https://orcid.org/) 0000-0002-1094-0018, [email protected], R. C. Farro, [https://orcid.org/0000-0002-3571-2716](https://orcid.org/0000-0002-3571-2716), [email protected], Bataan Peninsula State University Dinalupihan, Bataan, Philippines, Are we there yet?: An analysis of the competencies of BEED graduates of BPSU-DC, International Multidisciplinary Research Journal 4 (3) (2022) 50-59. * (2) T. George, Modelling the marine microplastic distribution from municipal wastewater in saronikos gulf (e. mediterranean), Oceanogr. Fish. Open Access J. 9 (1). * (3) N. Maximenko, P. Corradi, K. L. Law, E. V. Sebille, S. P. Garaba, R. S. Lampitt, F. Galgani, V. Martinez-Vicente, L. Goddijn-Murphy, J. M. Veiga, R. C. Thompson, C. Maes, D. Moller, C. R. Loscher, A. M. Addamo, M. R. Lamson, L. R. Centurioni, N. R. Posth, R. Lumpkin, M. Vinci, A. M. Martins, C. D. Pieper, A. Isobe, G. Hanke, M. Edwards, I. P. Chubarenko, E. Rodriguez, S. Aliani, M. Arias, G. P. Asner, A. Brosich, J. T. Carlton, Y. Chao, A.-M. Cook, A. B. Cundy, T. S. Galloway, A. Giorgetti, G. J. Goni, Y. Guichoux, L. E. Haram, B. D. Hardesty, N. Holdsworth, L. Lebreton, H. A. Leslie, I. Macadam-Somer, T. Mace, M. Manuel, R. Marsh, E. Martinez, D. J. Mayor, M. L. Moigne, M. E. M. Jack, M. C. Mowlem, R. W. Obbard, K. Pabortsava, B. Robberson, A.-E. Rotaru, G. M. Ruiz, M. T. Spedicato, M. Thiel, A. Turra, C. Wilcox, Toward the integrated marine debris observing system, Frontiers in Marine Science 6. doi:10.3389/fmars.2019.00447. URL [https://doi.org/10.3389/fmars.2019.00447](https://doi.org/10.3389/fmars.2019.00447) * (4) C. 
A significant amount of research in remote sensing now relies on deep learning techniques. The introduction of the Marine Debris Archive (MARIDA), an open-source dataset with benchmark results for marine debris detection, opened new pathways for applying deep learning to the task of debris detection and segmentation. This paper introduces a novel attention-based segmentation technique that outperforms the existing state-of-the-art results reported with MARIDA. It presents a spatially aware encoder and decoder architecture that preserves the contextual information and structure of the sparse ground-truth patches present in the images. The attained results are expected to pave the way for further research involving deep learning on remote sensing images. The code is available at https://github.com/sheikhazhanmohammed/SADMA.git

keywords: Marine Debris Detection, High-Resolution Optical Remote Sensing (RS) Image, Attention Mechanism, Image Segmentation
# Monitoring Erosion and Accretion Situation in the Coastal Zone at Kien Giang Province

Nguyen Thi Hong Diep 1, Nguyen Tan Loi 1, Nguyen Trong Can 1

1 Land Resources Department, College of Environment and Natural Resources, Can Tho University, Vietnam

## 1 Introduction

### Background

The Mekong Delta and the Lower Mekong Basin form one of the seven major ecological areas of the Mekong River Basin, characterized by a distinctive terrain and diverse ecosystems (MRC-Mekong River Commission, 2010). The region supplies the most important food crops in Southeast Asia and hosts world-class biodiversity, yet it is affected by human activities, subsidence and shoreline erosion (Edward J. Anthony et al., 2015). The coastal area of Kien Giang province is regularly impacted by natural disasters linked to climate change and sea level rise, with sea level estimated to rise by up to 100 cm by 2100 (Carew-Reid, 2008). At present, coastal erosion and the degradation of coastal protection forests in Kien Giang are serious (Duc Van, 2014). The Kien Giang coast is unstable and changes year by year, with less accretion area, more erosion than deposition, and two shoreline areas where erosion is increasing (Ngo C. N. et al., 2014). Coastal erosion and sedimentation are therefore major concerns in coastal management. Changes in morphology and position along the coast of Kien Giang have had a major impact on land use and socio-economic development in the coastal area (Nguyen Hai Hoa et al., 2010). Besides these causes, endogenous factors such as stratigraphic change, flow, water level fluctuation, storms, waves and wind, and exogenous factors caused by human impacts make coastal monitoring very necessary. Rapid appraisal methods are needed to update coastal maps of affected areas and to monitor the rate of shoreline change. No assessment of coastal shoreline dynamics, coastal vulnerability and mangrove functions had previously been conducted in Kien Giang province (Nguyen H. H. et al., 2010). With the continuous development of science and technology, coastal erosion and accretion areas can be detected relatively quickly and at lower cost than with conventional measurement methods, and areas of severe erosion or accretion can be predicted for early warning (R. Kanan & MV, 2016). Assessing land use and managing shoreline erosion effectively are necessary because of complicated development issues and the need to detect what is happening negatively along the coast. This research detects shoreline changes and assesses coastal land use impacts in the western part of Kien Giang province over 40 years using remote sensing and GIS technology, and supports effective solutions for local development, since shoreline changes affect the natural environment as well as living conditions in coastal areas.

### Study area

The study area is the western sea of Kien Giang province. It extends from 9°23'50" to 10°32'30" North and from 104°26'40" to 105°32'40" East, with a coastline of over 200 kilometers running from Mui Nai (Ha Tien, Kien Giang) to An Minh district (Kien Giang). This is an area with a complex shoreline where both erosion and accretion processes occur. According to the Department of Agriculture and Rural Development, the coastlines of Hon Dat - Kien Luong and An Minh - An Bien and some river mouths are strongly affected by erosion due to waves and strong winds in the monsoon season, and by the loss of mangrove forest (Figure 1).
## 2 Methodology

### Data collection

Landsat images were collected at roughly five-year intervals (1975, 1990, 1995, 2000, 2005, 2010 and 2015) over the western coastal area of Kien Giang province from http://earthexplorer.usgs.gov and https://libra.developpmentseed.org. Secondary data consist of the administrative map of the Mekong Delta.

### Pre-processing imagery

Geometric and atmospheric correction: the Landsat images use the WGS84 reference ellipsoid and the UTM Zone 48N map projection. Striped images were corrected with the Gapfill tool in the ERDAS software.

### Classification

Image interpretation keys are based on eight factors: size, shape, shade, tone, color, texture, pattern and key combination (location of objects on the image). The classes mapped for Kien Giang province are forest; forest and aquaculture; urban; and accretion area. Specific band combinations were used to identify each object accurately based on its spectral characteristics.

### Creating the water index (NDWI - Normalized Difference Water Index)

The NDWI is a remote sensing indicator sensitive to the change in the water content of leaves (Gao, 1996):

\\[\\text{NDWI}=(\\text{NIR}-\\text{SWIR})/(\\text{NIR}+\\text{SWIR}) \\tag{1}\\]

where NIR is the near-infrared band and SWIR is the shortwave infrared band.

### Shoreline extraction

Based on the spectral bands of Landsat imagery, combining a band-ratio image with a soil/water separation allows the shoreline to be delineated quickly (Thao, P. T. P. et al., 2009). This method highlights the land-water boundary through a threshold value (Thuy, D. T. N., 2016). Water reflectance is close to zero in the infrared bands, whereas soil reflectance is usually much higher (Pritam Chand and Prasenjit Acharya, 2010). Water absorbs strongly in the near-infrared spectrum (NIR band, 0.7-0.8 \\(\\mu\\)m), which makes it effective for detecting submerged surfaces, distinguishing dry from wet soils, and providing information on coastal wetlands (Van, T. T., & Binh, T. T., 2008).

The threshold value method was applied to the NDWI and ratio images. Thresholds around zero are selected on the ratio and NDWI images to convert water pixels to the value 0 and non-water pixels to 1. A new image was then created by adding the two thresholded images, so that water pixels have the value 0 and other pixels the value 1 or 2. A second threshold is applied to produce an image whose minimum value is 1 and maximum value is 2. The NDWI separates soil and water, and the ratio image highlights the shoreline according to the following formula:

\\[\\text{Shoreline}=\\big{(}(\\text{Green}/\\text{NIR})\\times(\\text{Green}/\\text{SWIR})\\big{)}+\\text{NDWI} \\tag{2}\\]

where NIR is the near-infrared band, SWIR is the shortwave infrared band and NDWI is the water index.

### GIS method

The shoreline data were transferred to ArcGIS and digitized semi-automatically to create shoreline maps and to identify erosion and accretion areas on the land use map of the coastal zone.
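To make the extraction steps above concrete, the following is a minimal sketch of the NDWI and shoreline-index computation (Equations 1 and 2) applied to Landsat band arrays. The band variables, the placeholder threshold value, and the use of numpy are assumptions for illustration only; the actual processing chain described above was carried out in ERDAS and ArcGIS with scene-specific thresholds.

```python
import numpy as np

def ndwi(nir, swir):
    """Normalized Difference Water Index (Eq. 1), Gao (1996) formulation."""
    return (nir - swir) / (nir + swir + 1e-12)  # small epsilon avoids division by zero

def shoreline_index(green, nir, swir):
    """Shoreline enhancement image (Eq. 2): product of band ratios plus NDWI."""
    return (green / (nir + 1e-12)) * (green / (swir + 1e-12)) + ndwi(nir, swir)

def water_mask(green, nir, swir, threshold=1.0):
    """Binary water/land mask from the shoreline index.
    The threshold is scene-dependent in the paper; 1.0 is only a placeholder here."""
    return shoreline_index(green, nir, swir) > threshold

# Example with synthetic reflectance arrays standing in for Landsat bands.
green = np.array([[0.08, 0.10], [0.12, 0.15]])
nir   = np.array([[0.02, 0.03], [0.30, 0.35]])   # low NIR reflectance -> water
swir  = np.array([[0.01, 0.02], [0.25, 0.30]])
print(water_mask(green, nir, swir))  # True for water pixels, False for land pixels
```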
Figure 1: The research area in Kien Giang province

Figure 2: Flowchart of study area

## 3 Results and Discussion

### Erosion and accretion changes in the coastal zone of Kien Giang province from 1975 to 2015

Shoreline changes from Thuan Hoa commune, An Minh district to Nam Thai commune, An Bien district, Kien Giang province

The coastline from Thuan Hoa commune (An Minh district) to Nam Thai commune (An Bien district) is curved, so erosion and accretion processes are complicated, with accretion predominant. From 1975 to 1990, the total accretion area was 400.3 ha and the total erosion area was 56.3 ha. In the period 1990 to 1995, the total sedimentation area decreased significantly to 56.5 ha and the total erosion area continued to decrease, to 35.8 ha. From 1995 to 2000, the total accretion area decreased slightly to 54.8 ha, while the total erosion area increased to 54.8 ha. In the period 2000 to 2005, the total accretion area decreased slightly to 44.9 ha and erosion dropped sharply to 12.5 ha. From 2005 to 2010, the total accretion area increased to 95.6 ha, and the total erosion area increased slightly to 14 ha. In the last period, 2010 to 2015, the total deposition and erosion areas continued to increase, to 117.7 ha and 26.6 ha, respectively (Figure 3). The accretion process in Nam Thai commune tends to increase, and to increase more than in Thuan Hoa commune. The erosion process tends to decrease while accretion increases, although the evolution remains complicated. The reason is that the topography of this area protrudes into the sea, so alluvial sediment is retained in the Cai Lon estuary and gradually deposited to create an accretion area. On the west coast, the shore is affected by the tide of the western sea in the Gulf of Thailand, which shaped the Kien Giang coast and favors deposition in this area. The combined erosion and accretion processes mainly affect forest and forest-aquaculture land. Erosion occurred in Thuan Hoa commune while accretion occurred in Nam Thai commune. In the first stage of the accretion process, deposition tended to decrease and then increase, from 44.9 ha in 2000-2005 up to 117.7 ha in 2010-2015.

Figure 3: The shoreline processes from 1975 to 2015 in coastal areas from Thuan Hoa to Nam Thai commune, Kien Giang province

Shoreline changes from Tay Yen commune, An Bien district to Vinh Thanh Van commune, Rach Gia city

In this coastal area, erosion and accretion processes were also complicated, with deposition the main trend from Tay Yen commune (An Bien district) to Vinh Thanh Van (Rach Gia city). From 1975 to 1990, the total accretion area was 253.5 ha and the total erosion area was 123 ha. In the period 1990 to 1995, the total accretion area decreased significantly to 65.6 ha, and the total erosion area continued to decrease, to 54.6 ha. In the period 1995 to 2000, the total deposition area increased to 171.7 ha, while the total erosion area continued to decrease sharply to 18 ha. From 2000 to 2005, the total deposition area continued to increase, to 178.6 ha, and the erosion area increased to 55.3 ha. In the period 2005 to 2010, the total accretion area decreased to 53.8 ha and the total erosion area decreased to 28.4 ha. From 2010 to 2015, the total accretion area continued to decrease, to 45.8 ha, while the total erosion area increased to 33.1 ha (Figure 4).
In this area, the erosion process is complicated because the Cai Lon river mouth opens toward the Gulf of Thailand roughly at a right angle to the southwest monsoon; most of the alluvial sediment from the estuary is flushed out and transported by the southwest wind into the sea encroachment area of Rach Gia city, so a large amount of alluvial sediment is not deposited here. In addition, deforestation of the protection forest has reduced forest density, and the strong influence of the southwest monsoon, strong waves and high tides increase erosion in this area. The coastal area from Tay Yen to Vinh Thanh Van communes shows a complicated mix of erosion and accretion. Accretion dominates in every period on the three main land use types: forest, forest-aquaculture and urban area. Strong accretion occurred in two stages, the period 1995 to 2000 at 171.1 ha and the period 2000 to 2005 at 178.6 ha, because of the rapid growth of the urban area following the Rach Gia sea encroachment at Rach Gia City, developed in 1999. In addition, alluvial sediment from the Cai River is carried to the encroachment area and continues to be deposited each year. However, wave and flow impacts and other factors have caused some land loss through erosion, although not serious. It is necessary to address this issue, for example through dyke restoration and reforestation to retain the alluvial sediment from the river mouth.

Shoreline changes from Son Kien commune to Binh Son commune, Hon Dat district

The coastal area from Son Kien to Binh Son communes, Hon Dat district, Kien Giang province is about 37 km long, with intermixed erosion and accretion processes. From 1975 to 1990, the total accretion area was 834.7 hectares and the total erosion area was 65.2 hectares. In the period 1990 to 1995, the accretion process decreased significantly, with a total area of 88.6 ha, and the total erosion area decreased to 28.8 ha. In the period 1995-2000, the total accretion and erosion areas continued to decrease, to 74.2 ha and 13.6 ha, respectively. From 2000 to 2005, erosion was more dominant than accretion: the total sedimentation area was 18 ha and erosion was 76 ha. In the period 2005 to 2010, the accretion process increased, with a total area of 87.5 hectares, while the total erosion area decreased to 32.9 hectares. From 2010 to 2015, the accretion process decreased slightly, with a total deposition area of 62.7 ha, while the total erosion area increased to 59.6 ha (Figure 5). This area is strongly affected by the direct influence of the southwest wind in summer and by related effects such as waves and currents. The shoreline topography is mainly composed of rock with large particles, so it is less easily washed away by the flow; only long-term abrasion and erosion by natural factors cause erosion problems. Moreover, since the protection forest restoration policy of 2005, erosion has been decreasing. Erosion and accretion are intermixed in each period at some locations in Binh Son commune. In the period 2000 to 2005, erosion dominated, with a total erosion area of 76 ha, of which 49.2 ha was forest and 26.8 ha forest-aquaculture, extending from Son Kien to Binh Son commune, Hon Dat district.
The accretion process covered only 18 ha, of which 11.2 ha was forest and 6.8 ha forest-aquaculture.

Figure 4: The shoreline changes in the period of 1975 to 2015 from Tay Yen to Vinh Thanh Van commune, Kien Giang province

Figure 5: The shoreline changes from 1975 to 2015 from Son Kien to Binh Son commune, Hon Dat district, Kien Giang province

Shoreline changes from the Ha Tien encroachment to Phao Dai commune, Ha Tien town

From 1975 to 1990, the accretion process was dominant with a total area of 163.5 ha, while the total erosion area was 91.4 ha. In the period 1990 to 1995, the erosion process dominated with a total area of 42.3 ha, and the total deposition area decreased to 61.2 ha. In the period 1995 to 2000, the accretion process continued to decrease, with a total area of 27.2 ha, and the total erosion area decreased significantly to 8.4 ha. From 2000 to 2005, accretion was more dominant than erosion: the total sedimentation area was 105.8 ha and erosion was 17.9 ha. In the period 2005 to 2010, the accretion process decreased slightly, with a total accretion area of 89.4 ha, while the total erosion area continued to decrease, to 14.5 ha. In the last period, 2010 to 2015, the accretion process decreased slightly to 81 ha, and the total erosion area increased to 18.9 ha (Figure 6).

Figure 6: The shoreline processes in the period of 1975 to 2015 from the Ha Tien encroachment to Phao Dai commune, Ha Tien town, Kien Giang province

Owing to the sediment supply, the coastal flow and the influence of the southwest wind each year, the accretion area increased significantly around the Ha Tien encroachment. At the same time, the sea encroachment is concentrated, so deposition is the predominant trend. This area is sheltered from the northeast wind, so the amount of alluvial sediment lost to northeast-wind-driven flow is not significant. In general, once the Ha Tien encroachment area was developed, deposition appeared, and erosion and accretion processes became intermixed. In the period 1990 to 1995, erosion predominated, with a total area of 61.2 ha occurring mainly in Phao Dai commune (42.3 ha). In the period 2000 to 2005, the Ha Tien encroachment was developed and the accretion area increased sharply to 105 ha, then decreased gradually through the following periods to 81 ha in 2010 to 2015.

### Discussion of shoreline progress in the period of 1975 to 2015

Erosion changes in the study area: the total erosion area was 2,156.7 hectares. Erosion increased sharply in the period 1990 to 1995, to 511.3 hectares, then decreased gradually from 1995 to 2010 and increased again in the period 2010 to 2015, to 310 hectares. Of the eroded land, 1,326.6 ha was forest, 748.8 ha forest-aquaculture and 81.3 ha urban land.

Accretion changes in the study area: the total accretion area was 5,935.7 ha. Accretion increased dramatically in the period 1975 to 1990, to 3,352.7 ha; from 1990 to 2000 the accretion area decreased slightly and then increased gradually. The total accretion area on the land use map was 3,474.2 ha, of which 1,934.2 ha was forest, 599.1 ha forest and aquaculture, and 599.1 ha urban land. The results for each period show that the coastline changed through a complicated process that can be divided into three types: (1) typical accretion, (2) typical erosion, and (3) intermixed accretion and erosion (Figure 7).
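As a small worked example of how these period-by-period figures can be tabulated, the snippet below collects the accretion and erosion areas reported above for the Thuan Hoa - Nam Thai stretch and computes the net change per period. The numerical values are taken directly from the text; the use of a Python script for the tabulation is only an illustration and is not part of the original GIS workflow.

```python
# Accretion and erosion areas (ha) for the Thuan Hoa - Nam Thai stretch, as reported above.
periods   = ["1975-1990", "1990-1995", "1995-2000", "2000-2005", "2005-2010", "2010-2015"]
accretion = [400.3, 56.5, 54.8, 44.9, 95.6, 117.7]
erosion   = [56.3, 35.8, 54.8, 12.5, 14.0, 26.6]

for period, acc, ero in zip(periods, accretion, erosion):
    net = acc - ero  # a positive net value means the shoreline gained land in that period
    print(f"{period}: accretion {acc:7.1f} ha, erosion {ero:6.1f} ha, net {net:+7.1f} ha")
```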
Identification of coastal erosion and accretion has been described in detail through shoreline change maps derived from the remote sensing classification. Over the period 1975 to 2015, the coastline in the study area evolved through a complicated process, with a general trend of advance toward the sea. The west coast is relatively stable, with accretion more predominant than erosion.

### The erosion and accretion processes in Kien Giang province

Typical erosion areas include Thuan Hoa commune, An Minh district, Kien Giang province, and the stretch from Nam Yen commune to Vinh Hoa Hiep commune, Rach Gia city, Kien Giang province. Typical accretion areas include the stretch from Thuan Hoa commune, An Minh district to Nam Thai commune, An Bien district, Kien Giang province, and the Rach Gia and Ha Tien sea encroachment areas, Kien Giang province.

### Recommendations

Apply dyke construction combined with mangrove reforestation for coastal protection. Reforest and restore the coastal protection mangrove forest, manage and raise awareness of forest protection among local farmers, and maintain long-term protection of the coastal area.

## References

* Carew-Reid, J. (2008). International Centre for Environmental Management, (February), 82.
* Casse, C., Bach, P., Thi, P., Nhung, N., Phi, H., & Dao, L. (2012). Remote sensing application for coastline detection in Ca Mau, Mekong Delta. Proceedings of the International Conference on Geomatics for Spatial Infrastructure Development in Earth and Allied Sciences (GIS-IDEAS).
* Dang Thi Ngoc Thuy (2016). Research on coastline changes in Phu Quoc Island from 1973 to 2010, 3(28), 64-69.
* Edward J. Anthony, Guillaume Brunier, Manon Besset, Marc Goichot, P. D., & V. L. N. (2015).
* The relationship between erosion and human activities in the Mekong Delta. Science Journal, 2007. https://doi.org/10.1016/j.landusepol.2007.03.005
* Gao, B. (1996). NDWI - A Normalized Difference Water Index for remote sensing of vegetation liquid water from space. Remote Sensing of Environment, 58, 257-266.
* MRC - Mekong River Commission (2010). State of the Basin Report 2010. Mekong River Commission. ISSN 1728-3248.
* Ngo, C. N., Pham, T. H., Do, S. D., & Nguyen, B. N. (2014). Status of coastal erosion of Vietnam and proposed measures for protection, 22.
* Nguyen, H. H., Pullar, D., Duke, N., McAlpine, C., & Nguyen, H. T. (2010). Historic shoreline changes: an indicator of coastal vulnerability for human land use and development in Kien Giang, Vietnam. ACRS 2010: 31st Asian Conference on Remote Sensing, 1835-1843.
* Nguyen, N. A. (2017). Historic drought and salinity intrusion in the Mekong Delta in 2016: Lessons learned and response solutions. Vietnam Journal of Science and Technology, 1(1), 2015-2018.
* Pritam Chand & Prasenjit Acharya (2010). Shoreline change and sea level rise along the coast of Bhitarkanika wildlife sanctuary, Orissa: An analytical approach of remote sensing and statistical techniques, 1(3), 436-455.
* Kanan, R., & MV, R. (2016). Shoreline Change Monitoring in Nellore Coast at East Coast Andhra Pradesh District Using Remote Sensing and GIS. Journal of Fisheries & Livestock Production, 4(1).
  https://doi.org/10.4172/2332-2608.1000161
* Van, T. T., & Binh, T. T. (2008). Shoreline change detection to serve sustainable management of coastal zone in Cuu Long estuary, 1-6.
Kien Giang is one of the coastal provinces of the Mekong Delta facing coastal erosion that affects people's lives in the coastal area. This project aims to monitor the shoreline and to assess the erosion and accretion situation in the coastal area of Kien Giang province over the period 1975 to 2015. The study applied the Normalized Difference Water Index (NDWI) method and water extraction on LANDSAT imagery from 1975 to 2015 to highlight the shoreline. The analysis then identified erosion and accretion areas based on shoreline changes and on the land use affected by erosion and deposition. The results map shoreline changes from 1975 to 2015 in the coastal area of Kien Giang province. Erosion occurred in the west from Nguyen Viet Khai commune to Thuan Hoa commune and from Nam Yen commune to Vinh Hoa Hiep commune, Rach Gia city, Kien Giang province. Accretion was determined in the areas from Thuan Hoa commune, An Minh district to Nam Thai commune, An Bien district, Kien Giang province, at the Rach Gia sea encroachment in Rach Gia town, and at the Ha Tien encroachment area in Ha Tien town, Kien Giang province. In general, the coastal area of Kien Giang province has a predominant tendency toward accretion; however, erosion and accretion occur interlaced along the coast of Kien Giang province.

Keywords: erosion/accretion, Kien Giang province, Normalized Difference Water Index (NDWI), shoreline
# On the bubble-bubbleless ocean continuum and its meaning for the lidar equation: Lidar measurement of underwater bubble properties during storm conditions

D. Josset 1,*, S. Cayula 1, B. Concannon 2, S. Sova 3, A. Weidemann 4

1 U.S. Naval Research Laboratory (NRL), Stennis Space Center (SSC), Ocean Sciences Division, 1009 Balch Blvd, 39529 Stennis Space Center, MS.
2 Naval Air Warfare Center - Aircraft Division (NAWCAD).
3 NRL, SRR, 1009 Balch Blvd, 39529 Stennis Space Center, MS.
4 NRL SSC, retired.
* [email protected]

## 1 Introduction

The breaking of surface waves injects air into the water column. It happens all over the ocean, even if most people are only familiar with the breaking of waves near the shore and the associated foamy, bubbly surface called surf. This entrainment of air forms bubble clouds underwater [1-3]. These entrained bubbles, in turn, change the optical and acoustic properties of the water column [4]. In addition to the sound speed change and acoustic transmission loss increase, the breaking generates acoustic noise [5]. In high-wind conditions, bubbles become a critical component of the air-sea gas exchange [6, 7], especially the uptake of carbon dioxide and oxygen [8]. In general, bubbles play a significant role in air-sea exchanges of mass, heat, energy, and momentum [9, 10].

Although it has been known for quite some time that the lidar return is sensitive to whitecaps and bubbles [11, 12], there are not many publications on this topic, and additional published studies discussing the lidar return of bubbles in the ocean would allow us to understand the added value of this instrument. The impact of whitecaps on the CALIPSO space lidar return is briefly shown and discussed in [13] and [14]. In the context of the fundamental lidar equation [15], we stressed the importance of more studies using lidar depolarization "at high wind speeds when bubbles are forming inside the water column." More recently, [16] showed and discussed case studies of the lidar return of bubbles created by ship wakes.

This paper presents:

* The Naval Research Laboratory (NRL) shipboard lidar and its calibration procedure.
* The first dataset of bubble profiles observed by shipboard lidar in high wind conditions.
* A demonstration that the depolarization of the bubble features comes from small-angle multiple scattering. For this reason, the vertical depth is measured accurately with the shipboard lidar.
* The fact that whitecaps and bubbles have a strong and unambiguous depolarization signature. It is used for a feature detection algorithm of ocean bubbles (a "bubble mask").
* The discussion of a necessary paradigm shift for the whitecap term embedded within the lidar equation for in-water laser light propagation.
* The lidar void fraction retrieval procedure and its accuracy.

Lidar can provide simultaneous vertical information on both the atmosphere and the ocean. As such, it can provide information on the bubble's vertical properties (bubble depth and void fraction) within the context of wave height and sea spray injection. Despite the shortcomings of the data due to sparse coverage and the inability to completely decouple and decorrelate the different geophysical contributions from the different scatterers, lidar use opens exciting possibilities for future studies of the air-sea interface.
## 2 Instrument Design ### System Overview The NRL shipboard lidar is one of the significant assets of the NRL Ocean Sciences division at the NASA Stennis Space Center. It measures the elastic backscattering of laser light at 532 nm. The primary data products are the ocean backscatter coefficient, total attenuation coefficient, and degree of linear polarization [19]. The lidar has been used on ship deployments continuously from 2013 to 2019 (East Sound in Washington state, Chesapeake Bay, Gulf of Maine, Atlantic Ocean, Lake Erie, and Gulf of Mexico). The system has been receiving regular modifications since 2013 to fit the needs of different projects. A typical setup of the lidar, is to be on the bow of research vessels, and typically setup at an angle between 15 and 20 degrees, to limit the ocean surface backscatter intensity. It was mounted at an angle of 6.3 degrees on the R/V Sikuliaq because the anticipated adverse environmental conditions led us to design a much sturdier mount to protect the optics. The lidar, as set on the R/V Sikuliaq, is shown in Fig. 1. We built the mount in steel, based on 0.635 cm thick plates and 20.32 cm I-beams, secured to the bow with 2.54 cm diameter bolts. The power supply was mounted directly under the lidar against the hull, and the bow served as a protecting structure against the waves. The intensity of the ocean surface signal at the 6.3 degrees angle did not induce difficulty for the data analysis, because the rough ocean condition (ship roll and rough surface) led to a relatively low surface return, and because the 15-20 degree setup is conservative when the lidar system is properly designed. We designed the instrument to be as compact as possible while ensuring enough structural robustness to enable deployment on a ship while underway. This compact and robust design allows the system to sample ocean properties, even in the harsh environment of the open ocean. The soundness of the design and the setup was demonstrated without ambiguity during the 18 days of deployment (12/05/2019 to 12/23/2019) in the Gulf of Alaska in high winds and storm conditions, with wave heights recorded up to 17 m. Fig. 1 illustrates one of the many wave events experienced by the lidar. The boat camera takes pictures regularly, but their frequency (12 per hour) is not sufficient to capture a representative sample of wave events. Therefore, most of the waves event experienced by the lidar are documented only through the lidar data. The crew reported that the lidar was under 2 m of water three times due to large waves reaching the bow on the night of 12/11/2019. It did not show any degradation of capabilities after this or any other wave events it experienced. ### Lidar system description The system comprises a laser transmitter, beam polarization optics, photoreceivers, data collection, and control hardware. The transmitted beam path is in Fig. 2, describing the key elements. The receivers have a 140 mrad Field of View (FOV). The 1 ns pulse width of the laser combined with the 800 MHz digitization rate permits a vertical sampling of about 0.14 m underwater. In contrast, the 50 Hz sampling yields an along-track resolution of approximately 0.02 to 0.1 m. In order to provide sufficient overlap between the transmitted beam spread and detector FOV, the shipboard lidar is mounted at least 4.2 m above the sea surface. Figure 1: Top left and bottom: The NRL shipboard lidar mounted on the bow of the R/V Sikuliaq. 
The red arrows show the position of the lidar (the small white dot on the bow). Top right: Picture of one of the waves that reached the bow. This picture is representative of a relatively minor size wave. There are no photographs of the more significant wave events. NAWCAD designed and built the system, and Welch Mechanicals, LLC built the watertight enclosure. Fig. 2 shows the optical transmission path of the NRL shipboard lidar in its original state. In the original (2013) design, an Electro-Optic (EO) modulator could modify the polarization of the laser light from circular to linear. In this configuration, two receivers would measure the linear polarization states (co- and cross), and two receivers would measure the circular polarization states. In 2015, we removed the modulator to reduce the amount of electronic noise in the signal and optimize the system for oil research [17]. The laser polarization is now fixed and linear. The transmission of the laser beam goes through an optical window situated in the center of up to six receiver units. Each receiver consists of a telescope, which focuses the lidar return signal to a photomultiplier tube (PMT). The electrical signal from the PMT connects to a digitizer channel. The receivers are identical except for a different polarizer or/an optical filter at the entrance aperture to allow detection of the polarization of interest (or wavelength for fluorescence). There are six receiver positions, and the acquisition software is written for up to 6 channels, making the system modular and reconfigurable. Up to four receivers and two digitizers (with two channels each) have been used so far during deployments. Two receivers measure the (linear) co-polarized backscattered light, one measures the (linear) cross-polarized return, and one is sensitive to the fluorescence of oil with a relatively wide (50 nm) bandpass filter centered at 575 nm. We present the characteristics of our system in the following discussion. Key characteristics of the transmitter and receiver are in Table 1 and Table 2. The data acquisition system provides remote control and diagnostics of the instrument, even if there is no safe access to the ship's bow. As a result, it can run 24/7, even without the possibility of manual adjustment or repair, even during several weeks of boat deployment in storm conditions. A master laptop computer sends all parameters (PMT gains, gate timing) and control commands to the lidar, and the lidar system box stays sealed. Figure 2: Figure of the optical transmission path of the NRL shipboard lidar. A GPS/IMU unit collects attitude and position information for each laser shot, which goes into the lidar data stream in real time. Due to the proximity to the surface, the signal-to-noise is typically higher than for airborne or spaceborne systems (3 to 5 order of magnitudes). In addition, a custom compression algorithm created and implemented by NAWCAD allows the system to go beyond the 11 Effective Number of Bytes of the digitizer and reach a dynamic range \\(>\\)14 Bytes. However, even low signal-to-noise lidars will be sensitive enough to monitor the water body down to some depth, and a well-designed ocean lidar system has a sufficient vertical resolution and penetration depth to detect the features of interest. A critical difference between this lidar and most other operating systems is the high vertical resolution of the oceanic feature it can detect (14 cm underwater). This is a key characteristic of this system, that makes the dataset so unique. 
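As a quick sanity check of the 14 cm figure quoted above, the sketch below recomputes the underwater sampling interval from the 800 MHz digitization rate, assuming a refractive index of seawater of about 1.33 at 532 nm (the refractive index value is an assumption for this illustration, not a number taken from the system specification).

```python
# Underwater vertical sampling: half the distance light travels in water during one digitizer sample.
C = 299_792_458.0        # speed of light in vacuum (m/s)
N_WATER = 1.33           # assumed refractive index of seawater at 532 nm
F_DIGITIZER = 800e6      # digitization rate (Hz)

sample_interval = 1.0 / F_DIGITIZER                 # 1.25 ns per sample
bin_size = C / N_WATER * sample_interval / 2.0      # two-way path -> divide by 2
print(f"underwater range bin: {bin_size * 100:.1f} cm")  # ~14.1 cm, consistent with the text
```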
Additionally, the low speed of the boat and relatively fast sampling rate create an almost stationary measurement where the lidar does not move. However, the ocean feature evolves under it as a function of time. From a scientific point of view, it is complementary to airborne and spaceborne lidar observations. These platforms travel so fast that they are better adapted to study the spatial scale of ocean features. ## 3 Field mission research objectives The lidar was on the bow of the R/V Sikuliaq from 4th December 2019 to 23rd December 2019. The winter deployment in the Gulf of Alaska was in collaboration with the UNOLS cruise of the \"Wave breaking and bubble dynamics\" (Breaking Bubble), led by Principal Investigator (P.I.) J. Thomson, co-P.I. M. Derakhti and funded by the National Science Foundation (NSF). The project aimed to understand the turbulence beneath waves breaking at the ocean surface. The dynamics associated with bubble plumes generated during the breaking process are a particular focus. P.I. Thomson invited the NRL researchers to bring the lidar on the cruise. The NRL internal IMPACT 6.2 project supported the NRL researchers' participation in this work. This project aims to derive the vertical properties of bubble clouds with lidar technology and use this information to understand the ocean environment better. ## 4 Meteorological conditions and sea state As shown in Fig. 3, during the 18 days at sea exercise, the R/V Sikuliaq experienced several storm conditions associated with the passage of low-pressure fronts below 1,000 hPa. The average wind speed value was 9.7 m/s (\\(\\pm\\)4.8), with a minimum of 0.05 and a maximum of 26.3 m/s (with wind gusts up to 32.9 m/s). Wave heights ranged from 3 to 10 m, with extreme wave events in the area as recorded by the Swift buoys [18, 19] up to 17 m. The wind originated primarily from the West and South-West (average 216.08 \\(\\pm\\) 64.44 degrees). Water salinity, temperature, and chlorophyll-a content were relatively stable at 32.14 Figure 3: Top: track of the R/V Sikuliaq across the Gulf of Alaska. Bottom left: pressure levels (m/s) during the travel. Bottom right: same as left but for wind speed (m/s). \\(\\pm\\) 0.14 gram of salt per 1000 grams of water (or Practical Salinity Unit, p.s.u), 10.22 \\(\\pm\\) 1.53\\({}^{\\circ}\\)C, and 1.94 \\(\\pm\\) 0.32 mg.m-3, respectively. Regarding shipboard conditions, the boat experienced regularly 20 to 30-degree roll as the multidirectional wave systems made it difficult to find a stable heading for the ship. The ship's roll for the data presented in this paper is in Fig. 4. The conditions were ideal for finding bubbles generated by breaking wave events due to the high wind speed (Fig. 3, bottom right). These conditions do not affect too much the lidar initial setup, and the off-nadir angle is 6.7 degree on average, very close from the initial 6.3 degree setup. On average, the lidar height is 9 m above the water surface. This determines the calibration altitude, as discussed in section 5. The lidar acquired data while the boat faced the wind and maintained a forward speed of around 0.5 to 1 m/s. Overall, we gathered data from the 6\\({}^{\\text{th}}\\) to 22\\({}^{\\text{nd}}\\) (system setup and initial tests on the 5\\({}^{\\text{th}}\\), start to clean up and package on the 23\\({}^{\\text{rd}}\\)), which resulted in the collection of more than 113 hours of data at 50Hz (around 20M ocean profiles) that span different winds and wave conditions. 
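The data volume quoted above (more than 113 hours of data at 50 Hz, around 20 million profiles) can be verified with one line of arithmetic; the snippet below is only a consistency check on the numbers stated in the text.

```python
hours = 113          # hours of data collection reported above
shot_rate = 50       # laser shots per second
profiles = hours * 3600 * shot_rate
print(f"{profiles:,} profiles")  # 20,340,000, consistent with "around 20M ocean profiles"
```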
This lidar dataset allows us to understand the statistical occurrence of bubbles in the ocean as measured by a lidar. It is the first published result of this kind. ## 5 Data calibration procedure The dataset is analyzed and presented for every individual laser shots. The lidar records data in its own reference frame, and provides information as a function of the lidar distance. The boat and lidar GPS/IMU allow to correct for the lidar altitude and attitude variations. For clarity of the presentation, the shots are presented as vertical in the pictures, but as previously explained, they are on average taken at a 6.7 off-nadir angle. As a consequence, the bubble depth we discuss is biased low. The average bias would be easy to correct, but it would raise the question on why to not correct on a shot to shot basis. If such a correction is applied, the data would need to be presented in 3 dimensions, which is not how bubbles data are usually presented. It would be unique and extremely interesting to discuss the 3 dimensional structure of the bubble clouds, but it is not within the scope of this paper which focuses on the lidar observations and the algorithm. We used both the atmospheric backscatter and the ocean surface as calibration targets. Interestingly, we found that using the ocean surface calibration return as a calibration target is not trivial for the shipboard lidar. In contrast, it works very well for space lidar [14]. The exact cause would require more investigation. However, one issue seems to be that the wave systems we experienced were loosely related to wind speed. The ship did not roll significantly less Figure 4: Time series of the ship’s roll as a function of time for the lidar data under analysis. (Fig.3 and Fig. 4) when the speed decreased from 20 m/s to 10 m/s. In addition, preliminary analysis (not shown) seems to indicate that the slope of the waves varied so much that the reference for the mean square slope of the waves would need to be adjusted (i.e., the ocean surface return changes significantly between the two sides of a large wave). We have never noticed this issue in the calibration of the CALIPSO lidar, probably because of the larger laser footprint and because waves are, in statistics, smaller than what we experienced during this cruise. This variability with the incidence angle is the reason why we did not use the ocean surface as the calibration reference for this paper. Previous publications describe the general principle of the atmospheric backscatter calibration procedure based on Rayleigh scattering [20, 21]. For a shipboard lidar, the accuracy of this calibration will be much lower than, for example, a space or airborne lidar, which can rely on clear air in the upper atmosphere as a calibration target [22, 23]. When the lidar is at low altitude, the presence of aerosols in the atmospheric return cannot be overlooked, and the calibration procedure requires a specific methodology to limit the uncertainty due to aerosol contamination. The air temperature and pressure are part of the standard measurements from the RV Sikuliaq. They are performed with a fan-aspirated MET4A Meteorological Measurement Systems by Paroscientific, Inc, mounted on the forward mast. The pressure accuracy is better than \\(\\pm\\)0.08 hPa, and the temperature accuracy is better than \\(\\pm\\)0.1\\({}^{\\circ}\\)C. These measurements are used directly to determine the air density and the backscatter of air molecules. This is the calibration reference for the lidar signal. 
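A minimal sketch of that calibration step is given below: the molecular number density is computed from the measured pressure and temperature with the ideal gas law, converted to a Rayleigh backscatter coefficient at 532 nm, and used to scale the raw signal averaged in a clear-air window above the surface. The Rayleigh backscatter cross-section value, the window bounds, and the variable names are assumptions for illustration; the actual processing also includes the window placement and median-based aerosol filtering described in the next paragraphs.

```python
import numpy as np

K_BOLTZMANN = 1.380649e-23      # J/K
BACKSCATTER_XSECTION = 6.0e-32  # assumed Rayleigh backscatter cross-section at 532 nm (m^2/sr)

def molecular_backscatter(pressure_hpa, temperature_c):
    """Molecular (Rayleigh) backscatter coefficient (m^-1 sr^-1) from ship P and T."""
    number_density = (pressure_hpa * 100.0) / (K_BOLTZMANN * (temperature_c + 273.15))
    return number_density * BACKSCATTER_XSECTION

def calibration_constant(raw_profile, heights_m, pressure_hpa, temperature_c,
                         window=(3.5, 4.2)):
    """Scale factor mapping raw counts to backscatter, using clear air above the surface."""
    raw_profile = np.asarray(raw_profile, dtype=float)
    heights_m = np.asarray(heights_m, dtype=float)
    in_window = (heights_m >= window[0]) & (heights_m <= window[1])  # height above the surface
    beta_mol = molecular_backscatter(pressure_hpa, temperature_c)
    return beta_mol / raw_profile[in_window].mean()

# Example with synthetic numbers (not real instrument counts).
heights = np.linspace(3.0, 5.0, 21)
signal = np.full_like(heights, 1500.0)
print(calibration_constant(signal, heights, pressure_hpa=1005.0, temperature_c=5.0))
```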
In order to take into account the movement of the boat and the ocean surface height changes, the calibration reference is the average lidar signal between 3.5 and 4.2 m above the ocean surface, without any extrapolation of the vertical profile shape of the atmospheric density (i.e. the procedure used in [24] was unnecessary). With an average 9 m distance from the water, this procedure is a compromise to have enough data for the aerosol filtering procedure, far enough from the lidar but not too close to the ocean surface. In order to limit the influence of heavy sea spray events on the statistics, we make the calibration on profiles with an atmospheric backscatter coefficient equal to the median value for the whole file. Although this does not correspond to the minimum of aerosols, this ensures that we have enough data within the file considered for the calibration while lowering the amount of aerosol contamination. Due to the presence of aerosols, we anticipate that the accuracy of this calibration procedure is low and the ocean backscatter coefficients are biased low. Assuming the average aerosol optical thickness of 0.13-0.14 [25] to be spread evenly within a 500 to 1000 m boundary layer and a value of lidar ratio of 20-25 sr [26, 27], the calibration is biased low by a factor 4 to 8 (i.e., up to one order of magnitude of calibration error). This aerosol background does not affect the bubble mask, as it does not impact the depolarization in the ocean, but this is important for discussing void fraction retrieval. The improvement of the void fraction retrieval accuracy is a crucial motivator to use the ocean surface as a calibration target, and the development of a correction accounting for the slope of the waves in the future. ## 6 Results and discussion ### Lidar depolarization and multiple scattering considerations As expected from [16], bubble clouds significantly affect the cross-polarization channel. Moreover, it is much more noticeable than in the co-polarization channel. This difference between co-polarized and cross-polarized channels is interesting, considering the signal intensity relates to the void fraction [28], and the bubble clouds were visible in the co-polarized channel in the laboratory environment [29]. For the dataset presented in this study, the co-polarization signal is slightly larger in presence of bubbles. As shown in section 6.5, it makes it possible to remove the contribution from the water molecules and biological content, to create a void fraction retrieval algorithm. It would however, be challenging to accurately determine the presence of bubbles from the co-polarization signal alone. This is a major difference with the results of [29], using the same system in the laboratory, three months prior to this deployment. We attribute this difference to the larger contribution of the biological content in the ocean to the co-polarized signal background, but it could also be that the void fraction of the bubble cloud in the laboratory environment was significantly larger than anything we observed in the Ocean. The detection of depolarization of spherical particles in the backscatter direction implies multiple scattering of the lidar beam in an optically dense medium. For the NRL shipboard lidar, because the system is so close to the target, it is not detecting light scattered back at 180 degrees (i.e., the backscatter direction). The exact angle will depend on the distance to the target. 
How this slight angle deviation affects the measured signal can be shown by looking at the M12 element of the Mueller matrix of spherical particles [30]. As shown in Fig. 5, this element has a significant variability between 180 and 177 degrees. However, this single scattering calculation has little meaning if the multiple scattering regime applies to the lidar observations, as it will change the scattering geometry. It does mean that the NRL shipboard lidar should, within its sensitivity limits, be able to detect depolarization by bubble features optically thin enough to fall into the single scattering regime. In that case, observing the same bubble cloud with a change of the angle of observation due to the boat attitude or wave height change will provide information on the particle sizes.

Figure 5: Illustration of the depolarization element of the Mueller matrix for spherical particles near the backscatter direction. The calculation uses the distributions proposed by [30] (15 \\(\\upmu\\)m solid line, 30 \\(\\upmu\\)m dashed line).

We expect most instances of bubbles in the ocean to be optically dense and the single scattering considerations to not be relevant. In that case, an important matter to understand is which regime of multiple scattering the lidar observations fall into [31, 32]. Because of the ambiguity of the time of the scattering events, the lidar's bubble depth estimate becomes inaccurate in the presence of wide-angle multiple scattering. This is due to side scattering events being measured as if they are coming from a greater distance. Wide-angle multiple scattering of light occurs when the width of the "footprint" (X) projected by the field of view of the receiver at the range of the target is of the same order as the transport mean free path (MFP) of light [32]. In other words, the lidar can accurately measure the vertical extent of the bubble field only if

\\[X<<\\frac{MFP}{\\left(1-\\omega_{0}g\\right)} \\tag{1}\\]

The single scattering albedo \\(\\omega_{0}\\) for air bubbles is approximately equal to 1 [28, 30]. The asymmetry factor g is approximately 0.8443 [30]. The NRL shipboard lidar receiver has an angle of 140 mrad, corresponding to a telescope footprint of around 1.2 m. This value varies slightly with the waves' height, the lidar's attitude (pitch, roll, yaw), and the boat's heave. For the conditions of Eq. 1 to apply, the extinction due to the scattering of the bubble cloud must be much lower than 5.35 m-1. The statistical characteristics of the profile of the lidar backscatter coefficient provide the order of magnitude of the extinction coefficient due to the bubbles, which can then be compared to this value. The statistic of the profile is shown in Fig. 6. Even if the backscatter itself decreases as a function of depth as fewer and fewer bubbles reach these levels, the profile is monotonic enough that the average slope provides the proper order of magnitude for this coefficient. Even if the signal in the bubbles stays valid down to 20-30 m (see sections 6.2 and 6.3), it is clear from Fig. 6 that the best quality of the backscatter signal for clear water is limited to a depth of 5-10 m. No geophysical explanation exists for the change of backscatter coefficient slope around 5 m and 12 m. The increase of the backscatter coefficient as a function of depth (below 12 m) is a signal artifact. It is important to be aware that, even if this is typically not a limiting factor, the NRL shipboard lidar is designed to study features with stronger scattering than clear water.
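As a quick numerical check of the wide-angle criterion in Eq. 1, the sketch below reproduces the extinction threshold quoted above from the stated values of the footprint (X of about 1.2 m), the single scattering albedo (about 1), and the asymmetry factor (about 0.8443). The identification of the MFP with the inverse of the scattering extinction is an assumption made only for this back-of-the-envelope calculation.

```python
# Wide-angle multiple scattering criterion (Eq. 1): X << MFP / (1 - w0 * g)
X  = 1.2      # telescope footprint at the target (m)
w0 = 1.0      # single scattering albedo of air bubbles
g  = 0.8443   # asymmetry factor of the bubble phase function

# Treating MFP as 1/extinction, Eq. 1 gives an upper bound on the bubble scattering extinction.
extinction_threshold = 1.0 / (X * (1.0 - w0 * g))
print(f"extinction must be << {extinction_threshold:.2f} m^-1")  # ~5.35 m^-1, as quoted in the text
```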
The average slope of the logarithm of the backscatter coefficient is -0.24 m-1, which corresponds to an extinction coefficient of 0.24 m-1 if we assume a multiple scattering coefficient of 0.5 [31, 33]. This extinction coefficient is related to an MFP significantly below the threshold to create wide-angle multiple scattering, even for a different value of the multiple scattering coefficient and with a different assumption relating the scattering MFP to the lidar observations. This statement comes from the fact that the theoretical maximum of the multiple scattering coefficient should be close to 0.1 for full isotropization of light polarization in a dense medium [34, 35]. As a side note, the 90 m footprint of the CALIPSO lidar implies that this system is well within the wide-angle scattering regime for underwater bubble clouds.

Figure 6: Illustration of the average backscatter coefficient of bubble and bubbleless lidar profiles.

Concerning small-angle scattering, following [32], the criterion is

\\[MFP\\frac{\\lambda}{\\pi a}\\lessapprox X \\tag{2}\\]

For the MFP corresponding to our observations, all bubbles with a radius \\(>\\) 0.5 \\(\\mu\\)m are in the small-angle scattering regime. Although the exact number of such small bubbles is typically not measured by acoustic sensors, a peak of the bubble size distribution around 10 to 30 \\(\\mu\\)m has been suggested in the past [30, 36]. The tank experiment we conducted in the breaking wave tank of the littoral high bay of the Laboratory of Autonomous System Research [29] seemed to peak around 1 - 2 \\(\\mu\\)m based on acoustic resonance estimates, so even these artificially generated wave conditions would fall under this scattering regime. In addition to the capability to measure the depth of bubble clouds, the impact of small-angle multiple scattering is an apparent reduction of the laser attenuation. Once the attenuation is corrected, the signal is identical to the single scattering return, and the underlying assumption of the formalism of [28] applies to our dataset (see section 6.5).

### Bubble feature detection

Whitecaps and bubbles have a clear depolarization signature. This first version of the bubble mask relies on a simple threshold of depolarization, defined as the ratio of the cross-polarization channel to the co-polarization channel. After removing some detection artifacts (data with linear depolarization larger than 1), only the data with a depolarization ratio larger than 0.015 are considered bubbles. To determine the bubble depth, the algorithm also includes a criterion of continuity. The bubble depth corresponds to the number of continuous data points below the water surface whose depolarization value is above this threshold. This continuity criterion removes some false positives, which manifest as isolated points due to the noise. However, it prevents the detection of bubble clouds that do not connect to the surface. The results of the bubble mask are illustrated in Fig. 7. As previously mentioned, the intensity (co-polarization) and depolarization features are different, with no clear features visible in the lidar intensity. There are a few false positives remaining above the ocean surface, marked by the yellow/red curves on the intensity (Fig. 7a). The bubble depth calculation removes these false positives as they are not a continuous structure below the ocean surface.
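The following is a minimal sketch of the bubble mask logic described above: a depolarization-ratio threshold combined with a continuity criterion that counts consecutive bins below the surface. The array layout (one profile as a vector of depolarization ratios ordered from the surface downward), the 0.14 m bin size, and the function names are assumptions for illustration; the operational code also handles surface detection, attitude corrections, and artifact removal that are omitted here.

```python
import numpy as np

DEPOL_THRESHOLD = 0.015   # depolarization ratio above which a bin is flagged as bubbles
MAX_VALID_DEPOL = 1.0     # larger values are treated as detection artifacts
BIN_SIZE_M = 0.14         # assumed underwater vertical sampling (m)

def bubble_mask(depol_profile):
    """Boolean mask of bubble bins for one subsurface depolarization-ratio profile."""
    depol = np.asarray(depol_profile, dtype=float)
    valid = depol <= MAX_VALID_DEPOL          # remove artifacts (depolarization > 1)
    return valid & (depol > DEPOL_THRESHOLD)

def bubble_depth(depol_profile):
    """Bubble depth (m): count of continuous flagged bins starting at the surface bin."""
    mask = bubble_mask(depol_profile)
    n_continuous = 0
    for flagged in mask:                      # stop at the first bin that is not bubbles
        if not flagged:
            break
        n_continuous += 1
    return n_continuous * BIN_SIZE_M

# Toy profile: strong depolarization near the surface, decaying with depth.
profile = [0.20, 0.12, 0.05, 0.02, 0.01, 0.004, 0.02]
print(bubble_depth(profile))  # the isolated 0.02 at the end is ignored by the continuity rule
```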
The conical structures of bubbles measured by the lidar from depolarization are qualitatively similar to previous studies of bubbles created by breaking waves [37, 38]. It is important to stress that the lidar has more capabilities than the simple approach presented here. As shown in Fig. 8, we can separate the continuum of observations into four broad domains: low surface and subsurface depolarization (bubbleless ocean), high surface depolarization with low subsurface depolarization (whitecaps), high surface and subsurface depolarization (extended bubble clouds) and low surface but high subsurface depolarization (underwater bubble clouds). The distinction between these different domains may be necessary for future research based on a more complex bubble mask, which would compare the lidar measurement with previous studies of whitecaps. Very few studies rely on lidar systems to study bubble properties, and the sensors used in most of the other research have different limitations. Typically, non-lidar instruments above the water surface can only detect the whitecaps with limited vertical sensitivity, but contrary to instruments deployed underwater, they have to advantage to not affect the water flow and the bubble properties. Active acoustic instruments can measure the bubble vertical profile, but the signal is dominated by the resonant frequency. The comparison with the lidar is interesting, especially as an indication of the bubble size distribution [29]. Underwater cameras can measure the bubble size directly, as long as it belongs to a limited range of bubble sizes. Figure 7: (a): lidar attenuated backscatter intensity (decimal logarithm) for a segment starting at 11\\({}^{\\text{th}}\\) Dec 2019 23:09Z, (b): same as a) but for the lidar depolarization, (c): same as b) but the non-underwater bubble features have been removed with the bubble mask (above water features are filtered at a later step). Both b) and c) share the same color code (decimal logarithm of the depolarization). ### Maximum penetration depth As described in the section 6.1, the NRL shipboard lidar observations fall into the regime of small-angle multiple scattering. Consequently, the lidar measures the vertical extension of the bubble field directly, and the presence of the scattering forward peak increases the penetration depth. The bubble feature detection algorithm is described in section 6.2. As previously explained, this bubble mask is a depolarization threshold as bubbles depolarize the lidar signal significantly. Fig. 9 illustrate one extreme case of bubble depth (Fig. 9a), and the bubble depth statistic from the bubble mask (9b). The bubble mask quantifies this system's maximum penetration depth in bubble clouds. As shown in Fig. 9a) and b), bubble depths over 25 m and up to 28 m are part of this dataset. However, most bubble cloud observations extend between 0 and 10 m, and the observed bubble depth does not reach 30 m. Because of the apparent low occurrences of these deep clouds and the novelty of the bubble mask, the data represented in Fig. 9b) do not allow to discriminate between a limitation of the lidar sensitivity, a physical limitation of bubble injection processes, or a limitation of the bubble depth in this specific dataset. This observed depth is extremely encouraging, considering that [39] maximum observed bubble depth from one year of echo sounders data very infrequently reaches 38 m deep, with most of their data also in the 0 to 10 m range. 
[1] present downward-looking echosounder data from the same deployment (R/V Sikuliaq) with mean and maximum bubble plume penetration depths that exceed 10 m and 30 m, respectively. Note that even if there is very good agreement between lidar and echo sounder data in bubble plume structure [29], the exact detection threshold of bubbles is instrument and algorithm dependent. The lidar algorithms and the dataset presented in this study are too new to quantify exactly the detection threshold in terms of bubble properties, but it seems extremely close to the detection algorithms discussed in [1, 29, 39]. Ideally, we can refine the void fraction algorithm accuracy in the future to provide a void fraction threshold for the bubble mask.

Figure 8: Depolarization of the subsurface signal (higher for extended bubble clouds) as a function of the depolarization of the surface signal (higher for whitecaps). The color code is the decimal logarithm of the number of observations.

The lack of correlation between bubble depth and wind speed in our dataset is unusual and has two origins. First, we were in conditions of a very steep swell, above the breaking threshold, and bubble clouds formed even in relatively low wind speed conditions. This is very interesting in terms of ocean physics, but beyond the scope of this paper, which focuses on what the lidar observations were. Second, if we consider a bubble cloud (Fig. 9a), there are many values of the bubble depth for a single bubble cloud. This paper presents high resolution observations, whereas the maximum or average bubble depth is the quantity that is typically correlated with wind speed [39].

Figure 9: a) Depolarization of the lidar signal for one extreme case of bubble depth. Green features are bubbles, and blue features the ocean (bubbleless) water. b) Statistic of bubble depth from the bubble mask (i.e., only features going from the surface to some depth) as a function of wind speed. The magenta dot in b) is the depth of the bubble cloud shown in a). The color code is a) the decimal logarithm of the depolarization and b) the decimal logarithm of the number of observations.

### Whitecaps contribution in the lidar equation

Interestingly, no strong gradient of the surface return intensity is associated with the presence of the bubble clouds. Note that a moderate increase in the signal intensity is still present, which allows us to calculate a void fraction (see section 6.5). The signal intensity increase appears more clearly in an experiment that we conducted in the breaking wave tank of the NRL Laboratory for Autonomous System Research in September 2019 [29]. The smooth transition between the bubbly and bubbleless ocean is a new result, as the current lidar equation formalism [11, 15] creates a clear boundary between the specular reflectance of the ocean and the reflectance of whitecaps. It can be seen from Eq. 2 and Eq. 21 of [15] that the specular reflectance term will become close to 0 as W converges towards 1. If this formalism were correct, a clear horizontal intensity gradient would be associated with the strong depolarization induced by the bubbles. As we can see from Fig. 10, there is no correlation between the horizontal gradient of the ocean surface return and the gradient of surface depolarization (correlation coefficient R = -0.1871, coefficient of determination R\\({}^{2}\\) = 0.035), so a continuum of states would be more appropriate to describe the physics of bubbles in the ocean.
This seems to imply that the correct formalism would be for the specular reflectance to not be a function of the presence of bubbles, while the whitecaps would continue to be an additive term. In other words, [15] derived the following equation for the lidar specular reflectance \(\gamma_{s}\) at an off-nadir angle \(\theta\):

\[\gamma_{s}=\frac{(1-W)\rho}{4\,\cos^{5}\theta}p\big{(}\zeta_{x},\zeta_{y}\big{)}\]

The fraction of the surface covered with whitecaps is \(W\), and \(p(\zeta_{x},\zeta_{y})\) is the probability of the wave slopes \(\zeta_{x}\) and \(\zeta_{y}\) in the along- and cross-wind directions, respectively. \(\rho\) (sr\({}^{-1}\)) is the Fresnel reflectance coefficient at nadir angle. Neglecting the subsurface return, the meaning of this equation is that for a given surface of the ocean, there are patches with whitecaps, which have the whitecap surface reflectance, and patches without whitecaps, which have the specular reflectance. Assuming the whitecap reflectance is correct, our observations suggest that the way the whitecap coverage fraction \(W\) appears in the specular reflectance is not appropriate and, at least for high resolution datasets, would be more accurately described by

\[\gamma_{s}=\frac{\rho}{4\cos^{5}\theta}p\big{(}\zeta_{x},\zeta_{y}\big{)}\]

Note that it could also be an issue with the definition of \(W\), which may be ill-suited for high resolution data since, in that case, \(W\) should be either 0 or 1. Addressing the lidar equation will be the subject of further study. [15] did not address the validity of the whitecaps term, and we will need to revisit the lidar equation to ensure consistency between the lidar and passive observations of whitecaps based on both theory and this dataset. The IMPACT project also included collocated observations of the whitecap coverage fraction from a camera with lidar bubble profile observations in 2022 in Greenland and Iceland, and the analysis of this dataset could be necessary for the next step of the work on the lidar equation. In the meantime, the fact that the bubble contribution is an additive term allows us to use [28] as is, as long as a surface correction is included. [28] mentions surface contamination as an error source, which is possible only if the specular reflectance and the bubble signal both contribute to the lidar signal. This underlying assumption is definitely supported by this dataset (Fig. 10), and it is enough to adapt that formalism to our dataset. However, the inconsistency with the lidar theory will need to be addressed in the future.

### Void fraction retrieval

The relationship between the lidar backscatter coefficient and the bubble void fraction has been derived by [28]. The link is a simple multiplicative constant, and the lowest value of this constant is for bubbles without surfactants. Before this multiplicative constant can be applied to the NRL lidar observations, the signal has to be corrected for the total attenuation, and the scattering contribution from the background (water molecules, biology) needs to be removed so that only the signal from the bubble cloud remains. In order to do so, for all profiles with bubbles:

- Using the bubble mask, the algorithm first selects all the profiles without bubbles (so this includes water molecules and biology). It then determines the extinction from the signal decrease as a function of depth in the average profile.
This dataset's average extinction is around 0.1083 m\({}^{-1}\), consistent with the water chlorophyll content and the diffuse attenuation of previous studies [40].

- Using the backscatter intensity just below the bubble clouds, the algorithm then determines the attenuation of the scattering by water molecules. Specifically, for each profile, we store the logarithm of the backscatter intensity 0.47 m below the lowest depth as determined by the bubble mask. Going slightly below the bubble cloud minimizes the likelihood of still having bubbles in the signal and allows us to measure the backscatter of water molecules attenuated by the bubble cloud. Fig. 11 shows this attenuation value. The data suggest that there could be at least two or three regimes of attenuation of the bubble clouds (discontinuity of high attenuation around 4-8 m and low attenuation around 2-6 m). Due to the novelty of the bubble mask, it is preferable not to reach too many conclusions at the moment. To simplify the correction procedure for this first version and to lower the likelihood of divergence of the retrieval, we use the retrieved extinction down to 1.5 m. This depth corresponds to the highest number of data points. In addition, only one attenuation regime seems to exist below this depth in Fig. 11. For deeper observations, the two-way extinction is set at 1.0892. This value corresponds to the decay of the logarithm of the average backscatter of bubbles between 2 and 5 m (Fig. 11). Furthermore, this lower attenuation value is consistent with less attenuation as the bubble density decreases. This simplification should be revisited in future versions of the algorithm.

Figure 10: Gradient of the surface intensity (co-polarization channel) as a function of the gradient of the surface depolarization. The color code is the decimal logarithm of the number of observations.

- After correction of the total extinction, the average water-molecule backscatter coefficient can be subtracted from the profile to retrieve the bubble backscatter coefficient and the associated void fraction.

An example of the void fraction retrieval is shown in Fig. 12 for the data of Dec 11, 2019, at 21:13Z. In this case, the bubble cloud on the left (around second 17) has a much lower value of the void fraction at the surface than the bubble cloud observed a few seconds later (around second 19). This difference in void fraction could imply that the left-hand cloud is older than the bubble cloud at the right of the picture.

Figure 11: a) decimal logarithm of the signal below the bubble clouds. The color code is the decimal logarithm of the number of observations. b) Two-way transmission of the bubble cloud.

The void fraction estimates depend on several algorithms that are either new or newly applied to the NRL shipboard lidar. These algorithms include the bubble mask, the calibration procedure, and the correction for attenuation. We anticipate applying these algorithms to the whole shipboard lidar dataset (from other deployments, most without bubbles but with phytoplankton/zooplankton layers). The analysis of this extended dataset will provide further insight into the domain of validity of this algorithm and the associated uncertainty. Beyond the current issue related to the lidar equation, there are also significant uncertainties due to the calibration coefficient and the lack of knowledge of the bubble surfactant.
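The steps above amount to an extinction fit on bubble-free profiles, a two-way attenuation correction, and a background subtraction, followed by the multiplicative conversion of [28]. The sketch below is a deliberately simplified, single-attenuation-regime version of that pipeline; the conversion constant `k_bubble` and the synthetic numbers are placeholders, and the depth-dependent two-way transmission actually used for deeper observations (Fig. 11) is not reproduced.

```python
import numpy as np

def fit_extinction(clear_profiles, depth):
    """Estimate diffuse attenuation from the mean bubble-free profile.

    clear_profiles : (n_profiles, n_bins) attenuated backscatter
    depth          : (n_bins,) depth in metres (positive downward)
    Returns the extinction coefficient c (m^-1) from a log-linear fit,
    since ln(beta_att) ~ ln(beta) - 2*c*z for single scattering.
    """
    mean_prof = clear_profiles.mean(axis=0)
    slope, _ = np.polyfit(depth, np.log(mean_prof), 1)
    return -slope / 2.0

def void_fraction_profile(profile, depth, extinction, beta_water,
                          k_bubble=1.0e-5):
    """Convert one bubbly profile to a void-fraction profile.

    k_bubble is a placeholder for the multiplicative constant relating
    bubble backscatter to void fraction (lowest for surfactant-free
    bubbles); it is NOT the calibrated value.
    """
    corrected = profile * np.exp(2.0 * extinction * depth)   # undo two-way loss
    beta_bubble = np.clip(corrected - beta_water, 0.0, None)  # remove background
    return k_bubble * beta_bubble

# Synthetic illustration: exponential clear-water decay plus a bubble layer.
depth = np.linspace(0.5, 30.0, 60)
c_true, beta_w = 0.11, 1.0e-3
clear = beta_w * np.exp(-2.0 * c_true * depth) * np.ones((50, depth.size))
c_hat = fit_extinction(clear, depth)

bubbly = clear[0].copy()
bubbly[:10] += 5e-3 * np.exp(-2.0 * c_true * depth[:10])  # bubbles in top ~5 m
vf = void_fraction_profile(bubbly, depth, c_hat, beta_w)
print(f"fitted extinction {c_hat:.3f} m^-1, peak void fraction {vf.max():.2e}")
```

In this sketch the calibration constant and the unknown surfactant state enter only through `k_bubble`, which is where the largest uncertainties reside.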
However, due to error compensation (i.e., the bias in these two factors partially compensates each other), we anticipate that the overall effect on the void fraction retrieval is a bias that cannot be higher than the calibration bias. As mentioned in section 5, the accuracy of our first algorithm is relatively low (factor 4 to 8 uncertainty of the calibration procedure, so almost one order of magnitude). The advantage of lidar for bubble research is that it is largely independent of the limitations of other instruments. There is no limitation concerning bubble properties related to depth or bubble size range. It is an advantage to currently available techniques to measure bubble properties. Lidar is limited in how deep they can monitor the water body. This limitation is a function of the hardware, software, and water turbidity. For bubbles, the strong scattering signal and the reduction of attenuation from the small-angle multiple scattering regime optimizes this feature's detectability. The ocean bubbles are an ideal target for an ocean lidar system. Lidar provide a new insight into the physics of the ocean at high wind speed, and the Figure 12: Example of void fraction retrieval for a lidar case on Dec 11, 2019 (21:13:00Z). The color code is the decimal logarithm of the void fraction. lack of lidar research in this topic (lack of dataset, lack of theory agreement with the observations) should guarantee fast progress. ## 7 Perspectives As a short term, next step, this work allows us to determine the link between several bubble properties and the lidar measurements. Specifically, there is a link between bubble properties (bubble depth, void fraction) and the integrated lidar depolarization. This link creates estimates of bubble properties measured by a space lidar with depolarization. Preliminary results of global scale bubble depth maps are encouraging, and we anticipate presenting them in a future work. Another interesting aspect of this study is the intrinsic difference between passive sensor observations of the bubble field in natural ocean conditions and what is detected by a lidar system that can penetrate the water surface and observe the whitecaps and various intensities of spray. Specifically, the water surface scattering properties that come from the statistic of the wave slope distribution do not exhibit drastic changes at the boundary between the bubbleless part of the ocean and waters with either whitecaps or extended bubble clouds. The ocean manifests a feature continuum above or below the water's surface from the lidar perspective. As previously discussed, this will help to guide future research related to the lidar equation. ## 8 Conclusions Lidar is an ideal tool for obtaining information about the bubble environment. The bubbles create a strong, unambiguous depolarization, and the lidar simultaneously provides the context of the air-sea interface (surface height). A bubble mask is straightforward to create from a lidar with depolarization. It provides the vertical information of the bubble cloud structure that is consistent with other research based on acoustic echo sounders data. The multiple scattering regime of lidar observations can be derived based on both theory, and a statistic of the observations. The void fraction retrieval algorithm is complex, but does converge towards a solution that is relatively reasonable. It does not create negative values, or instabilities. 
The uncertainty is large but typical of backscatter lidar limitations (calibration accuracy, attenuation correction, scatterer refractive index). The relatively high uncertainty in this first version of the algorithm is due in large part to the novelty of the application. Subsequent versions will be more accurate. Bubbles have the advantage of showing a very strong scattering signature, and there are physical limits that bind the void fraction retrieval. A void fraction cannot be larger than 1, and the retrieval accuracy will increase as future lidar experiments validate our results. The smoothness of the observation is inconsistent with the currently derived lidar theory of whitecaps, and it suggests that future research should address this issue. ## Acknowledgements Funding of this research was provided by the NRL internal project (IMPACT). P.I. J. Thomson and co-P.I. Morteza Derakhti are greatly acknowledged for allowing the NRL researchers to be onboard, mount and use the NRL shipboard lidar on the bow of the R.V. Sikuliaq during the Wave breaking and bubble dynamics project (Breaking Bubble) funded by the National Science Foundation (NSF). We would like to acknowledge the help of Lucia Hosekova with the meta data. The crew of the RV Sikuliaq is greatly acknowledged for supporting this research. Victorija Morris is greatly acknowledged the creation of the 3D model of the mount of the NRL shipboard lidar. ## Disclosures The authors declare no conflicts of interest. The patent is pending for the algorithms presented in this study. It correspond to patent application # 18/476,959. Data Availability Statement The data gathered during NRL funded project are archived at NRL and are usually not available to the public due to the sensitive nature of NRL work. However, NRL has a process in place to request the data to be distributed (close or open release). If you have an interest in this dataset, please contact the main author [email protected], with an explanation of the need, so that we can follow the appropriate release process. ## References * [1] M. Derakhti, J. Thomson, C. S. Bassett, et al., \"Statistics of bubble plumes generated by breaking surface waves,\" ESS Open Archive, DOI: 10.22541/essoar.167751591.11265648/v1 (2023). * [2] J. R. Gemmrich and D. M. Farmer, \"Observations of the Scale and Occurrence of Breaking Surface Waves,\" _J. Phys. Oceanogr._, **29**, 2595-2606, [https://doi.org/10.1175/1520-0485](https://doi.org/10.1175/1520-0485)(1999)029<2595:OOTSAO>2.0.CO;2 (1999). * [3] J. Thomson, M. S. Schwendeman, S. F. Zippel, S. Moghimi, J. Gemmrich, and W. E. Rogers, \"Wave-breaking turbulence in the ocean surface layer,\" _J. Phys. Oceanogr._, _46_, 1857-1870, doi:10.1175/JPO-D-15-0130.1 (2016). * [4] E. J. Terrill, W. K. Melville, and D. Stramski, \"Bubble entrainment by breaking waves and their influence on optical scattering in the upper ocean,\" _J. Geophys. Res.: Oceans,106_, 16,815-16,823, doi:10.1029/2000JC000496 (2001). * [5] R. Manasseh, A. V. Babanin, C. Forbes, K. Rickards, I. Bobevski, and A. Ooi, \"Passive Acoustic Determination of Wave-Breaking Events and Their Severity across the Spectrum,\" _J. Atmos. and Oceanic Tech._, _23_, 599-618, doi:10.1175/JTECH1853.1 (2006). * [6] S. Thorpe, \"On the clouds of bubbles formed by breaking wind-waves in deep water, and their role in air-sea gas transfer,\" _Phil. Tran. R. Soc. Lond. A_, _304_, 155-210, doi:10.1098/rsta.1982.0011 (1982). * [7] D.K. Woolf, I. Leifer, P. Nightingale, T-S. Rhee, P. Bowyer, G. Caulliez, G. 
de Leeuw, S. Larsen, M. Liddicoat, J. Baker, and M.O. Andreae, \"Modelling of bubble-mediated gas transfer; fundamental principles and a laboratory test,\" Journal of Marine Systems, 66(1-4), 71-91 (2007). * [8] M. Rhein et al., \"Chapter 3: Observations: Ocean, in: Climate Change 2013 The Physical Science Basis,\" Cambridge University Press (2013). * [9] W. K. Melville, \"The role of surface-wave breaking in air-sea interaction,\" _Ann. Rev. Fluid Mech._, _28_, 279-321 (1996). * [10] L. Deike, \"Mass transfer at the ocean-atmosphere interface: The role of wave breaking, droplets, and bubbles,\" _Ann. Rev. Fluid Mech._, _54_, 191-224, doi:10.1146/annurev-fluid-030121-014132 (2022). * [11] R. T. Menzies, D. M. Tratt, and W. H. Hunt, \"Lite measurements of sea surface directional reflectance and the link to surface wind speed,\" Appl. Opt., 37 :5550-5559 (1998). * [12] C. Flamant, J. Pelon, D. Hauser, C. Quentin, W. M. Drennan, F. Gohin, Chapron B., and J. Gourrion, \"Analysis of surface wind speed and roughness length evolution with fetch using a combination of airborne lidar and radar measurements,\" J. Geophys. Res., 108 (2003). * [13] Y. Hu, K. Stamnes, M. Vaughan, J. Pelon, C. Weimer, D. Wu, M. Cisewski, W. Sun, P. Yang, B. Lin, A. Omar, D. Flittner, C. Hostetler, C. Trepte, D. Winker, G. Gibson and M. Santa-Maria, \"Sea surface wind speed estimation from space-based lidar measurements,\" Atmos. Chem. Phys., 8, 3593-3601, [https://doi.org/10.5194/acp-8-3593-2008](https://doi.org/10.5194/acp-8-3593-2008) (2008). * [14] D. Josset, J. Pelon, and Y. Hu, \"Multi-instrument calibration method based on a multiwavelength ocean surface model,\" IEEE Geoscience and Remote Sensing letter, 7, 195-199, doi:10.1109/LGRS.2009.2030906, (2010). * [15] D. Josset, P. Zhai, Y. Hu, J. Pelon, and P. L. Lucker, \"Lidar equation for ocean surface and subsurface,\" Opt. Express 18, 20862-20875 (2010). * [16] J. H. Churnside, \"Review of profiling oceanographic lidar,\" Opt. Eng. 53(5) 051405, [https://doi.org/10.1117/1.OE.53.5.051405](https://doi.org/10.1117/1.OE.53.5.051405) (2013). * [17] R. W. Gould, Jr., D. Josset, S. Anderson, W. Goode, R. N. Conmy, B. Schaeffer, S. Pearce, T. Mudge, J. Bartlett, D. Lemon, D. Billenness, O. Garcia, \"Estimating Oil Slick Thickness with LiDAR Remote Sensing Technology,\" Bureau of Safety and Environmental Enforcement (BSEE) Oil Spill Response Research Branch ; [https://www.bsee.gov/sites/bsee.gov/files/research-reports/1091aa.pdf](https://www.bsee.gov/sites/bsee.gov/files/research-reports/1091aa.pdf) (2019). * [18] J. Thomson, \"Wave Breaking Dissipation Observed with \"SWIFT\" Drifters,\" _Journal of Atmospheric and Oceanic Technology_, _29_(12), 1866-1882 (2012). * [19] J. Thomson _et al._, \"A new version of the SWIFT platform for waves, currents, and turbulence in the ocean surface layer,\" _2019 IEEE/OES Twelfth Current, Waves and Turbulence Measurement (CWTM)_, pp. 1-7, doi: 10.1109/CWTM43797.2019.8955299 (2019). * [20] Lord Rayleigh, \"On the light from the sky, its polarization and colour,\" Phil. Mag., XLI :107-120 (1871). * [21] A. T. Young, \"Rayleigh scattering,\" Phys. Today, pages 42-48 (1982). * [22] C. A. Hostetler, Z. Liu, J. Reagan, M. Vaughan, D. Winker, M. Osborn, W. H. Hunt, K. A. Powell, and C. Trepte, \"CALIOP Algorithm Theoretical Basis Document : Calibration and level 1 data products,\" (2006). * [23] J. Kar et al., \"CALIPSO lidar calibration at 532 nm: version 4 nighttime algorithm,\" Atmos. Meas. 
Tech., 11, 1459-1479, [https://doi.org/10.5194/amt-11-1459-2018](https://doi.org/10.5194/amt-11-1459-2018) (2018). * [24] D. Josset, S. Tanelli, Y. Hu, J. Pelon and P. Zhai, \"Analysis of Water Vapor Correction for CloudSat W-Band Radar,\" in _IEEE Transactions on Geoscience and Remote Sensing_, vol. 51, no. 7, pp. 3812-3825, doi: 10.1109/TGRS.2012.2228659 (2013). * [25] L. A. Remer et al., \"Global aerosol climatology from the MODIS satellite sensors,\" J. Geophys. Res., 113, D14S07,doi:10.1029/2007JD009661 (2008). * [26] K. W. Dawson, N. Meskhidze, D. Josset, and S. Gasso, \"Spaceborne observations of the lidar ratio of marine aerosols,\" Atmos. Chem. Phys., 15, 3241-3255, [https://doi.org/10.5194/acp-15-3241-2015](https://doi.org/10.5194/acp-15-3241-2015) (2015). * [27] Z. Li, D. Painemal, G. Schuster, M. Clayton, R. Ferrare, D. Josset, J. Kar, and C. Trepte, \"Assessment of tropospheric CALIPSO Version 4.2 aerosol types over the ocean using independent CALIPSO-SODA lidar ratios,\" amt-2021-378 (2021). * [28] J. H. Churnside, \"Lidar signature from bubbles in the sea,\" Vol. 18, No. 8/OPTICS EXPRESS (2010). * [29] D. Wang et al., \"An Experimental Study on Measuring Breaking-Wave Bubbles with LiDAR Remote Sensing,\" _Remote Sens., 14_, 1680. [https://doi.org/10.3390/rs14071680](https://doi.org/10.3390/rs14071680) (2022). * [30] A. A Kokhanovsky, _J. Opt. A: Pure Appl. Opt._ 5 47 (2002). * [31] E. W. Eloranta, \"Practical model for the calculation of multiply scattered lidar returns,\" Appl. Opt. 37, 2464-2472 (1998). * [32] R. J. Hogan, \"Fast Lidar and Radar Multiple-Scattering Models. Part I: Small-Angle Scattering Using the Photon Variance-Covariance Method,\" _Journal of the Atmospheric Sciences_, _65_(12), 3621-3635 (2008). * [33] D. Josset, J. Pelon, A. Garnier, Y. Hu, M. Vaughan, P.-W. Zhai, R. Kuehn, and P. Lucker, \"Cirrus optical depth and lidar ratio retrieval from combined CALIPSO-CloudSat observations using ocean surface echo,\" J. Geophys. Res.,117,D05207, doi:10.1029/2011JD016959, (2012). * [34] M. Xu and R. R. Alfano, \"Random walk of polarized light in turbid media,\" Phys. Rev. Lett., 95 :213901-1-4 (2005). * [35] Y. Hu, Z. Liu, D. Winker, M. Vaughan, V. Noel, L. Bissonnette, G. Roy, and M. McGill, \"Simple relation between lidar multiple scattering and depolarization for water clouds,\" Opt. Lett., 31 (12) :1809-1811 (2006). * [36] S. Vagle and D. M. Farmer, \"The measurement of bubble-size distribution by acoustical backscatter,\" J. Atmos. and Ocean. Tech., 8 630-644 (1991). * [37] J. C. Novarini, R. S. Keiffer, and G. V. Norton, \"A model for variations in the range and depth dependence of the sound speed and attenuation induced by bubble clouds under wind-driven sea surfaces,\" IEEE J. oceanic Eng., 23, 423-438 (1998). * [38] M. Derakhti, and J. Kirby, \"Bubble entrainment and liquid-bubble interaction under unsteady breaking waves,\" _Journal of Fluid Mechanics, 761_, 464-506. doi:10.1017/jfm.2014.637 (2014). * [39] K. O. Strand, et al., \"Long-term statistics of observed bubble depth versus modeled wave dissipation,\" _Journal of Geophysical Research: Oceans, 125_, e2019JC015906. [https://doi.org/10.1029/2019JC015906](https://doi.org/10.1029/2019JC015906) (2020). * [40] A. Morel and S. Maritorena, \"Bio-optical properties of oceanic waters: A reappraisal,\" J. Geophys. Res. Oceans 106 (C4), 7163-7180 (2001).
This paper presents the NRL shipboard lidar and the first lidar dataset of underwater bubbles. The meaning of these lidar observations, the algorithms used, and their current limitations are discussed. The lidar multiple-scattering regime is derived from both the observations and theory. The presence of underwater bubbles and their depth are straightforward to estimate from the depolarized laser return. This dataset strongly suggests that the whitecaps term in the lidar equation formalism needs to be revisited. Void fraction retrieval is possible, and the algorithm is stable with a simple ocean backscatter lidar system. The accuracy of the void fraction retrieval will increase significantly with future developments.
# Development of Decision Support System for Effective COVID-19 Management Shuvrangshu Jana Post-doctoral Fellow, Department of Aerospace Engineering, Indian Institute of Science, Bangalore, [email protected] Rudrashis Majumder Ph.D. Student, Department of Aerospace Engineering, Indian Institute of Science, Bangalore, [email protected] Aashay Bhise Project Associate, Department of Aerospace Engineering, Indian Institute of Science, Bangalore, [email protected] Nobin Paul Ph.D. Student, Department of Aerospace Engineering, Indian Institute of Science, Bangalore, [email protected] Stuti Garg Project Assistant, Department of Aerospace Engineering, Indian Institute of Science, Bangalore, [email protected] Debasish Ghose Professor, Department of Aerospace Engineering, Indian Institute of Science, Bangalore, [email protected] ###### \\(5^{th}\\) World Congress on Disaster Management IIT Delhi, New Delhi, India, 24-27 November 2021 **Keywords:** COVID-19; Decision Support System; GUI; Resource allocation; Lockdown management; PredictionIntroduction Management of COVID-19 involves optimal lockdown planning, estimation and arrangement of critical medical resources, and allocation of resources among the units in an optimal manner. The human intervention for this set of activities is sometimes not optimal because of bias and other inaccuracies. Hence, building an autonomous decision support system (DSS) to handle all the activities related to disaster management can address the problems arising from the situation more effectively. In the past, developing DSS has been given much importance by disaster researchers. Some early works of literature mainly focus on designing the decision support system for the influenza pandemic response. Jenvald et al. (2007) propose a simulation framework to simulate the pandemic environment and relate it to the decision support system (DSS) for influenza preparedness and response. (Fair et al., 2007) developed another simulation setup to model the time-dependent evolution of influenza.(Arora et al., 2012) focus on the complexities associated with the decision-making during the outbreak of influenza pandemic. (Araz et al., 2013) design a DSS to take decisions on the closure and reopening of schools to minimize influenza infection. (Araz, 2013) present a general multi-criteria decision-making framework to make preparedness plans with DSS in order to integrate the estimation of important epidemiological parameters. (Fogli and Guida, 2013) develops a decision system for emergency response during a pandemic. (Shearer et al., 2020) incorporates dynamic information about the pandemic from the situational awareness framework and integrates it into the broader DSS for pandemic response. Authors in (Phillips-Wren et al., 2020) mainly focus on decision-making during the stressful outbreak of COVID-19. (Currion et al., 2007) talk about the significance of an open-source software model for disaster management. A brief introduction to Sahana software is also given in this paper. (Iyer and Mastorakis, 2006) discuss the important elements of disaster management and the development of a software tool to facilitate disaster DSS. Several other papers like (Li et al., 2013; Shukla and Asundi, 2012) also focus on the software modeling for emergency and disaster management. Researchers of different fields focused on developing DSS to mitigate the effect of the COVID-19 pandemic outbreak in 2020. 
(Guler and Gecici, 2020) find the solution for the problem of shift scheduling of the physicians by using mixed-integer programming and then using it in a DSS. In (Sharma et al., 2020), a multi-agent intelligent system is developed for decision-making to assist the patients. (Hashemkhani Zolfani et al., 2020) address hospital location selection problem for COVID-19 patients using gray-based DSS. The paper (Govindan et al., 2020) develops a practical decision support system depending on the health practitioners' knowledge and fuzzy information system to break the chain of COVID infection. (Marques et al., 2021) uses AI-based prediction model for decision making in COVID-19 situation. The DSS related to COVID-19 reported in the literature is mostly focused on specific activities related to resource management by medical personnel in a hospital environment. However, an integrated DSS covering estimation, allocation, and lockdown management for government authority will help efficient disaster management. This paper describes autonomous DSS that addresses prediction, allocation, and optimal lockdown management for efficient management of COVID-19 in India. The algorithms incorporated in DSS are scalable and flexible, and thus it applies to any level of a government authority. The algorithms are based on a short-term prediction of COVID-19 cases using the time series data of reported cases. A graphical GUI is developed for decision making and the GUI inputs are demand/availability of critical items and total cases, recovered cases, and deceased cases. The proposed DSS could help the authorities to take crucial allocation and lockdown decisions in an optimal manner. The rest of the paper is described as follows: Section 2 describes the current status of the decision-making process in COVID-19 and the requirement of DSS. The overview of the algorithms incorporated in DSS is presented in Section 3. The description of the GUI of DSS is shown in Section 4. ## 2 COVID-19 management The important tasks of a COVID management authority are the prediction of COVID-19, computation of demand, allocation and distribution of critical items, and decision making for the level of lockdown. In general, COVID-19 is managed through different hierarchical levels of authorities, and each level has different responsibilities. COVID-19 management structure in the context of India is shown in Fig. 1. In this case, COVID management is performed through centre, state, district, and block hierarchy. Different responsibilities at each level are also mentioned. At centre level critical resourceslike oxygen and ventilator are allocated to different states, and those items are further distributed to district authorities to be distributed to the hospital. Clearly, the allocation of critical resources is performed at different levels, and the allocation factor is not the same at each level. The allocation should generally depend on the parameters related to active cases, total cases, test positive ratio, and existing resources. At the initial stages of the pandemic, the decision for the lockdown was taken only at the top level; however, at later stages, the local authorities also took the drastic decision of lockdown. The primary factors behind the resource allocation and lockdown decision are the prediction of total cases and active cases. Currently, different physical models are adopted by different authorities; however, there is no standardized model approved by the government authority. 
Also, modeling the dynamics of the pandemic is complex, and its parameter needs to be calibrated for each new area. Therefore, sometimes the factors behind the allocation are not justifiable in a quantitative sense. ### Decision Support System for COVID-19 COVID-19 management at the government level is a complex task to ensure proper estimation of demand, optimal allocation of resources, and optimal lockdown management. The critical decision needs to be taken considering economic, medical, and complex social factors. Also, the decision-making criteria need to be formulated using quantitative tools rather than qualitative factors for fair and optimal allocation. As decision-making has to be performed at each level, it might not be possible for each allocation authority to model these factors quantitatively. Currently, a COVID management authority has no standard tool to ensure optimal allocation and lockdown. Clearly, it is not easy to always make optimal decisions by a COVID-19 management authority without access to high-level technical expertise. A DSS for taking the decision of allocation and lockdown could help the government authorities in the optimal decision and avoid catastrophic failure. In general, a decision support software should be able to integrate the information from the multiple layers of spatial and statistical databases and aid in decision making. In the case of COVID-19 management, DSS should be designed such that it could take the decision of estimation, allocation, and lockdown management from the observed cases of total cases recovered cases and deceased cases without the requirement of complex algorithm parameters. Apart from allocation and lockdown analysis, the prediction of COVID-19 should be available to authorities for decision-making. The prediction module should be integrated into suitable external information sources for the early prediction of disaster. ## 3 Algorithms In this section, the overview of algorithms for decision-making is discussed. Algorithms need to be designed, keeping in mind that they should be scalable and flexible. The scalability is required to ensure that DSS is applicable at lower levels consisting of the population in millions to higher-level consist of the population in tens of millions. Flexibility is required so that the algorithms are mostly independent of the region. The three Figure 1: COVID management at different level important algorithms for prediction, allocation, and lockdown management is discussed. ### COVID-19 prediction The various COVID-19 prediction models are available in literature based on physical modeling, data-driven modeling, and a hybrid approach that combines physical and data-driven modeling. The physical and hybrid models will require transmission parameters for each region; however, the data-driven model will need only the observed values of total cases, recovered cases, and deceased cases. Physical modeling tries to model the actual dynamics of the pandemic, but most of the pandemic-related government decisions are taken based on the reported cases. Public perception of COVID management is mostly based on the reported cases rather than the actual COVID cases. The data-driven model will satisfy the flexible criteria and also be simple for lower-level authorities. We have considered a data-driven adaptive short-term model reported by Jana and Ghose (2020) for the development of DSS. In this case, case prediction is developed using time series data of previous observations. 
The prediction function is adaptively updated based on weighted least square functions to track the current dynamics of COVID-19. The total cases, active cases, deceased cases, and active cases could be predicted using the previous observations, and the prediction is found to be reasonable up to 2-3 weeks. As the function is adaptive and the decision frequency of administration will not be generally higher than two weeks, this algorithm could be integrated into DSS. ### Allocation Disaster management authorities allocate the lower units based on resources available from the upper administrative level and the demand requirement for the lower administrative level. The resource allocation module should be able to allocate optimally among the lower level using their demand and severity (Jana et al., 2021). The allocation mechanism in COVID-19 is complex as the unit demand of each critical item might vary from region to region based on region-specific medical and social factors. For example, the amount of oxygen to be provided to the patient could differ depending on the medical protocol developed by the individual region. So, the demand for medical items of low-level units is difficult to compare because of variation in protocol and unavailability of data of existing resources. So, for this, algorithms need to be selected which can incorporate those uncertain factors. In this case, we have considered an optimization model which considers the demand of lower units and demand based on their active cases (Jana et al. (2021b)). For allocation of each item, the inputs are the demand of each item and the maximum value of the active cases over the next seven days. This prediction of active cases is performed using the prediction algorithm described earlier. This algorithm provides a closed-form solution, and it is scalable for any number of units. ### Lockdown management The decision for lockdown allows less medical load, but it has a detrimental effect on the economy and various social factors. However, it is difficult to incorporate all these factors accurately in the decision-making process as modeling of this factor is not possible at each level. To develop a simple DSS, it should be independent of those factors. Since ensuring the availability of the medical items is the main criteria for lockdown, we have adopted a lockdown algorithm for DSS, proposed by Jana and Ghose (2021) to ensure that there is no scarcity of medical items. In this algorithm, the demand of each item over the next 14 days is checked, and lockdown is recommended if the availability of any of the critical items is lower than the demand at any point. The demand for the next 14 days of each item is calculated as a function of predicted active cases. ## 4 Graphical user interface design Graphical user interface to be designed so that it is easily implementable at lowest lower level of authorities with limited access to technical resources. Tentative input and output of the DSS is shown in Fig. 2. Clearly, user needs to enter the COVID statistic and the demand of critical items for decision making process and no specific parameter related to a particular area is needed. Generally, statistics of COVID cases are available in a specific format, and GUI has to be designed so that the output is obtained with limited manual intervention. The input data could be linked with the database of COVID-19 to reflect the automatic change in the status. 
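To make the back-end rules above concrete, the sketch below pairs a toy allocation routine with the 14-day lockdown check, written in Python rather than the MATLAB used for the GUI. Both are simplified stand-ins: the closed-form optimal allocation of Jana et al. (2021b) is not reproduced here (a capped proportional split is used instead), and the per-case demand rates that turn predicted active cases into item demand are assumptions for illustration only.

```python
import numpy as np

def allocate(total, demands, predicted_peak_active, w=0.5):
    """Toy split of `total` units among lower-level units.

    Proportional to a blend of stated demand and predicted peak active
    cases (max over the next 7 days), capped at each unit's stated demand;
    leftover supply after capping is not redistributed in this sketch.
    """
    d = np.asarray(demands, dtype=float)
    a = np.asarray(predicted_peak_active, dtype=float)
    share = w * d / d.sum() + (1.0 - w) * a / a.sum()
    return np.minimum(total * share, d)

def recommend_lockdown(predicted_active, availability, per_case_demand):
    """Recommend lockdown if any critical item runs short over the horizon.

    predicted_active : (14,) predicted active cases for the next 14 days
    availability     : dict item -> units currently available
    per_case_demand  : dict item -> assumed units needed per active case/day
    """
    for item, rate in per_case_demand.items():
        if np.any(rate * predicted_active > availability[item]):
            return True, item
    return False, None

# Oxygen example loosely following the scenario in the text (the predicted
# active-case numbers are hypothetical).
print(allocate(3200, [2000, 1000, 1200, 300], [6.0e5, 4.0e5, 5.5e5, 0.9e5]))

active_14d = np.linspace(9000, 15000, 14)
print(recommend_lockdown(active_14d,
                         {"oxygen_MT": 900, "icu_beds": 2500},
                         {"oxygen_MT": 0.05, "icu_beds": 0.20}))
```

In the actual DSS these checks run in the back-end after the short-term prediction step, so the user only supplies case histories, demands, and availabilities.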
Currently, We have developed the DSS in MATLAB specific to the Indian context; however, it is easily adaptable for different countries. The snapshots of the different tabs of GUI are shown in Figs. 3 -8. GUI has three tabs: prediction, allocation,and lockdown. GUI is currently developed for the central government of India, and few states are considered for demonstration of GUI. The prediction tab selects cases history from the CSV file, and the prediction graph is generated. In the case of India, this file is directly available from the \"[https://api.covid19india.org/](https://api.covid19india.org/)\". Based on the states and type of graph tab, the predicted graph is generated. The snapshots related to the prediction tab are shown in Fig. 3 - Fig. 5. The GUI is currently run using the data up to 10 May 2021, and the graph in Fig. 5 shows the prediction of active cases of Karnataka for the next 14 days. In the allocation tab (Fig. 6), the user can select the items to be allocated and the corresponding units for allocation. In this case, we have demonstrated for four states. The user only needs to enter the corresponding demand of each state and the total amount. The back-end algorithm predicts the active cases and incorporates that in the allocation algorithm. An allocation scenario is considered to demonstrate the effective integration of the allocation algorithm with GUI. As shown in Fig. 6, the allocation of oxygen is performed considering four states, and their individual demands are 2000, 1000, 1200, 300 MT. The total available oxygen for allocation is 3200 MT. The final allocation is shown in Fig. 7. The allocation algorithm considered the prediction of active cases of states on the back-end side. The final allocation is shown to be 1072, 789.8, 1185, 152.9 MT. In the lockdown tab (Fig. 7) user only needs to enter the available values of the critical items. Figure 2: Input-output block diagram of GUI The back-end checks using predicted demand and recommend the lockdown requirement. Figure 4: Prediction Tab (b) Figure 3: Prediction Tab (a) Figure 5: Prediction Tab (c) Figure 6: Allocation Tab (a) Figure 8: Lockdown Tab Figure 7: Allocation Tab (b) Conclusions In this paper, a decision support system to aid in the decision-making of authorities for the management of COVID-19 is presented. The proposed DSS consists of the prediction of COVID-19, optimal allocation, and lockdown management. The backend algorithms for the DSS are developed based on a data-driven approach so that the proposed DSS is applicable for any region. A MATLAB GUI is developed incorporating the proposed DSS. Authorities could use the developed DSS for managing the allocation and lockdown-related tasks related to COVID-19. ## References * Araz (2013) Araz, O. M. (2013). Integrating complex system dynamics of pandemic influenza with a multi-criteria decision making model for evaluating public health strategies. _Journal of Systems Science and Systems Engineering_, 22(3):319-339. * Araz et al. (2013) Araz, O. M., Lant, T., Fowler, J. W., and Jehn, M. (2013). Simulation modeling for pandemic decision making: A case study with bi-criteria analysis on school closures. _Decision Support Systems_, 55(2):564-575. * Arora et al. (2012) Arora, H., Raghu, T., and Vinze, A. (2012). Decision support for containing pandemic propagation. _ACM Transactions on Management Information Systems (TMIS)_, 2(4):1-25. * Currion et al. (2007) Currion, P., Silva, C. d., and Van de Walle, B. (2007). 
Open source software for disaster management. _Communications of the ACM_, 50(3):61-65. * Fair et al. (2007) Fair, J. M., LeClaire, R. J., Wilson, M. L., Turk, A. L., DeLand, S. M., Powell, D. R., Klare, P. C., Ewers, M., Dauelsberg, L., and Izraelevitz, D. (2007). An integrated simulation of pandemic influenza evolution, mitigation and infrastructure response. In _2007 IEEE Conference on Technologies for Homeland Security_, pages 240-245. IEEE. * Fogli and Guida (2013) Fogli, D. and Guida, G. (2013). Knowledge-centered design of decision support systems for emergency management. _Decision Support Systems_, 55(1):336-347. * Govindan et al. (2020) Govindan, K., Mina, H., and Alavi, B. (2020). A decision support system for demand management in healthcare supply chains considering the epidemic outbreaks: A case study. _Journal of the Operational Society of America_, 10(1):1-10. * Govindan et al. (2013)study of coronavirus disease 2019 (COVID-19). _Transportation Research Part E: Logistics and Transportation Review_, 138:101967. * Guler and Gecici (2020) Guler, M. G. and Gecici, E. (2020). A decision support system for scheduling the shifts of physicians during COVID-19 pandemic. _Computers & Industrial Engineering_, 150:106874. * Hashemkhani Zolfani et al. (2020) Hashemkhani Zolfani, S., Yazdani, M., Ebadi Torkayesh, A., and Derakhti, A. (2020). Application of a gray-based decision support framework for location selection of a temporary hospital during COVID-19 pandemic. _Symmetry_, 12(6):886. * Iyer and Mastorakis (2006) Iyer, V. and Mastorakis, N. E. (2006). Important elements of disaster management and mitigation and design and development of a software tool. _WSEAS Transactions on Environment and Development_, 2(4):263-282. * Jana and Ghose (2020) Jana, S. and Ghose, D. (2020). Adaptive short term COVID-19 prediction for india. _MedRxiv_. * Jana and Ghose (2021) Jana, S. and Ghose, D. (2021). Optimal lockdown management using short term COVID-19 prediction model (submitted). _World Congress on Disaster Management(WCDM)_. * Jana et al. (2021a) Jana, S., Majumder, R., Menon, P. P., and Ghose, D. (2021a). Decision support system (DSS) for hierarchical allocation of resources and tasks for disaster management. Presented at _5th International Conference on Dynamics Of Disasters (DOD)_, Athens, Greece. * Jana et al. (2021b) Jana, S., Rudrashis, M., and Ghose, D. (2021b). Critical medical resource allocation during COVID-19 pandemic (submitted). _World Congress on Disaster Management(WCDM)_. * Jenvald et al. (2007) Jenvald, J., Morin, M., Timpka, T., and Eriksson, H. (2007). Simulation as decision support in pandemic influenza preparedness and response. _Proceedings ISCRAM2007_, pages 295-304. * Li et al. (2013) Li, J. P., Chen, R., Lee, J., and Rao, H. R. (2013). A case study of private-public collaboration for humanitarian free and open source disaster management software deployment. _Decision Support Systems_, 55(1):1-11. * Li et al. (2014)Marques, J. A. L., Gois, F. N. B., Xavier-Neto, J., and Fong, S. J. (2021). Prediction for decision support during the COVID-19 pandemic. In _Predictive Models for Decision Support in the COVID-19 Crisis_, pages 1-13. Springer. * Phillips-Wren et al. (2020) Phillips-Wren, G., Pomerol, J.-C., Neville, K., and Adam, F. (2020). Supporting decision making during a pandemic: Influence of stress, analytics, experts, and decision aids. _The Business of Pandemics: The COVID-19 Story_, page 183. * Sharma et al. (2020) Sharma, A., Bahl, S., Bagha, A. 
K., Javaid, M., Shukla, D. K., Haleem, A., et al. (2020). Multi-agent system applications to fight COVID-19 pandemic. _Apollo Medicine_, 17(5):41. * Shearer et al. (2020) Shearer, F. M., Moss, R., McVernon, J., Ross, J. V., and McCaw, J. M. (2020). Infectious disease pandemic planning and response: Incorporating decision analysis. _PLoS medicine_, 17(1):e1003018. * Shukla and Asundi (2012) Shukla, M. M. and Asundi, J. (2012). Considering emergency and disaster management systems from a software architecture perspective. _International Journal of System of Systems Engineering 4_, 3(2):129-141.
This paper discusses a Decision Support System (DSS) for case prediction, resource allocation, and lockdown management for COVID-19 at different levels of a government authority. The algorithms incorporated in the DSS are based on a data-driven modeling approach and are independent of the physical parameters of the region, so the proposed DSS is applicable to any area. Allocation and lockdown decisions are made from the predicted active cases, the demands of the lower-level units, and the total availability. A MATLAB-based GUI implementing the proposed DSS is developed and could be deployed by local authorities.
# The Role of Clouds in Brown Dwarf and Extrasolar Giant Planet Atmospheres Mark S. Marley Andrew S. Ackerman ## 1 Introduction Even before the first discovery of brown dwarfs and extrasolar giant planets (EGPs) it had been apparent that a detailed appreciation of cloud physics would be required to understand the atmospheres of these objects (e.g. Lunine et al. 1989). Depending on the atmospheric effective temperature, Fe, Mg\\({}_{2}\\)SiO\\({}_{4}\\), MgSiO\\({}_{3}\\), H\\({}_{2}\\)O, and NH\\({}_{3}\\) among others may condense in substellar atmospheres. Since every atmosphere in the solar system is influenced by clouds, dust, or hazes, the need to follow the fate of condensates in brown dwarf and EGP atmospheres is self-evident. What has become clearer over the past five years is that details such as the vertical structure and particle sizes in clouds play a decisive role in controlling the thermal structure and emergent spectra from these atmospheres. Indeed the available data are already sufficient to help us choose among competing models. In this contribution we will briefly summarize some of the roles clouds play in a few solar system atmospheres to illustrate what might be expected of brown dwarf and extrasolar giant planet atmospheres. Then we will summarize a new cloud model developed to study these effects, present some model results, and compare them to data. Since brown dwarfs have similar compositions and effective temperatures to EGPs and a rich dataset already exists, we focus on the lessons learned from the L- and T-dwarfs. We then briefly review the importance of clouds to EGP atmospheres and future observations. ## 2 Clouds in the Solar System Clouds dramatically alter the appearance, thermal structure, and even evolution of planets. Venus glistens white in the morning and evening skies because sunlight reflects off of its bright cloud tops. If there were no condensates in Venus' atmosphere the planet would take on a bluish hue from Rayleigh scattered sunlight. Mars' atmosphere is warmer than it would otherwise be thanks to absorption of incident solar radiation by atmospheric dust (Pollack et al. 1979). The effectiveness of the CO\\({}_{2}\\) greenhouse during Mars's putative warm and wet early history is tied to poorly understood details of its cloud physics and radiative transfer (Mischna, et al. 2000). Indeed the future climate of Earth in a fossil-fuel-fired greenhouse may hinge on the role water clouds will play in altering Earth's albedo and scattering or absorbing thermal radiation. The appearance of the Jovian planets is controlled by the extensive cloud decks covering their disks. On Jupiter and Saturn thick NH\\({}_{3}\\) clouds, contaminated by an unknown additional absorber, reflect about 35% of incident radiation back to space. CH\\({}_{4}\\) and H\\({}_{2}\\)S clouds play a similar role at Uranus and Neptune. The vertical structure of the jovian cloud layers was deduced by variation of their reflected spectra inside and outside of molecular absorption bands. Figure 1 illustrates this process. In the left hand image incident sunlight penetrates relatively deeply into the atmosphere and is scattered principally by a cloud deck over the south pole and a bright cloud near the northern mid-latitude Figure 1: Near consecutive HST images of Uranus taken through different filters. 
The filter employed for the left hand image probes a broad spectral range from 0.85 to 1 \\(\\mu\\)m while the right hand image is taken through a narrow filter sensitive to the 0.89 \\(\\mu\\)m CH\\({}_{4}\\) absorption band. The relative visibility of various cloud features between the two images is a measure of the cloud height as the incident photon penetration depth is modulated by methane absorption. Images courtesy H. Hammel and K. Rages. limb. The relative heights of these two features cannot be discerned from this single image. The right hand image, however, was taken in the strong 0.89-\\(\\mu\\)m methane absorption band. Here the south polar cloud is invisible since incident sunlight is absorbed by CH\\({}_{4}\\) gas above the cloud before it can scatter. We conclude that the bright northern cloud lies higher in the atmosphere since it is still visible in this image. The application of this technique to spectra and images of the giant planets has yielded virtually all the information we have about the vertical structure of these atmospheres (e.g. West, Strobel, & Tomasko 1986; Baines & Hammel 1994; Baines et al. 1995). A similar reasoning process can be applied to brown dwarf and EGP atmospheres. The large body of work on jovian clouds cannot be easily generalized, but two robust results are apparent. First, sedimentation of cloud droplets is important. Cloud particles condense from the atmosphere, coagulate, and fall. The fall velocity depends on the size of the drops and the upward velocity induced by convection or other motions in the atmosphere. They do not stay put. A diagnostic often retrieved from imaging or spectroscopic observations of clouds is the ratio of the cloud particle scale height to that of the gas. If condensates were distributed uniformly vertically in the atmosphere this ratio would be 1. Instead numerous investigations have found a ratio for Jupiter's ammonia clouds of about 0.3 (Carlson, Lacis, & Rossow 1994). The clouds are thus relatively thin in vertical extent. The importance of sedimentation is borne out even for unseen Fe clouds, for example, by Jupiter's atmospheric chemistry (Fegley & Lodders 1994). A second important result is that cloud particles are large, a result of coagulation processes within the atmosphere. Sizes are difficult to infer remotely and the sizes to which a given observation is sensitive depend upon the wavelength observed. Nevertheless it is clear that Jupiter's ammonia clouds include particles with radii exceeding 1 to 10 \\(\\mu\\)m, much larger than might be expected simply by direct condensation from vapor in the presence of abundant condensation nuclei (Carlson et al. 1994; Brooke et al. 1996). Similar results are found for ammonia clouds on Saturn (Tomasko et al. 1984) and methane clouds in Uranus and Neptune (Baines et al. 1995). These two lessons from the solar jovian atmospheres - clouds have finite vertical extents governed by sedimentation and large condensate sizes - guide us as we consider clouds in brown dwarf and extrasolar giant planet atmospheres. ## 3 Evidence of Clouds in Brown Dwarf Atmospheres The first models of the prototypical T-dwarf Gl 229 B established that grains play a minor role, if any, in controlling the spectrum of the object. The early Gl 229 B models of Marley et al. (1996), Allard et al. (1996) and Tsuji et al. (1996) all found best fits to the observed spectrum by neglecting grain opacity. This provided strong evidence that any cloud layer was confined below the visible atmosphere. 
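The balance between droplet fall speed and convective lofting referred to above can be made concrete with a Stokes-regime estimate. The sketch below uses rough, assumed values for Jupiter-like conditions (gravity, gas viscosity, particle density, and updraft speed are all placeholders), and it ignores the slip and high-Reynolds-number corrections that matter at low pressures and for very large drops; it is meant only to show why micron-sized particles are easily lofted while much larger drops settle.

```python
import numpy as np

def stokes_fall_speed(radius, rho_particle, gravity, viscosity):
    """Terminal fall speed (m/s) of a small sphere in the Stokes regime."""
    return 2.0 * rho_particle * gravity * radius**2 / (9.0 * viscosity)

# Rough, assumed values for an NH3-ice cloud on a Jupiter-like planet.
gravity = 25.0          # m s^-2
viscosity = 7.0e-6      # Pa s, H2-dominated gas near the cloud level (approx.)
rho_ice = 800.0         # kg m^-3
w_convective = 1.0      # m s^-1, assumed convective velocity scale

for r_um in (0.1, 1.0, 10.0, 100.0):
    v = stokes_fall_speed(r_um * 1e-6, rho_ice, gravity, viscosity)
    settles = "settles" if v > w_convective else "stays lofted"
    print(f"r = {r_um:6.1f} um  v_fall = {v:.2e} m/s  -> {settles}")
```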
All the models, however, shared the same shortcoming of predicting infrared water bands deeper than observed. Another difficulty with the early models is that they either predicted too much flux shortwards of 1 \\(\\mu\\)m (Marley at al. 1996) or used unrealistic molecular opacities (Allard et al. 1996) to lower the optical flux. Griffith, Yelle, & Marley (1999) and Tsuji, Ohnaka,& Aoki (1999) suggested variations of particulate opacity to lower the flux, but ultimately Burrows, Marley, & Sharp (2000) argued that broadened alkali metal bands were responsible for the diminution in flux, a prediction verified by Liebert et al. (2000). The first confirmation that dust was present in the atmospheres of at least some brown dwarfs came with the discovery of the warmer L-dwarfs. These objects, unlike the methane-dominated T-dwarfs, have red colors in \\(J-K\\) and spectra that have been best fit with dusty atmosphere models (Jones & Tsuji 1997), although a complete analysis does not yet exist. The difficulty arose in explaining how the dusty, red L-dwarfs evolved into the clear, blue T-dwarfs (Figure 2). Models in which dust does not settle into discrete cloud layers (Chabrier et al. 2000) predict that cooling brown dwarfs would become redder in \\(J-K\\) with falling effective temperature as more and more dust dominates the atmosphere. Since the atmosphere models employed in this work ignore the lessons learned from our jovian planets (they employ sub-micron particle sizes and do not allow the dust to settle) it is not surprising that they do not fit the data. ## 4 A New Cloud Model A number of models have been developed to describe the cloud formation processes in giant planet and brown dwarf atmospheres. Ackerman & Marley (2001) describe these in some detail. In general these models suffer from a number of drawbacks which limit their utility for brown dwarf and EGP modeling. Some rely upon free parameters which are almost impossible to predict while others do not predict quantities relevant to radiative transfer of in the atmosphere. For example, the atmospheric supersaturation cannot be specified without a detailed knowledge of the number of condensation nuclei available. Ackerman & Marley developed a new eddy sedimentation model for cloud formation in substellar atmospheres that attempts to predict cloud particle sizes and vertical extents. Ackerman & Marley argue that in terrestrial clouds the downward transport of large drops as rain removes substantial mass from clouds and reduces their optical depth. Yet properly modeling the condensation, coagulation, and transport of such drops requires a complex microphysical model and a concomitant abundance of free parameters. In an attempt to account for the expected effects of such microphysical processes without modeling them in detail, they introduce a new term into the equation governing the mass fraction \\(q_{t}\\) of an atmospheric condensate at a given altitude \\(z\\) in an atmosphere: \\[K\\frac{\\partial q_{t}}{\\partial z}+f_{\\rm rain}w_{*}q_{c}=0. \\tag{1}\\] Here the upward transport of the vapor and condensate is by eddy diffusion as parameterized by an eddy diffusion coefficient \\(K\\). In equilibrium this upward transport is balanced by the downward transport of condensate \\(q_{c}\\). The free parameter \\(f_{\\rm rain}\\) has been introduced as the ratio of the mass-weighted droplet sedimentation velocity to \\(w_{*}\\), the convective velocity scale. 
In essence \(f_{\rm rain}\) allows downward mass transport to be dominated by massive drops larger than the scale set by the local eddy updraft velocity: in other words, rain. Ackerman & Marley (2001) treat \(f_{\rm rain}\) as an adjustable parameter and explore its consequences.

Figure 2: \(J-K\) color of brown dwarfs as a function of \(T_{\rm eff}\). Open datapoints represent L- and T-dwarf colors measured by Stephens et al. (2001) with L-dwarf temperatures estimated from fits of \(K-L^{\prime}\) to models of Marley et al. (2001). Since \(K-L^{\prime}\) is relatively insensitive to the presence or absence of clouds for the L-dwarfs it provides a good \(T_{\rm eff}\) scale (Marley et al. 2001). The early T-types (\(0.5<J-K<1\)) are arbitrarily all assigned to \(T_{\rm eff}=1100\,{\rm K}\). Likewise model \(T_{\rm eff}\)s are given estimated error bars of \(\pm 100\,{\rm K}\). The filled circle represents the position of the prototypical T-dwarf Gl 229 B (Saumon et al. 2000; Leggett et al. 1999). Four model cases are shown from the work of Marley et al. (2001): evolution with no clouds, and with clouds following the prescription of Ackerman & Marley (2001) with \(f_{\rm rain}\) (rainfall efficiency, see text) varying from 7 (heavy rainfall) to 3 (moderate rain). Also shown are colors (C00) from models by Chabrier et al. (2000) in which there is no downward transport of condensate. The Marley et al. model lines are for objects with gravity \(g=1000\,{\rm m\,sec^{-2}}\), roughly appropriate for a \(30\,{\rm M_{J}}\) object. There is little dependence of \(J-K\) on gravity in this regime. The Chabrier et al. lines are for 30 and \(60\,{\rm M_{J}}\) objects.

## 5 Clouds and the L- to T-dwarf transition

Given the importance of clouds to the L-dwarf spectra and the absence of significant cloud opacity in the T-dwarfs, it is clear that the departure of clouds with falling \(T_{\rm eff}\) is an important milestone in the transition from L- to T-dwarfs. Marley (2000) demonstrated that a simple cloud model in which the silicate cloud was always one scale-height thick could account for the change in \(J-K\) color from the red L-dwarfs to the blue T-dwarfs. Now, using the more physically motivated cloud model of Ackerman & Marley, we can better test this hypothesis. Figure 3 illustrates the brightness temperature spectra of six brown dwarf models with three different \(T_{\rm eff}\). In the warmest and coolest cases (\(T_{\rm eff}=1800\) and 900 K) models with and without clouds appear similar. In the warmer case silicate and iron clouds are just forming in the atmosphere and are relatively optically thin, so their influence is slight. In the cooler case, as in the right-hand image of Uranus in Figure 1, the main cloud deck forms below the visible atmosphere. In the intermediate case (\(T_{\rm eff}=1400\,\)K) an optically thick cloud forms in the visible atmosphere and substantially alters the emitted spectrum. The atmospheric structure predicted by the Ackerman & Marley (2001) model for this case is similar to that inferred by Basri et al. (2000) from Cs line shapes in L-dwarf atmospheres. Thus a cooling brown dwarf moves from relatively cloud-free conditions to cloudy to clear. The solid lines in Figure 2 show how the \(J-K\) color evolves with \(T_{\rm eff}\). Objects first become red as dust begins to dominate the visible atmosphere, then blue as water and methane begin to absorb strongly in K band.
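Returning to Eq. (1), its qualitative behavior is easy to reproduce numerically. The sketch below integrates the balance upward from cloud base under two simplifying assumptions that are ours, not the full Ackerman & Marley treatment: the eddy diffusion coefficient is written as \(K\approx w_{*}L\) for a mixing length \(L\), and all vapor in excess of a prescribed saturation mixing ratio is assumed to condense. All numerical values are illustrative.

```python
import numpy as np

def condensate_profile(z, q_sat, q_below, f_rain, mixing_length):
    """Integrate Eq. (1) upward from cloud base (simplified sketch).

    With K ~ w* * mixing_length (our assumption), Eq. (1) becomes
        dq_t/dz = -(f_rain / mixing_length) * q_c,  q_c = max(q_t - q_sat, 0).
    q_sat is a prescribed saturation mixing ratio profile; the real model
    couples it to the temperature structure.
    """
    q_t = np.empty_like(z)
    q_t[0] = q_below
    for i in range(1, z.size):
        dz = z[i] - z[i - 1]
        q_c = max(q_t[i - 1] - q_sat[i - 1], 0.0)
        q_t[i] = q_t[i - 1] - (f_rain / mixing_length) * q_c * dz
    return np.maximum(q_t - q_sat, 0.0)   # condensate mixing ratio q_c(z)

# Illustrative numbers only: 20 km above a silicate cloud base, f_rain = 3.
z = np.linspace(0.0, 2.0e4, 400)                  # metres above cloud base
q_sat = 3.0e-3 * np.exp(-z / 4.0e3)               # hypothetical saturation profile
q_c = condensate_profile(z, q_sat, q_below=3.0e-3, f_rain=3.0,
                         mixing_length=7.0e3)
i_peak = int(np.argmax(q_c))
above = q_c[i_peak:] < 0.1 * q_c.max()
top = z[i_peak + int(np.argmax(above))] if above.any() else z[-1]
print(f"peak q_c = {q_c.max():.2e} at {z[i_peak]/1e3:.1f} km; "
      f"falls below 10% of peak near {top/1e3:.1f} km")
```

Larger \(f_{\rm rain}\) compresses the condensate into a thinner layer above cloud base, which is the sense in which rain thins the cloud.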
Models in which the dust does not settle (Chabrier et al. 2000) predict \(J-K\) colors much redder than observed. Instead the colors of the L-dwarfs are best fit by models which include some precipitation as parameterized by \(f_{\rm rain}=3\) to 5. The data clearly require models for objects cooler than the latest L-dwarfs to rapidly change from \(J-K\sim 2\) to 0 over a relatively small \(T_{\rm eff}\) range. While models with \(f_{\rm rain}=3\) to 5 do turn blue as the clouds sink below the visible atmosphere (Figure 2), the variation is not rapid enough to satisfy the observational constraints. Ackerman & Marley suggest that holes in the clouds may begin to dominate the disk-averaged spectra as the clouds are sinking out of sight. Jupiter's 5-\(\mu\)m spectrum is indeed dominated by flux emerging through holes in its clouds. Bailer-Jones & Mundt (2000) find variability in L-dwarf atmospheres that may be related to such horizontal cloud patchiness. Despite the successes of the Ackerman & Marley model, clearly much more work needs to be done to understand clouds in the brown dwarfs. Perhaps three dimensional models of convection coupled to radiative transport will be required.

Figure 3: Model brightness temperature spectra from Ackerman & Marley (2001). Spectra depict approximate depth in the atmosphere at which emission arises. Solid curves depict cloudy models and dotted curves cloud-free models with the same \(T_{\rm eff}\) (all for \(g=1000\,{\rm m\,sec^{-2}}\) & \(f_{\rm rain}=3\)). Horizontal dashed and solid lines demark the level at which cloud opacity, integrated from above, reaches 0.1 and the base of the silicate cloud, respectively. In the early-L like model (a) and the T-dwarf like model (c) clouds play a relatively small role as they are either optically thin (a) or form below the level at which most emission arises (c). Only in the late-L case (b) do the optically-thick clouds substantially alter the emitted spectrum and limit the depth from which photons emerge. Cloud base varies with pressure and cloud thickness varies with strength of convection, accounting for the varying cloud base temperature and thickness.

## 6 Extrasolar Giant Planets

The issues of cloud physics considered above of course will also apply to the extrasolar giant planets (Marley 1998; Marley et al. 1999; Seager, Whitney, & Sasselov 2000; Sudarsky, Burrows, & Pinto 2000). These papers demonstrate that the reflected spectra of extrasolar giant planets depend sensitively on the cloud particle size and vertical distribution. As already demonstrated by the brown dwarfs in the foregoing section, the emergent thermal flux is similarly affected. Indeed Sudarsky et al. suggest that a classification scheme based on the presence or absence of specific cloud layers be used to categorize the extrasolar giant planets. Moderate spectral resolution transit observations of close-in EGPs, if the bandpasses are correctly chosen, will certainly provide first-order information on cloud heights and vertical profiles of these atmospheres (Seager & Sasselov 2000; Hubbard et al. 2001). Coronagraphic multi-wavelength imaging of extrasolar giant planets will provide similar information (see Figure 1).

## 7 Conclusion

It is ironic that although the physics governing the vast bulk of the mass of brown dwarfs and extrasolar planets is very well in hand, the old problem of weather prediction governs the radiative transfer and thus the only remotely sensed quantity.
The good news is that there will soon be much more weather to talk about, even if we aren't any farther along in doing anything about it.

This work was supported by NASA grant NAG5-8919 and NSF grants AST-9624878 and AST-0086288. The authors benefited from conversations with Dave Stevenson, Sara Seager, Adam Burrows, and Bill Hubbard. Heidi Hammel and Kevin Zahnle offered particularly helpful comments on an earlier draft of this contribution.

## References

* Ackerman, A. & Marley, M. 2001, ApJ, in press
* Allard, F., Hauschildt, P. H., Baraffe, I. & Chabrier, G. 1996, ApJ, 465, L123
* Bailer-Jones, C. A. L. & Mundt, R. 2001, A&A, in press
* Baines, K. H. & Hammel, H. B. 1994, Icarus, 109, 20
* Baines, K., Hammel, H., Rages, K., Romani, P. & Samuelson, R. 1995, in Neptune (Univ. Ariz. Press), 489
* Basri, G., Mohanty, S., Allard, F., Hauschildt, P. H., Delfosse, X., Martin, E. L., Forveille, T. & Goldman, B. 2000, ApJ, 538, 363
* Brooke, T. Y., Knacke, R. F., Encrenaz, T., Drossart, P., Crisp, D. & Feuchtgruber, H. 1998, Icarus, 136, 1
* Burrows, A., Marley, M. S. & Sharp, C. M. 2000, ApJ, 531, 438
* Carlson, B. E., Lacis, A. A. & Rossow, W. B. 1994, J. Geophys. Res., 99, 14623
* Chabrier, G., Baraffe, I., Allard, F. & Hauschildt, P. 2000, ApJ, 542, 464
* Fegley, B. J. & Lodders, K. 1994, Icarus, 110, 117
* Griffith, C. A., Yelle, R. V. & Marley, M. S. 1998, Science, 282, 2063
* Hubbard, W., Fortney, J., Lunine, J., Burrows, A., Sudarsky, D. & Pinto, P. 2001, ApJ, submitted
* Jones, H. R. A. & Tsuji, T. 1997, ApJ, 480, L39
* Leggett, S. K., Toomey, D. W., Geballe, T. R. & Brown, R. H. 1999, ApJ, 517, L139
* Liebert, J., Reid, I. N., Burrows, A., Burgasser, A. J., Kirkpatrick, J. D. & Gizis, J. E. 2000, ApJ, 533, L155
* Lunine, J. I., Hubbard, W. B., Burrows, A., Wang, Y. & Garlow, K. 1989, ApJ, 338, 314
* Marley, M. S. 1998, in Brown Dwarfs and Extrasolar Planets, ASP Conf. Series #134, eds. R. Rebolo, E. Martin & M. Zapatero Osorio, 383
* Marley, M. S. 2000, in From Giant Planets to Cool Stars, ASP Conf. Series #212, eds. C. Griffith & M. Marley, 152
* Marley, M. S., Saumon, D., Guillot, T., Freedman, R. S., Hubbard, W. B., Burrows, A. & Lunine, J. I. 1996, Science, 272, 1919
* Marley, M. S., Gelino, C., Stephens, D., Lunine, J. I. & Freedman, R. 1999, ApJ, 513, 879
* Mischna, M. A., Kasting, J. F., Pavlov, A. & Freedman, R. 2000, Icarus, 145, 246
* Pollack, J. B., Colburn, D. S., Flasar, F. M., Kahn, R., Carlston, C. E. & Pidek, D. G. 1979, J. Geophys. Res., 84, 2929
* Saumon, D., Geballe, T. R., Leggett, S. K., Marley, M. S., Freedman, R. S., Lodders, K., Fegley, B. & Sengupta, S. K. 2000, ApJ, 541, 374
* Seager, S. & Sasselov, D. D. 2000, ApJ, 537, 916
* Seager, S., Whitney, B. A. & Sasselov, D. D. 2000, ApJ, 540, 504
* Stephens, D., Marley, M., Noll, K. & Chanover, N. 2001, ApJ, submitted
* Sudarsky, D., Burrows, A. & Pinto, P. 2000, ApJ, 538, 885
* Tomasko, M. G., West, R. A., Orton, G. S. & Teifel, V. G. 1984, in Saturn (Univ. Ariz. Press), 150
* Tsuji, T., Ohnaka, K., Aoki, W. & Nakajima, T. 1996, A&A, 308, L29
* Tsuji, T., Ohnaka, K. & Aoki, W. 1999, ApJ, 520, L119
* West, R. A., Strobel, D. F. & Tomasko, M. G. 1986, Icarus, 65, 161
Clouds and hazes are important throughout our solar system and in the atmospheres of brown dwarfs and extrasolar giant planets. Among the brown dwarfs, clouds control the colors and spectra of the L-dwarfs; the disappearance of clouds helps herald the arrival of the T-dwarfs. The structure and composition of clouds will be among the first remote-sensing results from the direct detection of extrasolar giant planets.
Spatial Structure, Short-temporal Variability, and Dynamical Features of Small River Plumes as Observed by Aerial Drones: Case Study of the Kodor and Bzyp River Plumes

Alexander Osadchiev (Shirshov Institute of Oceanology, Russian Academy of Sciences, Nakhimovskiy prospect 36, 117997 Moscow, Russia; [email protected]); Alexandra Barymova (Marine Research Center at Lomonosov Moscow State University, Leninskie Gory 1, 119992 Moscow, Russia; [email protected]); Roman Sedakov (Shirshov Institute of Oceanology, Russian Academy of Sciences, Nakhimovskiy prospect 36, 117997 Moscow, Russia; [email protected]); Roman Zhiba and Roman Dbar (Institute of Ecology, Academy of Sciences of Abkhazia, Krasnomayatskaya str. 67, 384900 Sukhum, Abkhazia; [email protected] (R.Z.); [email protected] (R.D.)). Correspondence: [email protected]

Received: 20 August 2020; Accepted: 17 September 2020; Published: 20 September 2020

## 1 Introduction

Airborne remote sensing of the sea surface has been expanding steadily during the last ten years due to significant progress in the development of aerial drones, especially low-cost quadcopters [1; 2; 3; 4; 5; 6]. Many previous works used airborne data to study various marine processes including mapping of coastal topography [7; 8] and bathymetry [9; 10; 11; 12], surveying of marine flora and fauna [13; 14; 15; 16; 17; 18; 19; 20; 21; 22], and monitoring of water quality and anthropogenic pollution [23; 24; 25; 26; 27; 28]. Several works used airborne data to study physical properties of the sea surface layer, including estimation of turbulence [29] and reconstruction of surface currents [30; 31]. However, applications of aerial remote sensing are still rare in physical oceanography, especially in comparison with numerous studies based on satellite remote sensing. Studies of river plumes provide a good example of this situation. Hundreds of related works were based on high-resolution [32; 33], medium-resolution [34; 35; 36; 37], and low-resolution [38; 39; 40] optical satellite data, satellite-derived temperature [41; 42; 43], salinity [44; 45; 46], and roughness [47; 48; 49] of the sea surface. On the other hand, only several studies used airborne remote sensing to study river plumes [50; 51; 25; 52; 53; 26]. Moreover, we are not aware of any study that specifically addressed the structure, variability, and dynamical features of small plumes using aerial remote sensing data. This point provides the main motivation of the current work.

General aspects of the structure and dynamics of river plumes, as well as their regional features, were addressed in many previous studies. Nevertheless, these works were mostly focused on large river plumes, while small river plumes received relatively little attention. However, small rivers play an important role in global land-ocean fluxes of fluvial water and suspended and dissolved sediments [54; 55; 56]. Small rivers form buoyant plumes that have small spatial scales and, therefore, short residence times of freshened water, on the order of hours to days, due to the relatively low volume of river discharge and its intense mixing with the ambient sea [57]. Dissipation of freshened water as a result of mixing of a small plume with the subjacent saline sea has limited influence on the ambient sea and does not result in accumulation of freshwater in the adjacent sea area.
As a result, small plumes are characterized by sharp salinity and, therefore, density gradients at their boundaries with ambient sea. This feature is not typical for large river plumes and results in significant differences in spreading and mixing between small plumes and large plumes. Sharp vertical density gradient at the bottom boundary of a small plume hinders vertical energy transfer between a small river plume and subjacent sea [57]. This feature strongly affects spreading dynamics of a small plume due to the following reasons. First, the majority of wind energy transferred to sea remains in a small plume, because the vertical momentum flux diminishes at the density gradient between a plume and subjacent sea. Therefore, wind stress is concentrated in a shallow freshened surface layer that causes higher motion velocity and more quick response of dynamics of a small plume to variability of wind forcing, as compared to ambient sea [58; 59]. It results in wind-driven dynamics of small plumes, which is characterized by very energetic short-temporal variability of their positions, shapes, and areas [60; 61; 62; 63; 64]. Study of structure and variability of small river plumes at small spatial and temporal scales is essential for understanding the fate of freshwater discharge from small rivers to sea and the related transport of suspended and dissolved river-borne constituents. However, high short-temporal variability of small plumes and their small vertical sizes inhibit precise in situ measurements of their thermohaline and dynamical characteristics [57]. Satellite remote sensing also does not provide the necessary spatial resolution and temporal coverage for small river plumes. As a result, many important aspects of structure, variability, and dynamics of small river plumes at small spatial and temporal scales remain unstudied. Quadcopters are especially efficient in observation of small river plumes because they can continuously observe sea surface with high spatial resolution from relatively low altitude. Quadcopters can be used during overcast sky when optical satellite instruments cannot observe sea surface. The main drawback of their usage is relatively short duration of continuous operation (less than several hours), limited weight of carried instruments, and inability of their operation during strong wind, rain, snow, low temperature, and other inappropriate weather conditions. Despite these limitations, usage of quadcopters provides unprecedented ability to study structure of small river plumes, detect and measure their short-temporal variability, and register various dynamical features of these plumes. Therefore, the main goal of the current work is to describe structure and temporal variability of small river plumes on small spatial (from meters to hundreds of meters) and temporal (from minutes to hours) scales, which are limitedly covered by in situ measurements and satellite imagery and remain almost unaddressed by the previous studies. In this work we use aerial remote sensing supported by synchronous in situ measurements and satellite observations to study small river plumes formed in the northeastern part of the Black Sea. 
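As a rough illustration of the wind-response mechanism described above (wind stress trapped within a shallow freshened layer), the short Python sketch below compares the wind-driven acceleration of a thin plume with that of a much deeper mixed layer using a standard bulk drag law; the wind speed, drag coefficient, densities, and layer depths are assumed values, not measurements from this study.

```python
# Wind stress from a standard bulk formula: tau = rho_air * C_d * U10^2.
rho_air, C_d, U10 = 1.2, 1.2e-3, 5.0   # assumed air density, drag coefficient, wind speed at 10 m
tau = rho_air * C_d * U10**2            # wind stress [N m^-2]

rho_sea = 1012.0                        # assumed seawater density [kg m^-3]
for label, depth in (("thin river plume (2 m)", 2.0), ("ambient mixed layer (20 m)", 20.0)):
    accel = tau / (rho_sea * depth)     # acceleration if the stress is absorbed by this layer only
    print(f"{label:28s} -> wind-driven acceleration {accel:.1e} m s^-2")
```

With these assumed numbers the 2 m thick plume accelerates an order of magnitude faster than a 20 m deep mixed layer, consistent with the quick, wind-driven response of small plumes discussed above.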
We show that usage of aerial drones, first, strongly enhances in situ and satellite observations of structure and variability of small plumes, second, provides the ability to perform accurate, continuous, and high-resolution measurements of their spatial characteristics and current velocity fields, and, finally, significantly improves operational organization of field measurements. Owing to continuous and high-resolution aerial remote sensing, we report several novel results about spatial structure, short-temporal variability, and dynamical features of small river plumes. These results include strongly inhomogeneous structures of small river plumes manifested by complex and dynamically active internal frontal zones; undulate (lobe-cleft) form of a sharp front between a small river plume and ambient sea; energetic lateral mixing across this front caused by its baroclinic instability; internal waves generated by river discharge near a river estuary and propagating within the inner plume; and internal waves generated by vortex circulation of a river plume and propagating within the outer plume. The obtained results reveal significant differences in structure, variability, and dynamics between small plumes and large plumes. The paper is organized as follows. Section 2 provides the detailed information about the aerial, in situ, and satellite data, as well as the processing methods used in this study. The results derived from aerial observations of small river plumes supported by in situ measurements and satellite observations are described in Section 3. Section 4 focuses on discussion and interpretation of the revealed features of spatial structure, short-temporal variability, and dynamics of small river plumes. The summary and conclusions are given in Section 5.

## 2 Data and Methods

### Study Area

In this work, we focused on the Kodor and Bzyp river plumes formed in the northeastern part of the Black Sea (Figure 1). These rivers were chosen as the case sites due to the following reasons. First, these rivers have high concentrations of suspended sediments (300-500 g/m\({}^{3}\) in the Kodor River and 100-300 g/m\({}^{3}\) in the Bzyp River) [65]; therefore, the turbid Kodor and Bzyp plumes can be effectively detected by optical aerial and satellite imagery. Second, the Kodor and Bzyp rivers are relatively small, their catchment areas are 2000 and 1500 km\({}^{2}\), respectively, and their average annual discharges are approximately 130 and 120 m\({}^{3}\)/s, respectively [65]. As a result, the Kodor and Bzyp plumes are small enough to be observed by aerial remote sensing from relatively small altitude (\(<\) 200 m). However, both rivers are mountainous with large mean basin altitudes (\(>\) 1500 m) and slopes (\(>\) 0.02‰), as well as high drainage density (\(>\) 0.8 1/km). Therefore, during spring freshet and short-term rain-induced floods the runoffs from the Kodor and Bzyp rivers dramatically increase by 1-2 orders of magnitude. Third, despite their relatively small spatial extents, the Kodor and Bzyp plumes are the largest plumes in the study area. As a result, structure and dynamics of these plumes are not influenced by interaction with other river plumes. Fourth, the Kodor and Bzyp rivers have different mouth morphologies that affect the structure of their plumes. The majority of the Bzyp River runoff inflows to sea from the main river channel; however, a small side-channel is formed during high discharge periods.
The Kodor River inflows to sea from three large river channels, which form the Kodor Delta. The mouths of these deltaic branches are located along the 2 km long segment of the coastline. Finally, wind, cloud, and rain conditions in the study area are favorable for aerial and satellite observations of the river plumes during the majority of the year. The continental shelf at the study area is very steep and narrow. The distance between the coastline and the 500 m isobath is less than 10 km near the Kodor and Bzyp mouths (Figure 1). The main coastline features at the study area are large capes, namely the Iskuria and Pitsunda capes, located to the south from the Kodor and Bzyp deltas, respectively (Figure 1). The local sea circulation from surface to the depth of 200-250 m is governed by alongshore currents due to the current system cyclonically circulating along the continental slope, which is generally referred to as the Black Sea Rim Current [66; 67]. Sea surface circulation in the study region is also influenced by nearshore anticyclonic eddies, which are regularly formed between the main flow of the Rim Current and the coast owing to baroclinic instability caused by wind forcing and coastal topography [68; 69; 70]. Tidal circulation at the study area is very low and tidal amplitudes are less than 6 cm [71; 72]. Salinity in the coastal sea, which is not influenced by river discharge, is 17-18 [67; 73].

### Aerial, In Situ, and Satellite Data

Aerial observations of the Kodor and Bzyp plumes were performed by a quadcopter (_DJI Phantom 4 Pro_) equipped with a 12 MP/4K video camera. Aerial observations of the plumes were supported by ship-borne in situ measurements of salinity, temperature, turbidity, and current velocity within the plumes and the adjacent sea. The size of this quadcopter is small enough to be launched from and landed on a small boat. It provides opportunity for a quadcopter operator to be onboard the research vessel and to effectively coordinate synchronous in situ measurements and water sampling.

Figure 1: Bathymetry of the study region, locations of the Iskuria and Pitsunda capes, the Bzyp and Kodor rivers, and other smaller rivers of the study region. Location of the study region at the northeastern part of the Black Sea is shown in the inset. Red boxes indicate areas of aerial observations and in situ measurements at the Bzyp and Kodor plumes. Green stars indicate locations of meteorological stations.

Aerial observations and in situ measurements of the Kodor plume were conducted on 1-2 September 2018 and 1-3 April 2019, while aerial observations and in situ measurements of the Bzyp plume were performed on 31 May-1 June 2019. Below we provide the protocols of these aerial surveys according to the scheme suggested by Doukari et al. (2019). The quadcopter was flying over coastal sea areas adjacent to the Kodor and Bzyp river mouths. The take-off and landing spot was located on a vessel/boat that provided opportunity to perform flights at different areas of the plumes without any limitations on their distance to the seashore. The distance between the quadcopter and the research vessel/boat did not exceed 1 km. Quadcopter shooting altitude depended on the spatial scale of the sensed sea surface process and varied from 10-30 m for the small-scale frontal circulation to 150-200 m for detection of plume position and area. Weather conditions during the field surveys were favorable for usage of the quadcopter.
Wind forcing during the flights was moderate (\(<\) 8 m/s), air temperature varied between 15 and 30\({}^{\circ}\) C, and air humidity varied between 60% and 90%. The flights were conducted during no-rain conditions from morning to evening. In case of clear sky conditions, sun glint strongly affected the quality of the aerial data during the daytime. Wave heights were \(<\) 0.5 m during the flights. In situ measurements performed in the study areas were the following. Continuous salinity and temperature measurements in the surface sea layer (0.5-1 m depth) were performed along the ship tracks using a shipboard pump-through system equipped with a conductivity-temperature-depth (CTD) instrument (_Yellow Springs Instrument 6600 V2_).

The optical flow method assumes that the displacement \(\Delta\overrightarrow{\mathrm{x}}\) is constant in any small neighborhood, i.e., we search for a displacement that minimizes the constraint error: \[\mathrm{E}\!\left(\overrightarrow{\mathrm{x}}\right)\;=\;\sum_{\overrightarrow{\mathrm{x}}}\mathrm{g}\!\left(\overrightarrow{\mathrm{x}}\right)\!\left(\mathrm{V}\!\left(\overrightarrow{\mathrm{x}},\mathrm{t}\right)\!\cdot\!\overrightarrow{\mathrm{u}}+\mathrm{I}_{\!\left(\overrightarrow{\mathrm{x}},\mathrm{t}\right)}\right)^{2} \tag{3}\] where \(\mathrm{g}\!\left(\overrightarrow{\mathrm{x}}\right)\) is a weight function. Thus, minimization of \(\mathrm{E}\!\left(\overrightarrow{\mathrm{x}}\right)\) with respect to \(\overrightarrow{\mathrm{u}}\) provides an additional condition for Equation (2). The resulting vector field \(\overrightarrow{\mathrm{u}}\) calculated from Equations (2) and (3) is regarded as an optical flow estimate. In this work, we used the Farneback weight function [78] freely available in the OpenCV computer vision library ([https://opencv.org/](https://opencv.org/)). This algorithm approximates a neighborhood of a pixel in each pair of frames by a quadratic polynomial function applying the polynomial expansion transform. Therefore, a constraint equation is based on a polynomial approximation of the given signal. On the assumption of small variability of a displacement field, the algorithm minimizes the quadratic error of the constraint and calculates the optical flow estimate.

The estimation of surface velocity fields in the study region was performed in two stages. First, we applied the optical flow algorithm with large prescribed sizes of pixel neighborhoods for the reconstruction of motion of distinct plume boundaries and fronts. Second, we reconstructed motion within the river plume using the optical flow algorithm with a reduced neighborhood size. The spatial scale of motion intended to be reconstructed positively correlates with the optimal size of a pixel neighborhood. An algorithm with a small pixel neighborhood more accurately reconstructs small-scale motion, but shows lower quality for large motion patterns, as compared to an algorithm with a large pixel neighborhood. The overall neighborhood size was prescribed according to spatial scales of ocean surface features (e.g., river plume fronts) in which motion is expected to be detected by an optical flow algorithm. Thus, the optimal neighborhood size intended to reconstruct the large-scale motion of river plumes should be equal to the width of distinct plume boundaries and fronts. In this study, the large size of a pixel neighborhood was prescribed equal to 30 m, while the small size of a pixel neighborhood was set equal to 1 m.
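A minimal sketch of this two-stage processing, assuming the Farneback implementation available in OpenCV, is given below; the video file name, frame rate, ground sampling distance, and frame stride are illustrative assumptions, and the prescribed 30 m and 1 m neighborhoods are converted into pixel windows using the assumed ground sampling distance of a particular flight.

```python
import cv2
import numpy as np

VIDEO_PATH = "plume_survey.mp4"   # hypothetical file name of an aerial video record
GSD        = 0.1                  # assumed ground sampling distance [m per pixel]
FPS        = 25.0                 # assumed camera frame rate [frames per second]
STRIDE     = 25                   # compare frames ~1 s apart so displacements span several pixels

def surface_velocity(prev_gray, next_gray, neighborhood_m, dt):
    """Farneback optical flow between two frames, converted from pixels to m/s.

    neighborhood_m is the physical neighborhood size (e.g., 30 m for plume
    boundaries and fronts, 1 m for motion within the plume), converted here
    into a pixel window using the assumed ground sampling distance.
    """
    winsize = max(int(round(neighborhood_m / GSD)), 5)
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5, levels=3, winsize=winsize,
        iterations=3, poly_n=7, poly_sigma=1.5, flags=0)
    return flow * GSD / dt                     # pixel displacement -> metres per second

cap = cv2.VideoCapture(VIDEO_PATH)
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
for _ in range(STRIDE - 1):
    cap.grab()                                 # skip intermediate frames
ok, frame = cap.read()
curr = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
dt = STRIDE / FPS

# Stage 1: large neighborhood (~30 m) for distinct plume boundaries and fronts.
v_fronts = surface_velocity(prev, curr, neighborhood_m=30.0, dt=dt)
# Stage 2: small neighborhood (~1 m) for motion within the river plume.
v_inner = surface_velocity(prev, curr, neighborhood_m=1.0, dt=dt)

speed = np.hypot(v_fronts[..., 0], v_fronts[..., 1])
print("median front speed: %.2f m/s" % np.median(speed))
```

In practice, the large-window estimate is used for the displacement of distinct plume boundaries and fronts, while the small-window estimate resolves motion of foam, floating litter, and turbidity patterns within the plume.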
In case of application of this algorithm to other regions, we suggest prescribing neighborhood sizes equal to relevant spatial scales of the considered ocean surface features. Due to high resolution of the video camera used and continuous video recording, the optical flow algorithm efficiently detected motion of the distinct frontal zones within the river plumes, as well as motion of foam and floating litter accumulated at these fronts which is indicative of the circulation patterns at the frontal zones. As a result, the reconstructed surface velocity fields showed good accordance with visually inspected shifts of the frontal zones, foam, and floating litter at the video records. Stable positioning of a quadcopter is important for precise motion detection at sea surface. Moderate wind speed during the field surveys did not negatively affect the quality of the obtained aerial data. However, strong wind forcing during camera shooting can hinder accurate reconstruction of surface velocity fields. Sun glint is another important issue that can impede motion detection at aerial video records. Intensity of the sun glint depends on solar elevation angle, camera shooting angle and direction; therefore, it can be reduced by correct selection of quadcopter altitude and position. Usage of polarizing filters for quadcopter camera can reduce glint from water surface, however, its efficiency strongly depends on camera shooting angle. ## 3 Results ### Spatial Structure and Short-temporal Variability of the Kodor and Bzyp Plumes The field surveys were performed during spring freshet at the Bzyp River (260 m\\({}^{3}\\)/s) on 31 May-1 June 2019; during drought period at the Kodor River (40 m\\({}^{3}\\)/s) on 1-3 April 2019; and during flash flooding period at the Kodor River (80-150 m\\({}^{3}\\)/s) on 31 August-2 September 2018. Wind forcing was moderate during these field surveys. Average and maximal wind speed registered at weather station in the study regions were 3.1 and 7.6 m/s during 31 August-2 September 2018; 2.4 and 6.2 m/s during 1-3 April 2019; and 2.9 and 5.6 m/s during 31 May-1 June 2019. Vertical salinity measurements in the study areas revealed that these low-saline plumes are shallow (\\(<5\\) m depth) and have distinct vertical salinity gradients with the ambient saline sea. Due to elevated concentrations of terrigenous suspended sediments in the Kodor and Bzyp rivers [65], turbidity within the Kodor and Bzyp plumes was significantly larger than in the ambient sea and showed good correlation with reduced salinity (Figure 2). The Pearson correlation coefficients (\\(r\\)) between salinity and turbidity are equal to \\(-0.87\\) and \\(-0.71\\) for the Kodor and Bzyp plumes respectively with \\(p\\)-values equal to \\(0.0000\\). These high absolute values of the correlation coefficients at low \\(p\\)-values indicate that the observed relations between salinity and turbidity within the Kodor and Bzyp plumes (low salinity and high turbidity), on the one hand, and the ambient sea water (high salinity and low turbidity), on the other hand, are statistically significant. As a result, surface turbidity structures of the Kodor and Bzyp plumes observed by optical remote sensing are indicative of surface salinity structures of these plumes. Aerial remote sensing and satellite imagery showed that the alongshore extents of turbid surface water associated with the considered river plumes during low discharge conditions are 1-5 km. 
The obtained estimates were consistent with salinity measurements at the study area. However, flooding discharge results in abrupt expanding of these plumes, their extents and areas can exceed 20 km and 50 km\\({}^{2}\\), respectively. Aerial and satellite images, surface salinity distribution, and vertical salinity profiles obtained on 31 August 2018 in the coastal area adjacent to the Kodor Delta are illustrative of spatial scales, as well as horizontal and vertical structure of the Kodor plume (Figure 3). Aerial observations and in situ measurements revealed strongly inhomogeneous salinity and turbidity structure of the Kodor plume manifested by complex and dynamically active frontal zones within the plume (Figures 4-6). In particular, surface salinity showed no dependence on the distance to the mouths of the deltaic branches that is regarded typical for river plumes [79; 80; 81], especially in numerical modeling studies [82; 83; 84; 85]. This inhomogeneous structure is formed due to impact of several different processes including the formation of the Kodor plume by several spatially distributed sources, the large inter-day river discharge variability in response to sporadic rain events, and the bathymetric features that influence spreading of the plume. Figure 2: Relations between salinity and turbidity (**a**) within the Kodor plume and the adjacent saline sea on 2–3 April 2019 and (**b**) within the Bzyp plume and the adjacent saline sea on 31 May 2019. Dashed red boxes indicate river plumes, transitional zones, and ambient saline sea. Red lines indicate regression lines. The Pearson correlations coefficients (\\(r\\)) with \\(p\\)-values, which indicate statistical significance of the observed relations, are given above the diagrams. The Kodor River inflows to sea from three deltaic branches with different discharge rates. As a result, all three branches form individual river plumes that merge and coalesce into the common Kodor plume. These three river plumes have different structure, spatial characteristics, and dynamics, therefore, they interact as individual water masses and form stable frontal zones observed by aerial imagery (Figure 4a) [86; 87; 88]. In situ measurements performed on 2 September, 2018 revealed sharp salinity gradient at the frontal zone between the river plumes formed by the northern and the central deltaic branches of the Kodor River. Surface salinity along the transect that crossed this frontal zone abruptly decreased from 14 to 8-10 on a distance of 5 m (Figure 4b). Figure 3: (**a**) surface salinity distribution, (**b**) vertical salinity profiles, (**c**) aerial image (acquisition time 13:29), and (**d**) Sentinel-2 ocean color composite of the Kodor plume from 31 August 2018. Color dots indicate locations of vertical salinity measurements (1, blue—near the river mouth; 2, yellow—near the plume border, and 3, brown—at the ambient saline sea). Red arrows indicate location of the central deltaic branch of the Kodor River, green arrows indicate location of the Iskuria Cape. The red swirl at panel (**a**) indicates location of the eddy detected on 1 September, 2018 (see Figures 7–9). The red wave line at panel (**a**) indicates location of the undulate (lobe-cleft) plume border detected on 1 September 2018 (see Figures 12–15). The discharge of the Kodor River shows quick response to precipitation events that is common for small mountainous rivers with small and steep watershed basins. 
Frequent rains at the mountainous northeastern coast of the Black Sea cause high inter-day variability of the discharge rate of the Kodor River [65; 89]. As a result, the area of the Kodor plume can significantly change during less than one day that was observed on 31 August-2 September 2018 during the field survey. Heavy rain that occurred during 6 hours at night on 31 August-1 September (according to the local weather station measurements) caused increase of the river discharge from 80 to 150 m\\({}^{3}\\)/s during several hours. The area of the Kodor plume doubled from 31 August to 1 September in response to the flash flood. Wind direction during 31 August-1 September was stable (southwestern), while wind speed slightly increased from 2-3 m/s to 4-5 m/s. Then river discharge steadily decreased to pre-flooding conditions, which were registered on 2 September, while wind direction changed to eastern and wind velocity decreased to 3-4 m/s. In situ measurements and aerial remote sensing performed on 2 September, i.e., shortly after the flood, observed, first, the large residual plume that was formed on 1 September during the flooding event and did not dissipate yet and, second, the emergent plume that was formed on 2 September after the decrease of river discharge rate (Figure 5). These plumes had different spatial scales, structures, thermohaline, and dynamical characteristics. As a result, similarly to the river plumes formed by different deltaic branches, the residual and the emergent plumes interacted as individual water masses and formed complex frontal zones within the common Kodor plume. Figure 4: (**a**) aerial image, (**b**) vertical salinity, and (**c**) velocity profiles at the frontal zone between river plumes formed by the northern and central deltaic branches of the Kodor River on 1 September, 2018. Colored dots indicate locations of vertical salinity (P1, blue—the northern plume; P2, yellow—the central plume) and velocity (P3, brown—the northern plume; P4, green—the central plume) measurements. The red arrow in panel (a) indicates location of the central deltaic branch of the Kodor River. Interaction between the Kodor plume and the seafloor at the shallow zones is the third process that induces inhomogeneous structure of this plume. Aerial imagery detected the area of reduced turbidity formed behind the shoal, which is located in front of the northern deltaic branch (Figure 6). This low-turbid zone contrasted especially with the surrounding turbid river plume during the flooding discharge on 1 September 2018. In situ measurements showed that surface salinity at this low-turbid zone (15) was significantly greater than at the adjacent turbid part of the plume (12.5-13) (Figure 6c). Surface circulation also differed in these two parts of the plume. The northward flow (10 cm/s) was observed in the low-turbid zone, while the southeastward flow (20 cm/s) dominated in the adjacent turbid part (Figure 6d). The formation of this zone is caused by the interaction of the inflowing river jet with seafloor at the shoal that induces deceleration of the jet and its increased mixing with saline and low-turbid sea water. The stable front bounding this low-turbid and high-saline zone inside the plume was observed on a distance of up to 1 km from the shoal. Figure 5: **(a,b)** aerial images, (**c**) vertical salinity, and (**d**) velocity profiles at the frontal zone between the emergent and the residual parts of the Kodor plume on 2 September 2018. 
Colored dots indicate locations of vertical salinity and velocity measurements (P1, blue—the emergent plume; P2, yellow—the residual plume). Arrows in panels (**a**) and (**b**) indicate distinct frontal zones between the emergent and the residual parts of the Kodor plume. Red arrows in panels (**a**) and (**b**) point at the same segment of the frontal zone where in situ measurements were performed. ### Dynamical Features of the Kodor and Bzyp Plumes Using aerial remote sensing we detected several dynamical features of the Kodor and Bzyp plumes and measured their spatial characteristics. Based on the surface velocity data reconstructed from the aerial video records, we studied dynamical characteristics of these features and analyzed their physical background. Aerial remote sensing detected a swirling eddy within the Kodor plume on 1 September 2018 (Figure 7). This eddy was formed at the southern part of the emergent plume at its border with the residual plume near the Iskuria Cape. The aerial image of this part of the plume acquired at 12:52 (Figure 7a) showed inhomogeneous structure of the emergent plume without any eddy. The distinct border between the emergent and the residual plumes was stretched from the Iskuria Cape in the northwestern direction. The beginning of formation of the eddy was registered at 14:42 (Figure 7b), then at 15:34 the well-developed eddy was observed (Figure 7c,d). The diameter of the eddy was approximately 500 m, it was rotating in an anticyclonic direction, while its center was moving at an angle of approximately 30\\({}^{\\circ}\\) across the border of the emergent plume. Processing of the video record of this eddy provided estimations of velocity of its movement (0.9 m/s) and rotation (0.4 m/s). The aerial observations performed at 16:16 did not show any surface manifestations of the eddy at the study area; therefore, we presume that it shifted off the observation area during less than an hour. Wind conditions were stable during the considered period, wind speed did not exceed 3.5 m/s. Figure 6: **(a,b)** aerial images, **(c)** vertical salinity, and **(d)** velocity profiles at the frontal zone of the Kodor plume formed behind the shoal on 1 September 2018. Colored dots indicate locations of vertical salinity and velocity measurements (P1, blue—the low-turbid zone of the plume; P2, yellow—the frontal zone; and P3, brown—the turbid part of the plume). The white arrow in panel **(b)** indicates location of the shoal, red arrows indicate location of the central deltaic branch of the Kodor River, and the green arrow indicates location of the Iskuria Cape. Figure 7: Aerial images of the southern part of the Kodor plume **(a,b)** before and **(c)** during interaction between the plume and the eddy acquired at **(a)** 12:52, **(b)** 14:42, **(c)** 15:34, and **(d)** 15:41 on 1 September, 2018. **(e)** surface salinity, **(f)** zonal (blue) and meridional (red) velocities measured during 15:57—16:01 and **(g)** vertical salinity and **(h)** velocity profiles measured at 16:02 within the eddy. Yellow dots in panels **(c)** and **(d)** indicate location of salinity and velocity measurements. The white arrow in panel **(c)** indicates location of the eddy, red arrows indicate location of the central deltaic branch of the Kodor River, and green arrows indicate location of the Iskuria Cape. In situ thermohaline and velocity measurements were performed within the eddy at 15:57-16:01 (Figure 7e,f). 
They included continuous measurements at a depth of 0.7-0.8 m for 4.5 minutes followed by vertical profiling from surface to the depth of 13 m. Note that the measurements were performed at the stable point, while the eddy was moving. As a result, the performed measurements registered salinity and velocity in different parts of the eddy while it was passing the point of measurements. The intense northward flow (55 cm/s) registered in the surface layer at the beginning of the measurements steadily dissipated to \\(<\\)10 cm/s during the first stage of the measurements (Figure 7f). The eastward velocity component was slightly positive during the first two minutes of the measurements (6 cm/s on average with the peak value of 16 cm/s) and then changed to slightly negative (\\(-\\)5 cm/s on average with the peak value of \\(-\\)11 cm/s). It was accompanied by significant variability of salinity that increased from 13.5 to 15.5 during the first 1.5 min of the measurements and then decreased to 13.5 (Figure 7e). The observed variability of velocity and salinity in the surface layer confirms northward propagation and anticyclonic rotation of this eddy observed at aerial video (Supplementary Materials). However, the movement and rotation velocities registered by in situ measurements were twice less than those reconstructed from the aerial video. This difference is caused by the fact that in situ measurements were performed not at the central part of the eddy, but at its periphery. The observed variability of salinity in the surface layer was caused by intrusion of saline water from the ambient sea to the plume induced by the rotation of the eddy (Figure 7d). Vertical profiles of salinity and velocity measured at 16:02, i.e., after the measurements in the surface layer, registered strong northwestward flow in the subjacent saline sea (Figure 7g,h). Its maximal velocity (15-25 cm/s) was observed immediately beneath the plume at depths of 3-5 m, then velocity decreased to 10-15 cm/s at depths of 8-9 m and to \\(<\\)5 cm/s at depths of 10-13 m. This northwestward flow (20-30 cm/s) was also registered along the Iskuria Cape at the previous day that confirms the presence of the northwestward jet behind the Iskuria Cape which is presumed to generate the observed eddy. Interaction between sub-mesoscale eddies and the Kodor plume was also observed by satellite imagery. The chains of small anticyclonic eddies (300-500 m in diameter) formed behind the Iskuria Cape and interacting with the Kodor plume were registered on 17 July 2018, 21 August 2019, and 26 August 2019 (Figure 8a). Positions, sizes, and shapes of four to five subsequent eddies within these chains indicate that these chains were periodically generated near the Iskuria Cape and propagated in the northwestward direction shortly before the periods of satellite observations. While tracks of the eddies were crossing the Kodor plume, the turbid plume water was twisted into the eddies, which made them visible at satellite imagery. After these eddies propagated off the plume the trapped turbid water remained connected with the plume that illustrated difference in trajectories and velocities of the eddies and the wind-driven far-field part of the plume (Figure 8a). Satellite images acquired during the periods of field measurements at the Kodor plume did not register interactions between the eddies and the plume due to episodic character of these features, i.e., eddies do not constantly form and propagate at the study area. 
Therefore, the satellite images presented in Figure 8 are not synchronous with the field surveys. However, sizes and anticyclonic rotation in the northwestward direction were similar for eddies detected at the Kodor plume by aerial and satellite remote sensing. As a result, we presume that we observe the same process and, therefore, can jointly analyze its spatial and temporal characteristics obtained from aerial and satellite measurements. Satellite imagery also observed eddies formed behind the Pitsunda Cape and interacting with the Bzyp plume on 30 July 2017 and 10 October 2019 (Figure 8b). However, in contrast to the eddies registered within the Kodor plume, these eddies were individual, i.e., did not form chains. Moreover, these eddies were much larger (2-4 km in diameter) and were rotating in cyclonic direction. Satellite images acquired during the periods of field measurement at the Bzyp plume also did not register interactions between eddies with the Bzyp plume. Satellite image acquired on 10 October 2019 detected packets of internal waves emerging from the rotating eddy and propagating within the Bzyp plume (Figure 9b). Aerial observations on 1 September 2018 also detected a packet of internal waves that emerged from the eddy and was propagating within the outer part of the plume towards the open sea (Figure 10a). Note that the aerial imagery of the Kodor plume (Figure 9a) and the satellite imagery of the Bzyp plume (Figure 9b) are not synchronized and show different river plumes at different dates. Aerial and satellite images acquired during the period of field measurements at the Bzyp plume did not register internal waves within the Bzyp plume. Therefore, in Figure 9 we show airborne images of internal waves at the Kodor plume and satellite images of internal waves at the Bzyp plume.

Figure 8: Sentinel-2 ocean color composites (**a**) from 17 July 2018, 21 August 2019, and 26 August 2019 illustrating interactions between eddies and the Kodor plume and (**b**) from 30 July 2017 and 10 October 2019 illustrating interactions between eddies and the Bzyp plume. Green arrows indicate location of the Iskuria Cape and red arrows indicate location of the Pitsunda Cape. Note that images at panels (**a**) and (**b**) are inconsistent, i.e., they show river plumes at different dates.

Despite a large difference in coverage and spatial resolution of the aerial and satellite imagery presented in Figure 9, they both distinctly demonstrate propagation of internal waves within the river plumes. Satellite remote sensing has wide spatial coverage and provides information about spatial characteristics of wave packets at different parts of the plumes (Figure 9b). Distances between the wave packets observed at Sentinel-2 satellite images varied from 30 to 150-200 m, while lengths of the wave packets were up to 5-6 km. Satellite images demonstrated that dozens of internal waves were generated within the plume around the rotating eddy. On the other hand, airborne remote sensing provided opportunity to detect individual internal waves with high spatial resolution and to register their velocities (Figure 9a). High-resolution aerial imagery detected that the distances between the individual waves within the wave packet in the Kodor plume were 2-4 m. The length of the wave packet front was approximately 200 m. The number of waves within the wave packet varied from 12 at its northern part to 3 at its southern periphery. Processing of high-resolution video records revealed that velocity of the wave packet was equal to 0.21 m/s.

Figure 9: Surface manifestations of high-frequency internal waves generated by the eddies within the (**a**) Kodor and (**b**) Bzyp plumes at (**a**) aerial images acquired on 1 September 2018 and (**b**) satellite images acquired on 10 October, 2019. The central picture at panel (**a**) is the zoomed fragment of the left picture at panel (**a**) indicated by the white dashed rectangle 1. The central and right pictures at panel (**b**) are the zoomed fragments of the left picture at panel (**b**) indicated by white dashed rectangles 2 and 3, respectively. Black arrows indicate surface manifestations of internal waves.

Figure 10: Aerial images of surface manifestations of low-frequency internal waves within the Kodor plume near the mouths of (**a**) the northern and (**b**) the central deltaic branches on 2 September 2018. The green arrow indicates location of the northern deltaic branch of the Kodor River and the red arrow indicates location of the central deltaic branch of the Kodor River. Black arrows indicate surface manifestations of internal waves.

Aerial remote sensing also detected multiple packets of low-frequency internal waves that propagated within the Kodor plume towards the coast on 2 September 2018 (Figure 10). These packets consisted of 5-15 waves that were stretched along the coast, albeit had complex shapes not related to the shapes of the plume front or the coastline. Distances between individual waves varied from 5 to 70 m in the observed wave packets. Frontal length of these packets varied from \(\sim\)100 m (Figure 10a) to 2-3 km (Figure 10b), while their speeds were 10-15 cm/s. Wind speed during this period was 2-3 m/s. Osadchiev [33] described a mechanism of generation of internal waves in small river plumes as a result of rapid deceleration of an inflowing river jet and formation of a hydraulic jump in vicinity of a river mouth. These internal waves propagate offshore and are regularly observed by satellite imagery in many coastal regions in the World [33; 90; 91]. Using aerial remote sensing we recorded generation and propagation of these internal waves from the mouth of the side-channel of the Bzyp River on 1 July 2019 (Figure 11a). The internal waves were generated at a distance of 40-50 m from the river mouth every 19 seconds on average, i.e., 29 individual waves were generated during a 9-min long video recording of this area. The distances between the waves decreased from 8-10 m near the river mouth to 1-2 m at the distance of 500 m from the river mouth. Wave velocities were equal to 0.27-0.31 m/s. Moderate (2-3 m/s) northern wind was registered during the considered period. Aerial observations of internal waves in the Bzyp plume described above were supported by in situ salinity and turbidity measurements performed from a flat-bottomed boat with shallow draft to minimize the boat-induced mixing of sea surface layer (Figure 11). Measurements included 15 surface-to-bottom profiles continuously performed from a free-drifting boat starting at the generation area of the internal waves at the distance of 10 m from the river mouth and finishing 90 m far from the starting point (Figure 11a). The obtained data revealed large difference in vertical salinity structure of the Bzyp plume inside and outside this generation area of internal waves.
The first half of the hydrological transect was located at the area of formation of the hydraulic jump as a result of abrupt deceleration of the inflowing river jet (Figure 11b). Similarly to the hydraulic jump observed and described by Osadchiev [33] at the inflowing jet of the Mzymta River, we registered anomalously deep penetration of low-saline water at the generation area of the internal waves in the Bzyp plume. Low-saline water (10-14) was observed from surface to the depth of 3-4 m along 0-5 m and 25-35 m of the transect. Vertical salinity structure within this part of the plume was unstable with multiple overturns (reverse salinity difference was up to 1 at vertical distance of 0.1 m) and large salinity gradients. Vertical salinity structure of the Bzyp plume between the areas of the hydraulic jumps, i.e., along the 5-25 m of the transect, showed relatively homogenous salinity (14.5-16) from surface to bottom, albeit it was much higher than within the areas of hydraulic jumps. Outside the generation area of the internal waves, i.e., along the 35-90 m of the transect, surface salinity was relatively homogenous (14.5-15.5) and vertical salinity structure was stable. Vertical salinity gradient outside the generation area of internal waves was two orders of magnitude less than the largest values registered in the hydraulic jumps. However, salinity measurements did not cover top 0.5 m of the surface layer, where presumably was located the salinity gradient. Vertical turbidity structure, however, did not show large difference inside and outside the generation area of the internal waves (Figure 11c). The turbid layer was observed from surface to the depth of 1-1.5 m along the first part of the transect and then its depth steadily decreased to 0.5 m. This feature shows that salinity and turbidity structure of a river plume can be significantly different in areas of very intense advection and turbulent mixing. Figure 11: **(a)** aerial image of surface manifestations of internal waves propagating within the Bzyp plume off the river mouth and location of the hydrological transect (black line) on 1 July 2019 **(a)**. Vertical **(b)** salinity and **(c)** turbidity profiles along the hydrological transect. ### Undulate Borders of the Kodor and Bzyp plumes Aerial remote sensing of the Kodor and Bzyp plumes showed undulate structure of long segments of their outer borders manifested by alternation of specific convex and concave segments. These segments are 2-10 m long and up to 2 m wide and hereafter are referred as \"lobes\" and \"clefts\" [52; 53]. Aerial images of the undulate fronts observed at the Kodor plume border on 1 September, 2018 and at the Bzyp plume border on 1 June 2019 are shown in Figure 12. This lobe-cleft structure was registered only at sharp and narrow frontal zones formed between the emerging plume, on the one hand, and the residual plume or the ambient sea, on the other hand. Lobes and clefts were absent at diffuse fronts, i.e., wide and low-gradient fronts that contour the outer parts of the plumes, which experience intense mixing with the ambient sea. In particular, these undulate fronts commonly extended from the river mouths and bounded the inflowing river jets, i.e., near-field parts of the plumes. These fronts were not observed in the far-field parts of the plumes and in the coastal surf zone during periods of active wave breaking due to intense mixing (Figure 12). 
We observed significant short-temporal variability of the undulate fronts induced by the following recurrent process (Figure 13). Once a lobe is formed, it starts to increase seaward. Ballooning of neighboring lobes results in their coalescence and the subsequent merging. At the same time the cleft between these lobes is steadily decreasing and transforms into a spot of saline sea (with area of 0.1-0.5 m\\({}^{2}\\)) isolated from the ambient sea, i.e., trapped by the merged lobes within the plume (Figure 13). The merged lobes and the trapped saline sea area finally dissipate, and then the process of formation of new lobes at this part of the plume front restarts. The continuous recurrent process of formation of lobes, their merging, and subsequent dissipationwas observed along the undulate fronts of the Kodor and Bzyp plumes. Residual time of an individual lobe, i.e., from its formation to dissipation, was 1-2 min. Figure 12: Aerial images (**a**) of undulate fronts at the border of the Kodor plume on 1 September, 2018 and (**b**) at the border of the Bzyp plume on 1 June, 2019. Central and right pictures at panel (**a**) are the zoomed fragments of the left and central pictures at panel (**a**), respectively, indicated by the white dashed rectangles 1 and 2, respectively. Central and right pictures at panel (**b**) are the zoomed fragments of the left and central pictures at panel (**b**), respectively, indicated by the white dashed rectangles 3 and 4, respectively. Black arrows indicate absence of undulate fronts at the surf zone. Figure 13: (**a**) aerial images and (**b**) reconstructed shapes of the border of the Kodor plume on 1 September 2018 illustrating merging of lobes and trapping of spots of saline sea. Numbers indicate time intervals in seconds from the beginning of observations. Due to convergence of surface currents at sharp plume fronts [92], foam and floating litter commonly accumulate at the undulate fronts of the plumes (Figures 13a and 14a). Using optical flow processing of aerial video records, we detected motion of foam and floating litter and reconstructed surface circulation along the undulate fronts of the Kodor and Bzyp plumes (Figure 14). The circulation structure within the lobes consists of pairs of cyclonic and anticyclonic vortices that form, balloon, merge, and dissipate with the lobes (black lines in Figure 14b). The trajectories of foam and floating litter revealed that cyclonic vortices are significantly more prominent and intense, as compared to anticyclonic vortices. Foam and floating litter are mainly accumulated within cyclonic eddies, i.e., in the right parts of the lobes if we look from the sea towards the plume (Figure 14a). Foam and floating litter are rotated by cyclonic eddies within the right parts of the lobes during the majority of time of aerial observations. Once a parcel of foam or floating litter is advected off a cyclonic eddy and enters an anticyclonic eddy in the left part of the lobe, it is transported to the outer part of the lobe and then is trapped by the cyclonic eddy in the neighboring (leftward) lobe (red lines in Figure 14b). As a result, these parcels are skipping leftward between the right parts of lobes. Therefore, foam and floating litter are steadily transported to the left along the plume border. 
The observed large intensity of cyclonic circulation within the lobes, as compared to anticyclonic circulation, is presumed to have the same background as the dominance of cyclonic spirals at satellite images of sea surface caused by differences between the rotary characteristics of cyclonic and anticyclonic eddies in the sea [93].

Figure 14: **(a)** aerial image of the undulate border of the Kodor plume on 1 September 2018 and **(b)** the scheme of the reconstructed circulation within the lobes (black lines) and the transport of foam and floating litter along the plume border (red lines). Black arrows in panel (**a**) indicate foam accumulated within cyclonic vortexes in the right parts of the lobes.

We presume that the undulate structure of the sharp plume borders is formed due to baroclinic instability between the plumes and the ambient sea. The pressure gradient force across the front is equal to \[g\frac{\Delta\rho}{\rho_{sa}}\frac{\partial h}{\partial x} \tag{4}\] where \(g\) is the gravity acceleration, \(\Delta\rho\) is the density difference between the plume and the ambient sea, \(\rho_{sa}\) is the density of the sea, \(h\) is the depth of the plume, and \(x\) is the cross-front direction. In situ measurements performed at the undulate fronts showed that surface salinity abruptly increased across these fronts (2-3 m wide) from 10-12 inside the Kodor plume to 17 outside the Kodor plume (Figure 15b) and from 8-10 inside the Bzyp plume to 16-17 outside the Bzyp plume. The depth of the Kodor plume at the narrow frontal zone was 2 m (Figure 15b), the depth of the Bzyp plume was 4 m. As a result, the values of pressure gradient across these frontal zones calculated from Equation (4) are equal to 0.05 and 0.1 m/s\({}^{2}\) for the Kodor and Bzyp plumes, respectively. This large pressure gradient observed across the plume fronts is the source of potential energy that induces formation of lobes and clefts as follows. Small perturbation of a sharp frontal zone and the subsequent formation of a local convex segment cause increase of local length of the front and, therefore, increase of the cross-front advection induced by the pressure gradient. It results in ballooning of the lobe till it coalesces and merges with the neighboring lobe. Merging of two lobes accompanied by trapping of a spot of saline sea water and its subsequent mixing with the plume water cause a reduction of local salinity anomaly and, therefore, a decrease of local pressure gradient. It hinders formation of a lobe at this segment of the plume, while new lobes are formed at the adjacent segments of the plume front. Therefore, baroclinic instability causes formation, merging, and dissipation of the observed lobe-cleft structures and influences mixing between the river plumes and the ambient sea. Aerial imagery detected the 3-4 m wide stripe of low-turbid water within the Kodor plume located at the distance of 10-20 m from the undulate border and stretched along this border (Figure 15a). We presume that this low-turbid stripe is formed as a result of continuous trapping of spots of saline sea water by merging lobes. Horner-Devine et al. (2018) assumed that the lobe-cleft structure is formed by subsurface vortexes that are propagating from the inner part of the plume towards its border with the ambient sea. However, aerial video records showed stable position and shape of this stripe that evidences absence of any subsurface vortexes described by Horner-Devine et al. (2018).
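The order of magnitude of the estimates quoted above can be reproduced with the following short sketch of Equation (4); the linear haline contraction coefficient used to convert the observed salinity contrasts into density differences, the mid-range salinities, and the assumed 2 m front width are illustrative assumptions.

```python
g       = 9.81     # gravity acceleration [m s^-2]
rho_sea = 1012.0   # assumed density of the ambient sea [kg m^-3]
beta    = 0.78     # assumed haline contraction, d(rho)/dS [kg m^-3 per salinity unit]

cases = {
    # assumed mid-range plume/sea salinities, observed plume depth h [m], assumed front width dx [m]
    "Kodor": dict(S_plume=11.0, S_sea=17.0, h=2.0, dx=2.0),
    "Bzyp":  dict(S_plume=9.0,  S_sea=16.5, h=4.0, dx=2.0),
}

for name, c in cases.items():
    d_rho = beta * (c["S_sea"] - c["S_plume"])       # density difference between plume and ambient sea
    # Equation (4): g * (d_rho / rho_sea) * dh/dx, with dh/dx approximated as h / dx across the front
    grad = g * d_rho / rho_sea * c["h"] / c["dx"]
    print(f"{name}: cross-front pressure gradient ~ {grad:.2f} m s^-2")
```

With these assumptions the sketch returns approximately 0.05 m/s\({}^{2}\) for the Kodor front and 0.1 m/s\({}^{2}\) for the Bzyp front, consistent with the values given above.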
Figure 15: (**a**) aerial image and (**b**) vertical salinity profiles at the undulate border of the Kodor plume on 1 September 2018. Colored dots indicate locations of vertical salinity measurements (P1, blue—the plume; P2, yellow—the ambient saline sea). Black arrows in panel (**a**) indicate a stripe of low-turbid water within the Kodor plume stretched along its border.

## 4 Discussion

In this study, we obtained several important results about the structure, short-temporal variability, and dynamics of small river plumes. First, we revealed the strongly inhomogeneous structure of small plumes manifested by multiple frontal zones between different parts of the plumes. These parts have different structures and dynamical characteristics and interact as individual water masses. Second, we reported fast motion of small plumes caused by interaction with coastal eddies. Third, we observed generation and propagation of different types of internal waves within small plumes. Fourth, we described formation of lobe-cleft structures at sharp borders of small plumes and reported intense lateral mixing across these fronts caused by their baroclinic instability. The results listed above are important for understanding the spreading and mixing of small plumes; they are addressed here for the first time, as previous related works were mainly limited by the low spatial and/or temporal resolution of in situ measurements and satellite imagery. Below we provide physical interpretation of these features observed at the Kodor and Bzyp plumes and discuss the importance of their study at other small plumes in the World Ocean. In general, river plumes are regarded as "smooth" water masses without internal fronts and sharp gradients. This approach is widely used in analytical and numerical modeling studies focused on river plumes, including the fundamental and highly cited papers [82; 83; 84; 85; 94; 95; 96]. Many relevant studies based on in situ and satellite data confirmed that this approach provides realistic results for buoyant plumes formed by large rivers, whose internal structure is indeed characterized by gradual changes of salinity and other characteristics. In this work, we present the results of aerial remote sensing of the Kodor and Bzyp plumes supported by in situ measurements that provide evidence of the strongly inhomogeneous internal structure of small plumes. This structure is manifested by complex internal frontal zones and sharp salinity and turbidity gradients within small plumes. These gradients and frontal zones strongly modify circulation within the plumes; in particular, they hinder cross-frontal advection within the plumes and separate them into semi-isolated, but interacting, structures. Therefore, identification and study of the processes that govern formation of frontal zones within small plumes are important for understanding the spreading and mixing of freshwater discharge in the sea and the related transport of river-borne suspended and dissolved material. The Kodor River flows into the Black Sea through multiple deltaic branches and forms several river plumes. These plumes are closely located; they interact as individual water masses and coalesce into the common Kodor plume. Interaction, collision, and coalescence of buoyant plumes formed by rivers whose estuaries are located in close proximity were addressed in several previous studies [86; 87; 88; 89; 97; 98].
Similar processes occur within plumes formed by freshwater discharge from multiple deltaic branches, as was observed for the Kodor plume. Moreover, the distances between deltaic branches within one deltaic system are generally smaller than the distances between the estuaries of neighboring rivers. As a result, interactions between neighboring plumes formed by different rivers generally occur only during high discharge periods [86], while similar interactions between plumes formed by different deltaic branches are a permanent or almost permanent process in many regions of the world. However, despite the large number of deltaic rivers inflowing to the World Ocean, we are aware of only one related study, which was focused on the interaction between the buoyant plumes formed by different deltaic branches of the Pearl River Delta [100]. The Kodor River has very large intra-day and synoptic variability of discharge rate due to the morphology and weather conditions of its drainage basin. This variability of discharge rate induces variability of the spatial extent of the Kodor plume and of the residence time of freshened water within the plume. As a result, the Kodor plume formed during high discharge can have different spatial and thermohaline characteristics from those formed during low discharge. In the case of an abrupt decrease of the river discharge rate, the relatively large and mixed residual plume (formed during the high discharge period) interacts with the small and freshened emergent plume (formed during the subsequent low discharge period). We report distinct frontal zones and differences in dynamics between the residual and the emergent parts of the Kodor plume. Several previous studies addressed the response of river plumes to variable discharge rates [101; 102; 103; 104; 105; 106; 107], but limited attention was paid to the interaction between parts of an individual river plume formed during different discharge conditions [108]. This feature can strongly affect spreading and mixing of freshwater discharge from small rivers in many regions of the world and should be considered in the related studies. Several studies addressed the interaction between coastal bathymetry and bottom-advected river plumes, which occupy the whole water column from surface to seafloor and, therefore, experience intense bottom friction [109; 110; 111]. In these numerical studies, river plumes were spreading over sea areas with idealized bathymetry, which was steadily sloping in the cross-shore direction and homogeneous in the alongshore direction. The influence of realistic bottom topography on surface-advected river plumes was described by Korotenko et al. [112]. Bottom-generated turbulent mixing induced by coastal circulation penetrates upward and reaches the surface layer over shallow zones; therefore, increased local mixing of river plumes occurs at these zones. We presume that a similar mechanism induced intensified mixing of the Kodor plume over the shoal revealed by aerial imagery and in situ measurements. Moreover, we observed that the intense flow of the Kodor plume over this small shoal results in the formation of a large area of elevated salinity within the plume, which is bounded by a distinct frontal zone. We are not aware of any work describing this effect at river plumes; however, it can be typical of many small plumes with small vertical scales flowing over bathymetric features. In this study, we address several important dynamical features of small river plumes.
Aerial remote sensing revealed a quick motion of the Kodor plume border (\(\sim\)0.5-1 m/s) entrained by the rotating coastal eddy. Such an extremely rapid response of a river plume to coastal sea circulation has not been reported before, to the best of our knowledge. Previous studies showed that the general spreading patterns of small plumes are governed by wind forcing, while the impact of ambient circulation was regarded as negligible [113; 114; 115]. We demonstrate that energetic features of coastal circulation, e.g., eddies, can induce high-velocity motion of plume fronts and, therefore, influence the dynamics of a small plume, albeit locally and during short-term periods. The rotating eddy generated high-frequency internal waves that were propagating within the Kodor plume and dissipated at its border with the ambient sea. Aerial remote sensing also observed multiple long internal waves propagating within the Kodor plume towards the coast, as well as the generation of high-frequency internal waves near the mouth of the Bzyp River and their propagation within the Bzyp plume towards the open sea. Internal waves are common features of river plumes in non-tidal seas, and their surface manifestations observed by satellite imagery were reported in several previous studies [116; 117]. These internal waves can significantly affect mixing of small plumes with the subjacent saline sea [33]. In this study, we demonstrate the efficiency of aerial remote sensing in observations of surface manifestations of internal waves, and the ability of aerial remote sensing (in contrast to satellite observations) to measure their spatial and dynamical characteristics and to identify the mechanisms of their generation. Finally, in this study, we address the undulate structure of the sharp borders of the Kodor and Bzyp plumes, which was previously observed and reported at other small plumes [52; 53; 118; 119]. Horner-Devine and Chickadel [53] associated the formation of the lobe-cleft structures observed at the Merrimack plume with subsurface vortexes that were propagating from the inner part of the plume towards its border with the saline sea. Based on processing of aerial video records, we reconstructed the surface circulation at the undulate fronts of the Kodor and Bzyp plumes and detected similar vortexes within the lobes. However, we observed an absence of vortexes outside the frontal zones, i.e., no vortexes were propagating from the inner parts of the plumes towards their borders. On the contrary, we observed a recurrent process of formation, merging, and dissipation of lobes, which has not been described before. Based on these results, we suggest an alternative mechanism of formation of the undulate fronts caused by baroclinic instability between the plume and the ambient sea and ballooning of local convex segments of the frontal zone in response to its small perturbations. This mechanism is in good agreement with the reconstructed vortex circulation within the lobes and explains the absence of vortexes in the inner parts of the plumes. We reveal intense transport of saline sea water across the undulate plume border as a result of merging of lobes and mechanical trapping of spots of saline sea inside the plume. It can be an important mechanism of mixing between the plume and the saline sea and should be considered together with shear-induced mixing of the plume and the subjacent sea. Satellite imagery reveals that undulate frontal zones, and therefore the related mixing mechanism, are typical of many small plumes in the World Ocean.
Therefore, study of this mechanism is important in the context of the transformation and dissipation of freshwater discharge in the sea.

## 5 Conclusions

In this work, we focused on small buoyant plumes formed by the Kodor and Bzyp rivers located in the northeastern part of the Black Sea. We used quadcopters equipped with video cameras to perform aerial remote sensing of these river plumes, which was accompanied by synchronous in situ measurements in the sea. Using an optical flow approach, we reconstructed surface velocity fields within these plumes from the obtained aerial video records. Based on aerial imagery and video records, the reconstructed surface currents, as well as in situ salinity, turbidity, and velocity measurements, we obtained new insights into the spatial structure, short-temporal variability, and dynamical features of small river plumes, which are not typical for plumes formed by large rivers. Based on the obtained aerial and in situ data, we address several different issues, including the methodology and value of aerial observations of small river plumes, the differences between small and large plumes, the influence of multiple freshwater sources on the structure of a small plume, the influence of bathymetry features on the structure of a small plume, the interaction between small plumes and coastal circulation, the presence of internal waves in river plumes, and the presence of small-scale instabilities along the plume front boundary. The main results obtained in this study are the following. We describe the strongly inhomogeneous structure of small plumes, as compared to large plumes. We suggest a new mechanism of mixing of a small plume with the ambient sea as a result of baroclinic instability at its outer boundary. We describe internal waves formed within the near- and far-field parts of small plumes, which can strongly influence their mixing with the ambient sea. These results are important for understanding the fate of freshwater discharge from small rivers and the related transport of suspended and dissolved river-borne constituents in many coastal sea areas of the World Ocean. Usage of quadcopters provides the ability to perform low-cost aerial remote sensing of coastal sea areas and to continuously observe surface manifestations of many coastal processes. In this study, we demonstrate its efficiency in observations of small river plumes characterized by high color contrast with the ambient sea, energetic motion, and high short-temporal variability. Aerial imagery can be used for visual detection and tracking of many other processes at small spatial (from meters to kilometers) and temporal (from seconds to hours) scales, which are visible neither from shipboard nor in satellite imagery. Spatial scales and motion speeds of the observed processes can be reconstructed from aerial imagery and video records (Supplementary Materials). Therefore, aerial drones can provide quantitative measurements of distances and velocities at the sea surface. Finally, aerial remote sensing can be very useful for operational organization of in situ measurements during field surveys, in particular, for the selection of places for water sampling and hydrological measurements according to the real-time position of the observed sea surface processes.
As a result, future studies based on imagery and video records of ocean surface acquired from aerial drones (considering certain important limitations of their usage) and supported by in situ measurements hold promise to significantly improve understanding of various upper ocean features and dynamics. The aerial images and video records are publicly available at [https://doi.org/10.5281/zenodo.3901896](https://doi.org/10.5281/zenodo.3901896). The Sentinel-2 Level-1C products were downloaded from the Copernicus Open Access Hub [https://schub.copernicus.eu/](https://schub.copernicus.eu/). Conceptualization, A.O.; measurements, A.O., A.B., R.S., and R.Z.; formal analysis, A.O.; investigation, A.O., A.B., R.S., R.Z., and R.D.; writing, A.O., A.B., R.S., R.Z., and R.D. All authors have read and agreed to the published version of the manuscript. **Funding:** This research was funded by the Ministry of Science and Higher Education of Russia, theme 0149-2019-0003 (collecting and processing of in situ data), the Russian Science Foundation, research project 18-17-00156 (collecting and processing of aerial imagery, study of spreading of river plumes) and the Russian Ministry of Science and Higher Education, research project 14.W03.31.0006 (study of submesoscale dynamics of river plumes). **Acknowledgments:** The authors are grateful to the editor Melissa Zhu and five anonymous reviewers for their comments and recommendations that served to improve the article. **Conflicts of Interest:** The authors declare no conflict of interest. ## References * (1) Klemas, V.V. Airborne remote sensing of coastal features and processes: An overview. _J. Coast. Res._**2013**, _29_, 239-255. [CrossRef] * (2) Holman, R.; Haller, M.C. Remote sensing of the nearshore. _Annu. Rev. Mar. Sci._**2013**, \\(5\\), 95-113. [CrossRef] [PubMed] * (3) Colomina, I.; Molina, P. Unmanned aerial systems for photogrammetry and remote sensing: A review. _ISPRS J. Photogramm._**2014**, _92_, 79-97. [CrossRef] * (4) Klemas, V.V. Coastal and environmental remote sensing from unmanned aerial vehicles: An overview. _J. Coast. Res._**2015**, _31_, 1260-1267. [CrossRef] * (5) Floreano, D.; Wood, R.J. Science, technology and the future of small autonomous drones. _Nature_**2015**, _521_, 460-466. [CrossRef] * (6) El Mahrad, B.; Newton, A.; Icely, J.D.; Kacimi, I.; Abalansa, S.; Snoussi, M. Contribution of remote sensing technologies to a holistic coastal and marine environmental management framework: A review. _Remote Sens._**2020**, _12_, 2313. [CrossRef] * (7) Casella, E.; Rovere, A.; Pedroncini, A.; Stark, C.P.; Casella, M.; Ferrari, M.; Firpo, M. Drones as tools for monitoring beach topography changes in the Ligurian Sea (NW Mediterranean). _Geo-Mar. Lett._**2016**, _36_, 151-163. [CrossRef] * (8) Topouzelis, K.; Papakonstantinou, A.; Doukari, M. Coastline change detection using unmanned aerial vehicles and image processing techniques. _Fresenius Environ. Bull._**2017**, _26_, 5564-5571. * (9) Holman, R.A.; Holland, K.T.; Lalejini, D.M.; Spansel, S.D. Surf zone characterization from Unmanned Aerial Vehicle imagery. _Ocean Dyn._**2011**, _61_, 1927-1935. [CrossRef] * (10) Turner, I.L.; Harley, M.D.; Drummond, C.D. UAVs for coastal surveying. _Coast. Eng._**2016**, _114_, 19-24. [CrossRef] * (11) Papakonstantinou, A.; Topouzelis, K.; Pavloogeratos, G. Coastline zones identification and 3D coastal mapping using UAV spatial data. _ISPRS Int. J. Geo-Inf._**2016**, \\(5\\), 75. [CrossRef] * (12) Holman, R.A.; Brodie, K.L.; Spore, N.J. 
Surf zone characterization using a small quadcopter: Technical issues and procedures. _IEEE Trans. Geosci. Remote Sens._**2017**, \\(9\\), 2017-2027. [CrossRef] * (13) Ventura, D.; Bruno, M.; Lasinio, G.J.; Belluscio, A.; Ardizzone, G. A low-cost drone based application for identifying and mapping of coastal fish nursery grounds. _Estuar. Coast. Shelf Sci._**2016**, _171_, 85-98. [CrossRef] * (14) Hodgson, A.; Kelly, N.; Peel, D. Unmanned aerial vehicles (UAVs) for surveying Marine Fauna: A dugong case study. _PLoS ONE_**2013**, \\(8\\), e79556. [CrossRef] * (15) Burns, J.; Delparte, D.; Gates, R. Takabayashi Integrating structure-from-motion photogrammetry with geospatial software as a novel technique for quantifying 3D ecological characteristics of coral reefs. _Peerf_**2015**, \\(3\\), e1077. [CrossRef] * (16) Casella, E.; Collin, A.; Harris, D.; Ferse, S.; Bejarano, S.; Parravicini, V.; Hench, J.L.; Rovere, A. Mapping coral reefs using consumer-grade drones and structure from motion photogrammetry techniques. _Coral Reefs_**2017**, _36_, 269-275. [CrossRef] * (17) Fiori, L.; Doshi, A.; Martinez, E.; Orams, M.B.; Bollard-Breen, B. The use of unmanned aerial systems in marine mammal research. _Remote Sens._**2017**, \\(9\\), 543. [CrossRef] * (18) Murfitt, S.L.; Allan, B.M.; Bellgrove, A.; Rattray, A.; Young, M.A.; Ierodiaconou, D. Applications of unmanned aerial vehicles in intertidal reef monitoring. _Sci. Rep._**2017**, \\(7\\), 10259. [CrossRef] * (19) Torres, L.G.; Nieukirk, S.L.; Lemos, L.; Chandler, T.E. Drone up! Quantifying whale behavior from a new perspective improves observational capacity. _Front. Mar. Sci._**2018**, \\(5\\), 319. [CrossRef]* Papakonstantinou et al. (2020) Papakonstantinou, A.; Stamati, C.; Topouzelis, K. Comparison of true-color and multispectral unmanned aerial systems imagery for marine habitat mapping using object-based image analysis. _Remote Sens._**2020**, _12_, 554. [CrossRef] * Provost et al. (2020) Provost, E.J.; Butcher, P.A.; Coleman, M.A.; Kelaher, B.P. Assessing the viability of small aerial drones to quantify recreational fishers. _Fish Manag. Ecol._**2020**, 1-7. [CrossRef] * Fallati et al. (2020) Fallati, L.; Saponari, L.; Savini, A.; Marchese, F.; Corselli, C.; Galli, P. Multi-Temporal UAV Data and object-based image analysis (OBIA) for estimation of substrate changes in a post-bleaching scenario on a maldivian reef. _Remote Sens._**2020**, _12_, 2093. [CrossRef] * Hakvoort et al. (2002) Hakvoort, H.; de Haan, J.; Jordans, R.; Vos, R.; Peters, S.; Rijkeboer, M. Towards airborne remote sensing of water quality in The Netherlands--validation and error analysis. _ISPRS J. Photogramm. Remote Sens._**2002**, _57_, 171-183. [CrossRef] * Klemas (2010) Klemas, V. Tracking oil slicks and predicting their trajectories using remote sensors and models: Case studies of the Sea Princess and Deepwater Horizon oil spills. _J. Coast. Res._**2010**, _26_, 789-797. [CrossRef] * Svejkovsky et al. (2010) Svejkovsky, J.; Nezlin, N.P.; Mustain, N.M.; Kum, J.B. Tracking stormwater discharge plumes and water quality of the Tijuana River with multispectral aerial imagery. _Estur. Coast. Shelf Sci._**2010**, _87_, 387-398. [CrossRef] * Androulidakis et al. (2018) Androulidakis, Y.; Kourafalou, V.; Ozgokmen, T.; Garcia-Pineda, O.; Lund, B.; Le Henaff, M.; Hu, C.; Haus, B.K.; Novelli, G.; Guigand, C.; et al. Influence of river-induced fronts on hydrocarbon transport: A multiplatform observational study. _J. Geophys. Res. Oceans_**2018**, _123_, 3259-3285. 
[CrossRef] * Garaba and Dierssen (2018) Garaba, S.P.; Dierssen, H.M. An airborne remote sensing case study of synthetic hydrocarbon detection using short wave infrared absorption features identified from marine-harvested macro-and microplastics. _Remote Sens. Environ._**2018**, _205_, 224-235. [CrossRef] * Fallati et al. (2019) Fallati, L.; Polidori, A.; Salvatore, C.; Saponari, L.; Savini, A.; Galli, P. Anthropogenic Marine Debris assessment with Unmanned Aerial Vehicle imagery and deep learning: A case study along the beaches of the Republic of Maldives. _Sci. Total Environ._**2019**, _693_, 133581. [CrossRef] * Savelyev et al. (2018) Savelyev, I.; Miller, W.D.; Sletten, M.; Smith, G.B.; Savidge, D.K.; Frick, G.; Menk, S.; Moore, T.; De Paolo, T.; Terrill, E.J.; et al. Airborne remote sensing of the upper ocean turbulence during CASPER-East. _Remote Sens._**2018**, _10_, 1224. [CrossRef] * Stresser et al. (2017) Stresser, M.; Carrasco, R.; Horstmann, J. Video-based estimation of surface currents using a low-cost quadcopter. _IEEE Geosci. Remote Sens. Lett._**2017**, _14_, 2027-2031. [CrossRef] * Jung et al. (2019) Jung, D.; Lee, J.S.; Baek, J.Y.; Nam, J.; Jo, Y.H.; Song, K.M.; Cheong, Y.I. High temporal and spatial resolutions of sea surface current from low-altitude remote sensing. _J. Coast. Res._**2019**, _90_, 282-288. [CrossRef] * Ouillon et al. (1997) Ouillon, S.; Forget, P.; Froidefond, J.M.; Naudin, J.J. Estimating suspended matter concentrations from SPOT data and from field measurements in the Rhone river plume. _Mar. Technol. Soc. J._**1997**, _31_, 15. * Osadchiev (2018) Osadchiev, A.A. Small mountainous rivers generate high-frequency internal waves in coastal ocean. _Sci. Rep._**2018**, \\(8\\), 16609. [CrossRef] [PubMed] * Devlin et al. (2012) Devlin, M.J.; McKinna, L.W.; Alvarez-Romero, J.G.; Petus, C.; Abott, B.; Harkness, P.; Brodie, J. Mapping the pollutants in surface riverine flood plume waters in the Great Barrier Reef, Australia. _Mar. Poll. Bull._**2012**, _65_, 224-235. [CrossRef] [PubMed] * Brando et al. (2015) Brando, V.E.; Braga, F.; Zaggia, L.; Giardino, C.; Bresciani, M.; Matta, E.; Bellafiore, D.; Ferrarin, C.; Maicu, F.; Benetazzo, A.; et al. High-resolution satellite turbidity and sea surface temperature observations of river plume interactions during a significant flood event. _Ocean Sci._**2015**, _11_, 909. [CrossRef] * Nezlin and DiGiacomo (2003) Nezlin, N.P.; DiGiacomo, P.M. Satellite ocean color observations of stormwater runoff plumes along the San Pedro Shelf (southern California) during 1997 to 2003. _Continent. Shelf Res._**2005**, _25_, 1692-1711. [CrossRef] * Osadchiev and Sedakov (2019) Osadchiev, A.A.; Sedakov, R.O. Spreading dynamics of small river plumes off the northeastern coast of the Black Sea observed by Landsat 8 and Sentinel-2. _Remote Sens. Environ._**2019**, _221_, 522-533. [CrossRef] * Nezlin et al. (2005) Nezlin, N.P.; DiGiacomo, P.M.; Stein, E.D.; Ackerman, D. Stormwater runoff plumes observed by SeaWiFS radiometer in the Southern California Bight. _Remote Sens. Environ._**2005**, _98_, 494-510. [CrossRef] * Constantin et al. (2016) Constantin, S.; Doxaran, D.; Constantinescu, S. Estimation of water turbidity and analysis of its spatio-temporal variability in the Danube River plume (Black Sea) using MODIS satellite data. _Cont. Shelf Res._**2016**, _112_, 14-30. [CrossRef]* (40) Gangloff, A.; Verney, R.; Doxaran, D.; Ody, A.; Estournel, C. 
Investigating Rhone River plume (Gulf of Lions, France) dynamics using metrics analysis from the MERIS 300m Ocean Color archive (2002-2012). _Cont. Shelf Res._**2017**, _144_, 98-111. [CrossRef] * (41) Warrick, J.A.; Mertes, L.A.; Washburn, L.; Siegel, D.A. A conceptual model for river water and sediment dispersal in the Santa Barbara Channel, California. _Cont. Shelf Res._**2004**, _24_, 2029-2043. [CrossRef] * (42) Lihan, T.; Saitoh, S.I.; Iida, T.; Hirawake, T.; Iida, K. Satellite-measured temporal and spatial variability of the Tokachi River plume. _Estuar. Coast. Shelf Sci._**2008**, _78_, 237-249. [CrossRef] * (43) Jiang, L.; Yan, X.H.; Klemas, V. Remote sensing for the identification of coastal plumes: Case studies of Delaware Bay. _Int. J. Remote Sens._**2009**, _30_, 2033-2048. [CrossRef] * (44) Grodsky, S.A.; Reverdin, G.; Carton, J.A.; Coles, V.J. Year-to-year salinity changes in the Amazon plume: Contrasting 2011 and 2012 Aquarius/SACD and SMOS satellite data. _Remote Sens. Environ._**2014**, _140_, 14-22. [CrossRef] * (45) Reul, N.; Quilfen, Y.; Chapron, B.; Fournier, S.; Kudryavtsev, V.; Sabia, R. Multisensor observations of the Amazon-Orinoco river plume interactions with hurricanes. _J. Geophys. Res. Oceans_**2014**, _119_, 8271-8295. [CrossRef] * (46) Korosov, A.; Counillon, F.; Johannessen, J.A. Monitoring the spreading of the A mazon freshwater plume by MODIS, SMOS, A quarius, and TOPAZ. _J. Geophys. Res. Oceans_**2015**, _120_, 268-283. [CrossRef] * (47) Hessner, K.; Rubino, A.; Brandt, P.; Alpers, W. The Rhine outflow plume studied by the analysis of synthetic aperture radar data and numerical simulations. _J. Phys. Oceangr._**2001**, _31_, 3030-3044. [CrossRef] * (48) DiGiacomo, P.M.; Washburn, L.; Holt, B.; Jones, B.H. Coastal pollution hazards in southern California observed by SAR imagery: Stormwater plumes, wastewater plumes, and natural hydrocarbon seeps. _Mar. Poll. Bull._**2004**, _49_, 1013-1024. [CrossRef] * (49) Zheng, Q.; Clemente-Colon, P.; Yan, X.H.; Liu, W.T.; Huang, N.E. Satellite synthetic aperture radar detection of Delaware Bay plumes: Jet-like feature analysis. _J. Geophys. Res. Oceans_**2004**, _109_, C03031. [CrossRef] * (50) Perez, T.; Wesson, J.; Burrage, D. Airborne remote sensing of the Rio de la Plata plume using STARRS. _Sea Technol._**2006**, _47_, 31-34. * (51) Burrage, D.; Wesson, J.; Martinez, C.; Perez, T.; Moller, O., Jr.; Piola, A. Patos Lagoon outflow within the Rio de la Plata plume using an airborne salinity mapper: Observing an embedded plume. _Cont. Shelf Res._**2008**, _28_, 1625-1638. [CrossRef] * (52) Horner-Devine, A.; Chickadel, C.C.; MacDonald, D. Coherent structures and mixing at a river plume front. In _Coherent Flow Structures in Geophysical Flows at the Earth's Surface_; Venditti, J., Best, J.L., Church, M., Hardy, R.J., Eds.; Wiley: Chichester, UK, 2013; pp. 359-369. [CrossRef] * (53) Horner-Devine, A.R.; Chickadel, C.C. Lobe-cleft instability in the buoyant gravity current generated by estuarine outflow. _Geophys. Res. Lett._**2017**, _44_, 5001-5007. [CrossRef] * (54) Milliman, J.D.; Syvitski, J.P.M. Geomorphic-tectonic control of sediment discharge to the ocean: The importance of small mountainous rivers. _J. Geol._**1992**, _100_, 525-544. [CrossRef] * (55) Milliman, J.D.; Farnsworth, K.L.; Albertin, C.S. Flux and fate of fluvial sediments leaving large islands in the East Indies. _J. Sea Res._**1999**, _41_, 97-107. [CrossRef] * (56) Milliman, J.D.; Lin, S.W.; Kao, S.J.; Liu, J.P.; Liu, C.S.; Chiu, J.K.; Lin, Y.C. 
Short-term changes in seafloor character due to flood-derived hyperpycal discharge: Typhoon Mindulle, Taiwan, July 2004. _Geology_**2007**, _35_, 779-782. [CrossRef] * (57) Osadchiev, A.A.; Zavialov, P.O. Structure and dynamics of plumes generated by small rivers. In _Estuaries and Coastal Zones--Dynamics and Response to Environmental Changes_; Pan, J., Ed.; IntechOpen: London, UK, 2019. [CrossRef] * (58) Korotkina, O. A.; Zavialov, P.O.; Osadchiev, A. A. Submesoscale variability of the current and wind fields in the coastal region of Sochi. _Oceanology_**2011**, _51_, 745-754. [CrossRef] * (59) Korotkina, O. A.; Zavialov, P.O.; Osadchiev, A. A. Synoptic variability of currents in the coastal waters of Sochi. _Oceanology_**2014**, _54_, 545-556. [CrossRef] * (60) Xia, M.; Xie, L.; Pietrafesa, L.J. Winds and the orientation of a coastal plane estuary plume. _Geophys. Res. Lett._**2010**, _37_, L19601. [CrossRef] * (61) Xia, M.; Xie, L.; Pietrafesa, L.J.; Whitney, M.M. The ideal response of a Gulf of Mexico estuary plume to wind forcing: Its connection with salt flux and a Lagrangian view. _J. Geophys. Res. Oceans_**2011**, _116_, C8. [CrossRef]* (62) Zavialov, P.O.; Makkaveev, P.N.; Konovalov, B.V.; Osadchiev, A.A.; Khlebopashev, P.V.; Pelevin, V.V.; Grabovskiy, A.B.; Lzhitskiy, A.S.; Goncharenko, I.V.; Soloviev, D.M.; et al. Hydrophysical and hydrochemical characteristics of the sea areas adjacent to the estuaries of small rivers if the Russian coast of the Black Sea. _Oceanology_**2014**, _54_, 265-280. [CrossRef] * (63) Osadchiev, A.A. A method for quantifying freshwater discharge rates from satellite observations and Lagrangian numerical modeling of river plumes. _Environ. Res. Lett._**2015**, _10_, 085009. [CrossRef] * (64) Osadchiev, A.A. Estimation of river discharge based on remote sensing of a river plume. In Proceedings of the SPIE Remote Sensing, Toulouse, France, 14 October 2015. [CrossRef] * (65) Jaoshvili, S. _The rivers of the Black Sea_; Chomeriki, I., Gigneishvili, G., Kordzadze, A., Eds.; Technical Report No. 71; European Environmental Agency: Copenhagen, Denmark, 2002. * (66) Korotaev, G.; Oguz, T.; Nikiforov, A.; Koblinsky, C. Seasonal, interannual, and mesoscale variability of the Black Sea upper layer circulation derived from altimeter data. _J. Geophys. Res._**2003**, _108_, 3122. [CrossRef] * (67) Ivanov, V.A.; Belokopytov, V.N. _Oceanography of the Black Sea_; ECOSY-Gidrofizika: Sevastopol, Ukraine, 2013. * (68) Ginzburg, A.I.; Kostianov, A.G.; Krivosheya, V.G.; Nezlin, N.P.; Soloviev, D.M.; Stanichny, S.V.; Yakubenko, V.G. Mesoscale eddies and related processes in the northeastern Black Sea. _J. Mar. Syst._**2002**, _32_, 71-90. [CrossRef] * (69) Zatsepin, A.G.; Ginzburg, A.I.; Kostianov, A.G.; Kremenetskiy, V.V.; Krivosheya, V.G.; Poulain, P.-M.; Stanichny, S.V. Observation of Black Sea mesoscale eddies and associated horizontal mixing. _J. Geophys. Res._**2003**, _108_, 1-27. [CrossRef] * (70) Kubryakov, A.A.; Stanichny, S.V. Seasonal and interannual variability of the Black Sea eddies and its dependence on characteristics of the large-scale circulation. _Deep Sea Res._**2015**, _97_, 80-91. [CrossRef] * (71) Medvedev, I.P.; Rabinovich, A.B.; Kulikov, E.A. Tides in three enclosed basins: The Baltic, Black, and Caspian seas. _Front. Mar. Sci._**2016**, \\(3\\), 46. [CrossRef] * (72) Medvedev, I.P. Tides in the Black Sea: Observations and numerical modelling. _Pure Appl. Geophys._**2018**, _175_, 1951-1969. [CrossRef] * (73) Podymov, O.I.; Zatsepin, A.G. 
Seasonal anomalies of water salinity in the Gelendzhik region of the Black Sea according to shipborne monitoring data. _Oceanology_**2016**, _56_, 342-354. [CrossRef] * (74) Doukari, M.; Batsaris, M.; Papakonstantinou, A.; Topouzelis, K. A protocol for aerial survey in coastal areas using UAS. _Remote Sens._**2019**, _11_, 1913. [CrossRef] * (75) Zavialov, P.O.; Lzhitskiy, A.S.; Osadchiev, A.A.; Pelevin, V.V.; Grabovskiy, A.B. The structure of thermohaline and bio-optical fields in the surface layer of the Kara Sea in September 2011. _Oceanology_**2015**, _55_, 461-471. [CrossRef] * (76) Baker, S.; Scharstein, D.; Lewis, J.; Roth, S.; Black, M.; Szeliski, R. A database and evaluation methodology for optical flow. _Int. J. Comp. Vis._**2011**, _92_, 1-31. [CrossRef] * (77) Fortun, D.; Bouthemy, P.; Kervrann, C. Optical flow modeling and computation: A survey. _Comput. Vis. Image Underst._**2015**, _134_, 1-21. [CrossRef] * (78) Farneback, G. Two-frame motion estimation based on polynomial expansion. In Proceedings of the 13th Scandinavian Conference on Image Analysis, Halmstad, Sweden, 29 June-2 July 2003; Bigun, J., Gustavsson, T., Eds.; Springer: Berlin/Heidelberg, Germany. [CrossRef] * (79) O'Donnell, J.; Ackleson, S.G.; Levine, E.R. On the spatial scales of a river plume. _J. Geophys. Res. Oceans_**2008**, _113_, C4. [CrossRef] * (80) Horner-Devine, A.R.; Hetland, R.D.; MacDonald, D.G. Mixing and transport in coastal river plumes. _Ann. Rev. Mar. Sci._**2015**, _47_, 569-594. [CrossRef] * (81) Zavialov, P.O.; Pelevin, V.V.; Belyaev, N.A.; Izhitskiy, A.S.; Konovalov, B.V.; Krementskiy, V.V.; Goncharenko, I.V.; Osadchiev, A.A.; Soloviev, D.M.; Garcia, C.A.E.; et al. High resolution LiDAR measurements reveal fine internal structure and variability of sediment-carrying coastal plume. _Estur. Coast. Shelf Sci._**2018**, _205_, 40-45. [CrossRef] * (82) Yankovsky, A.E.; Chapman, D.C. A simple theory for the fate of buoyant coastal discharges. _J. Phys. Oceanogr._**1997**, _27_, 1386-1401. [CrossRef] * (83) Fong, D.A.; Geyer, W.R. The alongshore transport of freshwater in a surface-trapped river plume. _J. Phys. Oceanogr._**2002**, _32_, 957-972. [CrossRef] * (84) Whitney, M.M.; Garvine, R.W. Wind influence on a coastal buoyant outflow. _J. Geophys. Res._**2005**, _110_, C03014. [CrossRef]* Choi and Wilkin (2007) Choi, B.-J.; Wilkin, J.L. The effect of wind on the dispersal of the Hudson River plume. _J. Phys. Oceanogr._**2007**, _37_, 1878-1897. [CrossRef] * Warrick and Farnsworth (2017) Warrick, J.A.; Farnsworth, K.L. Coastal river plumes: Collisions and coalescence. _Prog. Oceanogr._**2017**, _151_, 245-260. [CrossRef] * Osadchiev and Korshenko (2017) Osadchiev, A.A.; Korshenko, E.A. Small river plumes off the north-eastern coast of the Black Sea under average climatic and flooding discharge. _Ocean Sci._**2017**, _13_, 465-482. [CrossRef] * Osadchiev and Sedakov (2019) Osadchiev, A.A.; Sedakov, R.O. Reconstruction of ocean surface currents using near simultaneous satellite imagery. In Proceedings of the International Geosciences and Remote Sensing Symposium, Yokohama, Japan, 28 July-2 August 2019; IEEE: New York, NY, USA. [CrossRef] * Alexeevsky et al. (2016) Alexeevsky, N.I.; Magritsky, D.V.; Koltermann, K.P.; Krylenko, I.N.; Toropov, P.A. Causes and systematics of inundations of the Krasnadr territory on the Russian Black Sea coast. _Nat. Hazard. Earth Syst._**2016**, _16_, 1289-1308. [CrossRef] * Marchevsky et al. (2019) Marchevsky, I.K.; Osadchiev, A.A.; Popov, A.Y. 
Numerical modelling of high-frequency internal waves generated by river discharge in coastal ocean. In Proceedings of the 5th International Conference on Geographical Information Systems Theory, Applications and Management, Heraklion, Crete, Greece, 3-5 May 2019; Scitepress: Setubal, Portugal. [CrossRef] * McPherson et al. (2020) McPherson, R.A.; Stevens, C.L.; O'Callaghan, J.M.; Lucas, A.J.; Nash, J.D. The role of turbulence and internal waves in the structure and evolution of a near-field river plume. _Ocean Sci._**2020**, _16_, 799-815. [CrossRef] * O'Donnell et al. (1998) O'Donnell, J.; Marmorio, G.O.; Trump, C.L. Convergence and downwelling at a river plume front. _J. Phys. Oceanogr._**1998**, _28_, 1481-1495. [CrossRef] * Zhurbas et al. (2019) Zhurbas, V.; Vali, G.; Kuzmina, N. Rotation of floating particles in submesoscale cyclonic and anticyclonic eddies: A model study for the southeastern Baltic Sea. _Ocean Sci._**2019**, _15_, 1691-1705. [CrossRef] * Garvine (1987) Garvine, R.W. Estuary plumes and fronts in shelf waters: A layer model. _J. Phys. Oceanogr._**1987**, _17_, 1877-1896. [CrossRef] * O'Donnell (1990) O'Donnell, J. The formation and fate of a river plume: A numerical model. _J. Phys. Oceanogr._**1990**, _20_, 551-569. [CrossRef] * Hetland (2005) Hetland, R.D. Relating river plume structure to vertical mixing. _J. of Phys. Oceanogr._**2005**, _35_, 1667-1688. [CrossRef] * Saldias et al. (2012) Saldias, G.S.; Sobarzo, M.; Largier, J.; Moffat, C.; Letelier, R. Seasonal variability of turbid river plumes off central Chile based on high-resolution MODIS imagery. _Remote Sens. Env._**2012**, _123_, 220-233. [CrossRef] * Saldias et al. (2016) Saldias, G.S.; Largier, J.L.; Mendes, R.; Perez-Santos, I.; Vargas, C.A.; Sobarzo, M. Satellite-measured interannual variability of turbid river plumes off central-southern Chile: Spatial patterns and the influence of climate variability. _Progr. Oceanogr._**2016**, _146_, 212-222. [CrossRef] * Osadchiev et al. (2017) Osadchiev, A.A.; Izhitskiy, A.S.; Zavialov, P.O.; Kremenetskiy, V.V.; Polukhin, A.A.; Pelevin, V.V.; Toktampsova, Z.M. Structure of the buoyant plume formed by Ob and Yenisei river discharge in the southern part of the Kara Sea during summer and autumn. _J. Geophys. Res. Oceans_**2017**, _122_, 5916-5935. [CrossRef] * Gong et al. (2019) Gong, W.; Chen, L.; Chen, Z.; Zhang, H. Plume-to-plume interactions in the Pearl River Delta in winter. _Ocean Coast. Manag._**2019**, _175_, 110-126. [CrossRef] * Warrick et al. (2004) Warrick, J.A.; Mertes, L.A.K.; Washburn, L.; Siegel, D.A. Dispersal forcing of southern California river plumes, based on field and remote sensing observations. _Geo-Mar. Lett._**2004**, _24_, 46-52. [CrossRef] * Osadchiev et al. (2016) Osadchiev, A.A.; Korotenko, K.A.; Zavialov, P.O.; Chiang, W.-S.; Liu, C.-C. Transport and bottom accumulation of fine river sediments under typhoon conditions and associated submarine landslides: Case study of the Peinan River, Taiwan. _Nat. Haz. Earth Syst. Sci._**2016**, _16_, 41-54. [CrossRef] * Romero et al. (2016) Romero, L.; Siegel, D.A.; McWilliams, J.C.; Uchiyama, Y.; Jones, C. Characterizing storm water dispersion and dilution from small coastal streams. _J. Geophys. Res. Oceans_**2016**, _121_, 3926-3943. [CrossRef] * Yankovsky et al. (2001) Yankovsky, A.E.; Hickey, B.M.; Munchow, A.K. Impact of variable inflow on the dynamics of a coastal buoyant plume. _J. Geophys. Res. Oceans_**2001**, _106_, 19809-19824. [CrossRef] * Yuan et al. 
(2018) Yuan, Y.; Horner-Devine, A.R.; Avener, M.; Bevan, S. The role of periodically varying discharge on river plume structure and transport. _Cont. Shelf Res._**2018**, _158_, 15-25. [CrossRef] * Yankovsky and Voulgaris (2019) Yankovsky, A.E.; Voulgaris, G. Response of a Coastal Plume Formed by Tidally Modulated Estuarine Outflow to Light Upwelling-Favorable Wind. _J. Phys. Oceanogr._**2019**, _49_, 691-703. [CrossRef] * Cole et al. (2020) Cole, K.L.; MacDonald, D.G.; Kakoulaki, G.; Hetland, R.D. River plume source-front connectivity. _Ocean Model._**2020**, _150_, 101571. [CrossRef]* Horner-Devine et al. (2008) Horner-Devine, A.R.; Jay, D.A.; Orton, P.M.; Spahna, E.Y. A conceptual model of the strongly tidal Columbia River plume. _J. Mar. Syst._**2008**, _78_, 460-475. [CrossRef] * Avicola and Huq (2002) Avicola, G.; Huq, P. Scaling analysis for the interaction between a buoyant coastal current and the continental shelf: Experiments and observations. _J. Phys. Oceanogr._**2002**, _32_, 3233-3248. [CrossRef] * Lentz and Helfrich (2002) Lentz, S.J.; Helfrich, K.R. Buoyant gravity currents along a sloping bottom in a rotating fluid. _J. Fluid. Mech._**2002**, _464_, 251-278. [CrossRef] * Pimenta et al. (2011) Pimenta, F.M.; Kirwan, A.D., Jr.; Huq, P. On the transport of buoyant coastal plumes. _J. Phys. Oceanogr._**2011**, _41_, 620-640. [CrossRef] * Korotenko et al. (2014) Korotenko, K.A.; Osadchiev, A.A.; Zavialov, P.O.; Kao, R.-C.; Ding, C.-F. Effects of bottom topography on dynamics of river discharges in tidal regions: Case study of twin plumes in Taiwan Strait. _Ocean Sci._**2014**, _10_, 865-879. [CrossRef] * Ostrander et al. (2008) Ostrander, C.E.; McManus, M.A.; DeCarlo, E.H.; Mackenzie, F.T. Temporal and spatial variability of freshwater plumes in a semi-enclosed estuarine-bay system. _Estuaries Coasts_**2008**, _31_, 192-203. [CrossRef] * Osadchiev and Zavialov (2013) Osadchiev, A.A.; Zavialov, P.O. Lagrangian model for surface-advected river plume. _Cont. Shelf Res._**2013**, _58_, 96-106. [CrossRef] * Zhao et al. (2018) Zhao, J.; Gong, W.; Shen, J. The effect of wind on the dispersal of a tropical small river plume. _Front. Earth Sci._**2018**, _12_, 170-190. [CrossRef] * Mityagina et al. (2010) Mityagina, M.I.; Lavrova, O.Y.; Karimova, S.S. Multi-sensor survey of seasonal variability in coastal eddy and internal wave signatures in the north-eastern Black Sea. _Int. J. Remote Sens._**2010**, _31_, 4779-4790. [CrossRef] * Lavrova and Mittyagina (2017) Lavrova, O.Y.; Mittyagina, M.I. Satellite survey of internal waves in the Black and Caspian seas. _Int. J. Remote Sens._**2017**, \\(9\\), 892. [CrossRef] * Trump and Marmorino (2003) Trump, C.L.; Marmorino, G.O. Mapping small-scale along-front structure using ADCP acoustic backscatter range-bin data. _Estuaries_**2003**, _26_, 878-884. [CrossRef] * Warrick and Stevens (2011) Warrick, J.A.; Stevens, A.W. A buoyant plume adjacent to a headland--Observations of the Elwha River plume. _Continent Shelf Res._**2011**, _31_, 85-97. [CrossRef]
Quadcopters can continuously observe the ocean surface with high spatial resolution from relatively low altitude, albeit with certain limitations of their usage. Remote sensing from quadcopters provides an unprecedented ability to study small river plumes formed in the coastal sea. The main goal of the current work is to describe the structure and temporal variability of small river plumes on small spatial and temporal scales, which are only sparsely covered by previous studies. We analyze optical imagery and video records acquired by quadcopters and accompanied by synchronous in situ measurements and satellite observations within the Kodor and Bzyp plumes, which are located in the northeastern part of the Black Sea. We describe an extremely rapid response of these river plumes to energetic rotating coastal eddies. We reveal several types of internal waves within these river plumes, measure their spatial and dynamical characteristics, and identify mechanisms of their generation. We suggest a new mechanism of formation of undulate fronts between small river plumes and the ambient sea, which induces energetic lateral mixing across these fronts. The features reported in this study are addressed for the first time, as previous related works were mainly limited by the low spatial and/or temporal resolution of in situ measurements and satellite imagery.

Keywords: small river plume; aerial drone; coastal processes; frontal zones; internal waves

Remote Sensing **2020**, _12_, 3079; doi:10.3390/rs12183079
# Genetics and Pathophysiology of Maturity-onset Diabetes of the Young (MODY): A Review of Current Trends

Tajudeen O. Yahaya, Department of Biology, Federal University Birnin Kebbi, Kebbi State, Nigeria

Shemshere B. Utouma, Department of Biochemistry and Molecular Biology, Federal University Birnin Kebbi, Kebbi State, Nigeria

###### Maturity-onset diabetes of the young (MODY) is a monogenic and non-autoimmune form of diabetes mellitus (DM) with characteristic pancreatic \(\beta\)-cell destruction and disrupted insulin biosynthesis [1, 2]. The disease usually appears between the teenage years and early adulthood (< 25 years) [3, 4]. MODY was identified by Robert Tattersall in 1974 as a distinct form of DM after he observed young diabetic individuals who remained non-insulin-dependent two years post-diagnosis [5, 6]. The condition was later named MODY by Stefan Fajans after a series of studies [5, 6]. However, this classification may be confusing due to the similar pathophysiology of MODY and type 2 DM (T2DM). Many physicians and researchers have often considered and misdiagnosed MODY as a subset of T2DM [7]. It was estimated that at least 90% of MODY diabetics are misdiagnosed as having T2DM due to a lack of awareness of the differences between the two [2, 8, 9]. Some distinct characteristics of MODY include less significant weight gain, the absence of pancreatic autoantibodies, and the lack of insulin resistance or elevated fasting glucose [10]. However, these symptoms are often unrecognized due to the low incidence of MODY, which is estimated to account for 1-5% of DM cases [11, 12, 13]. MODY genes disrupt insulin production processes, culminating in hyperglycemia, which, with time, may damage organs such as the eyes, kidneys, nerves, and blood vessels [14]. The phenotypic expression of MODY depends on the causal gene. Individuals with certain types of mutations may show a slightly raised blood sugar for life with mild or no symptoms of DM [14]. These individuals may also not develop long-term complications, and their high blood glucose levels may only be discovered during routine blood tests [14]. People with other mutation types require specific treatment with either insulin or a type of oral DM medication called sulfonylureas [14]. In the past, people with MODY had generally not been overweight or obese, or had other risk factors for T2DM, such as high blood pressure or abnormal blood fat levels [14]. However, as more people become overweight or obese, especially in the US, people with MODY may also be overweight or obese [14]. Although
The identities of these genes with various mechanisms of action and phenotypic features are presented in Table 1. _Common MODY genes_ Using linkage analysis, restriction fragment length polymorphism, and DNA sequencing [6], scientists have identified mutations in the hepatocyte nuclear factor 1-alpha (HNF1A), 4-alpha (HNF4A), 1-Beta (HNF1B), and glucokinase (GCK) genes as the most common cause of MODY [8]. Depending on the country, these genes account for over 80% of all MODY cases [49, 50, 51]. _The HNF1A gene_ The HNF1A gene provides instructions for the synthesis of the HNF1A protein [52, 53]. The protein plays a vital role in the development of beta cells and the expression of many genes embedded in the liver [52, 53]. These roles enable the pancreas to produce insulin normally in childhood, which decreases as one ages [22]. Thus, mutations in this gene may lower the amount of insulin produced [22], and have been implicated in the pathogenesis of MODY type 3 (MODY3) [52, 53]. MODY3 is the commonest form of MODY, accounting for about 70% of cases [52, 53]. Several single nucleotide polymorphisms (SNPs) have been identified in the HNF1A gene of MODY patients, which could suggest the pathophysiology and possible treatment options. In a study, the coding and promoter regions of the HNF1A gene were screened for mutations in 34 unrelated Iranian patients with MODY. The study identified one novel missense mutation (C49G), two novel polymorphisms, and eight recently identified SNPs [22]. In another study, mutations identified in 356 unrelated MODY3 patients, including 118 novel mutations, were analyzed, and the correlation was drawn between the variants and age of onset of DM. Missense mutations were observed in 74% of cases, while 62% of patients had truncating mutations [54]. Most mutations (83%) were found in exons 1-6, wherein all three HNF1A isoforms are located and are thus affected [54]. The age of onset of DM was lower in patients with truncating mutations than in those with missense mutations [54]. It was also observed that the higher the number of HNF1A isoforms with missense mutations, the lower the age of diagnosis of DM in the patients [54]. These findings indicate that MODY3 patients may express variable clinical features depending on the type and location of the HNF1A mutations [54]. Aside from the liver and pancreas, HNF1A is embedded in the kidney, isolated islets, and intestines. So, the clinical presentations of individuals with HNF1A mutation may also depend on the tissues and its developmental stage [55, 56]. _The HNF4A gene_ The HNF4A gene codes for a transcription protein embedded in the liver [57, 58]. The HNF4A gene regulates the expression of several liver-specific genes. Thus, some liver functions may be enabled or disabled, depending on the expression or otherwise of this gene [57, 58]. In addition, HNF4A controls the expression of the HNF1A gene, which in turn regulates the expression of several important genes in the liver [57]. The HNF4A gene is also found in the pancreas, kidneys, and intestines and, together with transcription factors such as HNF1A and HNF1B, control gene expression in developing embryos [59, 60]. Specifically, in the pancreatic beta cells, this group of transcription factors controls the expression of the insulin gene. These genes also regulate the expression of several other genes involved in insulin secretion, such as the genes that are involved in glucose transport and metabolism [59, 61]. 
Considering these functions, mutations in the HNF4A gene would lead to several problems.

\begin{table}
\begin{tabular}{l l l l l}
\hline \hline
**Gene/function** & **Full name** & **Locus** & **MODY type** & **Pathophysiology** \\
\hline
HNF4A/transcription factor & Hepatocyte nuclear factor-4 alpha & 20q12 & MODY 1 & Causes progressive beta-cell dysfunction, leading to macrosomia and hyperinsulinemic hypoglycemia. [13, 14] \\
GCK/glycolytic enzyme & Glucokinase & 7p15 & MODY 2 & Disrupts glucose sensing, leading to hyperglycemia. [15, 16] \\
HNF1A/transcription factor & Hepatocyte nuclear factor-1 alpha & 12q24.31 & MODY 3 & Causes gradual beta-cell dysfunction, leading to reduced insulin production and progressive hyperglycemia. [17, 18] \\
IPF1/PDX1/transcription factor & Insulin promoter factor / pancreatic duodenal homeobox & 13q27.92 & MODY 4 & Causes pancreatic agenesis, beta-cell developmental errors, and defective insulin secretion. [19, 20] \\
HNF1B/transcription factor & Hepatocyte nuclear factor-1B & 17q12 & MODY 5 & Results in dysfunctional pancreatic embryonic development, formation of kidney cysts, and suppression of cytokine signaling 3. [19, 20] \\
NEUROD1/transcription factor & Neurogenic differentiation 1 & 2q31.3 & MODY 6 & Impairs pancreatic morphogenesis and beta-cell differentiation. [21, 22] \\
KLF11/transcription factor & Krüppel-like factor 11 & 2p25.1 & MODY 7 & Disrupts the activation of some insulin promoters; also suppresses the expression of certain free radical scavengers such as catalase and superoxide dismutase. \\
CEL/lipase & Carboxyl ester lipase & 9q34 & MODY 8 & Alters the C-terminal sequence; can also disrupt exocrine and endocrine functions. \\
PAX4/transcription factor & Paired box 4 & 7q32.1 & MODY 9 & Truncates embryonic beta-cell development, inhibiting beta-cell differentiation. [23, 24] \\
INS/insulin synthesis & Insulin hormone & 11p15.5 & MODY 10 & Causes molecular defects in the \(\beta\)-cell and increases endoplasmic reticulum (ER) stress, resulting in the synthesis of structurally altered (pre)proinsulin molecules and low insulin secretion. \\
BLK/B-cell receptor signaling and development & B-lymphocyte kinase & 8p23.1 & MODY 11 & Suppresses MIN6 \(\beta\)-cells, disrupting beta-cell functions. [24, 25] \\
ABCC8/regulates insulin secretion & ATP-binding cassette subfamily C member 8 & 11p15.1 & MODY 12 & Causes congenital hyperinsulinism, adversely affecting the biogenesis and function of the KATP channel. \\
KCNJ11/regulates insulin secretion & Inward-rectifier potassium channel, subfamily J, member 11 & 11p15.1 & MODY 13 & Causes congenital hyperinsulinism, adversely affecting the biogenesis and function of the KATP channel. \\
APPL1/regulates cell signaling pathways & Adaptor protein, phosphotyrosine interacting with PH domain and leucine zipper 1 & 3p14.3 & MODY 14 & Causes beta-cell structural abnormality and gradual death, leading to developmental delay; can also suppress the insulin-specific regulatory role of AKT2. \\
ISL-1/transcription factor, INS enhancer & ISL LIM homeobox 1 & 5q11 & - & Interferes with the expression of INS and causes poor islet differentiation and proliferation. \\
RFX6/regulatory factor (regulates transcription factors involved in beta-cell development) & Regulatory factor X 6 & 6q22.1 & - & Causes beta-cell dysfunction, leading to reduced insulin secretion and hyperglycemia. \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Identified MODY genes, their loci, and associated pathophysiology.

Among the likely consequences of mutations in the HNF4A gene is the development of DM. The pancreatic beta-cell is sensitive to the amount of HNF4A present, and certain HNF4A haplotypes have been linked with altered insulin secretion [63]. In particular, mutations in the gene are suspected in the pathogenesis of MODY type 1 (MODY1) [8]. Individuals with MODY1 respond normally to insulin, but show an impaired insulin secretion response in the presence of glucose [8]. If this condition remains unchecked, insulin secretion decreases, leading to DM [8]. Several types of nonsense and missense mutations in HNF4A characterized by a shortfall in insulin secretion have been observed to cause MODY1 [62]. Similarly, the variant of the HNF4A gene inherited may influence the function of beta cells, increasing or decreasing insulin secretion [62]. A British study identified a haplotype that was linked with reduced disease risk [62]. Individuals with the 'reduced-risk' haplotype were strongly associated with increased insulin secretion and lower fasting glucose levels [63]. These findings suggest that a certain HNF4A haplotype might confer increased insulin secretion and protective effects against DM [63]. This protective variant was identified upstream of the HNF4A coding region in an alternative promoter called P2, which lies 46 kb upstream of the P1 promoter. Though both promoters drive transcription of HNF4A, they play different roles in different cells. Both P1 and P2 are active in the pancreas, but P2 is the main transcription site in the beta cells, and a mutation of P2 is a cause of MODY [63]. How HNF4A mutations cause \(\beta\)-cell dysfunction or lipid profile disruption in MODY1 is not fully understood. However, based on its role in glucose transport and glycolysis as well as lipid metabolism [64], loss of function of the gene could result in low triglyceride levels. This could end in lower expression of some genes involved in glucose biosynthesis and metabolism [64]. Mice with a mutated HNF4A gene have been reported to show impaired glucose-stimulated insulin secretion and an altered intracellular calcium response, characteristic of MODY1 [65]. These observations were suggestive of a loss of the insulin regulatory function of the KATP channel in the pancreatic \(\beta\)-cells of the mutated mice [65].

### The HNF1B gene

The HNF1B gene encodes a protein (a transcription factor) that attaches to certain parts of DNA and modulates the expression of other genes [66, 67]. The HNF1B protein is found in many organs and tissues, including the lungs, liver, intestines, pancreas, kidneys, reproductive system, and urinary tract [66, 67].
Researchers suggest that the protein may be instrumental in the development of these body parts [66]; hence, its inactivation may initiate a number of diseases. Notable among the diseases linked to mutations in the HNF1B gene is MODY type 5 (MODY5) [66]. To prove the association between HNF1B and MODY5, a team of researchers compared pluripotent stem cell lines from MODY5 individuals with cells grown from unaffected family members and healthy controls [25]. In MODY5, children who inherited the mutation from one parent grew a malformed and small pancreas, such that they developed DM, usually aged < 25 years [25]. The use of pluripotent stem cells allowed the researchers to replicate human pancreas development in cell culture [25]. The scientists observed that the HNF1B gene mutation disrupted the embryonic pancreas development of the cell cultures, leading to beta-cell dysfunction and impaired insulin biosynthesis [25]. It was also observed that mutations in this gene initiated DM independently of other DM genes [25]. However, the differentiating cells up-regulated other pancreatic development genes to compensate for the HNF1B inactivation [25]. These cellular events were observed in numerous MODY5 cell lines compared with healthy family members and non-related healthy controls [25]. The scientists were of the opinion that these findings and a greater understanding of beta-cell development and disruption could lead to improved DM treatments. This discovery once again shows that the diabetic population could be stratified into subgroups and treated individually based on the mechanism of the causal gene rather than with the current generalized treatment methods [25]. As HNF1B is expressed in several tissues during embryonic development, diabetic conditions associated with HNF1B mutations can stem from extra-pancreatic abnormalities. The most commonly afflicted organ is the kidney, usually affected by renal cysts, which precede the diabetic condition [68, 69, 70]. Renal dysplasia and renal tract malformations have also been reported to precede diabetic syndrome in individuals with HNF1B gene mutations [68, 69, 70]. Other precursors of MODY5 include hypomagnesemia, mild genital tract anomalies, and abnormal liver morphology and enzyme levels, especially alanine aminotransferase (ALT) and gamma-glutamyl transferase [68, 69, 70].

### The GCK gene

The GCK gene encodes the enzyme glucokinase, a member of the hexokinase family [71]. The gene plays a central role in carbohydrate metabolism in that it catalyzes the first reaction of the glycolytic pathway, the conversion of glucose to glucose 6-phosphate [71]. Glucokinase is expressed along with glucose transporter 2 (GLUT2) by the pancreatic \(\beta\)-cells and catalyzes the phosphorylation of glucose, enabling it to act as a glucose sensor for the beta cells [71, 72]. Compared with other hexokinase members, glucokinase has a high, uninterruptible transport capacity for glucose [71]. Glucokinase works together with the GLUT2 receptor in the liver and beta-cells and enhances rapid insulin-independent entry and metabolism of glucose [71]. This allows the liver to act as a reservoir for circulating glucose and supports the glucose-sensing mechanism of the beta cells [71]. Mutations in the GCK gene have been demonstrated to cause abnormal glucose sensing, resulting in a raised threshold for the initiation of glucose-stimulated insulin secretion. This results in stable and mild hyperglycemia without any threat of DM complications [73, 74].
This form of DM is known as GCK-MODY, otherwise known as MODY type 2 (MODY2). However, the clinical presentation of MODY may vary based on the type of mutation. Heterozygous inactivating mutations cause mild fasting hyperglycemia (the hallmark of GCK-MODY), while homozygous inactivating mutations cause a more severe condition resembling permanent neonatal diabetes mellitus [73, 74]. Other, activating GCK mutations up-regulate insulin production, a state characterized by hyperinsulinemic hypoglycemia. In contrast to other forms of DM, hyperglycemia in MODY2 does not deteriorate with age [75].

GCK expression is tissue-specific. In the liver, GCK synthesis is directly proportional to the concentration of insulin, which rises and falls with the nutritive state of the body [76]. On the other hand, glucagon, a pancreatic hormone acting on the liver, suppresses GCK expression [76]. In the \\(\\beta\\)-cells, GCK expression is relatively constant regardless of the body's food intake and, by extension, insulin levels [76]. Considering the roles of GCK in glucose metabolism and insulin release, GCK mutations are expected to cause both hyperglycemia and hypoglycemia [77]. Heterozygous mutations in the gene may result in a reduced phosphorylation rate in the liver, decreasing the amount of glycogen synthesized and disrupting postprandial glucose regulation [78]. In \\(\\beta\\)-cells, loss of function of the gene will impair the regulation of insulin secretion [79].

### Selection of individuals for MODY genetic testing

Due to the overlapping characteristics of various forms of DM, some of the discussed MODY pathophysiology might be observed in individuals with type 1 DM (T1DM) and T2DM, making the selection for genetic testing difficult. However, diabetic teens and young adults with a multi-generational family history of diabetes, non-ketotic insulin-sensitive hyperglycemia, and an absence of autoantibodies should consider testing for MODY [80, 81]. Additionally, middle-aged adults with an autosomal dominant family history and the symptoms and signs of T2DM, but without obesity, insulin resistance, or fatty liver, should also consider MODY testing [82, 83]. A MODY probability calculator developed by researchers at Exeter, UK, can also be used as a guide to select diabetics for genetic screening [83]. In the model, diabetics below 35 years old are scored based on their responses to eight questions. These questions cover sex, age at diagnosis and at referral, body mass index (BMI), the treatment option taken, the time insulin treatment was started, glycated hemoglobin (\\(\\mathrm{HbA}_{1c}\\)) level, and the diabetic status of the parents [83, 84]. For MODY to be considered against T1DM, \\(\\mathrm{HbA}_{1c}\\) must be lower, at least one of the parents must be affected, and the age at diagnosis must be older [83, 84]. MODY will be considered against T2DM if the BMI is lower, the age at diagnosis is younger, \\(\\mathrm{HbA}_{1c}\\) is lower, and the affected individual has a diabetic parent [83, 84]. MODY will also be suspected against T2DM if the diabetic does not respond to oral hypoglycemic drugs or insulin [83, 84]. In advanced countries, some organizations have compiled major clinical features of MODY and employed them as guidelines for the selection of diabetics for MODY testing.
Notable among these organizations/guidelines are the European Molecular Genetics Quality Network Best Practice Guidelines [82] and the Clinical Practice Consensus Guidelines of the International Society for Pediatric and Adolescent Diabetes.[85] However, comparing all these selection methods with the wide range of MODY pathophysiology described in this study shows that the methods lack adequate information to detect MODY accurately. This suggests that many MODY patients are still currently being misdiagnosed as having T1DM or T2DM. Thus, the MODY population worldwide could be higher than that reported in several studies.

### Genetic testing techniques for MODY

When there are sufficient means and information to suspect that an individual has MODY, the next step is to choose a screening procedure or technique. For effective prevention and management of genetic diseases, including MODY, genetic testing should begin during intrauterine life, which is termed prenatal genetic testing.[86] Prenatal testing can be carried out to detect genetic errors related to MODY during fetal development. Genetic testing for GCK-MODY is particularly important during pregnancy to confirm the presence or absence of macrosomia, which may help in the choice of therapies.[87, 88] Genetic testing can also be done immediately after birth to detect MODY mutations whose effects can be managed if detected early. This form of genetic testing is called newborn testing.[86] Predictive and pre-symptomatic testing is also conducted a few years after birth[86] to identify the MODY risk of an individual, especially those with a family history of MODY. Diagnostic testing is another form of testing that can be conducted at any time when certain biomarkers or pathophysiology of MODY are observed in an individual. The test is often used to confirm the status of a specific genetic or chromosomal condition.[86] Two main techniques are available for genetic testing: gene-targeted testing (serial single gene or multigene panel) and whole-exome sequencing.[89]

### Gene-targeted testing (serial single gene or multigene panel)

Gene-targeted testing, such as Sanger sequencing, is genetic testing in which specific genes are selected for testing based on the clinical presentation of the person with diabetes.[90] The technique is most suitable when the patient expresses signs related to a few known MODY types. The test is carried out serially, with sequence analysis of the most probable genes performed first.[89] For a patient showing the classical features of MODY, HNF1A is screened first, followed by HNF4A and GCK. However, if the diabetic phenotype is mild and fasting glucose is between 5.5 and 8.5 mmol/L, GCK should be tested first, then HNF1A and HNF4A in that order.[12, 91] If the patient has renal and pancreatic dysfunction as well as a urogenital problem, HNF1B should be tested first.[12, 19] If no point mutation (SNP) is discovered, deletion/duplication analysis should be done for genes such as CEL, GCK, HNF1A, HNF1B, and HNF4A.[89] Generally, the gene-targeted approach is time-consuming and expensive. Complete genetic testing for HNF1A, HNF4A, and GCK involves sequencing 31 exons, with each gene sequenced separately.[92] Alternatively, a MODY multigene panel that contains the 14 known MODY genes and other suspect genes can be employed to detect the genetic cause of MODY.
This is cost-effective as it targets several genes at a time and avoids the testing of unnecessary variants.[93, 90]

### Whole-exome capture and high-throughput sequencing

When a MODY patient does not show sufficient or clear clinical features, in-depth genomic screening such as whole-exome sequencing could be the best genetic testing option.[89] In exome sequencing, the selection of probable genes for testing is not needed. For this reason, exome sequencing has an advantage over gene-targeted sequencing in that it can detect MODY genes beyond the reach of the latter.[92] In some cases, exome sequencing is used as a further diagnostic tool where gene-targeted sequencing is ineffective owing to insufficient clinical features to guide gene selection. Whole-exome sequencing is relatively new and could be improved upon in the near future to expand its search coverage and potential.[92] If this is done, it will make MODY diagnosis easier and more accurate.

### Cost-effectiveness of genetic testing for MODY

For individuals, the huge cost of genetic testing for MODY could be burdensome. However, if done accurately, it can improve quality of life. Testing for MODY genes in a family with the disease may help detect MODY variants in predisposed members and allow treatment before the condition progresses to glucose imbalance and DM. Accurate genetic screening may help predict the likelihood and types of complications and, in turn, reduce expenditures. For instance, HNF1A and HNF4A MODY are characterized by microvascular complications, which can be managed with a low dose of sulfonylurea instead of the rigorous insulin therapy[94] that typifies T1DM management. On the other hand, GCK-MODY shows fewer microvascular complications and may not need any treatment.[92] So, accurate diagnosis of the MODY type could prevent a wrong treatment choice, culminating in reduced healthcare costs.[95, 59]

In a society where there is insurance cover or a policy for testing, the cost-effectiveness of genetic testing for MODY depends on the frequency of the condition in the population.[97, 98] In a simulation study carried out in the US by Naylor et al,[98] genetic testing for MODY was not cost-effective when the frequency of the disease was as low as 2%. However, when the prevalence of MODY was increased to 6% through improved screening techniques and expanded pathophysiology, testing was found to be cost-effective.[98] Moreover, genetic testing was found to be cost-effective in a population with a 2% prevalence of MODY when the cost of testing was reduced.[98] The study also demonstrated that if the MODY population identified is increased to 31% through advanced testing techniques, the genetic testing policy for MODY becomes cost-saving.[98] In brief, the cost-effectiveness of a genetic testing policy depends on the frequency of MODY in society and the cost of the test.

## 6 Conclusion

An autosomal dominant mutation in certain genes involved in insulin biosynthesis and metabolism may cause MODY. This form of DM has distinct pathogenic and clinical presentations. Thus, its treatment may require a different approach from that of other types of DM. As such, healthcare providers are advised to formulate MODY drugs and treatment methods based on the identified mechanism of action and phenotypic presentations of its subtypes.

The authors declared no conflicts of interest. No funding was received for this study.

## References

* [1] Heuvel-Borsboom H, de Valk HW, Losekoot M, Westerink J. Maturity onset diabetes of the young: Seek and you will find.
Neth J Med 2016 Jun;74(5):193-200. * [2] Bansal V, Gassenhuber J, Phillips T, Oliveira G, Harbaugh R, Villarsa N, et al. Spectrum of mutations in monogenic diabetes genes identified from high-throughput DNA sequencing of 6888 individuals. BMC Med 2017 Dec;15(1):213. * [3] Ziegler R, Neu A. Diabetes in childhood and adolescence. Dtsch Arztebl Int 2018 Mar;115(9):146-156. * [4] Pihoker C, Gilliam LK, Ellard S, Dabelea D, Davis C, Dolan LM, et al; SEARCH for Diabetes in Youth Study Group. Prevalence, characteristics and clinical diagnosis of maturity onset diabetes of the young due to mutations in HNF1A, HNF4A, and glucokinase: results from the SEARCH for Diabetes in Youth. J Clin Endocrinol Metab 2013 Oct;98(10):4055-4062. * [5] Fajans SS, Bell GI. MODY: history, genetics, pathophysiology, and clinical decision making. Diabetes Care 2011 Aug;34(8):1878-1884. * [6] Firdous P, Nisar K, Ali S, Ganai BA, Shabir U, Hassan T, et al. Genetic testing of maturity-onset diabetes of the young: current status and future perspectives. Front Endocrinol (Lausanne) 2018 May;9:253. * [7] Shields BM, Hicks S, Shepherd MH, Colclough K, Hattersley AT, Ellard S. Maturity-onset diabetes of the young (MODY): how many cases are we missing? Diabetologia 2010 Dec;53(12):2504-2508. * [8] Fajans SS, Bell GI, Polonsky KS. Molecular mechanisms and clinical pathophysiology of maturity-onset diabetes of the young. N Engl J Med 2001 Sep;345(13):971-980. * [9] Kleinberger JW, Copeland KC, Gandica RG, Haymond MW, Levitsky JL, Linder B, et al. Monogenic diabetes in overweight and obese youth diagnosed with type 2 diabetes: the TODAY clinical trial. Genet Med 2018 Jun;20(6):583-590. * [10] Naylor R, Philipson LH. Who should have genetic testing for maturity-onset diabetes of the young? Clin Endocrinol (Oxf) 2011 Oct;75(4):422-426. * [11] Irgens HU, Molnes J, Johansson BB, Ringdal M, Skrivarhaug T, Undlien DE, et al. Prevalence of monogenic diabetes in the population-based Norwegian Childhood Diabetes Registry. Diabetologia 2013 Jul;56(7):1512-1519. * [12] Molven A, Njolstad PR. Role of molecular genetics in transforming diagnosis of diabetes mellitus. Expert Rev Mol Diagn 2011 Apr;11(3):313-320. * [13] Shepherd M, Shields B, Hammersley S, Hudson M, McDonald TJ, Colclough K, et al; UNITED Team. Systematic population screening, using biomarkers and genetic testing, identifies 2.5% of the UK pediatric diabetes population with monogenic diabetes. Diabetes Care 2016 Nov;39(11):1879-1888. * [14] National Institute of Diabetes and Digestive and Kidney Diseases. NIDDK: Monogenic diabetes (neonatal diabetes mellitus & MODY) [cited 2018 Oct 20]. Available from: [https://www.nviddl.nih.gov/health-information/diabetes/overview/what-is](https://www.nviddl.nih.gov/health-information/diabetes/overview/what-is) diabetes/monogenic-neonatal- mellitus-mody. * [15] single care experience. Oman Med J 2014 Mar;29(2):119-122. * [16] Al-Lawati JA. Diabetes mellitus: a local and global public health emergency? Oman Med J 2017 May;32(3):177-179. * [17] Zheng Y, Ley SH, Hu FB. Global aetiology and epidemiology of type 2 diabetes mellitus and its complications. Nat Rev Endocrinol 2018 Feb;14(2):88-98. * [18] International Diabetes Federation. IDF Diabetes Atlas, 9th edition, 2019 [cited 2020 April 19]. Available from: [https://www.diabetes.org/upload/resources/2019/IDF_Atlas_9th_Edition_2019.pdf](https://www.diabetes.org/upload/resources/2019/IDF_Atlas_9th_Edition_2019.pdf). * [19] Arya VB, Rahman S, Senniappan S, Flanagan SE, Ellard S, Hussain K.
HNF4A mutation: switch from hyperinsulinaemic hypoglycaemia to maturity-onset diabetes of the young, and incretin response. Diabet Med 2014 Mar;31(3):e11-e15. * [20] Bonfig W, Hermanns S, Warncke K, Eder G, Engelsberger I, Burdack S, et al. GCK-MODY (MODY 2) caused by a novel p.P.DiDi305Ser mutation. ISRN Pediatr 2011;2011;10.1676549. * [21] Noorian S, Sayarif F, Farhadi E, Barbetti F, Rezaei N. GCK mutation in a child with maturity onset diabetes of the young, type 2. Iran J Pediatr 2013 Apr;23(2):226-228. * [22] Moghbeli M, Naghibzadeh B, Ghahraman M, Fatemi S, Taghavi M, Vakli R, et al. Mutations in the HNF1A gene are not a common cause of familial young onset diabetes in Iran. Indian J Clin Biochem 2018 Jan;33(1):91-95. * [23] MODY and dorsal pancreatic agenesis: New phenotype of a rare disease. Clin Genet 2018 Feb;93(2):382-386. * [24] Doddabelavajala Murthyanaya M, Chapala A, Heagrabartias Shyamanez A, Varghese D, Varshney M, Paul J, et al. Comprehensive maturity onset diabetes of the young (MODY) gene screening in pregnant women with diabetes in India. PLoS One 2017 Jan;12(1):e0168656. * [25] Agency for Science, Technology and Research (A*STAR). Genetic mutation during embryonic development could hold the key to a lifetime living with diabetes. [cited 2018 Sept 22]. Available from: [https://medicalpress.com/news/2016-08-genetic-mutation-embryonic-key-lifetime.html/rnkk-r](https://medicalpress.com/news/2016-08-genetic-mutation-embryonic-key-lifetime.html/rnkk-r) * [26] Mancusi S, La Manna A, Bellini G, Scianguera S, Roberti D, Casale M, et al. HNF1B mutation affects PKD2 and SOCS3 expression, causing renal cysts and diabetes in a MODY5 kindred. J Nephrol 2013 Jan-Feb;26(1):207-212. * [27] Malecki MT, Jhala US, Antonellis A, Fields L, Doria A, Orban T, et al. Mutations in NEUROD1 are associated with the development of type 2 diabetes mellitus. Nat Genet 1999 Nov;23(3):323-328. * [28] Kim SH. Maturity-onset diabetes of the young: what do clinicians need to know? Diabetes Metab J 2015 Dec;39(6):468-477. * [29] Neve B, Fernandez-Zapico ME, Ashkenazi-Katalan V, Dina C, Hamid YH, Joly E, et al. Role of transcription factor KLF11 and its diabetes-associated gene variants in pancreatic beta cell function. Proc Natl Acad Sci U S A 2005 Mar;102(13):4807-4812. * [30] Robertson RP, Harmon J, Tran PO, Poitout V. Beta-cell glucose toxicity, lipotoxicity, and chronic oxidative stress in type 2 diabetes. Diabetes 2004 Feb;53(Suppl 1):S119-S124. * [31] Raeder H, Johansson S, Holm PL, Haldorsen IS, Mas E, Sbarra V, et al. Mutations in the CEL VNTR cause a syndrome of diabetes and pancreatic exocrine dysfunction. Nat Genet 2006 Jan;38(1):54-62. * [32] Shimajiri Y, Sanke T, Furuta H, Hanabusa T, Nakagawa T, Fujitani Y, et al. A missense mutation of the Pax4 gene (R121W) is associated with type 2 diabetes in Japanese. Diabetes 2001 Dec;50(12):2864-2869. * [33] Biason-Lauber A, Boehm B, Lang-Muritano M, Gauthier BR, Brun T, Wollheim CB, et al. Association of childhood type 1 diabetes mellitus with a variant of PAX4: possible link to beta cell regenerative capacity. Diabetologia 2005 May;48(5):900-905. * [34] Meur G, Simon A, Harun N, Virally M, Dechaume A, Bonnefond A, et al. Insulin gene mutations resulting in early-onset diabetes: marked differences in clinical presentation, metabolic status, and pathogenic effect through endoplasmic reticulum retention. Diabetes 2010 Mar;59(3):653-661. * [35] Nishi M, Nanjo K. Insulin gene mutations and diabetes. J Diabetes Investig 2011 Apr;2(2):92-100.
* [36] Borowiec M, Liew CW, Thompson R, Boonyasrisawat W, Hu J, Mlynarski WM, et al. Mutations at the BLK locus linked to maturity onset diabetes of the young and beta-cell dysfunction. Proc Natl Acad Sci U S A 2009 Aug;106(34):14460-14465. * [37] Borowiec M, Liew CW, Thompson R, Boonyasrisawat W, Hu J, Mlynarski WM, et al. Mutations at the BLK locus linked to maturity onset diabetes of the young and beta-cell dysfunction. Proc Natl Acad Sci U S A 2009 Aug;106(34):14460-14465. * [38] Huopio H, Reimann F, Ashfield R, Komulainen J, Lenko HL, Rahier J, et al. Dominantly inherited hyperinsulinism caused by a mutation in the sulfonylurea receptor type 1. J Clin Invest 2000 Oct;106(7):897-906. * [39] Taschenberger G, Mougey A, Shen S, Lester LB, LaFranchi S, Shyng SL. Identification of a familial hyperinsulinism-causing mutation in the sulfonylurea receptor 1 that prevents normal trafficking and function of KATP channels. J Biol Chem 2002 Mar;277(19):17139-17146. * [40] Dean L, McEntyre J. The genetic landscape of diabetes. Bethesda (MD): National Center for Biotechnology Information (US). [cited 2018 Nov 22]. Available from: [https://www.ncbi.nlm.nih.gov/books/NBK1665/](https://www.ncbi.nlm.nih.gov/books/NBK1665/). * [41] Prudente S, Jungtrakoon P, Marucci A, Ludovico O, Buranasupkajorn P, Mazza T, et al. Loss-of-function mutations in APPL1 in familial diabetes mellitus. Am J Hum Genet 2015 Jul;97(1):177-185. * [42] The endosomal protein Appl1 mediates Akt substrate specificity and cell survival in vertebrate development. Cell 2008 May;133(3):486-497. * [43] Zhang H, Wang PW, Guo T, Yang C, Chen P, Ma KT, et al. The LIM-homeodomain protein ISL1 activates the insulin gene promoter directly through synergy with BETA2. J Mol Biol 2009 Sep;392(3):565-577. * [44] Peng SY, Wang WP, Meng JL, Zhang H, Li YM, et al. ISL1 physically interacts with BETA2 to promote insulin gene transcriptional synergy in non-beta cells. Biochim Biophys Acta 2005 Dec;1731(3):154-159. * [45] Patel KA, Kettunen J, Laakso M, Stanczkowski A, Laver TW, Colclough K, et al. Heterozygous RFX6 protein truncating variants are associated with MODY with reduced penetrance. Nat Commun 2017 Oct;8(1):888. * [46] Morales S. New type of diabetes caused by gene mutation discovered. [cited 2018 Dec 5]. Available from: [https://www.diabetesedaily.com/blog/new-type-of-diabetes-caused-by-gene-mutation](https://www.diabetesedaily.com/blog/new-type-of-diabetes-caused-by-gene-mutation) discovered-498141/. * [47] Kama R. Gene for rare form of diabetes found. [cited 2018 Dec 5]. Available from: [https://www.thehindu.com/sci/tech/health/gene-for-rare-form-of-diabetes-from-of-diabetes-default-2847084c](https://www.thehindu.com/sci/tech/health/gene-for-rare-form-of-diabetes-from-of-diabetes-default-2847084c). * [48] Chambers C, Fouts A, Dong F, Colclough K, Wang Z, Batish SD, et al. Characteristics of maturity onset diabetes of the young in a large diabetes center. Pediatr Diabetes 2016 Aug;17(5):360-367. * [49] McDonald TJ, Ellard S. Maturity onset diabetes of the young: identification and diagnosis. Ann Clin Biochem 2013 Sep;50(Pt 5):403-415. * [50] Weinreich SS, Bosma A, Henneman L, Rigrer T, Spruity CM, Grimberger AJ, et al. A decade of molecular genetic testing for MODY: a retrospective study of utilization in The Netherlands. Eur J Hum Genet 2015 Jan;23(1):29-33. * [51] Delvecchio M, Ludovico O, Menzaghi C, Di Paola R, Zelante L, Marucci A, et al.
Low prevalence of HNF1A mutations after molecular screening of multiple MODY genes in 58 Italian families recruited in the pediatric or adult diabetes clinic from a single Italian hospital. Diabetes Care 2014 Dec;37(12):e258-e260. * [52] Genetics Home Reference (GHR). HNF1A gene: your guide to understanding genetic conditions. [cited 2018 Oct 10]. Available from: [https://ghrn.nlm.nih.gov/gene/HNF1A](https://ghrn.nlm.nih.gov/gene/HNF1A). * [53] Balamurugan K, Bjorkaha J, Mahajan S, Kanthimirathi S, Njolstad PR, Srinivasan N, et al. Structure-function studies of HNF1A (MODY3) gene mutations in South Indian patients with monogenic diabetes. Clin Genet 2016 Dec;50(6):486-495. * [54] Bellanne-Chantelot C, Levy DJ, Carette C, Saint-Martin C, Riveline JP, Larger E, et al; French Monogenic Diabetes Study Group. Clinical characteristics and diagnostic criteria of maturity-onset diabetes of the young (MODY) due to molecular anomalies of the HNF1A gene. J Clin Endocrinol Metab 2011 Aug;96(8):E1346-E1351. * [55] Servitja JM, Pignatelli M, Maestro MA, Cardalda C, Boj SF, Lozano J, et al. Hnf1alpha (MODY3) controls tissue-specific transcriptional programs and exerts opposed effects on cell growth in pancreatic islets and liver. Mol Cell Biol 2009 Jun;29(11):2945-2959. * [56] Harries LW, Ellard S, Stride A, Morgan NG, Hattersley AT. Isomers of the TCF1 gene encoding hepatocyte nuclear factor-1 alpha show differential expression in the pancreas and define the relationship between mutation position and clinical phenotype in monogenic diabetes. Hum Mol Genet 2006 Jul;15(14):2212-2224. * [57] Genetics Home Reference (GHR). HNF4A gene: your guide to understanding genetic conditions. [cited 2018 Nov 15]. Available from: [https://github.nih.gov/gene/HNFA](https://github.nih.gov/gene/HNFA). * [58] Bolotin E, Liao H, Ta TC, Yang C, Hwang-Verslues W, Evans JR, et al. Integrated approach for the identification of human hepatocyte nuclear factor 4alpha target genes using protein binding microarrays. Hepatology 2010 Feb;51(2):642-653. * [59] Stoffel M, Duncan SA. The maturity-onset diabetes of the young (MODY1) transcription factor HNF4alpha regulates expression of genes required for glucose transport and metabolism. Proc Natl Acad Sci U S A 1997 Nov;94(24):13209-13214. * [60] Laver TW, Colclough K, Shepherd M, Patel K, Houghton JA, Dusatkova P, et al. The common p.R114W HNF4A mutation causes a distinct clinical subtype of monogenic diabetes. Diabetes 2016 Oct;65(10):3212-3217. * [61] Wang H, Maechler P, Antinozzi PA, Hagenfeldt KA, Wollheim CB. Hepatocyte nuclear factor 4alpha regulates the expression of pancreatic beta-cell genes implicated in glucose metabolism and nutrient-induced insulin secretion. J Biol Chem 2000 Nov;275(46):35953-35959. * [62] Barroso I, Luan J, Middelberg RP, Harding AH, Franks PW, Jakes RW, et al. Candidate gene association study in type 2 diabetes indicates a role for genes involved in beta-cell function as well as insulin action. PLoS Biol 2003 Oct;1(1):E20. * [63] Thomas H, Jaschkowitz K, Bulman M, Frayling TM, Mitchell SM, Roosen S, et al. A distant upstream promoter of the HNF-4alpha gene connects the transcription factors involved in maturity-onset diabetes of the young. Hum Mol Genet 2001 Sep;10(19):2089-2097. * [64] Lehto M, Bitzen PO, Isomaa B, Wipemo C, Wessman Y, Forsblom C, et al. Mutation in the HNF-4alpha gene affects insulin secretion and triglyceride metabolism. Diabetes 1999 Feb;48(2):423-425. * [65] Miura A, Yamagata K, Kakei M, Hatakeyama H, Takahashi N, Fukui K, et al.
Hepatocyte nuclear factor-4alpha is essential for glucose-stimulated insulin secretion by pancreatic beta-cells. J Biol Chem 2006 Feb;281(8):5246-5257. * [66] Genetics Home Reference (GHR). HNF1B gene: your guide to understanding genetic conditions. [cited 2018 May 16]. Available from: [https://ghra.nlm.nih.gov/gene/HNF1Bynonymous](https://ghra.nlm.nih.gov/gene/HNF1Bynonymous). * [67] Lu P, Li Y, Gorman A, Chi YL. Crystallization of hepatocyte nuclear factor 1beta in complex with DNA. Acta Crystallogr Sect F Struct Biol Cryst Commun 2006 Jun;62(Pt 6):525-529. * [68] Anik A, Catli G, Abaci A, Bober E. Maturity-onset diabetes of the young (MODY): an update. J Pediatr Endocrinol Metab 2015 Mar;28(3-4):251-263. * [69] Edghill EL, Bingham C, Ellard S, Hattersley AT. Mutations in hepatocyte nuclear factor-1beta and their related phenotypes. J Med Genet 2006 Jan;43(1):84-90. * [70] Chen Y-Z, Gao Q, Zhao X-Z, Chen YZ, Bennett CL, Xiong XS, et al. Systematic review of TCF2 anomalies in renal cysts and diabetes syndrome/maturity onset diabetes of the young 5. Chin Med J (Engl) 2010 Nov;123(22):3326-3333. * [71] National Center for Biotechnology Information. GCK glucokinase [Homo sapiens (human)] [cited 2020 April 12]. Available from: [https://www.ncbi.nlm.nih.gov/gene/2645](https://www.ncbi.nlm.nih.gov/gene/2645). * [72] Gloyn AL. Glucokinase (GCK) mutations in hyper- and hypoglycemia: maturity-onset diabetes of the young, permanent neonatal diabetes, and hyperinsulinemia of infancy. Hum Mutat 2003 Nov;22(5):353-362. * [73] Stride A, Vaxillaire M, Tuomi T, Barbetti F, Njolstad PR, Hansen T, et al. The genetic abnormality in the beta cell determines the response to an oral glucose load. Diabetologia 2002 Mar;45(3):427-435. * [74] Martin D, Bellanne-Chantelot C, Deschamps I, Froguel P, Robert JJ, Velho G. Long-term follow-up of oral glucose tolerance test-derived glucose tolerance and insulin secretion and insulin sensitivity indices in subjects with glucokinase mutations (MODY2). Diabetes Care 2008 Jul;31(7):1321-1323. * [75] Pearson ER, Velho G, Clark P, Stride A, Shepherd M, Frayling TM, et al. beta-cell genes and diabetes: quantitative and qualitative differences in the pathophysiology of hepatic nuclear factor-1alpha and glucokinase mutations. Diabetes 2001 Feb;50(Suppl 1):S101-S107. * [76] Iynedjian PB. Molecular physiology of mammalian glucokinase. Cell Mol Life Sci 2009 Jan;66(1):27-42. * [77] Osbak KK, Colclough K, Saint-Martin C, Beer NL, Bellanne-Chantelot C, Ellard S, et al. Update on mutations in glucokinase (GCK), which cause maturity-onset diabetes of the young, permanent neonatal diabetes, and hyperinsulinemic hypoglycemia. Hum Mutat 2009 Nov;30(11):1512-1526. * [78] Adeva-Andany MM, Gonzalez-Lucan M, Donapetry-Garcia C, Fernandez-Fernandez C, Ameneiros-Rodriguez E. Glycogen metabolism in humans. BBA Clin 2016 Feb;5:85-100. * [79] Bae JS, Kim TH, Kim MY, Park JM, Ahn YH. Transcriptional regulation of glucose sensors in pancreatic beta-cells and liver: an update. Sensors (Basel) 2010;10(5):5031-5035. * [80] Gardner DS, Tai ES. Clinical features and treatment of maturity onset diabetes of the young (MODY). Diabetes Metab Syndr Obes 2012;5:101-108. * [81] Juszczak A, Prse R, Schuman A, Owen KR. When to consider a diagnosis of MODY at the presentation of diabetes: aetiology matters for correct management. Br J Gen Pract 2016 Jun;66(6):557-559. * [82] Sequeiros J, Martindale J, Seneca S, Giunti P, Kamarainen O, Volpini V, et al; European Molecular Genetics Quality Network. EMQN Best Practice Guidelines for molecular genetic testing of SCAs.
Eur J Hum Genet 2010 Nov;18(11):1173-1176. * [83] Njolstad PR, Molven A. To test, or not to test: time for a MODY calculator? Diabetologia 2012 May;55(5):1231-1234. * [84] Shields BM, McDonald TJ, Ellard S, Campbell MJ, Hyde C, Hattersley AT. The development and validation of a clinical prediction model to determine the probability of MODY in patients with young-onset diabetes. Diabetologia 2012 May;55(5):1265-1272. * [85] Hattersley AT, Greeley SA, Polak M, Rubio-Cabezas O, Njolstad PR, Mlynarski W, et al. ISPAD Clinical Practice Consensus Guidelines 2018: The diagnosis and management of monogenic diabetes in children and adolescents. Pediatr Diabetes 2018 Oct;19(Suppl 2):47-63. * [86] US National Library of Medicine. What are the types of genetic tests? Genetics Home Reference, 2019 [cited 2018 Nov]. Available from: [https://ghr.nlm.nih.gov/printer/resting/uses](https://ghr.nlm.nih.gov/printer/resting/uses). * [87] Murphy R, Ellard S, Hattersley AT. Clinical implications of a molecular genetic classification of monogenic beta-cell diabetes. Nat Clin Pract Endocrinol Metab 2008 Apr;4(4):200-213. * [88] Spyer G, Macleod KM, Shepherd M, Ellard S, Hattersley AT. Pregnancy outcome in patients with raised blood glucose due to heterozygous glucokinase gene mutation. Diabet Med 2009 Jan;26(1):14-18. * [89] Naylor R, Knight Johnson A, del Gaudio D. Maturity-Onset Diabetes of the Young Overview. 2018 May 24. In: Adam MP, Ardinger HH, Pagon RA, Wallace SE, Bean LJH, Stephens K, et al, editors. GeneReviews [Internet]. Seattle (WA): University of Washington, Seattle; 1993-2019. * [90] Ellard S, Lango Allen H, De Franco E, Flanagan SE, Hysenj G, Colclough K, et al. Improved genetic testing for monogenic diabetes using targeted next-generation sequencing. Diabetologia 2013 Sep;56(9):1958-1963. * [91] Hattersley A, Bruining J, Shield J, Njolstad P, Donaghue KC. The diagnosis and management of monogenic diabetes in children and adolescents. Pediatr Diabetes 2009 Sep;10(Suppl 12):33-42. * [92] Johansson S, Irgens H, Chudasama KK, Molnes J, Aerts J, Roque FS, et al. Exome sequencing and genetic testing for MODY. PLoS One 2012;7(5):e38050. * [93] Alkorta-Aranburu G, Sukhanova M, Carmody D, Hoffman T, Wysinger L, Keller-Ramey J, et al. Improved molecular diagnosis of patients with neonatal diabetes using a combined next-generation sequencing and MS-MLPA approach. J Pediatr Endocrinol Metab 2016 May;29(5):523-531. * [94] Naylor R, Philipson LH. Who should have genetic testing for maturity-onset diabetes of the young? Clin Endocrinol (Oxf) 2011 Oct;75(4):422-426. * [95] Schnyder S, Mullis PE, Ellard S, Hattersley AT, Fluck CE. Genetic testing for glucokinase mutations in clinically selected patients with MODY: a worthwhile investment. Swiss Med Wkly 2005 Jun;135(23-24):352-356. doi:10.7892/bris.45687. * [96] Pinelli M, Acquaviva R, Barbetti F, Caredda E, Cocozza S, Delvecchio M, et al; Italian Study Group on Diabetes of the Italian Society of Pediatric Endocrinology and Diabetology. Identification of candidate children for maturity-onset diabetes of the young type 2 (MODY2) gene testing: a seven-item clinical flowchart (7-iF). PLoS One 2013 Nov;8(11):e79933. * [97] Stride A, Hattersley AT. Different genes, different diabetes: lessons from maturity-onset diabetes of the young. Ann Med 2002;34(3):207-216. * [98] Naylor RN, John PM, Winn AN, Carmody D, Greeley SA, Philipson LH, et al. Cost-effectiveness of MODY genetic testing: translating genomic advances into practical health applications.
Diabetes Care 2014;37(1):202-209.
Single gene mutations have been implicated in the pathogenesis of a form of diabetes mellitus (DM) known as maturity-onset diabetes of the young (MODY). However, there are diverse opinions on the suspect genes and pathophysiology, necessitating a review to communicate these genes and raise public awareness. We used the Google search engine to retrieve relevant information from reputable sources such as PubMed and Google Scholar. We identified 14 classified MODY genes as well as three new and unclassified genes linked with MODY. These genes are fundamentally expressed in the beta cells; the most common of them are the HNF1A, HNF4A, HNF1B, and GCK genes. Mutations in these genes cause \\(\\beta\\)-cell dysfunction, resulting in decreased insulin production and hyperglycemia. MODY genes have distinct mechanisms of action and phenotypic presentations compared with those of type 1 DM, type 2 DM, and other forms of DM. Healthcare professionals are therefore advised to formulate drugs and treatment based on the causal genes rather than the current generalized treatment for all types of DM. This will increase the effectiveness of diabetes drugs and treatment and reduce the burden of the disease.